[
  {
    "path": ".gitignore",
    "content": "docs/test_video.mp4\ntest/test_tao.py\nconfig/tao*\ntracker/mot/tao.py\neval/error_log.txt\nconfig/got10k*\nconfig/lasot*\nconfig/tc128*\nconfig/tlp*\nconfig/trackingnet*\nconfig/vfs*\nconfig/ssib*\n\nweights/\nresults/\nout/\nvis/\n*.ipynb\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2021 ZhongdaoWang\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\"> <img src=\"docs/logo.png\" width=\"500\"/> </p>\n\n--------------------------------------------------------------------------------\n\n**[NeurIPS 2021] Do different tracking tasks require different appearance models?**\n\n**[[ArXiv](https://arxiv.org/abs/2107.02156)]**  **[[Project Page](https://zhongdao.github.io/UniTrack)]**\n\nUniTrack is a simple and unified framework for addressing multiple tracking tasks. \n\nBeing a fundamental problem in computer vision, tracking has been fragmented into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and now the novel approaches proposed by the community are usually specialized to fit only one specific setup. To understand to what extent this specialization is actually necessary, we present UniTrack, a solution to address multiple different tracking tasks within the same framework. All tasks share the same [appearance model](#appearance-model). UniTrack\n\n- Does **NOT** need training on a specific tracking task.\n\n- Shows [competitive performance](docs/RESULTS.md) on six out of the seven tracking tasks considered.\n\n- Can be easily adapted to even [more tasks](#demo).\n\n- Can be used as an evaluation platform to [test pre-trained self-supervised models](docs/MODELZOO.md).\n    \n \n## Demo\n**Multi-Object Tracking demo for 80 COCO classes ([YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) + UniTrack)**\n<img src=\"docs/unitrack_yolox.gif\" width=\"480\"/> \n\nIn this demo, we run the YOLOX detector and perform MOT for the 80 COCO classes. Try the demo with:\n```bash\npython demo/mot_demo.py --classes cls1 cls2 ... clsN\n```\nwhere cls1 to clsN represent the indices of the classes you would like to detect and track. See [here](https://gist.github.com/AruniRC/7b3dadd004da04c80198557db5da4bda) for the index list. 
By default, all 80 classes are detected and tracked.\n    \n**Single-Object Tracking demo for custom videos**\n```bash\npython demo/sot_demo.py --config ./config/imagenet_resnet18_s3.yaml --input /path/to/your/video\n```\nIn this demo, you are asked to annotate the target to be tracked by drawing a rectangle in the first frame of the video. The algorithm then tracks the target in the following timesteps without object detection.\n  \n## Tasks & Framework\n![tasksframework](docs/tasksframework.png)\n\n### Tasks\nWe classify existing tracking tasks along four axes: (1) Single or multiple targets; (2) Targets specified by users or by automatic detectors; (3) Observation format (bounding box/mask/pose); (4) Class-agnostic or class-specific (e.g. humans/vehicles). We mainly experiment on five tasks: **SOT, VOS, MOT, MOTS, and PoseTrack**. Task setups are summarized in the figure above.\n\n### Appearance model\nAn appearance model is the only learnable component in UniTrack. It should provide universal visual representations, and it is usually pre-trained on large-scale datasets in a supervised or unsupervised manner. Typical examples include ImageNet pre-trained ResNets (supervised) and recent self-supervised models such as MoCo and SimCLR (unsupervised).\n\n### Propagation and Association\n*Propagation* and *Association* are the two core primitives used in UniTrack to address a wide variety of tracking tasks (currently seven, but more can be added). Both use the features extracted by the pre-trained appearance model. For propagation, we adopt existing methods such as [cross correlation](https://www.robots.ox.ac.uk/~luca/siamese-fc.html), [DCF](https://openaccess.thecvf.com/content_cvpr_2017/html/Valmadre_End-To-End_Representation_Learning_CVPR_2017_paper.html), and [mask propagation](https://github.com/ajabri/videowalk). 
For association, we employ a simple algorithm as in [JDE](https://github.com/Zhongdao/Towards-Realtime-MOT) and develop a novel reconstruction-based similarity metric that allows objects to be compared across shapes and sizes.\n    \n    \n## Getting started\n\n1. Installation: Please check out [docs/INSTALL.md](docs/INSTALL.md)\n2. Data preparation: Please check out [docs/DATA.md](docs/DATA.md)\n3. Appearance model preparation: Please check out [docs/MODELZOO.md](docs/MODELZOO.md)\n4. Run evaluation on all datasets: Please check out [docs/RUN.md](docs/RUN.md)\n\n## Results\nBelow we show results of UniTrack with a simple **ImageNet Pre-trained ResNet-18** as the appearance model. More results can be found in [RESULTS.md](docs/RESULTS.md).\n\n**Single Object Tracking (SOT) on OTB-2015**\n\n<img src=\"docs/sot1.gif\" width=\"320\"/>  <img src=\"docs/sot2.gif\" width=\"320\"/>\n\n**Video Object Segmentation (VOS) on DAVIS-2017 *val* split**\n\n<img src=\"docs/vos1.gif\" width=\"320\"/>  <img src=\"docs/vos2.gif\" width=\"320\"/>\n\n**Multiple Object Tracking (MOT) on MOT-16 [*test* set *private detector* track](https://motchallenge.net/method/MOT=3856&chl=5)** (Detections from FairMOT)\n\n<img src=\"docs/MOT1.gif\" width=\"320\"/>  <img src=\"docs/MOT2.gif\" width=\"320\"/>\n\n**Multiple Object Tracking and Segmentation (MOTS) on the MOTS challenge [*test* set](https://motchallenge.net/method/MOTS=109&chl=17)** (Detections from COSTA_st)\n\n<img src=\"docs/MOTS1.gif\" width=\"320\"/>  <img src=\"docs/MOTS2.gif\" width=\"320\"/>\n\n**Pose Tracking on PoseTrack-2018 *val* split** (Detections from LightTrack)\n\n<img src=\"docs/posetrack1.gif\" width=\"320\"/>  <img src=\"docs/posetrack2.gif\" width=\"320\"/>\n\n## Acknowledgement\nPart of the code is borrowed from\n    \n[VideoWalk](https://github.com/ajabri/videowalk) by Allan A. 
Jabri\n\n[SOT code](https://github.com/JudasDie/SOTS) by Zhipeng Zhang\n    \n## Citation\n```bibtex\n@article{wang2021different,\n  author    = {Wang, Zhongdao and Zhao, Hengshuang and Li, Ya-Li and Wang, Shengjin and Torr, Philip and Bertinetto, Luca},\n  title     = {Do different tracking tasks require different appearance models?},\n  journal   = {Thirty-Fifth Conference on Neural Information Processing Systems},\n  year      = {2021},\n}\n```\n"
  },
  {
    "path": "config/crw_resnet18_s3.yaml",
    "content": "common:\n    exp_name: crw_resnet18_s3\n   \n    # Model related\n    model_type: crw\n    remove_layers: ['layer4']\n    im_mean: [0.4914, 0.4822, 0.4465]\n    im_std: [0.2023, 0.1994, 0.2010]\n    nopadding: False\n    head_depth: -1\n    resume: 'weights/crw.pth'\n    \n    # Misc\n    down_factor: 8\n    infer2D: True \n    workers: 4\n    gpu_id: 0\n    device: cuda\n\nsot:\n    dataset: 'OTB2015'\n    dataroot: '/home/wangzd/datasets/GOT/OTB100/'\n    epoch_test: False\n\nvos:\n    davisroot: '/home/wangzd/datasets/uvc/DAVIS/'\n    split: 'val'\n    temperature: 0.05\n    topk: 10\n    radius: 12\n    videoLen: 5 \n    cropSize: -1\n    head_depth: -1\n    no_l2: False\n    long_mem: [0]\n    infer2D: False\n    norm_mask: False\n\nmot:\n    obid: 'FairMOT'\n    mot_root: '/home/wangzd/datasets/MOT/MOT16'\n    feat_size: [4,10]\n    save_videos: True\n    save_images: False\n    test_mot16: False\n    track_buffer: 30\n    min_box_area: 200\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7\n    img_size: [1088, 608]\n    prop_flag: False\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.98\n    motion_gated: True \n\nmots:\n    obid: 'COSTA'\n    mots_root: '/home/wangzd/datasets/GOT/MOTS'\n    save_videos: False\n    save_images: True\n    test: False\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    prop_flag: False\n    max_mask_area: 200\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7 \n    first_stage_thres: 0.7\n    feat_size: [4,10]\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.98 \n    motion_gated: False\n\nposetrack:\n    obid: 'lighttrack_MSRA152'\n    data_root: '/home/wangzd/datasets/GOT/Posetrack2018'\n    split: 'val'\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    frame_rate: 6\n    save_videos: False\n    save_images: True\n    
prop_flag: False\n    feat_size: [4,10]\n    max_mask_area: 400\n    dup_iou_thres: 0.2\n    confirm_iou_thres: 0.6\n    first_stage_thres: 0.7\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.9999\n    motion_gated: False\n    only_position: True\n\n"
  },
  {
    "path": "config/crw_resnet18_s3_womotion.yaml",
    "content": "common:\n    exp_name: crw_resnet18_s3_womotion\n   \n    # Model related\n    model_type: crw\n    remove_layers: ['layer4']\n    im_mean: [0.4914, 0.4822, 0.4465]\n    im_std: [0.2023, 0.1994, 0.2010]\n    nopadding: False\n    head_depth: -1\n    resume: 'weights/crw.pth'\n    \n    # Misc\n    down_factor: 8\n    infer2D: True \n    workers: 4\n    gpu_id: 0\n    device: cuda\n\nsot:\n    dataset: 'OTB2015'\n    dataroot: '/home/wangzd/datasets/GOT/OTB100/'\n    epoch_test: False\n\nvos:\n    davisroot: '/home/wangzd/datasets/uvc/DAVIS/'\n    split: 'val'\n    temperature: 0.05\n    topk: 10\n    radius: 12\n    videoLen: 5 \n    cropSize: -1\n    head_depth: -1\n    no_l2: False\n    long_mem: [0]\n    infer2D: False\n    norm_mask: False\n\nmot:\n    obid: 'FairMOT'\n    mot_root: '/home/wangzd/datasets/MOT/MOT16'\n    feat_size: [4,10]\n    save_videos: True\n    save_images: False\n    test_mot16: False\n    track_buffer: 30\n    min_box_area: 200\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7\n    img_size: [1088, 608]\n    prop_flag: False\n    use_kalman: True \n    asso_with_motion: False \n    motion_lambda: 1\n    motion_gated: False\n\nmots:\n    obid: 'COSTA'\n    mots_root: '/home/wangzd/datasets/GOT/MOTS'\n    save_videos: False\n    save_images: True\n    test: False\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    prop_flag: False\n    max_mask_area: 200\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7 \n    first_stage_thres: 0.7\n    feat_size: [4,10]\n    use_kalman: True \n    asso_with_motion: False \n    motion_lambda: 1 \n    motion_gated: False\n\nposetrack:\n    obid: 'lighttrack_MSRA152'\n    data_root: '/home/wangzd/datasets/GOT/Posetrack2018'\n    split: 'val'\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    frame_rate: 6\n    save_videos: False\n    save_images: True\n   
 prop_flag: False\n    feat_size: [4,10]\n    max_mask_area: 400\n    dup_iou_thres: 0.2\n    confirm_iou_thres: 0.6\n    first_stage_thres: 0.7\n    use_kalman: True \n    asso_with_motion: False \n    motion_lambda: 1 \n    motion_gated: False\n    only_position: True\n\n"
  },
  {
    "path": "config/imagenet_resnet18_s3.yaml",
    "content": "common:\n    exp_name: imagenet_resnet18_s3 \n   \n    # Model related\n    model_type: imagenet18\n    remove_layers: ['layer4']\n    im_mean: [0.485, 0.456, 0.406]\n    im_std: [0.229, 0.224, 0.225]\n    nopadding: False\n    resume: None\n    \n    # Misc\n    down_factor: 8\n    infer2D: True \n    workers: 4\n    gpu_id: 0\n    device: cuda\n\nsot:\n    dataset: 'OTB2015'\n    dataroot: '/home/wangzd/datasets/GOT/OTB100/'\n    epoch_test: False\n\nvos:\n    davisroot: '/home/wangzd/datasets/uvc/DAVIS/'\n    split: 'val'\n    temperature: 0.05\n    topk: 10\n    radius: 12\n    videoLen: 5 \n    cropSize: -1\n    head_depth: -1\n    no_l2: False\n    long_mem: [0]\n    infer2D: False\n    norm_mask: False\n\nmot:\n    obid: 'FairMOT'\n    mot_root: '/home/wangzd/datasets/MOT/MOT16'\n    feat_size: [4,10]\n    save_videos: True\n    save_images: False\n    test_mot16: False\n    track_buffer: 30\n    min_box_area: 200\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7\n    img_size: [1088, 608]\n    prop_flag: False\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.98\n    motion_gated: True \n\nmots:\n    obid: 'COSTA'\n    mots_root: '/home/wangzd/datasets/GOT/MOTS'\n    save_videos: False\n    save_images: True\n    test: False\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    prop_flag: False\n    max_mask_area: 200\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7 \n    first_stage_thres: 0.7\n    feat_size: [4,10]\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.98 \n    motion_gated: False\n\nposetrack:\n    obid: 'lighttrack_MSRA152'\n    data_root: '/home/wangzd/datasets/GOT/Posetrack2018'\n    split: 'val'\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    frame_rate: 6\n    save_videos: False\n    save_images: True\n    prop_flag: False\n    
feat_size: [4,10]\n    max_mask_area: 400\n    dup_iou_thres: 0.2\n    confirm_iou_thres: 0.6\n    first_stage_thres: 0.7\n    use_kalman: True \n    asso_with_motion: True \n    motion_lambda: 0.9999\n    motion_gated: False\n    only_position: True\n\n"
  },
  {
    "path": "config/imagenet_resnet18_s3_womotion.yaml",
    "content": "common:\n    exp_name: imagenet_resnet18_s3_womotion\n   \n    # Model related\n    model_type: imagenet18\n    remove_layers: ['layer4']\n    im_mean: [0.485, 0.456, 0.406]\n    im_std: [0.229, 0.224, 0.225]\n    nopadding: False\n    resume: None\n    \n    # Misc\n    down_factor: 8\n    infer2D: True \n    workers: 4\n    gpu_id: 0\n    device: cuda\n\nsot:\n    dataset: 'OTB2015'\n    dataroot: '/home/wangzd/datasets/GOT/OTB100/'\n    epoch_test: False\n\nvos:\n    davisroot: '/home/wangzd/datasets/uvc/DAVIS/'\n    split: 'val'\n    temperature: 0.05\n    topk: 10\n    radius: 12\n    videoLen: 5 \n    cropSize: -1\n    head_depth: -1\n    no_l2: False\n    long_mem: [0]\n    infer2D: False\n    norm_mask: False\n\nmot:\n    obid: 'FairMOT'\n    mot_root: '/home/wangzd/datasets/MOT/MOT16'\n    feat_size: [4,10]\n    save_videos: True\n    save_images: False\n    test_mot16: False\n    track_buffer: 30\n    min_box_area: 200\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7\n    img_size: [1088, 608]\n    prop_flag: False\n    use_kalman: True \n    asso_with_motion: False \n    motion_lambda: 1\n    motion_gated: False\n\nmots:\n    obid: 'COSTA'\n    mots_root: '/home/wangzd/datasets/GOT/MOTS'\n    save_videos: False\n    save_images: True\n    test: False\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    prop_flag: False\n    max_mask_area: 200\n    dup_iou_thres: 0.15\n    confirm_iou_thres: 0.7 \n    first_stage_thres: 0.7\n    feat_size: [4,10]\n    use_kalman: True \n    asso_with_motion: False\n    motion_lambda: 1 \n    motion_gated: False\n\nposetrack:\n    obid: 'lighttrack_MSRA152'\n    data_root: '/home/wangzd/datasets/GOT/Posetrack2018'\n    split: 'val'\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    frame_rate: 6\n    save_videos: False\n    save_images: True\n    prop_flag: False\n    
 feat_size: [4,10]\n    max_mask_area: 400\n    dup_iou_thres: 0.2\n    confirm_iou_thres: 0.6\n    first_stage_thres: 0.7\n    use_kalman: True\n    asso_with_motion: False\n    motion_lambda: 1 \n    motion_gated: False\n    only_position: True\n\nvis:\n    obid: 'MaskTrackRCNN'\n    data_root: '/home/wangzd/datasets/GOT/YoutubeVIS/'\n    split: 'val'\n    track_buffer: 30\n    nms_thres: 0.4\n    conf_thres: 0.5\n    iou_thres: 0.5\n    frame_rate: 6\n    save_videos: False\n    save_images: True\n    prop_flag: False\n    feat_size: [12,12]\n    max_mask_area: 1000\n    dup_iou_thres: 0.2\n    confirm_iou_thres: 0.6\n    first_stage_thres: 0.9\n    use_kalman: True\n    asso_with_motion: False\n    motion_lambda: 1 \n    motion_gated: False\n"
  },
  {
    "path": "core/association/__init__.py",
    "content": ""
  },
  {
    "path": "core/association/matching.py",
    "content": "import cv2\r\nimport torch\r\nimport torch.nn.functional as F\r\nimport numpy as np\r\nimport scipy.sparse\r\nfrom scipy.spatial.distance import cdist\r\nimport lap\r\n\r\nfrom cython_bbox import bbox_overlaps as bbox_ious\r\nfrom core.motion import kalman_filter\r\n\r\ndef merge_matches(m1, m2, shape):\r\n    O,P,Q = shape\r\n    m1 = np.asarray(m1)\r\n    m2 = np.asarray(m2)\r\n\r\n    M1 = scipy.sparse.coo_matrix((np.ones(len(m1)), (m1[:, 0], m1[:, 1])), shape=(O, P))\r\n    M2 = scipy.sparse.coo_matrix((np.ones(len(m2)), (m2[:, 0], m2[:, 1])), shape=(P, Q))\r\n\r\n    mask = M1*M2\r\n    match = mask.nonzero()\r\n    match = list(zip(match[0], match[1]))\r\n    unmatched_O = tuple(set(range(O)) - set([i for i, j in match]))\r\n    unmatched_Q = tuple(set(range(Q)) - set([j for i, j in match]))\r\n\r\n    return match, unmatched_O, unmatched_Q\r\n\r\n\r\ndef linear_assignment(cost_matrix, thresh):\r\n    if cost_matrix.size == 0:\r\n        return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1]))\r\n    matches, unmatched_a, unmatched_b = [], [], []\r\n    cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh)\r\n    for ix, mx in enumerate(x):\r\n        if mx >= 0:\r\n            matches.append([ix, mx])\r\n    unmatched_a = np.where(x < 0)[0]\r\n    unmatched_b = np.where(y < 0)[0]\r\n    matches = np.asarray(matches)\r\n    return matches, unmatched_a, unmatched_b\r\n\r\n\r\ndef ious(atlbrs, btlbrs):\r\n    \"\"\"\r\n    Compute cost based on IoU\r\n    :type atlbrs: list[tlbr] | np.ndarray\r\n    :type btlbrs: list[tlbr] | np.ndarray\r\n\r\n    :rtype ious np.ndarray\r\n    \"\"\"\r\n    ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=float)\r\n    if ious.size == 0:\r\n        return ious\r\n\r\n    ious = bbox_ious(\r\n        np.ascontiguousarray(atlbrs, dtype=float),\r\n        np.ascontiguousarray(btlbrs, dtype=float)\r\n    
)\r\n\r\n    return ious\r\n\r\n\r\ndef iou_distance(atracks, btracks):\r\n    \"\"\"\r\n    Compute cost based on IoU\r\n    :type atracks: list[STrack]\r\n    :type btracks: list[STrack]\r\n\r\n    :rtype cost_matrix np.ndarray\r\n    \"\"\"\r\n\r\n    if (len(atracks)>0 and isinstance(atracks[0], np.ndarray)) or (len(btracks) > 0 and isinstance(btracks[0], np.ndarray)):\r\n        atlbrs = atracks\r\n        btlbrs = btracks\r\n    else:\r\n        atlbrs = [track.tlbr for track in atracks]\r\n        btlbrs = [track.tlbr for track in btracks]\r\n    _ious = ious(atlbrs, btlbrs)\r\n    cost_matrix = 1 - _ious\r\n\r\n    return cost_matrix\r\n\r\ndef embedding_distance(tracks, detections, metric='cosine'):\r\n    \"\"\"\r\n    :param tracks: list[STrack]\r\n    :param detections: list[BaseTrack]\r\n    :param metric:\r\n    :return: cost_matrix np.ndarray\r\n    \"\"\"\r\n\r\n    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix\r\n    det_features = np.asarray([track.curr_feat for track in detections], dtype=float)\r\n    track_features = np.asarray([track.smooth_feat for track in tracks], dtype=float)\r\n    cost_matrix = np.maximum(0.0, cdist(track_features, det_features, metric)) # Normalized features\r\n    return cost_matrix\r\n\r\n\r\ndef fuse_motion(kf, cost_matrix, tracks, detections, only_position=False, lambda_=0.98, gate=True):\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix\r\n    gating_dim = 2 if only_position else 4\r\n    gating_threshold = kalman_filter.chi2inv95[gating_dim]\r\n    measurements = np.asarray([det.to_xyah() for det in detections])\r\n    for row, track in enumerate(tracks):\r\n        gating_distance = kf.gating_distance(\r\n            track.mean, track.covariance, measurements, only_position, metric='maha')\r\n        if gate:\r\n            cost_matrix[row, gating_distance > gating_threshold] = np.inf\r\n        cost_matrix[row] = 
lambda_ * cost_matrix[row] + (1-lambda_)* gating_distance\r\n    return cost_matrix\r\n\r\n\r\ndef center_emb_distance(tracks, detections, metric='cosine'):\r\n    \"\"\"\r\n    :param tracks: list[STrack]\r\n    :param detections: list[BaseTrack]\r\n    :param metric:\r\n    :return: cost_matrix np.ndarray\r\n    \"\"\"\r\n\r\n    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix\r\n    det_features = torch.stack([track.curr_feat.squeeze() for track in detections])\r\n    track_features = torch.stack([track.smooth_feat.squeeze() for track in tracks])\r\n    normed_det = F.normalize(det_features)\r\n    normed_track = F.normalize(track_features)\r\n    cost_matrix = torch.mm(normed_track, normed_det.T)\r\n    cost_matrix = 1 - cost_matrix.detach().cpu().numpy()\r\n    return cost_matrix\r\n\r\ndef recons_distance(tracks, detections, tmp=100):\r\n    \"\"\"\r\n    :param tracks: list[STrack]\r\n    :param detections: list[BaseTrack]\r\n    :param tmp: softmax temperature\r\n    :return: cost_matrix np.ndarray\r\n    \"\"\"\r\n\r\n    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix\r\n    det_features_ = torch.stack([track.curr_feat.squeeze() for track in detections])\r\n    track_features_ = torch.stack([track.smooth_feat for track in tracks])\r\n    det_features = F.normalize(det_features_, dim=1)\r\n    track_features = F.normalize(track_features_, dim=1)\r\n\r\n    ndet, ndim, nw, nh = det_features.shape\r\n    ntrk, _, _, _ = track_features.shape\r\n    fdet = det_features.permute(0,2,3,1).reshape(-1, ndim).cuda()        # ndet*nw*nh, ndim\r\n    ftrk = track_features.permute(0,2,3,1).reshape(-1, ndim).cuda()      # ntrk*nw*nh, ndim\r\n\r\n    aff = torch.mm(ftrk, fdet.transpose(0,1))                             # ntrk*nw*nh, ndet*nw*nh\r\n    aff_td = F.softmax(tmp*aff, dim=1)\r\n    aff_dt = F.softmax(tmp*aff, 
dim=0).transpose(0,1)\r\n\r\n    recons_ftrk = torch.einsum('tds,dsm->tdm', aff_td.view(ntrk*nw*nh, ndet, nw*nh), \r\n                                fdet.view(ndet, nw*nh, ndim))         # ntrk*nw*nh, ndet, ndim\r\n    recons_fdet = torch.einsum('dts,tsm->dtm', aff_dt.view(ndet*nw*nh, ntrk, nw*nh),\r\n                                ftrk.view(ntrk, nw*nh, ndim))         # ndet*nw*nh, ntrk, ndim\r\n \r\n    res_ftrk = (recons_ftrk.permute(0,2,1) - ftrk.unsqueeze(-1)).view(ntrk, nw*nh*ndim, ndet)\r\n    res_fdet = (recons_fdet.permute(0,2,1) - fdet.unsqueeze(-1)).view(ndet, nw*nh*ndim, ntrk)\r\n\r\n    cost_matrix = (torch.abs(res_ftrk).mean(1) + torch.abs(res_fdet).mean(1).transpose(0,1)) * 0.5\r\n    cost_matrix = cost_matrix / cost_matrix.max(1)[0].unsqueeze(-1) \r\n    cost_matrix = cost_matrix.cpu().numpy()\r\n    return cost_matrix\r\n\r\n\r\ndef get_track_feat(tracks, feat_flag='curr'):\r\n    if feat_flag == 'curr':\r\n        feat_list = [track.curr_feat.squeeze(0) for track in tracks]\r\n    elif feat_flag == 'smooth':\r\n        feat_list = [track.smooth_feat.squeeze(0) for track in tracks]\r\n    else:\r\n        raise NotImplementedError\r\n    \r\n    n = len(tracks)\r\n    fdim = feat_list[0].shape[0]\r\n    fdim_num = len(feat_list[0].shape)\r\n    if fdim_num > 2:\r\n        feat_list = [f.view(fdim,-1) for f in feat_list]\r\n    numels = [f.shape[1] for f in feat_list]\r\n    \r\n    ret = torch.zeros(n, fdim, np.max(numels)).to(feat_list[0].device)\r\n    for i, f in enumerate(feat_list):\r\n        ret[i, :, :numels[i]] = f\r\n    return ret \r\n\r\ndef reconsdot_distance(tracks, detections, tmp=100):\r\n    \"\"\"\r\n    :param tracks: list[STrack]\r\n    :param detections: list[BaseTrack]\r\n    :param tmp: softmax temperature\r\n    :return: cost_matrix np.ndarray\r\n    \"\"\"\r\n    cost_matrix = np.zeros((len(tracks), len(detections)), dtype=float)\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix, None\r\n    
det_features_ = get_track_feat(detections)\r\n    track_features_ = get_track_feat(tracks, feat_flag='curr')\r\n\r\n    det_features = F.normalize(det_features_, dim=1)\r\n    track_features = F.normalize(track_features_, dim=1)\r\n\r\n    ndet, ndim, nsd = det_features.shape\r\n    ntrk, _, nst = track_features.shape\r\n\r\n    fdet = det_features.permute(0, 2, 1).reshape(-1, ndim).cuda()\r\n    ftrk = track_features.permute(0, 2, 1).reshape(-1, ndim).cuda()\r\n\r\n    aff = torch.mm(ftrk, fdet.transpose(0, 1))\r\n    aff_td = F.softmax(tmp*aff, dim=1)\r\n    aff_dt = F.softmax(tmp*aff, dim=0).transpose(0, 1)\r\n\r\n    recons_ftrk = torch.einsum('tds,dsm->tdm', aff_td.view(ntrk*nst, ndet, nsd),\r\n                               fdet.view(ndet, nsd, ndim))\r\n    recons_fdet = torch.einsum('dts,tsm->dtm', aff_dt.view(ndet*nsd, ntrk, nst),\r\n                               ftrk.view(ntrk, nst, ndim))\r\n\r\n    recons_ftrk = recons_ftrk.permute(0, 2, 1).view(ntrk, nst*ndim, ndet)\r\n    recons_ftrk_norm = F.normalize(recons_ftrk, dim=1)\r\n    recons_fdet = recons_fdet.permute(0, 2, 1).view(ndet, nsd*ndim, ntrk)\r\n    recons_fdet_norm = F.normalize(recons_fdet, dim=1)\r\n\r\n    dot_td = torch.einsum('tad,ta->td', recons_ftrk_norm,\r\n                          F.normalize(ftrk.reshape(ntrk, nst*ndim), dim=1))\r\n    dot_dt = torch.einsum('dat,da->dt', recons_fdet_norm,\r\n                          F.normalize(fdet.reshape(ndet, nsd*ndim), dim=1))\r\n\r\n    cost_matrix = 1 - 0.5 * (dot_td + dot_dt.transpose(0, 1))\r\n    cost_matrix = cost_matrix.detach().cpu().numpy()\r\n\r\n    return cost_matrix, None\r\n\r\n\r\ndef category_gate(cost_matrix, tracks, detections):\r\n    \"\"\"\r\n    :param cost_matrix: np.ndarray\r\n    :param tracks: list[STrack]\r\n    :param detections: list[BaseTrack]\r\n    :return: cost_matrix np.ndarray\r\n    \"\"\"\r\n    if cost_matrix.size == 0:\r\n        return cost_matrix\r\n\r\n    det_categories = np.array([d.category for d in detections])\r\n 
   trk_categories = np.array([t.category for t in tracks])\r\n\r\n    cost_matrix = cost_matrix + np.abs(\r\n            det_categories[None, :] - trk_categories[:, None])\r\n    return cost_matrix\r\n\r\n\r\n"
  },
  {
    "path": "core/motion/kalman_filter.py",
    "content": "# vim: expandtab:ts=4:sw=4\r\nimport numpy as np\r\nimport scipy.linalg\r\n\r\n\r\n\"\"\"\r\nTable for the 0.95 quantile of the chi-square distribution with N degrees of\r\nfreedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv\r\nfunction and used as Mahalanobis gating threshold.\r\n\"\"\"\r\nchi2inv95 = {\r\n    1: 3.8415,\r\n    2: 5.9915,\r\n    3: 7.8147,\r\n    4: 9.4877,\r\n    5: 11.070,\r\n    6: 12.592,\r\n    7: 14.067,\r\n    8: 15.507,\r\n    9: 16.919}\r\n\r\n\r\nclass KalmanFilter(object):\r\n    \"\"\"\r\n    A simple Kalman filter for tracking bounding boxes in image space.\r\n\r\n    The 8-dimensional state space\r\n\r\n        x, y, a, h, vx, vy, va, vh\r\n\r\n    contains the bounding box center position (x, y), aspect ratio a, height h,\r\n    and their respective velocities.\r\n\r\n    Object motion follows a constant velocity model. The bounding box location\r\n    (x, y, a, h) is taken as direct observation of the state space (linear\r\n    observation model).\r\n\r\n    \"\"\"\r\n\r\n    def __init__(self):\r\n        ndim, dt = 4, 1.\r\n\r\n        # Create Kalman filter model matrices.\r\n        self._motion_mat = np.eye(2 * ndim, 2 * ndim)\r\n        for i in range(ndim):\r\n            self._motion_mat[i, ndim + i] = dt\r\n        self._update_mat = np.eye(ndim, 2 * ndim)\r\n\r\n        # Motion and observation uncertainty are chosen relative to the current\r\n        # state estimate. These weights control the amount of uncertainty in\r\n        # the model. This is a bit hacky.\r\n        self._std_weight_position = 1. / 20\r\n        self._std_weight_velocity = 1. 
/ 160\r\n\r\n    def initiate(self, measurement):\r\n        \"\"\"Create track from unassociated measurement.\r\n\r\n        Parameters\r\n        ----------\r\n        measurement : ndarray\r\n            Bounding box coordinates (x, y, a, h) with center position (x, y),\r\n            aspect ratio a, and height h.\r\n\r\n        Returns\r\n        -------\r\n        (ndarray, ndarray)\r\n            Returns the mean vector (8 dimensional) and covariance matrix (8x8\r\n            dimensional) of the new track. Unobserved velocities are initialized\r\n            to 0 mean.\r\n\r\n        \"\"\"\r\n        mean_pos = measurement\r\n        mean_vel = np.zeros_like(mean_pos)\r\n        mean = np.r_[mean_pos, mean_vel]\r\n\r\n        std = [\r\n            2 * self._std_weight_position * measurement[3],\r\n            2 * self._std_weight_position * measurement[3],\r\n            1e-2,\r\n            2 * self._std_weight_position * measurement[3],\r\n            10 * self._std_weight_velocity * measurement[3],\r\n            10 * self._std_weight_velocity * measurement[3],\r\n            1e-5,\r\n            10 * self._std_weight_velocity * measurement[3]]\r\n        covariance = np.diag(np.square(std))\r\n        return mean, covariance\r\n\r\n    def predict(self, mean, covariance):\r\n        \"\"\"Run Kalman filter prediction step.\r\n\r\n        Parameters\r\n        ----------\r\n        mean : ndarray\r\n            The 8 dimensional mean vector of the object state at the previous\r\n            time step.\r\n        covariance : ndarray\r\n            The 8x8 dimensional covariance matrix of the object state at the\r\n            previous time step.\r\n\r\n        Returns\r\n        -------\r\n        (ndarray, ndarray)\r\n            Returns the mean vector and covariance matrix of the predicted\r\n            state. 
Unobserved velocities are initialized to 0 mean.\r\n\r\n        \"\"\"\r\n        std_pos = [\r\n            self._std_weight_position * mean[3],\r\n            self._std_weight_position * mean[3],\r\n            1e-2,\r\n            self._std_weight_position * mean[3]]\r\n        std_vel = [\r\n            self._std_weight_velocity * mean[3],\r\n            self._std_weight_velocity * mean[3],\r\n            1e-5,\r\n            self._std_weight_velocity * mean[3]]\r\n        motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))\r\n\r\n        mean = np.dot(mean, self._motion_mat.T)\r\n        covariance = np.linalg.multi_dot((\r\n            self._motion_mat, covariance, self._motion_mat.T)) + motion_cov\r\n\r\n        return mean, covariance\r\n\r\n    def project(self, mean, covariance):\r\n        \"\"\"Project state distribution to measurement space.\r\n\r\n        Parameters\r\n        ----------\r\n        mean : ndarray\r\n            The state's mean vector (8 dimensional array).\r\n        covariance : ndarray\r\n            The state's covariance matrix (8x8 dimensional).\r\n\r\n        Returns\r\n        -------\r\n        (ndarray, ndarray)\r\n            Returns the projected mean and covariance matrix of the given state\r\n            estimate.\r\n\r\n        \"\"\"\r\n        std = [\r\n            self._std_weight_position * mean[3],\r\n            self._std_weight_position * mean[3],\r\n            1e-1,\r\n            self._std_weight_position * mean[3]]\r\n        innovation_cov = np.diag(np.square(std))\r\n\r\n        mean = np.dot(self._update_mat, mean)\r\n        covariance = np.linalg.multi_dot((\r\n            self._update_mat, covariance, self._update_mat.T))\r\n        return mean, covariance + innovation_cov\r\n    \r\n    def multi_predict(self, mean, covariance):\r\n        \"\"\"Run Kalman filter prediction step (Vectorized version).\r\n\r\n        Parameters\r\n        ----------\r\n        mean : ndarray\r\n            The Nx8 
dimensional mean matrix of the object states at the previous\r\n            time step.\r\n        covariance : ndarray\r\n            The Nx8x8 dimensional covariance matrices of the object states at the\r\n            previous time step.\r\n\r\n        Returns\r\n        -------\r\n        (ndarray, ndarray)\r\n            Returns the mean vector and covariance matrix of the predicted\r\n            state. Unobserved velocities are initialized to 0 mean.\r\n\r\n        \"\"\"\r\n        std_pos = [\r\n            self._std_weight_position * mean[:, 3],\r\n            self._std_weight_position * mean[:, 3],\r\n            1e-2 * np.ones_like(mean[:, 3]),\r\n            self._std_weight_position * mean[:, 3]]\r\n        std_vel = [\r\n            self._std_weight_velocity * mean[:, 3],\r\n            self._std_weight_velocity * mean[:, 3],\r\n            1e-5 * np.ones_like(mean[:, 3]),\r\n            self._std_weight_velocity * mean[:, 3]]\r\n        sqr = np.square(np.r_[std_pos, std_vel]).T\r\n        \r\n        motion_cov = []\r\n        for i in range(len(mean)):\r\n            motion_cov.append(np.diag(sqr[i]))\r\n        motion_cov = np.asarray(motion_cov)\r\n            \r\n        mean = np.dot(mean, self._motion_mat.T)\r\n        left = np.dot(self._motion_mat, covariance).transpose((1,0,2))\r\n        covariance = np.dot(left, self._motion_mat.T) + motion_cov\r\n\r\n        return mean, covariance\r\n\r\n    def update(self, mean, covariance, measurement):\r\n        \"\"\"Run Kalman filter correction step.\r\n\r\n        Parameters\r\n        ----------\r\n        mean : ndarray\r\n            The predicted state's mean vector (8 dimensional).\r\n        covariance : ndarray\r\n            The state's covariance matrix (8x8 dimensional).\r\n        measurement : ndarray\r\n            The 4 dimensional measurement vector (x, y, a, h), where (x, y)\r\n            is the center position, a the aspect ratio, and h the height of the\r\n            bounding 
box.\r\n\r\n        Returns\r\n        -------\r\n        (ndarray, ndarray)\r\n            Returns the measurement-corrected state distribution.\r\n\r\n        \"\"\"\r\n        projected_mean, projected_cov = self.project(mean, covariance)\r\n\r\n        chol_factor, lower = scipy.linalg.cho_factor(\r\n            projected_cov, lower=True, check_finite=False)\r\n        kalman_gain = scipy.linalg.cho_solve(\r\n            (chol_factor, lower), np.dot(covariance, self._update_mat.T).T,\r\n            check_finite=False).T\r\n        innovation = measurement - projected_mean\r\n\r\n        new_mean = mean + np.dot(innovation, kalman_gain.T)\r\n        new_covariance = covariance - np.linalg.multi_dot((\r\n            kalman_gain, projected_cov, kalman_gain.T))\r\n        return new_mean, new_covariance\r\n\r\n    def gating_distance(self, mean, covariance, measurements,\r\n                        only_position=False, metric='maha'):\r\n        \"\"\"Compute gating distance between state distribution and measurements.\r\n\r\n        A suitable distance threshold can be obtained from `chi2inv95`. 
If\r\n        `only_position` is False, the chi-square distribution has 4 degrees of\r\n        freedom, otherwise 2.\r\n\r\n        Parameters\r\n        ----------\r\n        mean : ndarray\r\n            Mean vector over the state distribution (8 dimensional).\r\n        covariance : ndarray\r\n            Covariance of the state distribution (8x8 dimensional).\r\n        measurements : ndarray\r\n            An Nx4 dimensional matrix of N measurements, each in\r\n            format (x, y, a, h) where (x, y) is the bounding box center\r\n            position, a the aspect ratio, and h the height.\r\n        only_position : Optional[bool]\r\n            If True, distance computation is done with respect to the bounding\r\n            box center position only.\r\n        metric : str\r\n            Distance metric: 'maha' for the squared Mahalanobis distance\r\n            (default), 'gaussian' for the squared Euclidean distance.\r\n\r\n        Returns\r\n        -------\r\n        ndarray\r\n            Returns an array of length N, where the i-th element contains the\r\n            squared Mahalanobis distance between (mean, covariance) and\r\n            `measurements[i]`.\r\n\r\n        \"\"\"\r\n        mean, covariance = self.project(mean, covariance)\r\n        if only_position:\r\n            mean, covariance = mean[:2], covariance[:2, :2]\r\n            measurements = measurements[:, :2]\r\n        \r\n        d = measurements - mean\r\n        if metric == 'gaussian':\r\n            return np.sum(d * d, axis=1)\r\n        elif metric == 'maha':\r\n            cholesky_factor = np.linalg.cholesky(covariance)\r\n            z = scipy.linalg.solve_triangular(\r\n                cholesky_factor, d.T, lower=True, check_finite=False,\r\n                overwrite_b=True)\r\n            squared_maha = np.sum(z * z, axis=0)\r\n            return squared_maha\r\n        else:\r\n            raise ValueError('invalid distance metric')\r\n\r\n"
  },
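The constant-velocity motion model in `kalman_filter.py` can be sanity-checked in isolation. A minimal numpy sketch, re-implemented inline for illustration (the real class additionally tracks the covariance and performs the Cholesky-based correction step):

```python
import numpy as np

# Same structure as KalmanFilter.__init__: 8-dim state, identity plus
# velocity coupling on the position block.
ndim, dt = 4, 1.0
F = np.eye(2 * ndim)
for i in range(ndim):
    F[i, ndim + i] = dt  # position_i += velocity_i * dt

# state: (x, y, a, h, vx, vy, va, vh)
state = np.array([100., 50., 0.5, 80., 2., -1., 0., 0.])
predicted = F @ state  # one prediction step: positions move, velocities stay
```

After one step the center shifts by (vx, vy) = (2, -1) while the velocity components are unchanged, which is exactly what `predict` does to the mean before adding motion noise.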
  {
    "path": "core/propagation/__init__.py",
    "content": "###################################################################\n# File Name: __init__.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon Jan 18 15:57:34 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nfrom .propagate_box import propagate_box\nfrom .propagate_mask import propagate_mask\nfrom .propagate_pose import propagate_pose\n\ndef propagate(temp_feats, obs, img, model, format='box'):\n    if format == 'box':\n        return propagate_box(temp_feats, obs, img, model)\n    elif format == 'mask':\n        return propagate_box(temp_feats, obs, img, model)\n    elif format == 'pose':\n        return propagate_pose(temp_feats, obs, img, model)\n    else:\n        raise ValueError('Observation format not supported.')\n"
  },
  {
    "path": "core/propagation/propagate_box.py",
    "content": "###################################################################\n# File Name: propagate_box.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon Jan 18 16:01:46 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\ndef propagate_box(temp_feats, box, img, model):\n    pass\n"
  },
  {
    "path": "core/propagation/propagate_mask.py",
    "content": "###################################################################\n# File Name: propagate_box.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon Jan 18 16:01:46 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\ndef propagate_mask(temp_feats, mask, img, model):\n    pass\n"
  },
  {
    "path": "core/propagation/propagate_pose.py",
    "content": "###################################################################\n# File Name: propagate_box.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon Jan 18 16:01:46 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\ndef propagate_pose(temp_feats, pose, img, model):\n    pass\n"
  },
  {
    "path": "data/jhmdb.py",
    "content": "from __future__ import print_function, absolute_import\n\nimport os\nimport numpy as np\nimport math\nimport scipy.io as sio\n\nimport cv2\nimport torch\nfrom matplotlib import cm\n\nfrom utils import im_to_numpy, im_to_torch\n\ndef resize(img, owidth, oheight):\n    img = im_to_numpy(img)\n    img = cv2.resize( img, (owidth, oheight) )\n    img = im_to_torch(img)\n    return img\n\ndef load_image(img_path):\n    # H x W x C => C x H x W\n    img = cv2.imread(img_path)\n#     print(img_path)\n    img = img.astype(np.float32)\n    img = img / 255.0\n    img = img[:,:,::-1]\n    img = img.copy()\n    return im_to_torch(img)\n\ndef color_normalize(x, mean, std):\n    if x.size(0) == 1:\n        x = x.repeat(3, 1, 1)\n\n    for t, m, s in zip(x, mean, std):\n        t.sub_(m)\n        t.div_(s)\n    return x\n\nimport time\n\n\n\n\n######################################################################\ndef try_np_load(p):\n    try:\n        return np.load(p)\n    except:\n        return None\n\ndef make_lbl_set(lbls):\n    print(lbls.shape)\n    t00 = time.time()\n\n    lbl_set = [np.zeros(3).astype(np.uint8)]\n    count_lbls = [0]    \n    \n    flat_lbls_0 = lbls[0].copy().reshape(-1, lbls.shape[-1]).astype(np.uint8)\n    lbl_set = np.unique(flat_lbls_0, axis=0)\n\n    # print(lbl_set)\n    # if (lbl_set > 20).sum() > 0:\n    #     import pdb; pdb.set_trace()\n    # count_lbls = [np.all(flat_lbls_0 == ll, axis=-1).sum() for ll in lbl_set]\n    \n    print('lbls', time.time() - t00)\n\n    return lbl_set\n\n\ndef texturize(onehot):\n    flat_onehot = onehot.reshape(-1, onehot.shape[-1])\n    lbl_set = np.unique(flat_onehot, axis=0)\n\n    count_lbls = [np.all(flat_onehot == ll, axis=-1).sum() for ll in lbl_set]\n    object_id = np.argsort(count_lbls)[::-1][1]\n\n    hidxs = []\n    for h in range(onehot.shape[0]):\n        # appears = any(np.all(onehot[h] == lbl_set[object_id], axis=-1))\n        appears = np.any(onehot[h, :, 1:] == 1)\n        if 
appears:    \n            hidxs.append(h)\n\n    nstripes = min(10, len(hidxs))\n\n    out = np.zeros((*onehot.shape[:2], nstripes+1))\n    out[:, :, 0] = 1\n\n    for i, h in enumerate(hidxs):\n        cidx = int(i // (len(hidxs) / nstripes))\n        w = np.any(onehot[h, :, 1:] == 1, axis=-1)\n        out[h][w] = 0\n        out[h][w, cidx+1] = 1\n        # print(i, h, cidx)\n\n    return out\n\n\n\nclass JhmdbSet(torch.utils.data.Dataset):\n    def __init__(self, args, sigma=0.5):\n\n        self.filelist = args.filelist\n        self.imgSize = args.imgSize\n        self.videoLen = args.videoLen\n        self.mapScale = args.mapScale\n\n        self.sigma = sigma\n\n        f = open(self.filelist, 'r')\n        self.jpgfiles = []\n        self.lblfiles = []\n\n        for line in f:\n            rows = line.split()\n            jpgfile = rows[1]\n            lblfile = rows[0]\n\n            self.jpgfiles.append(jpgfile)\n            self.lblfiles.append(lblfile)\n\n        f.close()\n    \n    def get_onehot_lbl(self, lbl_path):\n        name = '/' + '/'.join(lbl_path.split('.')[:-1]) + '_onehot.npy'\n        if os.path.exists(name):\n            return np.load(name)\n        else:\n            return None\n    \n\n    def make_paths(self, folder_path, label_path):\n        I = [ ll for ll in os.listdir(folder_path) if '.png' in ll ]\n\n        frame_num = len(I) + self.videoLen\n        I.sort(key=lambda x:int(x.split('.')[0]))\n\n        I_out, L_out = [], []\n\n        for i in range(frame_num):\n            i = max(0, i - self.videoLen)\n            img_path = \"%s/%s\" % (folder_path, I[i])\n\n            I_out.append(img_path)\n\n        return I_out\n\n\n    def __getitem__(self, index):\n\n        folder_path = self.jpgfiles[index]\n        label_path = self.lblfiles[index]\n\n        imgs = []\n        imgs_orig = []\n        lbls = []\n        lbls_onehot = []\n        patches = []\n        target_imgs = []\n        \n        img_paths = 
self.make_paths(folder_path, label_path)\n        frame_num = len(img_paths)\n\n        mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]\n\n        t000 = time.time()\n\n        # frame_num = 30\n        for i in range(frame_num):\n            t00 = time.time()\n\n            img_path = img_paths[i]\n            img = load_image(img_path)  # CxHxW\n\n            # print('loaded', i, time.time() - t00)\n\n            ht, wd = img.size(1), img.size(2)\n            if self.imgSize > 0:\n                newh, neww = ht, wd\n\n                if ht <= wd:\n                    ratio  = 1.0 #float(wd) / float(ht)\n                    # width, height\n                    img = resize(img, int(self.imgSize * ratio), self.imgSize)\n                    newh = self.imgSize\n                    neww = int(self.imgSize * ratio)\n                else:\n                    ratio  = 1.0 #float(ht) / float(wd)\n                    # width, height\n                    img = resize(img, self.imgSize, int(self.imgSize * ratio))\n                    newh = int(self.imgSize * ratio)\n                    neww = self.imgSize\n\n\n            img_orig = img.clone()\n            img = color_normalize(img, mean, std)\n\n            imgs_orig.append(img_orig)\n            imgs.append(img)\n\n        rsz_h, rsz_w = math.ceil(img.size(1) / self.mapScale[0]), math.ceil(img.size(2) /self.mapScale[1])\n\n        lbls_mat = sio.loadmat(label_path)\n\n        lbls_coord = lbls_mat['pos_img']\n        lbls_coord = lbls_coord - 1\n\n\n        lbls_coord[0, :, :] = lbls_coord[0, :, :] * float(neww) / float(wd) / self.mapScale[0]\n        lbls_coord[1, :, :] = lbls_coord[1, :, :] * float(newh) / float(ht) / self.mapScale[1]\n        lblsize =  (rsz_h, rsz_w)\n\n        lbls = np.zeros((lbls_coord.shape[2], lblsize[0], lblsize[1], lbls_coord.shape[1]))\n\n        for i in range(lbls_coord.shape[2]):\n            lbls_coord_now = lbls_coord[:, :, i]\n            scales = lbls_coord_now.max(1) - 
lbls_coord_now.min(1)\n            scale = scales.max()\n            scale = max(0.5, scale*0.015)\n            for j in range(lbls_coord.shape[1]):\n                if self.sigma > 0:\n                    draw_labelmap_np(lbls[i, :, :, j], lbls_coord_now[:, j], scale)\n                else:\n                    tx = int(lbls_coord_now[0, j])\n                    ty = int(lbls_coord_now[1, j])\n                    if tx < lblsize[1] and ty < lblsize[0] and tx >=0 and ty >=0:\n                        lbls[i, ty, tx, j] = 1.0\n\n        lbls_tensor = torch.zeros(frame_num, lblsize[0], lblsize[1], lbls_coord.shape[1])\n\n        for i in range(frame_num):\n            if i < self.videoLen:\n                nowlbl = lbls[0]\n            else:\n                if(i - self.videoLen < len(lbls)):\n                    nowlbl = lbls[i - self.videoLen]\n            lbls_tensor[i] = torch.from_numpy(nowlbl)\n        \n        lbls_tensor = torch.cat([(lbls_tensor.sum(-1) == 0)[..., None] *1.0, lbls_tensor], dim=-1)\n\n        lblset = np.arange(lbls_tensor.shape[-1]-1)\n        lblset = np.array([[0, 0, 0]] + [cm.Paired(i)[:3] for i in lblset]) * 255.0\n\n        # Meta info\n        meta = dict(folder_path=folder_path, img_paths=img_paths, lbl_paths=[])\n        \n        imgs = torch.stack(imgs)\n        imgs_orig = torch.stack(imgs_orig)\n        lbls_resize = lbls_tensor #np.stack(resizes)\n\n        assert lbls_resize.shape[0] == len(meta['img_paths'])\n        #print('vid', i, 'took', time.time() - t000)\n\n        return imgs, imgs_orig, lbls_resize, lbls_tensor, lblset, meta\n\n    def __len__(self):\n        return len(self.jpgfiles)\n\n\ndef draw_labelmap_np(img, pt, sigma, type='Gaussian'):\n    # Draw a 2D gaussian\n    # Adopted from https://github.com/anewell/pose-hg-train/blob/master/src/pypose/draw.py\n\n    # Check that any part of the gaussian is in-bounds\n    ul = [int(pt[0] - 3 * sigma), int(pt[1] - 3 * sigma)]\n    br = [int(pt[0] + 3 * sigma + 1), 
int(pt[1] + 3 * sigma + 1)]\n    if (ul[0] >= img.shape[1] or ul[1] >= img.shape[0] or\n            br[0] < 0 or br[1] < 0):\n        # If not, just return the image as is\n        return img\n\n    # Generate gaussian\n    size = 6 * sigma + 1\n    x = np.arange(0, size, 1, float)\n    y = x[:, np.newaxis]\n    x0 = y0 = size // 2\n    # The gaussian is not normalized, we want the center value to equal 1\n    if type == 'Gaussian':\n        g = np.exp(- ((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))\n    elif type == 'Cauchy':\n        g = sigma / (((x - x0) ** 2 + (y - y0) ** 2 + sigma ** 2) ** 1.5)\n\n\n    # Usable gaussian range\n    g_x = max(0, -ul[0]), min(br[0], img.shape[1]) - ul[0]\n    g_y = max(0, -ul[1]), min(br[1], img.shape[0]) - ul[1]\n    # Image range\n    img_x = max(0, ul[0]), min(br[0], img.shape[1])\n    img_y = max(0, ul[1]), min(br[1], img.shape[0])\n\n    img[img_y[0]:img_y[1], img_x[0]:img_x[1]] = g[g_y[0]:g_y[1], g_x[0]:g_x[1]]\n    return img\n"
  },
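The keypoint heatmaps stamped by `draw_labelmap_np` in `data/jhmdb.py` use an unnormalized Gaussian whose center value is exactly 1 at the keypoint. A standalone check of that kernel construction (the sigma value here is chosen for illustration; the dataset derives it from the keypoint spread):

```python
import numpy as np

# Mirror the kernel built inside draw_labelmap_np: a (6*sigma+1)-wide grid
# with the peak at its center, not normalized to unit mass.
sigma = 1.0
size = int(6 * sigma + 1)
x = np.arange(0, size, 1, float)
y = x[:, np.newaxis]
x0 = y0 = size // 2
g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
```

Because the peak is 1 rather than the mass, downstream code can treat each channel as a soft one-hot map and recover the keypoint by `argmax` without rescaling.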
  {
    "path": "data/kinetics.py",
    "content": "import torchvision.datasets.video_utils\n\nfrom torchvision.datasets.video_utils import VideoClips\nfrom torchvision.datasets.utils import list_dir\nfrom torchvision.datasets.folder import make_dataset\nfrom torchvision.datasets.vision import VisionDataset\n\nimport numpy as np\n\nclass Kinetics400(VisionDataset):\n    \"\"\"\n    `Kinetics-400 <https://deepmind.com/research/open-source/open-source-datasets/kinetics/>`_\n    dataset.\n\n    Kinetics-400 is an action recognition video dataset.\n    This dataset consider every video as a collection of video clips of fixed size, specified\n    by ``frames_per_clip``, where the step in frames between each clip is given by\n    ``step_between_clips``.\n\n    To give an example, for 2 videos with 10 and 15 frames respectively, if ``frames_per_clip=5``\n    and ``step_between_clips=5``, the dataset size will be (2 + 3) = 5, where the first two\n    elements will come from video 1, and the next three elements from video 2.\n    Note that we drop clips which do not have exactly ``frames_per_clip`` elements, so not all\n    frames in a video might be present.\n\n    Internally, it uses a VideoClips object to handle clip creation.\n\n    Args:\n        root (string): Root directory of the Kinetics-400 Dataset.\n        frames_per_clip (int): number of frames in a clip\n        step_between_clips (int): number of frames between each clip\n        transform (callable, optional): A function/transform that  takes in a TxHxWxC video\n            and returns a transformed version.\n\n    Returns:\n        video (Tensor[T, H, W, C]): the `T` video frames\n        audio(Tensor[K, L]): the audio frames, where `K` is the number of channels\n            and `L` is the number of points\n        label (int): class of the video clip\n    \"\"\"\n\n    def __init__(self, root, frames_per_clip, step_between_clips=1, frame_rate=None,\n                 extensions=('mp4',), transform=None, cached=None, 
_precomputed_metadata=None):\n        super(Kinetics400, self).__init__(root)\n\n        classes = list(sorted(list_dir(root)))\n        class_to_idx = {classes[i]: i for i in range(len(classes))}\n        \n        self.samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file=None)\n        self.classes = classes\n        video_list = [x[0] for x in self.samples]\n        self.video_clips = VideoClips(\n            video_list,\n            frames_per_clip,\n            step_between_clips,\n            frame_rate,\n            _precomputed_metadata,\n        )\n        self.transform = transform\n\n    def __len__(self):\n        return self.video_clips.num_clips()\n\n    def __getitem__(self, idx):\n        success = False\n        while not success:\n            try:\n                video, audio, info, video_idx = self.video_clips.get_clip(idx)\n                success = True\n            except Exception:\n                print('skipped idx', idx)\n                idx = np.random.randint(self.__len__())\n\n        label = self.samples[video_idx][1]\n        if self.transform is not None:\n            video = self.transform(video)\n\n        return video, audio, label\n"
  },
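The clip-count arithmetic described in the `Kinetics400` docstring can be written down directly. A small sketch (the helper name `num_clips` is ours for illustration, not part of torchvision; `VideoClips` performs the equivalent computation internally):

```python
# A video with n frames yields floor((n - frames_per_clip) / step) + 1 clips;
# clips shorter than frames_per_clip are dropped entirely.
def num_clips(n_frames, frames_per_clip, step_between_clips):
    if n_frames < frames_per_clip:
        return 0
    return (n_frames - frames_per_clip) // step_between_clips + 1
```

For the docstring's example, videos of 10 and 15 frames with `frames_per_clip=5` and `step_between_clips=5` give 2 + 3 = 5 dataset elements.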
  {
    "path": "data/video.py",
    "content": "import os\r\nimport pdb\r\nimport glob\r\nimport json\r\nimport os.path as osp\r\n\r\nimport cv2\r\nimport numpy as np\r\n\r\nimport pycocotools.mask as mask_utils\r\nfrom utils.box import xyxy2xywh\r\n\r\nfrom torchvision.transforms import transforms as T\r\n\r\n\r\nclass LoadImages:  # for inference\r\n    def __init__(self, path, img_size=(1088, 608)):\r\n        if os.path.isdir(path):\r\n            image_format = ['.jpg', '.jpeg', '.png', '.tif']\r\n            self.files = sorted(glob.glob('%s/*.*' % path))\r\n            self.files = list(filter(lambda x: os.path.splitext(x)[1].lower()\r\n                                     in image_format, self.files))\r\n        elif os.path.isfile(path):\r\n            self.files = [path]\r\n\r\n        self.nF = len(self.files)  # number of image files\r\n        self.width = img_size[0]\r\n        self.height = img_size[1]\r\n        self.count = 0\r\n\r\n        assert self.nF > 0, 'No images found in ' + path\r\n\r\n    def __iter__(self):\r\n        self.count = -1\r\n        return self\r\n\r\n    def __next__(self):\r\n        self.count += 1\r\n        if self.count == self.nF:\r\n            raise StopIteration\r\n        img_path = self.files[self.count]\r\n\r\n        # Read image\r\n        img0 = cv2.imread(img_path)  # BGR\r\n        assert img0 is not None, 'Failed to load ' + img_path\r\n\r\n        # Padded resize\r\n        img, _, _, _ = letterbox(img0, height=self.height, width=self.width)\r\n\r\n        # Normalize RGB\r\n        img = img[:, :, ::-1].transpose(2, 0, 1)\r\n        img = np.ascontiguousarray(img, dtype=np.float32)\r\n        img /= 255.0\r\n\r\n        return img_path, img, img0\r\n\r\n    def __getitem__(self, idx):\r\n        idx = idx % self.nF\r\n        img_path = self.files[idx]\r\n\r\n        # Read image\r\n        img0 = cv2.imread(img_path)  # BGR\r\n        assert img0 is not None, 'Failed to load ' + img_path\r\n\r\n        # Padded resize\r\n        img, 
_, _, _ = letterbox(img0, height=self.height, width=self.width)\r\n\r\n        # Normalize RGB\r\n        img = img[:, :, ::-1].transpose(2, 0, 1)\r\n        img = np.ascontiguousarray(img, dtype=np.float32)\r\n        img /= 255.0\r\n\r\n        return img_path, img, img0\r\n\r\n    def __len__(self):\r\n        return self.nF  # number of files\r\n\r\n\r\nclass LoadVideo:  # for inference\r\n    def __init__(self, path, img_size=(1088, 608)):\r\n        self.cap = cv2.VideoCapture(path) \r\n        self.frame_rate = int(round(self.cap.get(cv2.CAP_PROP_FPS)))\r\n        self.vw = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))\r\n        self.vh = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\r\n        self.vn = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT))\r\n\r\n        self.width = img_size[0]\r\n        self.height = img_size[1]\r\n        self.count = 0\r\n\r\n        self.w, self.h = self.get_size(self.vw, self.vh, self.width, self.height)\r\n        print('Length of the video: {:d} frames'.format(self.vn))\r\n\r\n    def get_size(self, vw, vh, dw, dh):\r\n        wa, ha = float(dw) / vw, float(dh) / vh\r\n        a = min(wa, ha)\r\n        return int(vw * a), int(vh*a)\r\n\r\n    def __iter__(self):\r\n        self.count = -1\r\n        return self\r\n\r\n    def __next__(self):\r\n        self.count += 1\r\n        if self.count == len(self):\r\n            raise StopIteration\r\n        # Read image\r\n        res, img0 = self.cap.read()  # BGR\r\n        assert img0 is not None, 'Failed to load frame {:d}'.format(self.count)\r\n        img0 = cv2.resize(img0, (self.w, self.h))\r\n\r\n        # Padded resize\r\n        img, _, _, _ = letterbox(img0, height=self.height, width=self.width)\r\n\r\n        # Normalize RGB\r\n        img = img[:, :, ::-1]\r\n        img = np.ascontiguousarray(img, dtype=np.float32)\r\n\r\n        return self.count, img, img0\r\n   \r\n    def __len__(self):\r\n        return self.vn  # number of files\r\n\r\n\r\nclass 
LoadImagesAndObs: \r\n    def __init__(self, path, opt):\r\n        obid = opt.obid\r\n        img_size = getattr(opt,'img_size', None)\r\n        if os.path.isdir(path):\r\n            image_format = ['.jpg', '.jpeg', '.png', '.tif']\r\n            self.img_files = sorted(glob.glob('%s/*.*' % path))\r\n            self.img_files = list(filter(\r\n                lambda x: os.path.splitext(x)[1].lower() in image_format, self.img_files))\r\n        elif os.path.isfile(path):\r\n            self.img_files = [path,]\r\n\r\n        self.label_files = [x.replace('images', osp.join('obs', obid)).replace(\r\n            '.png', '.txt').replace('.jpg', '.txt') for x in self.img_files]\r\n\r\n        self.nF = len(self.img_files)  # number of image files\r\n        self.transforms = T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n        self.use_lab = getattr(opt, 'use_lab', False)\r\n        if not img_size is None:\r\n            self.width = img_size[0]\r\n            self.height = img_size[1]\r\n\r\n    def __getitem__(self, files_index):\r\n        img_path = self.img_files[files_index]\r\n        label_path = self.label_files[files_index]\r\n        return self.get_data(img_path, label_path)\r\n\r\n    def get_data(self, img_path, label_path):\r\n        height = self.height\r\n        width = self.width\r\n        img_ori = cv2.imread(img_path)  # BGR\r\n        if img_ori is None:\r\n            raise ValueError('File corrupt {}'.format(img_path))\r\n\r\n        h, w, _ = img_ori.shape\r\n        img, ratio, padw, padh = letterbox(img_ori, height=height, width=width)\r\n\r\n        # Load labels\r\n        if os.path.isfile(label_path):\r\n            labels0 = np.loadtxt(label_path, dtype=np.float32).reshape(-1, 5)\r\n            # Normalized xywh to pixel xyxy format\r\n            labels = labels0.copy()\r\n            labels[:, 0] = ratio * w * (labels0[:, 0] - labels0[:, 2] / 2) + padw\r\n            labels[:, 1] = ratio * h * (labels0[:, 1] 
- labels0[:, 3] / 2) + padh\r\n            labels[:, 2] = ratio * w * (labels0[:, 0] + labels0[:, 2] / 2) + padw\r\n            labels[:, 3] = ratio * h * (labels0[:, 1] + labels0[:, 3] / 2) + padh\r\n        else:\r\n            labels = np.array([])\r\n        nL = len(labels)\r\n        if nL > 0:\r\n            # convert xyxy to xywh\r\n            labels[:, 0:4] = xyxy2xywh(labels[:, 0:4].copy())\r\n            labels[:, 0] /= width\r\n            labels[:, 1] /= height\r\n            labels[:, 2] /= width\r\n            labels[:, 3] /= height\r\n       \r\n        if self.use_lab:\r\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n            img = np.array([img[:, :, 0], ]*3)\r\n            img = img.transpose(1, 2, 0)\r\n        img = img / 255.\r\n        img = np.ascontiguousarray(img[:, :, ::-1])  # BGR to RGB\r\n        if self.transforms is not None:\r\n            img = self.transforms(img)\r\n\r\n        return img, labels, img_ori, (h, w)\r\n\r\n    def __len__(self):\r\n        return self.nF  # number of batches\r\n\r\nclass LoadImagesAndObsTAO:\r\n    def __init__(self, root, video_meta, obs, opt):\r\n        self.dataroot = root\r\n        self.img_ind = [x['id'] for x in video_meta]\r\n        self.img_files = [x['file_name'] for x in video_meta]\r\n        self.img_files = [osp.join(root, 'frames', x) for x in self.img_files]\r\n        self.obs = [obs.get(x, []) for x in self.img_ind]\r\n        self.use_lab = getattr(opt, 'use_lab', False)\r\n        self.transforms = T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n\r\n    def __getitem__(self, index):\r\n        img_ori = cv2.imread(self.img_files[index])\r\n        if img_ori is None:\r\n            raise ValueError('File corrupt {}'.format(self.img_files[index]))\r\n\r\n        h, w, _ = img_ori.shape\r\n        img = img_ori\r\n        if self.use_lab:\r\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n            img = np.array([img[:,:,0],]*3)\r\n            
img = img.transpose(1,2,0)\r\n        img = img / 255.\r\n        img = np.ascontiguousarray(img[ :, :, ::-1]) # BGR to RGB\r\n        if self.transforms is not None:\r\n            img = self.transforms(img)\r\n\r\n        obs = self.obs[index]\r\n        if len(obs) == 0:\r\n            labels = np.array([[0,0,1,1,-1,-1]])\r\n        else:\r\n            boxes = np.array([x.get('bbox', [0,0,1,1]) for x in obs])\r\n            scores = np.array([x.get('score', 0) for x in obs])[:, None]\r\n            cat_ids = np.array([x.get('category_id',-1) for x in obs])[:, None]\r\n            labels = np.concatenate([boxes, scores, cat_ids], axis=1)\r\n            if len(labels) > 0:\r\n                # From tlwh to xywh: (x,y) is the box center\r\n                labels[:, 0] = labels[:, 0] + labels[:, 2] / 2\r\n                labels[:, 1] = labels[:, 1] + labels[:, 3] / 2\r\n                labels[:, 0] /= w\r\n                labels[:, 1] /= h\r\n                labels[:, 2] /= w\r\n                labels[:, 3] /= h\r\n\r\n        return img, labels, img_ori, (h,w)\r\n\r\n    def __len__(self):\r\n        return len(self.img_files)\r\n\r\n\r\n\r\nclass LoadImagesAndMaskObsVIS:\r\n    def __init__(self, path, info, obs, opt):\r\n        self.dataroot = path\r\n        self.nF = info['length']\r\n        self.img_files = [osp.join(path, p) for p in info['file_names']]\r\n        self.obsbyobj = obs\r\n        self.transforms = T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n        self.use_lab = getattr(opt, 'use_lab', False)\r\n\r\n\r\n    def __getitem__(self, idx):\r\n        img_ori = cv2.imread(self.img_files[idx])\r\n        if img_ori is None:\r\n            raise ValueError('File corrupt {}'.format(self.img_files[idx]))\r\n\r\n        h, w, _ = img_ori.shape\r\n        img = img_ori\r\n        if self.use_lab:\r\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n            img = np.array([img[:,:,0],]*3)\r\n            img = 
img.transpose(1,2,0)\r\n        img = img / 255.\r\n        img = np.ascontiguousarray(img[ :, :, ::-1]) # BGR to RGB\r\n        if self.transforms is not None:\r\n            img = self.transforms(img)\r\n\r\n\r\n        labels = list()\r\n        for obj in self.obsbyobj:\r\n            RLE = obj['segmentations'][idx]\r\n            if RLE: labels.append(mask_utils.decode(RLE))\r\n            else: labels.append(np.zeros((h, w), dtype=np.uint8))\r\n        labels = np.stack(labels)\r\n\r\n        return img, labels, img_ori, (h, w)\r\n\r\n    def __len__(self):\r\n        return self.nF\r\n\r\n    \r\nclass LoadImagesAndMaskObsMOTS(LoadImagesAndObs): \r\n    def __init__(self, path, opt):\r\n        super(LoadImagesAndMaskObsMOTS, self).__init__(path, opt)\r\n\r\n    def get_data(self, img_path, label_path):\r\n        img_ori = cv2.imread(img_path)  # BGR\r\n        if img_ori is None:\r\n            raise ValueError('File corrupt {}'.format(img_path))\r\n        h, w, _ = img_ori.shape\r\n\r\n        img = img_ori\r\n        if self.use_lab:\r\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n            img = np.array([img[:,:,0],]*3)\r\n            img = img.transpose(1,2,0)\r\n        img = img / 255.\r\n        img = np.ascontiguousarray(img[ :, :, ::-1]) # BGR to RGB\r\n        if self.transforms is not None:\r\n            img = self.transforms(img)\r\n\r\n        # Load labels\r\n        labels = []\r\n        if os.path.isfile(label_path):\r\n            with open(label_path, 'r') as f:\r\n                for line in f:\r\n                    labels.append(line.strip().split())\r\n        nL = len(labels)\r\n        if nL > 0:\r\n            labels = [{'size':(int(h),int(w)), 'counts':m} for \\\r\n                    _, _,cid,h,w,m in labels if cid=='2']\r\n            labels = [mask_utils.decode(rle) for rle in labels]\r\n        if len(labels) > 0:\r\n            labels = np.stack(labels)\r\n        else:\r\n            # np.stack raises on an empty list; return an empty mask tensor\r\n            labels = np.zeros((0, h, w), dtype=np.uint8)\r\n        return img, labels, img_ori, (h, w)\r\n\r\n\r\nclass 
LoadImagesAndPoseObs(LoadImagesAndObs): \r\n    def __init__(self, obs_jpath, opt):\r\n        with open(obs_jpath, 'r') as fjson:\r\n            self.infoj = json.load(fjson)['annolist']\r\n        self.dataroot = opt.data_root\r\n        self.nF = len(self.infoj)\r\n        self.img_files = [osp.join(opt.data_root, p['image'][0]['name']) for p in self.infoj]\r\n        self.transforms = T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n        self.use_lab = getattr(opt, 'use_lab', False)\r\n\r\n    def __getitem__(self, idx):\r\n        img_ori = cv2.imread(self.img_files[idx])\r\n        if img_ori is None:\r\n            raise ValueError('File corrupt {}'.format(self.img_files[idx]))\r\n\r\n        h, w, _ = img_ori.shape\r\n        img = img_ori\r\n        \r\n        if self.use_lab:\r\n            img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n            img = np.array([img[:,:,0],]*3)\r\n            img = img.transpose(1,2,0)\r\n        \r\n        img = img / 255.\r\n        img = np.ascontiguousarray(img[ :, :, ::-1]) # BGR to RGB\r\n        if self.transforms is not None:\r\n            img = self.transforms(img)\r\n\r\n        info_label = self.infoj[idx]['annorect']\r\n        labels = [l['annopoints'][0]['point'] for l in info_label]\r\n\r\n        return img, labels, img_ori, (h, w)\r\n        \r\n\r\n\r\ndef letterbox(img, height=608, width=1088, color=(127.5, 127.5, 127.5)):  # resize a rectangular image to a padded rectangular \r\n    shape = img.shape[:2]  # shape = [height, width]\r\n    ratio = min(float(height)/shape[0], float(width)/shape[1])\r\n    new_shape = (round(shape[1] * ratio), round(shape[0] * ratio)) # new_shape = [width, height]\r\n    dw = (width - new_shape[0]) / 2  # width padding\r\n    dh = (height - new_shape[1]) / 2  # height padding\r\n    top, bottom = round(dh - 0.1), round(dh + 0.1)\r\n    left, right = round(dw - 0.1), round(dw + 0.1)\r\n    img = 
cv2.resize(img, new_shape, interpolation=cv2.INTER_AREA)  # resized, no border\r\n    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # padded rectangular\r\n    return img, ratio, dw, dh\r\n\r\n"
  },
  {
    "path": "data/vos.py",
    "content": "from __future__ import print_function, absolute_import\n\nimport os\nimport pdb\nimport os.path as osp\nimport numpy as np\nimport math\nimport cv2\nimport torch\nimport time\nfrom matplotlib import cm\nfrom utils import im_to_numpy, im_to_torch\n\n\ndef resize(img, owidth, oheight):\n    img = im_to_numpy(img)\n    img = cv2.resize(img, (owidth, oheight))\n    img = im_to_torch(img)\n    return img\n\n\ndef load_image(img):\n    # H x W x C => C x H x W\n    if isinstance(img, str):\n        img = cv2.imread(img)\n    if len(img.shape) == 2:\n        img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)\n    img = img.astype(np.float32)\n    img = img / 255.0\n    img = img[:, :, ::-1]\n    img = img.copy()\n    return im_to_torch(img)\n\n\ndef color_normalize(x, mean, std):\n    if x.size(0) == 1:\n        x = x.repeat(3, 1, 1)\n    for t, m, s in zip(x, mean, std):\n        t.sub_(m)\n        t.div_(s)\n    return x\n\n######################################################################\ndef try_np_load(p):\n    try:\n        return np.load(p)\n    except Exception:\n        return None\n\ndef make_lbl_set(lbls):\n    flat_lbls_0 = lbls[0].copy().reshape(-1, lbls.shape[-1]).astype(np.uint8)\n    lbl_set = np.unique(flat_lbls_0, axis=0)\n\n    return lbl_set\n\ndef texturize(onehot):\n    flat_onehot = onehot.reshape(-1, onehot.shape[-1])\n    lbl_set = np.unique(flat_onehot, axis=0)\n\n    count_lbls = [np.all(flat_onehot == ll, axis=-1).sum() for ll in lbl_set]\n    object_id = np.argsort(count_lbls)[::-1][1]\n\n    hidxs = []\n    for h in range(onehot.shape[0]):\n        appears = np.any(onehot[h, :, 1:] == 1)\n        if appears:\n            hidxs.append(h)\n\n    nstripes = min(10, len(hidxs))\n\n    out = np.zeros((*onehot.shape[:2], nstripes+1))\n    out[:, :, 0] = 1\n\n    for i, h in enumerate(hidxs):\n        cidx = int(i // (len(hidxs) / nstripes))\n        w = np.any(onehot[h, :, 1:] == 1, 
axis=-1)\n        out[h][w] = 0\n        out[h][w, cidx+1] = 1\n\n    return out\n\n\nclass VOSDataset(torch.utils.data.Dataset):\n    def __init__(self, args):\n\n        self.davisroot = args.davisroot\n        self.split = args.split\n        self.imgSize = args.imgSize\n        self.videoLen = args.videoLen\n        self.mapScale = args.mapScale\n\n        self.texture = False \n        self.round = False \n        self.use_lab = getattr(args, 'use_lab', False)\n        self.im_mean = args.im_mean\n        self.im_std = args.im_std\n\n        filelist = osp.join(self.davisroot, 'ImageSets/2017', self.split+'.txt')\n        f = open(filelist, 'r')\n        self.jpgfiles = []\n        self.lblfiles = []\n\n        for line in f:\n            seq = line.strip()\n\n            self.jpgfiles.append(osp.join(self.davisroot,'JPEGImages','480p', seq))\n            self.lblfiles.append(osp.join(self.davisroot, 'Annotations','480p', seq))\n\n        f.close()\n    \n    def get_onehot_lbl(self, lbl_path):\n        name = '/' + '/'.join(lbl_path.split('.')[:-1]) + '_onehot.npy'\n        if os.path.exists(name):\n            return np.load(name)\n        else:\n            return None\n    \n\n    def make_paths(self, folder_path, label_path):\n        I, L = os.listdir(folder_path), os.listdir(label_path)\n        L = [ll for ll in L if 'npy' not in ll]\n\n        frame_num = len(I) + self.videoLen\n        I.sort(key=lambda x:int(x.split('.')[0]))\n        L.sort(key=lambda x:int(x.split('.')[0]))\n\n        I_out, L_out = [], []\n\n        for i in range(frame_num):\n            i = max(0, i - self.videoLen)\n            img_path = \"%s/%s\" % (folder_path, I[i])\n            lbl_path = \"%s/%s\" % (label_path,  L[i])\n\n            I_out.append(img_path)\n            L_out.append(lbl_path)\n\n        return I_out, L_out\n\n\n    def __getitem__(self, index):\n\n        folder_path = self.jpgfiles[index]\n        label_path = self.lblfiles[index]\n\n        imgs = []\n  
      imgs_orig = []\n        lbls = []\n        lbls_onehot = []\n        patches = []\n        target_imgs = []\n\n        frame_num = len(os.listdir(folder_path)) + self.videoLen\n\n        img_paths, lbl_paths = self.make_paths(folder_path, label_path)\n\n        t000 = time.time()\n\n        for i in range(frame_num):\n            t00 = time.time()\n\n            img_path, lbl_path = img_paths[i], lbl_paths[i]\n            img = load_image(img_path)  # CxHxW\n            lblimg = cv2.imread(lbl_path)\n\n\n            '''\n            Resize img to 320x320\n            '''\n            ht, wd = img.size(1), img.size(2)\n            if self.imgSize > 0:\n                newh, neww = ht, wd\n\n                if ht <= wd:\n                    ratio  = 1.0 #float(wd) / float(ht)\n                    # width, height\n                    img = resize(img, int(self.imgSize * ratio), self.imgSize)\n                    newh = self.imgSize\n                    neww = int(self.imgSize * ratio)\n                else:\n                    ratio  = 1.0 #float(ht) / float(wd)\n                    # width, height\n                    img = resize(img, self.imgSize, int(self.imgSize * ratio))\n                    newh = int(self.imgSize * ratio)\n                    neww = self.imgSize\n\n                # cv2.resize takes dsize as (width, height)\n                lblimg = cv2.resize(lblimg, (neww, newh), interpolation=cv2.INTER_NEAREST)\n\n            # Resized, but not augmented image\n            img_orig = img.clone()\n            '''\n            Transforms\n            '''\n            if self.use_lab:\n                img = im_to_numpy(img)\n                img = (img * 255).astype(np.uint8)[:,:,::-1]\n                img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\n                img = im_to_torch(img) / 255.\n                img = color_normalize(img, self.im_mean, self.im_std)\n                img = torch.stack([img[0]]*3)\n            else:\n                img = color_normalize(img, self.im_mean, self.im_std)\n\n            
imgs_orig.append(img_orig)\n            imgs.append(img)\n            lbls.append(lblimg.copy())\n            \n        # Meta info\n        meta = dict(folder_path=folder_path, img_paths=img_paths, lbl_paths=lbl_paths)\n\n        ########################################################\n        # Load reshaped label information (load cached versions if possible)\n        lbls = np.stack(lbls)\n        prefix = '/' + '/'.join(lbl_paths[0].split('.')[:-1])\n\n        # Get lblset\n        lblset = make_lbl_set(lbls)\n\n        if np.all((lblset[1:] - lblset[:-1]) == 1):\n            lblset = lblset[:, 0:1]\n\n        onehots = []\n        resizes = []\n\n        rsz_h, rsz_w = math.ceil(img.size(1) / self.mapScale[0]), math.ceil(img.size(2) / self.mapScale[1])\n\n        for i, p in enumerate(lbl_paths):\n            prefix = '/' + '/'.join(p.split('.')[:-1])\n            oh_path = \"%s_%s.npy\" % (prefix, 'onehot')\n            rz_path = \"%s_%s.npy\" % (prefix, 'size%sx%s' % (rsz_h, rsz_w))\n\n            onehot = try_np_load(oh_path)\n            if onehot is None:\n                print('computing onehot lbl for', oh_path)\n                onehot = np.stack([np.all(lbls[i] == ll, axis=-1) for ll in lblset], axis=-1)\n                np.save(oh_path, onehot)\n\n            resized = try_np_load(rz_path)\n            if resized is None:\n                print('computing resized lbl for', rz_path)\n                resized = cv2.resize(np.float32(onehot), (rsz_w, rsz_h), interpolation=cv2.INTER_LINEAR)\n                np.save(rz_path, resized)\n\n            if self.texture:\n                texturized = texturize(resized)\n                resizes.append(texturized)\n                lblset = np.array([[0, 0, 0]] + [cm.Paired(i)[:3] for i in range(texturized.shape[-1])]) * 255.0\n                break\n            else:\n                resizes.append(resized)\n                onehots.append(onehot)\n\n        if self.texture:\n            resizes = 
resizes * self.videoLen\n            for _ in range(len(lbl_paths)-self.videoLen):\n                resizes.append(np.zeros(resizes[0].shape))\n            onehots = resizes\n\n        ########################################################\n        \n        imgs = torch.stack(imgs)\n        imgs_orig = torch.stack(imgs_orig)\n        lbls_tensor = torch.from_numpy(np.stack(lbls))\n        lbls_resize = np.stack(resizes)\n\n        assert lbls_resize.shape[0] == len(meta['lbl_paths'])\n\n        return imgs, imgs_orig, lbls_resize, lbls_tensor, lblset, meta\n\n    def __len__(self):\n        return len(self.jpgfiles)\n\n\n\n\n\n\n"
  },
  {
    "path": "demo/mot_demo.py",
    "content": "###################################################################\n# File Name: mot_demo.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Sat Jul 24 16:07:23 2021\n###################################################################\n\nimport os\nimport sys\nimport yaml\nimport argparse\nimport os.path as osp\nfrom loguru import logger\n\nimport cv2\nimport torch\nimport numpy as np\nfrom torchvision.transforms import transforms as T\n\nsys.path[0] = os.getcwd()\nfrom data.video import LoadVideo\nfrom utils.meter import Timer\nfrom utils import visualize as vis\nfrom detector.YOLOX.yolox.exp import get_exp\nfrom detector.YOLOX.yolox.utils import get_model_info\nfrom detector.YOLOX.yolox.data.datasets import COCO_CLASSES\nfrom detector.YOLOX.tools.demo import Predictor\n\nfrom utils.box import scale_box_input_size\nfrom tracker.mot.box import BoxAssociationTracker\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX + UniTrack MOT demo\")\n    # Common arguments\n    parser.add_argument('--demo', default='video',\n                        help='demo type, eg. 
video or webcam')\n    parser.add_argument('--path', default='./docs/test_video.mp4',\n                        help='path to images or video')\n    parser.add_argument('--save_result', action='store_true',\n                        help='whether to save result')\n    parser.add_argument(\"--nms\", default=None, type=float,\n                        help=\"test nms threshold\")\n    parser.add_argument(\"--tsize\", default=[640, 480], type=int, nargs='+',\n                        help=\"test img size\")\n    parser.add_argument(\"--exp_file\", type=str,\n                        default='./detector/YOLOX/exps/default/yolox_x.py',\n                        help=\"please input your experiment description file\")\n    parser.add_argument('--output-root', default='./results/mot_demo',\n                        help='output directory')\n    parser.add_argument('--classes', type=int, nargs='+',\n                        default=list(range(90)), help='COCO_CLASSES')\n\n    # Detector related\n    parser.add_argument(\"-c\", \"--ckpt\",  type=str,\n                        default='./detector/YOLOX/weights/yolox_x.pth',\n                        help=\"model weights of the detector\")\n    parser.add_argument(\"--conf\", default=0.65, type=float,\n                        help=\"detection confidence threshold\")\n\n    # UniTrack related\n    parser.add_argument('--config', type=str, help='tracker config file',\n                        default='./config/imagenet_resnet18_s3.yaml')\n\n    return parser\n\n\ndef dets2obs(dets, imginfo, cls):\n    if dets is None or len(dets) == 0:\n        return np.array([])\n    obs = dets.cpu().numpy()\n    h, w = imginfo['height'], imginfo['width']\n    # To xywh\n    ret = np.zeros((len(obs), 6))\n    ret[:, 0] = (obs[:, 0] + obs[:, 2]) * 0.5 / w\n    ret[:, 1] = (obs[:, 1] + obs[:, 3]) * 0.5 / h\n    ret[:, 2] = (obs[:, 2] - obs[:, 0]) / w\n    ret[:, 3] = (obs[:, 3] - obs[:, 1]) / h\n    ret[:, 4] = obs[:, 4] * obs[:, 5]\n    ret[:, 5] = obs[:, 
6]\n\n    ret = [r for r in ret if int(r[5]) in cls]\n    ret = np.array(ret)\n\n    return ret\n\n\ndef eval_seq(opt, dataloader, detector, tracker,\n             result_filename, save_dir=None,\n             show_image=True):\n    transforms = T.Compose([T.ToTensor(),\n                            T.Normalize(opt.im_mean, opt.im_std)])\n    if save_dir:\n        os.makedirs(save_dir, exist_ok=True)\n    timer = Timer()\n    results = []\n    for frame_id, (_, img, img0) in enumerate(dataloader):\n        if frame_id % 20 == 0:\n            logger.info('Processing frame {} ({:.2f} fps)'.format(\n                frame_id, 1./max(1e-5, timer.average_time)))\n\n        # run tracking\n        timer.tic()\n        det_outputs, img_info = detector.inference(img)\n        img = img / 255.\n        img = transforms(img)\n        obs = dets2obs(det_outputs[0], img_info, opt.classes)\n        if len(obs) == 0:\n            online_targets = []\n        else:\n            online_targets = tracker.update(img, img0, obs)\n        online_tlwhs = []\n        online_ids = []\n        for t in online_targets:\n            tlwh = t.tlwh\n            tid = t.track_id\n            online_tlwhs.append(tlwh)\n            online_ids.append(tid)\n        timer.toc()\n        # save results\n        results.append((frame_id + 1, online_tlwhs, online_ids))\n        if show_image or save_dir is not None:\n            online_im = vis.plot_tracking(\n                    img0, online_tlwhs, online_ids, frame_id=frame_id,\n                    fps=1. 
/ timer.average_time)\n        if show_image:\n            cv2.imshow('online_im', online_im)\n        if save_dir is not None:\n            cv2.imwrite(os.path.join(\n                save_dir, '{:05d}.jpg'.format(frame_id)), online_im)\n    return frame_id, timer.average_time, timer.calls\n\n\ndef main(exp, args):\n    logger.info(\"Args: {}\".format(args))\n\n    # Data, I/O\n    dataloader = LoadVideo(args.path, args.tsize)\n    video_name = osp.basename(args.path).split('.')[0]\n    result_root = osp.join(args.output_root, video_name)\n    result_filename = os.path.join(result_root, 'results.txt')\n    args.frame_rate = dataloader.frame_rate\n\n    # Detector init\n    det_model = exp.get_model()\n    logger.info(\"Model Summary: {}\".format(\n        get_model_info(det_model, exp.test_size)))\n    det_model.cuda()\n    det_model.eval()\n    logger.info(\"loading checkpoint\")\n    ckpt = torch.load(args.ckpt, map_location=\"cpu\")\n    # load the model state dict\n    det_model.load_state_dict(ckpt[\"model\"])\n    logger.info(\"loaded checkpoint done.\")\n    detector = Predictor(det_model, exp, COCO_CLASSES, None, None, 'gpu')\n\n    # Tracker init\n    tracker = BoxAssociationTracker(args)\n\n    frame_dir = osp.join(result_root, 'frame')\n    try:\n        eval_seq(args, dataloader, detector, tracker, result_filename,\n                 save_dir=frame_dir, show_image=False)\n    except Exception as e:\n        logger.error(e)\n\n    output_video_path = osp.join(result_root, video_name+'.avi')\n    cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(\n            osp.join(result_root, 'frame'), output_video_path)\n    os.system(cmd_str)\n\n\nif __name__ == '__main__':\n    args = make_parser().parse_args()\n    with open(args.config) as f:\n        common_args = yaml.load(f, Loader=yaml.FullLoader)\n    for k, v in common_args['common'].items():\n        setattr(args, k, v)\n    for k, v in common_args['mot'].items():\n        setattr(args, k, v)\n    exp = 
get_exp(args.exp_file, None)\n    if args.conf is not None:\n        args.conf_thres = args.conf\n        exp.test_conf = args.conf\n    if args.nms is not None:\n        exp.nmsthre = args.nms\n    if args.tsize is not None:\n        exp.test_size = args.tsize[::-1]\n        args.img_size = args.tsize\n    args.classes = [x for x in args.classes]\n    main(exp, args)\n"
  },
  {
    "path": "demo/sot_demo.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\r\n# ------------------------------------------------------------------------------\r\n\r\nimport os\r\nimport pdb\r\nimport sys\r\nsys.path[0] = os.getcwd()\r\nimport cv2\r\nimport yaml\r\nimport argparse\r\nfrom PIL import Image\r\nfrom glob import glob\r\nfrom os.path import exists, join\r\nfrom easydict import EasyDict as edict\r\n\r\nimport torch\r\nimport numpy as np\r\n\r\nimport tracker.sot.lib.models as models\r\nfrom tracker.sot.lib.utils.utils import  load_dataset, crop_chw, \\\r\n    gaussian_shaped_labels, cxy_wh_2_rect1, rect1_2_cxy_wh, cxy_wh_2_bbox\r\nfrom tracker.sot.lib.core.eval_otb import eval_auc_tune\r\n\r\nimport utils\r\nfrom model import AppearanceModel, partial_load\r\nfrom data.vos import color_normalize, load_image, im_to_numpy, im_to_torch\r\n\r\n\r\ndef get_frames(video_name):\r\n    if not video_name:\r\n        cap = cv2.VideoCapture(0)\r\n        # warmup\r\n        for i in range(5):\r\n            cap.read()\r\n        while True:\r\n            ret, frame = cap.read()\r\n            if ret:\r\n                yield frame\r\n            else:\r\n                break\r\n    elif video_name.endswith('avi') or video_name.endswith('mp4'):\r\n        cap = cv2.VideoCapture(video_name)\r\n        while True:\r\n            ret, frame = cap.read()\r\n            if ret:\r\n                yield frame\r\n            else:\r\n                break\r\n    else:\r\n        images = glob(os.path.join(video_name, '*.jp*'))\r\n        images = sorted(images,\r\n                        key=lambda x: int(x.split('/')[-1].split('.')[0]))\r\n        for img in images:\r\n            frame = cv2.imread(img)\r\n            yield frame\r\n\r\n\r\ndef preproc(img, im_mean, im_std, use_lab=False):\r\n    img = load_image(img)\r\n    if 
use_lab:\r\n        img = im_to_numpy(img)\r\n        img = (img*255).astype(np.uint8)[:, :, ::-1]\r\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n        img = im_to_torch(img) / 255.\r\n    img = color_normalize(img, im_mean, im_std)\r\n    if use_lab:\r\n        img = torch.stack([img[0], ]*3)\r\n    img = img.permute(1, 2, 0).numpy()  # H, W, C\r\n    return img\r\n\r\n\r\nclass TrackerConfig(object):\r\n    crop_sz = 512 + 8\r\n    downscale = 8\r\n    temp_sz = crop_sz // downscale\r\n\r\n    lambda0 = 1e-4\r\n    padding = 3.5\r\n    interp_factor = 0.01\r\n    num_scale = 3\r\n    scale_step = 1.0275\r\n    scale_factor = scale_step ** (np.arange(num_scale) - num_scale // 2)\r\n    min_scale_factor = 0.2\r\n    max_scale_factor = 5\r\n    scale_penalty = 0.985\r\n    scale_penalties = scale_penalty ** (np.abs((np.arange(num_scale) - num_scale // 2)))\r\n\r\n    net_output_size = [temp_sz, temp_sz]\r\n    cos_window = torch.Tensor(np.outer(np.hanning(temp_sz), np.hanning(temp_sz))).cuda()\r\n\r\n\r\ndef track(net, args):\r\n    toc = 0\r\n    config = TrackerConfig()\r\n    video_name = os.path.basename(args.input) if args.input else 'webcam'\r\n    regions = []  # FINAL RESULTS\r\n    for f, img_raw in enumerate(get_frames(args.input)):\r\n        img_raw = cv2.resize(img_raw, (640,480))\r\n        use_lab = getattr(args, 'use_lab', False)\r\n        im = preproc(img_raw, args.im_mean, args.im_std, use_lab)\r\n        tic = cv2.getTickCount()\r\n        # Init\r\n        if f == 0:\r\n            try:\r\n                init_rect = cv2.selectROI(video_name, img_raw, False, False)\r\n            except Exception:\r\n                exit()\r\n            target_pos, target_sz = rect1_2_cxy_wh(init_rect)\r\n            min_sz = np.maximum(config.min_scale_factor * target_sz, 4)\r\n            max_sz = np.minimum(im.shape[:2], config.max_scale_factor * target_sz)\r\n\r\n            # crop template\r\n            window_sz = target_sz * (1 + 
config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            target = patch\r\n            net.update(torch.Tensor(np.expand_dims(target, axis=0)).cuda(), lr=1)\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))\r\n            patch_crop = np.zeros((config.num_scale, patch.shape[0],\r\n                                   patch.shape[1], patch.shape[2]), np.float32)\r\n        # Track\r\n        else:\r\n            for i in range(config.num_scale):  # crop multi-scale search region\r\n                window_sz = target_sz * (config.scale_factor[i] * (1 + config.padding))\r\n                bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n                patch_crop[i, :] = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            search = patch_crop\r\n            response = net(torch.Tensor(search).cuda())\r\n            net_output_size = [response.shape[-2], response.shape[-1]]\r\n            peak, idx = torch.max(response.view(config.num_scale, -1), 1)\r\n            peak = peak.data.cpu().numpy() * config.scale_penalties\r\n            best_scale = np.argmax(peak)\r\n            r_max, c_max = np.unravel_index(idx[best_scale].cpu(), net_output_size)\r\n\r\n            r_max = r_max - net_output_size[0] * 0.5\r\n            c_max = c_max - net_output_size[1] * 0.5\r\n            window_sz = target_sz * (config.scale_factor[best_scale] * (1 + config.padding))\r\n\r\n            target_pos = target_pos + np.array([c_max, r_max]) * window_sz / net_output_size\r\n            target_sz = np.minimum(np.maximum(window_sz / (1 + config.padding), min_sz), max_sz)\r\n\r\n            # model update\r\n            window_sz = target_sz * (1 + config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n            target = patch\r\n\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))  
# 1-index\r\n\r\n        toc += cv2.getTickCount() - tic\r\n        \r\n        bbox = list(map(int, regions[-1]))\r\n        cv2.rectangle(img_raw, (bbox[0], bbox[1]),\r\n                      (bbox[0]+bbox[2], bbox[1]+bbox[3]), (0, 255, 0), 3)\r\n        cv2.imshow(video_name, img_raw)\r\n        cv2.waitKey(40)\r\n\r\n    toc /= cv2.getTickFrequency()\r\n\r\n\r\ndef main():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    parser.add_argument('--input', required=True, type=str)\r\n    args = parser.parse_args()\r\n\r\n    with open(args.config) as f:\r\n        common_args = yaml.load(f, Loader=yaml.FullLoader)\r\n    for k, v in common_args['common'].items():\r\n        setattr(args, k, v)\r\n    for k, v in common_args['sot'].items():\r\n        setattr(args, k, v)\r\n    args.arch = 'SiamFC'\r\n\r\n    # prepare model\r\n    base = AppearanceModel(args).to(args.device)\r\n    print('Total params: %.2fM' %\r\n          (sum(p.numel() for p in base.parameters())/1e6))\r\n    print(base)\r\n\r\n    net = models.__dict__[args.arch](base=base, config=TrackerConfig())\r\n    net.eval()\r\n    net = net.cuda()\r\n\r\n    track(net, args)\r\n\r\n\r\nif __name__ == '__main__':\r\n    main()\r\n\r\n"
  },
  {
    "path": "detector/YOLOX/.gitignore",
    "content": "### Linux ###\n*~\n\n# temporary files which can be created if a process still has a handle open of a deleted file\n.fuse_hidden*\n\n# KDE directory preferences\n.directory\n\n# Linux trash folder which might appear on any partition or disk\n.Trash-*\n\n# .nfs files are created when an open file is removed but is still being accessed\n.nfs*\n\n### PyCharm ###\n# User-specific stuff\n.idea\n\n# CMake\ncmake-build-*/\n\n# Mongo Explorer plugin\n.idea/**/mongoSettings.xml\n\n# File-based project format\n*.iws\n\n# IntelliJ\nout/\n\n# mpeltonen/sbt-idea plugin\n.idea_modules/\n\n# JIRA plugin\natlassian-ide-plugin.xml\n\n# Cursive Clojure plugin\n.idea/replstate.xml\n\n# Crashlytics plugin (for Android Studio and IntelliJ)\ncom_crashlytics_export_strings.xml\ncrashlytics.properties\ncrashlytics-build.properties\nfabric.properties\n\n# Editor-based Rest Client\n.idea/httpRequests\n\n# Android studio 3.1+ serialized cache file\n.idea/caches/build_file_checksums.ser\n\n# JetBrains templates\n**___jb_tmp___\n\n### Python ###\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx 
documentation\ndocs/_build/\ndocs/build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don’t work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n### Vim ###\n# Swap\n[._]*.s[a-v][a-z]\n[._]*.sw[a-p]\n[._]s[a-rt-v][a-z]\n[._]ss[a-gi-z]\n[._]sw[a-p]\n\n# Session\nSession.vim\n\n# Temporary\n.netrwhist\n# Auto-generated tag files\ntags\n# Persistent undo\n[._]*.un~\n\n# output\ndocs/api\n.code-workspace.code-workspace\n*.pkl\n*.npy\n*.pth\n*.onnx\nevents.out.tfevents*\n\n# vscode\n*.code-workspace\n.vscode\n\n# vim\n.vim\n"
  },
  {
    "path": "detector/YOLOX/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2021 Megvii, Base Detection\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "detector/YOLOX/README.md",
    "content": "<div align=\"center\"><img src=\"assets/logo.png\" width=\"350\"></div>\n<img src=\"assets/demo.png\" >\n\n## Introduction\nYOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities.\nFor more details, please refer to our [report on Arxiv](https://arxiv.org/abs/2107.08430).\n\n<img src=\"assets/git_fig.png\" width=\"1000\" >\n\n## Updates!!\n* 【2021/07/20】 We have released our technical report on [Arxiv](https://arxiv.org/abs/2107.08430).\n\n## Comming soon\n- [ ] YOLOX-P6 and larger model.\n- [ ] Objects365 pretrain.\n- [ ] Transformer modules.\n- [ ] More features in need.\n\n## Benchmark\n\n#### Standard Models.\n|Model |size |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |\n| ------        |:---: | :---:       |:---:     |:---:  | :---: | :----: |\n|[YOLOX-s](./exps/default/yolox_s.py)    |640  |39.6      |9.8     |9.0 | 26.8 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.pth) |\n|[YOLOX-m](./exps/default/yolox_m.py)    |640  |46.4      |12.3     |25.3 |73.8| [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m.pth) |\n|[YOLOX-l](./exps/default/yolox_l.py)    |640  |50.0  |14.5 |54.2| 155.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l.pth) |\n|[YOLOX-x](./exps/default/yolox_x.py)   |640  |**51.2**      | 17.3 |99.1 |281.9 | 
[onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x.pth) |\n|[YOLOX-Darknet53](./exps/default/yolov3.py)   |640  | 47.4      | 11.1 |63.7 | 185.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53.pth) |\n\n#### Light Models.\n|Model |size |mAP<sup>val<br>0.5:0.95 | Params<br>(M) |FLOPs<br>(G)| weights |\n| ------        |:---:  |  :---:       |:---:     |:---:  | :---: |\n|[YOLOX-Nano](./exps/default/nano.py) |416  |25.3  | 0.91 |1.08 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdcREey-krhLtdtSnxolxiUBjWMy6EFdiaO9bdOwZ5ygCQ?e=yQpdds)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.pth) |\n|[YOLOX-Tiny](./exps/default/yolox_tiny.py) |416  |31.7 | 5.06 |6.45 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EYtjNFPqvZBBrQ-VowLcSr4B6Z5TdTflUsr_gO2CwhC3bQ?e=SBTwXj)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny.pth) |\n\n## Quick Start\n\n<details>\n<summary>Installation</summary>\n\nStep1. Install YOLOX.\n```shell\ngit clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npip3 install -U pip && pip3 install -r requirements.txt\npip3 install -v -e .  # or  python3 setup.py develop\n```\nStep2. Install [apex](https://github.com/NVIDIA/apex).\n\n```shell\n# skip this step if you don't want to train models.\ngit clone https://github.com/NVIDIA/apex\ncd apex\npip3 install -v --disable-pip-version-check --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./\n```\nStep3. 
Install [pycocotools](https://github.com/cocodataset/cocoapi).\n\n```shell\npip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'\n```\n\n</details>\n\n<details>\n<summary>Demo</summary>\n\nStep1. Download a pretrained model from the benchmark table.\n\nStep2. Use either -n or -f to specify your detector's config. For example:\n\n```shell\npython tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result --device [cpu/gpu]\n```\nor\n```shell\npython tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result --device [cpu/gpu]\n```\nDemo for video:\n```shell\npython tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth.tar --path /path/to/your/video --conf 0.3 --nms 0.65 --tsize 640 --save_result --device [cpu/gpu]\n```\n\n\n</details>\n\n<details>\n<summary>Reproduce our results on COCO</summary>\n\nStep1. Prepare COCO dataset\n```shell\ncd <YOLOX_HOME>\nln -s /path/to/your/COCO ./datasets/COCO\n```\n\nStep2. 
Reproduce our results on COCO by specifying -n:\n\n```shell\npython tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o\n                         yolox-m\n                         yolox-l\n                         yolox-x\n```\n* -d: number of GPU devices\n* -b: total batch size; the recommended value for -b is num-gpu * 8\n* --fp16: mixed precision training\n\nWhen using -f, the above commands are equivalent to:\n\n```shell\npython tools/train.py -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o\n                         exps/default/yolox_m.py\n                         exps/default/yolox_l.py\n                         exps/default/yolox_x.py\n```\n\n</details>\n\n\n<details>\n<summary>Evaluation</summary>\n\nWe support batch testing for fast evaluation:\n\n```shell\npython tools/eval.py -n  yolox-s -c yolox_s.pth.tar -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]\n                         yolox-m\n                         yolox-l\n                         yolox-x\n```\n* --fuse: fuse conv and bn\n* -d: number of GPUs used for evaluation. DEFAULT: All GPUs available will be used.\n* -b: total batch size across all GPUs\n\nTo reproduce the speed test, we use the following command:\n```shell\npython tools/eval.py -n  yolox-s -c yolox_s.pth.tar -b 1 -d 1 --conf 0.001 --fp16 --fuse\n                         yolox-m\n                         yolox-l\n                         yolox-x\n```\n\n</details>\n\n\n<details open>\n<summary>Tutorials</summary>\n\n*  [Training on custom data](docs/train_custom_data.md).\n\n</details>\n\n## Deployment\n\n\n1.  [ONNX export and an ONNXRuntime demo](./demo/ONNXRuntime)\n2.  [TensorRT in C++ and Python](./demo/TensorRT)\n3.  [ncnn in C++ and Java](./demo/ncnn)\n4.  
[OpenVINO in C++ and Python](./demo/OpenVINO)\n\n\n## Third-party resources\n* The ncnn android app with video support: [ncnn-android-yolox](https://github.com/FeiGeChuanShu/ncnn-android-yolox) from [FeiGeChuanShu](https://github.com/FeiGeChuanShu)\n* YOLOX with Tengine support: [Tengine](https://github.com/OAID/Tengine/blob/tengine-lite/examples/tm_yolox.cpp) from [BUG1989](https://github.com/BUG1989)\n* YOLOX + ROS2 Foxy: [YOLOX-ROS](https://github.com/Ar-Ray-code/YOLOX-ROS) from [Ar-Ray](https://github.com/Ar-Ray-code)\n* YOLOX Deploy DeepStream: [YOLOX-deepstream](https://github.com/nanmi/YOLOX-deepstream) from [nanmi](https://github.com/nanmi)\n* YOLOX ONNXRuntime C++ Demo: [lite.ai](https://github.com/DefTruth/lite.ai/blob/main/ort/cv/yolox.cpp) from [DefTruth](https://github.com/DefTruth)\n\n## Cite YOLOX\nIf you use YOLOX in your research, please cite our work by using the following BibTeX entry:\n\n```latex\n @article{yolox2021,\n  title={YOLOX: Exceeding YOLO Series in 2021},\n  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},\n  journal={arXiv preprint arXiv:2107.08430},\n  year={2021}\n}\n```\n"
  },
  {
    "path": "detector/YOLOX/datasets/README.md",
    "content": "# Prepare datasets\n\nIf you have a dataset directory, you could use os environment variable named `YOLOX_DATADIR`. Under this directory, YOLOX will look for datasets in the structure described below, if needed.\n```\n$YOLOX_DATADIR/\n  COCO/\n```\nYou can set the location for builtin datasets by\n```shell\nexport YOLOX_DATADIR=/path/to/your/datasets\n```\nIf `YOLOX_DATADIR` is not set, the default value of dataset directory is `./datasets` relative to your current working directory.\n\n## Expected dataset structure for [COCO detection](https://cocodataset.org/#download):\n\n```\nCOCO/\n  annotations/\n    instances_{train,val}2017.json\n  {train,val}2017/\n    # image files that are mentioned in the corresponding json\n```\n\nYou can use the 2014 version of the dataset as well.\n"
  },
  {
    "path": "detector/YOLOX/demo/ONNXRuntime/README.md",
    "content": "## YOLOX-ONNXRuntime in Python\n\nThis doc introduces how to convert your pytorch model into onnx, and how to run an onnxruntime demo to verify your convertion.\n\n### Download ONNX models.\n| Model | Parameters | GFLOPs | Test Size | mAP | Weights |\n|:------| :----: | :----: | :---: | :---: | :---: |\n|  YOLOX-Nano |  0.91M  | 1.08 | 416x416 | 25.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EfAGwvevU-lNhW5OqFAyHbwBJdI_7EaKu5yU04fgF5BU7w?e=gvq4hf)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano.onnx) |\n|  YOLOX-Tiny | 5.06M     | 6.45 | 416x416 |31.7 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EVigCszU1ilDn-MwLwHCF1ABsgTy06xFdVgZ04Yyo4lHVA?e=hVKiCw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny.onnx) |\n|  YOLOX-S | 9.0M | 26.8 | 640x640 |39.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/Ec0L1d1x2UtIpbfiahgxhtgBZVjb1NCXbotO8SCOdMqpQQ?e=siyIsK)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s.onnx) |\n|  YOLOX-M | 25.3M | 73.8 | 640x640 |46.4 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERUKlQe-nlxBoTKPy1ynbxsBmAZ_h-VBEV-nnfPdzUIkZQ?e=hyQQtl)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m.onnx) |\n|  YOLOX-L | 54.2M | 155.6 | 640x640 |50.0 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ET5w926jCA5GlVfg9ixB4KEBiW0HYl7SzaHNRaRG9dYO_A?e=ISmCYX)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l.onnx) |\n|  YOLOX-Darknet53| 63.72M | 185.3 | 640x640 |47.3 | 
[onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ESArloSW-MlPlLuemLh9zKkBdovgweKbfu4zkvzKAp7pPQ?e=f81Ikw)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53.onnx) |\n|  YOLOX-X | 99.1M | 281.9 | 640x640 |51.2 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERjqoeMJlFdGuM3tQfXQmhABmGHlIHydWCwhlugeWLE9AA)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox.onnx) |\n\n\n### Convert Your Model to ONNX\n\nFirst, you should move to <YOLOX_HOME> by:\n```shell\ncd <YOLOX_HOME>\n```\nThen, you can:\n\n1. Convert a standard YOLOX model by -n:\n```shell\npython3 tools/export_onnx.py --output-name yolox_s.onnx -n yolox-s -c yolox_s.pth.tar\n```\nNotes:\n* -n: specify a model name. The model name must be one of [yolox-s, yolox-m, yolox-l, yolox-x, yolox-nano, yolox-tiny, yolov3]\n* -c: the model you have trained\n* -o: opset version, default 11. **However, if you will further convert your onnx model to [OpenVINO](../OpenVINO/), please specify the opset version to 10.**\n* --no-onnxsim: disable onnxsim\n* To customize an input shape for the onnx model, modify the following code in tools/export_onnx.py:\n\n    ```python\n    dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])\n    ```\n\n2. Convert a standard YOLOX model by -f. When using -f, the above command is equivalent to:\n\n```shell\npython3 tools/export_onnx.py --output-name yolox_s.onnx -f exps/default/yolox_s.py -c yolox_s.pth.tar\n```\n\n3. To convert your customized model, please use -f:\n\n```shell\npython3 tools/export_onnx.py --output-name your_yolox.onnx -f exps/your_dir/your_yolox.py -c your_yolox.pth.tar\n```\n\n### ONNXRuntime Demo\n\nStep1.\n```shell\ncd <YOLOX_HOME>/demo/ONNXRuntime\n```\n\nStep2. 
\n```shell\npython3 onnx_inference.py -m <ONNX_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s 0.3 --input_shape 640,640\n```\nNotes:\n* -m: your converted ONNX model\n* -i: path to the input image\n* -s: score threshold for visualization.\n* --input_shape: should be consistent with the shape you used for ONNX conversion.\n"
  },
  {
    "path": "detector/YOLOX/demo/ONNXRuntime/onnx_inference.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\n\nimport cv2\nimport numpy as np\n\nimport onnxruntime\n\nfrom yolox.data.data_augment import preproc as preprocess\nfrom yolox.data.datasets import COCO_CLASSES\nfrom yolox.utils import mkdir, multiclass_nms, demo_postprocess, vis\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"onnxruntime inference sample\")\n    parser.add_argument(\n        \"-m\",\n        \"--model\",\n        type=str,\n        default=\"yolox.onnx\",\n        help=\"Input your onnx model.\",\n    )\n    parser.add_argument(\n        \"-i\",\n        \"--image_path\",\n        type=str,\n        default='test_image.png',\n        help=\"Path to your input image.\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--output_dir\",\n        type=str,\n        default='demo_output',\n        help=\"Path to your output directory.\",\n    )\n    parser.add_argument(\n        \"-s\",\n        \"--score_thr\",\n        type=float,\n        default=0.3,\n        help=\"Score threshould to filter the result.\",\n    )\n    parser.add_argument(\n        \"--input_shape\",\n        type=str,\n        default=\"640,640\",\n        help=\"Specify an input shape for inference.\",\n    )\n    parser.add_argument(\n        \"--with_p6\",\n        action=\"store_true\",\n        help=\"Whether your model uses p6 in FPN/PAN.\",\n    )\n    return parser\n\n\nif __name__ == '__main__':\n    args = make_parser().parse_args()\n\n    input_shape = tuple(map(int, args.input_shape.split(',')))\n    origin_img = cv2.imread(args.image_path)\n    mean = (0.485, 0.456, 0.406)\n    std = (0.229, 0.224, 0.225)\n    img, ratio = preprocess(origin_img, input_shape, mean, std)\n\n    session = onnxruntime.InferenceSession(args.model)\n\n    ort_inputs = {session.get_inputs()[0].name: img[None, :, :, :]}\n    output = session.run(None, ort_inputs)\n    
predictions = demo_postprocess(output[0], input_shape, p6=args.with_p6)[0]\n\n    boxes = predictions[:, :4]\n    scores = predictions[:, 4:5] * predictions[:, 5:]\n\n    boxes_xyxy = np.ones_like(boxes)\n    boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2]/2.\n    boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3]/2.\n    boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2]/2.\n    boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3]/2.\n    boxes_xyxy /= ratio\n    dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.65, score_thr=0.1)\n    \n    if dets is not None:\n        final_boxes, final_scores, final_cls_inds = dets[:, :4], dets[:, 4], dets[:, 5]\n        origin_img = vis(origin_img, final_boxes, final_scores, final_cls_inds,\n                         conf=args.score_thr, class_names=COCO_CLASSES)\n\n    mkdir(args.output_dir)\n    output_path = os.path.join(args.output_dir, args.image_path.split(\"/\")[-1])\n    cv2.imwrite(output_path, origin_img)\n"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/README.md",
    "content": "## YOLOX for OpenVINO\n\n* [C++ Demo](./cpp)\n* [Python Demo](./python)"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/cpp/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 3.4.1)\nset(CMAKE_CXX_STANDARD 14)\n\nproject(yolox_openvino_demo)\n\nfind_package(OpenCV REQUIRED)\nfind_package(InferenceEngine REQUIRED)\nfind_package(ngraph REQUIRED)\n\ninclude_directories(\n    ${OpenCV_INCLUDE_DIRS}\n    ${CMAKE_CURRENT_SOURCE_DIR}\n    ${CMAKE_CURRENT_BINARY_DIR}\n)\n\nadd_executable(yolox_openvino yolox_openvino.cpp)\n\ntarget_link_libraries(\n     yolox_openvino\n    ${InferenceEngine_LIBRARIES}\n    ${NGRAPH_LIBRARIES}\n    ${OpenCV_LIBS} \n)"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/cpp/README.md",
    "content": "# YOLOX-OpenVINO in C++\n\nThis toturial includes a C++ demo for OpenVINO, as well as some converted models.\n\n### Download OpenVINO models.\n| Model | Parameters | GFLOPs | Test Size | mAP | Weights |\n|:------| :----: | :----: | :---: | :---: | :---: |\n|  [YOLOX-Nano](../../../exps/nano.py) |  0.91M  | 1.08 | 416x416 | 25.3 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EeWY57o5wQZFtXYd1KJw6Z8B4vxZru649XxQHYIFgio3Qw?e=ZS81ce)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano_openvino.tar.gz) |\n|  [YOLOX-Tiny](../../../exps/yolox_tiny.py) | 5.06M     | 6.45 | 416x416 |31.7 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ETfvOoCXdVZNinoSpKA_sEYBIQVqfjjF5_M6VvHRnLVcsA?e=STL1pi)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny_openvino.tar.gz) |\n|  [YOLOX-S](../../../exps/yolox_s.py) | 9.0M | 26.8 | 640x640 |39.6 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EXUjf3PQnbBLrxNrXPueqaIBzVZOrYQOnJpLK1Fytj5ssA?e=GK0LOM)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s_openvino.tar.gz) |\n|  [YOLOX-M](../../../exps/yolox_m.py) | 25.3M | 73.8 | 640x640 |46.4 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EcoT1BPpeRpLvE_4c441zn8BVNCQ2naxDH3rho7WqdlgLQ?e=95VaM9)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m_openvino.tar.gz) |\n|  [YOLOX-L](../../../exps/yolox_l.py) | 54.2M | 155.6 | 640x640 |50.0 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZvmn-YLRuVPh0GAP_w3xHMB2VGvrKqQXyK_Cv5yi_DXUg?e=YRh6Eq)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l_openvino.tar.gz) |\n|  [YOLOX-Darknet53](../../../exps/yolov3.py) | 63.72M | 185.3 | 640x640 |47.3 | 
[Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EQP8LSroikFHuwX0jFRetmcBOCDWSFmylHxolV7ezUPXGw?e=bEw5iq)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53_openvino.tar.gz) |\n|  [YOLOX-X](../../../exps/yolox_x.py) | 99.1M | 281.9 | 640x640 |51.2 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZFPnLqiD-xIlt7rcZYDjQgB4YXE9wnq1qaSXQwJrsKbdg?e=83nwEz)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x_openvino.tar.gz) |\n\n## Install OpenVINO Toolkit\n\nPlease visit the [OpenVINO homepage](https://docs.openvinotoolkit.org/latest/get_started_guides.html) for more details.\n\n## Set up the Environment\n\n### For Linux\n\n**Option1. Set up the environment temporarily. You need to run this command every time you start a new shell window.**\n\n```shell\nsource /opt/intel/openvino_2021/bin/setupvars.sh\n```\n\n**Option2. Set up the environment permanently.**\n\n*Step1.* For Linux:\n```shell\nvim ~/.bashrc \n```\n\n*Step2.* Add the following line into your file:\n\n```shell\nsource /opt/intel/openvino_2021/bin/setupvars.sh\n```\n\n*Step3.* Save and exit the file, then run:\n\n```shell\nsource ~/.bashrc\n```\n\n\n## Convert model\n\n1. Export ONNX model\n   \n   Please refer to the [ONNX tutorial](../../ONNXRuntime). **Note that you should set --opset to 10, otherwise your next step will fail.**\n\n2. 
Convert ONNX to OpenVINO\n\n   ```shell\n   cd <INSTALL_DIR>/openvino_2021/deployment_tools/model_optimizer\n   ```\n\n   Install the requirements for the conversion tool:\n\n   ```shell\n   sudo ./install_prerequisites/install_prerequisites_onnx.sh\n   ```\n\n   Then convert the model.\n   ```shell\n   python3 mo.py --input_model <ONNX_MODEL> --input_shape <INPUT_SHAPE> [--data_type FP16]\n   ```\n   For example:\n   ```shell\n   python3 mo.py --input_model yolox.onnx --input_shape [1,3,640,640] --data_type FP16\n   ```\n\n## Build\n\n### Linux\n```shell\nsource /opt/intel/openvino_2021/bin/setupvars.sh\nmkdir build\ncd build\ncmake ..\nmake\n```\n\n## Demo\n\n### C++\n\n```shell\n./yolox_openvino <XML_MODEL_PATH> <IMAGE_PATH> <DEVICE>\n```\n"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/cpp/yolox_openvino.cpp",
    "content": "// Copyright (C) 2018-2021 Intel Corporation\n// SPDX-License-Identifier: Apache-2.0\n//\n\n#include <iterator>\n#include <memory>\n#include <string>\n#include <vector>\n#include <opencv2/opencv.hpp>\n#include <iostream>\n#include <inference_engine.hpp>\n\nusing namespace InferenceEngine;\n\n/**\n * @brief Define names based depends on Unicode path support\n */\n#define tcout                  std::cout\n#define file_name_t            std::string\n#define imread_t               cv::imread\n#define NMS_THRESH 0.65\n#define BBOX_CONF_THRESH 0.3\n\nstatic const int INPUT_W = 416;\nstatic const int INPUT_H = 416;\n\ncv::Mat static_resize(cv::Mat& img) {\n    float r = std::min(INPUT_W / (img.cols*1.0), INPUT_H / (img.rows*1.0));\n    // r = std::min(r, 1.0f);\n    int unpad_w = r * img.cols;\n    int unpad_h = r * img.rows;\n    cv::Mat re(unpad_h, unpad_w, CV_8UC3);\n    cv::resize(img, re, re.size());\n    cv::Mat out(INPUT_W, INPUT_H, CV_8UC3, cv::Scalar(114, 114, 114));\n    re.copyTo(out(cv::Rect(0, 0, re.cols, re.rows)));\n    return out;\n}\n\nvoid blobFromImage(cv::Mat& img, Blob::Ptr& blob){\n    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);\n    int channels = 3;\n    int img_h = img.rows;\n    int img_w = img.cols;\n    std::vector<float> mean = {0.485, 0.456, 0.406};\n    std::vector<float> std = {0.229, 0.224, 0.225};\n    InferenceEngine::MemoryBlob::Ptr mblob = InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);\n    if (!mblob) \n    {\n        THROW_IE_EXCEPTION << \"We expect blob to be inherited from MemoryBlob in matU8ToBlob, \"\n            << \"but by fact we were not able to cast inputBlob to MemoryBlob\";\n    }\n    // locked memory holder should be alive all time while access to its buffer happens\n    auto mblobHolder = mblob->wmap();\n\n    float *blob_data = mblobHolder.as<float *>();\n\n    for (size_t c = 0; c < channels; c++) \n    {\n        for (size_t  h = 0; h < img_h; h++) \n        {\n            for (size_t w = 0; 
w < img_w; w++) \n            {\n                blob_data[c * img_w * img_h + h * img_w + w] =\n                    (((float)img.at<cv::Vec3b>(h, w)[c]) / 255.0f - mean[c]) / std[c];\n            }\n        }\n    }\n}\n\n\nstruct Object\n{\n    cv::Rect_<float> rect;\n    int label;\n    float prob;\n};\n\nstruct GridAndStride\n{\n    int grid0;\n    int grid1;\n    int stride;\n};\n\nstatic void generate_grids_and_stride(const int target_size, std::vector<int>& strides, std::vector<GridAndStride>& grid_strides)\n{\n    for (auto stride : strides)\n    {\n        int num_grid = target_size / stride;\n        for (int g1 = 0; g1 < num_grid; g1++)\n        {\n            for (int g0 = 0; g0 < num_grid; g0++)\n            {\n                grid_strides.push_back((GridAndStride){g0, g1, stride});\n            }\n        }\n    }\n}\n\n\nstatic void generate_yolox_proposals(std::vector<GridAndStride> grid_strides, const float* feat_ptr, float prob_threshold, std::vector<Object>& objects)\n{\n    const int num_class = 80;  // COCO has 80 classes. 
Modify this value on your own dataset.\n\n    const int num_anchors = grid_strides.size();\n\n    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)\n    {\n        const int grid0 = grid_strides[anchor_idx].grid0;\n        const int grid1 = grid_strides[anchor_idx].grid1;\n        const int stride = grid_strides[anchor_idx].stride;\n\n\tconst int basic_pos = anchor_idx * 85;\n\n        // yolox/models/yolo_head.py decode logic\n        //  outputs[..., :2] = (outputs[..., :2] + grids) * strides\n        //  outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides\n        float x_center = (feat_ptr[basic_pos + 0] + grid0) * stride;\n        float y_center = (feat_ptr[basic_pos + 1] + grid1) * stride;\n        float w = exp(feat_ptr[basic_pos + 2]) * stride;\n        float h = exp(feat_ptr[basic_pos + 3]) * stride;\n        float x0 = x_center - w * 0.5f;\n        float y0 = y_center - h * 0.5f;\n\n        float box_objectness = feat_ptr[basic_pos + 4];\n        for (int class_idx = 0; class_idx < num_class; class_idx++)\n        {\n            float box_cls_score = feat_ptr[basic_pos + 5 + class_idx];\n            float box_prob = box_objectness * box_cls_score;\n            if (box_prob > prob_threshold)\n            {\n                Object obj;\n                obj.rect.x = x0;\n                obj.rect.y = y0;\n                obj.rect.width = w;\n                obj.rect.height = h;\n                obj.label = class_idx;\n                obj.prob = box_prob;\n\n                objects.push_back(obj);\n            }\n\n        } // class loop\n\n    } // point anchor loop\n}\n\nstatic inline float intersection_area(const Object& a, const Object& b)\n{\n    cv::Rect_<float> inter = a.rect & b.rect;\n    return inter.area();\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)\n{\n    int i = left;\n    int j = right;\n    float p = faceobjects[(left + right) / 2].prob;\n\n    while (i <= j)\n    {\n    
    while (faceobjects[i].prob > p)\n            i++;\n\n        while (faceobjects[j].prob < p)\n            j--;\n\n        if (i <= j)\n        {\n            // swap\n            std::swap(faceobjects[i], faceobjects[j]);\n\n            i++;\n            j--;\n        }\n    }\n\n    #pragma omp parallel sections\n    {\n        #pragma omp section\n        {\n            if (left < j) qsort_descent_inplace(faceobjects, left, j);\n        }\n        #pragma omp section\n        {\n            if (i < right) qsort_descent_inplace(faceobjects, i, right);\n        }\n    }\n}\n\n\nstatic void qsort_descent_inplace(std::vector<Object>& objects)\n{\n    if (objects.empty())\n        return;\n\n    qsort_descent_inplace(objects, 0, objects.size() - 1);\n}\n\nstatic void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)\n{\n    picked.clear();\n\n    const int n = faceobjects.size();\n\n    std::vector<float> areas(n);\n    for (int i = 0; i < n; i++)\n    {\n        areas[i] = faceobjects[i].rect.area();\n    }\n\n    for (int i = 0; i < n; i++)\n    {\n        const Object& a = faceobjects[i];\n\n        int keep = 1;\n        for (int j = 0; j < (int)picked.size(); j++)\n        {\n            const Object& b = faceobjects[picked[j]];\n\n            // intersection over union\n            float inter_area = intersection_area(a, b);\n            float union_area = areas[i] + areas[picked[j]] - inter_area;\n            // float IoU = inter_area / union_area\n            if (inter_area / union_area > nms_threshold)\n                keep = 0;\n        }\n\n        if (keep)\n            picked.push_back(i);\n    }\n}\n\n\nstatic void decode_outputs(const float* prob, std::vector<Object>& objects, float scale, const int img_w, const int img_h) {\n        std::vector<Object> proposals;\n        std::vector<int> strides = {8, 16, 32};\n        std::vector<GridAndStride> grid_strides;\n\n        
generate_grids_and_stride(INPUT_W, strides, grid_strides);\n        generate_yolox_proposals(grid_strides, prob,  BBOX_CONF_THRESH, proposals);\n        qsort_descent_inplace(proposals);\n\n        std::vector<int> picked;\n        nms_sorted_bboxes(proposals, picked, NMS_THRESH);\n        int count = picked.size();\n        objects.resize(count);\n\n        for (int i = 0; i < count; i++)\n        {\n            objects[i] = proposals[picked[i]];\n\n            // adjust offset to original unpadded\n            float x0 = (objects[i].rect.x) / scale;\n            float y0 = (objects[i].rect.y) / scale;\n            float x1 = (objects[i].rect.x + objects[i].rect.width) / scale;\n            float y1 = (objects[i].rect.y + objects[i].rect.height) / scale;\n\n            // clip\n            x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f);\n            y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f);\n            x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f);\n            y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f);\n\n            objects[i].rect.x = x0;\n            objects[i].rect.y = y0;\n            objects[i].rect.width = x1 - x0;\n            objects[i].rect.height = y1 - y0;\n        }\n}\n\nconst float color_list[80][3] =\n{\n    {0.000, 0.447, 0.741},\n    {0.850, 0.325, 0.098},\n    {0.929, 0.694, 0.125},\n    {0.494, 0.184, 0.556},\n    {0.466, 0.674, 0.188},\n    {0.301, 0.745, 0.933},\n    {0.635, 0.078, 0.184},\n    {0.300, 0.300, 0.300},\n    {0.600, 0.600, 0.600},\n    {1.000, 0.000, 0.000},\n    {1.000, 0.500, 0.000},\n    {0.749, 0.749, 0.000},\n    {0.000, 1.000, 0.000},\n    {0.000, 0.000, 1.000},\n    {0.667, 0.000, 1.000},\n    {0.333, 0.333, 0.000},\n    {0.333, 0.667, 0.000},\n    {0.333, 1.000, 0.000},\n    {0.667, 0.333, 0.000},\n    {0.667, 0.667, 0.000},\n    {0.667, 1.000, 0.000},\n    {1.000, 0.333, 0.000},\n    {1.000, 0.667, 0.000},\n    {1.000, 1.000, 0.000},\n    {0.000, 0.333, 0.500},\n    {0.000, 0.667, 
0.500},\n    {0.000, 1.000, 0.500},\n    {0.333, 0.000, 0.500},\n    {0.333, 0.333, 0.500},\n    {0.333, 0.667, 0.500},\n    {0.333, 1.000, 0.500},\n    {0.667, 0.000, 0.500},\n    {0.667, 0.333, 0.500},\n    {0.667, 0.667, 0.500},\n    {0.667, 1.000, 0.500},\n    {1.000, 0.000, 0.500},\n    {1.000, 0.333, 0.500},\n    {1.000, 0.667, 0.500},\n    {1.000, 1.000, 0.500},\n    {0.000, 0.333, 1.000},\n    {0.000, 0.667, 1.000},\n    {0.000, 1.000, 1.000},\n    {0.333, 0.000, 1.000},\n    {0.333, 0.333, 1.000},\n    {0.333, 0.667, 1.000},\n    {0.333, 1.000, 1.000},\n    {0.667, 0.000, 1.000},\n    {0.667, 0.333, 1.000},\n    {0.667, 0.667, 1.000},\n    {0.667, 1.000, 1.000},\n    {1.000, 0.000, 1.000},\n    {1.000, 0.333, 1.000},\n    {1.000, 0.667, 1.000},\n    {0.333, 0.000, 0.000},\n    {0.500, 0.000, 0.000},\n    {0.667, 0.000, 0.000},\n    {0.833, 0.000, 0.000},\n    {1.000, 0.000, 0.000},\n    {0.000, 0.167, 0.000},\n    {0.000, 0.333, 0.000},\n    {0.000, 0.500, 0.000},\n    {0.000, 0.667, 0.000},\n    {0.000, 0.833, 0.000},\n    {0.000, 1.000, 0.000},\n    {0.000, 0.000, 0.167},\n    {0.000, 0.000, 0.333},\n    {0.000, 0.000, 0.500},\n    {0.000, 0.000, 0.667},\n    {0.000, 0.000, 0.833},\n    {0.000, 0.000, 1.000},\n    {0.000, 0.000, 0.000},\n    {0.143, 0.143, 0.143},\n    {0.286, 0.286, 0.286},\n    {0.429, 0.429, 0.429},\n    {0.571, 0.571, 0.571},\n    {0.714, 0.714, 0.714},\n    {0.857, 0.857, 0.857},\n    {0.000, 0.447, 0.741},\n    {0.314, 0.717, 0.741},\n    {0.50, 0.5, 0}\n};\n\nstatic void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects)\n{\n    static const char* class_names[] = {\n        \"person\", \"bicycle\", \"car\", \"motorcycle\", \"airplane\", \"bus\", \"train\", \"truck\", \"boat\", \"traffic light\",\n        \"fire hydrant\", \"stop sign\", \"parking meter\", \"bench\", \"bird\", \"cat\", \"dog\", \"horse\", \"sheep\", \"cow\",\n        \"elephant\", \"bear\", \"zebra\", \"giraffe\", \"backpack\", \"umbrella\", 
\"handbag\", \"tie\", \"suitcase\", \"frisbee\",\n        \"skis\", \"snowboard\", \"sports ball\", \"kite\", \"baseball bat\", \"baseball glove\", \"skateboard\", \"surfboard\",\n        \"tennis racket\", \"bottle\", \"wine glass\", \"cup\", \"fork\", \"knife\", \"spoon\", \"bowl\", \"banana\", \"apple\",\n        \"sandwich\", \"orange\", \"broccoli\", \"carrot\", \"hot dog\", \"pizza\", \"donut\", \"cake\", \"chair\", \"couch\",\n        \"potted plant\", \"bed\", \"dining table\", \"toilet\", \"tv\", \"laptop\", \"mouse\", \"remote\", \"keyboard\", \"cell phone\",\n        \"microwave\", \"oven\", \"toaster\", \"sink\", \"refrigerator\", \"book\", \"clock\", \"vase\", \"scissors\", \"teddy bear\",\n        \"hair drier\", \"toothbrush\"\n    };\n\n    cv::Mat image = bgr.clone();\n\n    for (size_t i = 0; i < objects.size(); i++)\n    {\n        const Object& obj = objects[i];\n\n        fprintf(stderr, \"%d = %.5f at %.2f %.2f %.2f x %.2f\\n\", obj.label, obj.prob,\n                obj.rect.x, obj.rect.y, obj.rect.width, obj.rect.height);\n\n        cv::Scalar color = cv::Scalar(color_list[obj.label][0], color_list[obj.label][1], color_list[obj.label][2]);\n        float c_mean = cv::mean(color)[0];\n        cv::Scalar txt_color;\n        if (c_mean > 0.5){\n            txt_color = cv::Scalar(0, 0, 0);\n        }else{\n            txt_color = cv::Scalar(255, 255, 255);\n        }\n\n        cv::rectangle(image, obj.rect, color * 255, 2);\n\n        char text[256];\n        sprintf(text, \"%s %.1f%%\", class_names[obj.label], obj.prob * 100);\n\n        int baseLine = 0;\n        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine);\n\n        cv::Scalar txt_bk_color = color * 0.7 * 255;\n\n        int x = obj.rect.x;\n        int y = obj.rect.y + 1;\n        //int y = obj.rect.y - label_size.height - baseLine;\n        if (y > image.rows)\n            y = image.rows;\n        //if (x + label_size.width > image.cols)\n       
     //x = image.cols - label_size.width;\n\n        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),\n                      txt_bk_color, -1);\n\n        cv::putText(image, text, cv::Point(x, y + label_size.height),\n                    cv::FONT_HERSHEY_SIMPLEX, 0.4, txt_color, 1);\n    }\n\n    cv::imwrite(\"_demo.jpg\" , image);\n    fprintf(stderr, \"save vis file\\n\");\n    /* cv::imshow(\"image\", image); */\n    /* cv::waitKey(0); */\n}\n\n\nint main(int argc, char* argv[]) {\n    try {\n        // ------------------------------ Parsing and validation of input arguments\n        // ---------------------------------\n        if (argc != 4) {\n            tcout << \"Usage : \" << argv[0] << \" <path_to_model> <path_to_image> <device_name>\" << std::endl;\n            return EXIT_FAILURE;\n        }\n\n        const file_name_t input_model {argv[1]};\n        const file_name_t input_image_path {argv[2]};\n        const std::string device_name {argv[3]};\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 1. Initialize inference engine core\n        // -------------------------------------\n        Core ie;\n        // -----------------------------------------------------------------------------------------------------\n\n        // Step 2. 
Read a model in OpenVINO Intermediate Representation (.xml and\n        // .bin files) or ONNX (.onnx file) format\n        CNNNetwork network = ie.ReadNetwork(input_model);\n        if (network.getOutputsInfo().size() != 1)\n            throw std::logic_error(\"Sample supports topologies with 1 output only\");\n        if (network.getInputsInfo().size() != 1)\n            throw std::logic_error(\"Sample supports topologies with 1 input only\");\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 3. Configure input & output\n        // ---------------------------------------------\n        // --------------------------- Prepare input blobs\n        // -----------------------------------------------------\n        InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;\n        std::string input_name = network.getInputsInfo().begin()->first;\n\n        /* Mark input as resizable by setting of a resize algorithm.\n         * In this case we will be able to set an input blob of any shape to an\n         * infer request. 
Resize and layout conversions are executed automatically\n         * during inference */\n        //input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);\n        //input_info->setLayout(Layout::NHWC);\n        //input_info->setPrecision(Precision::FP32);\n\n        // --------------------------- Prepare output blobs\n        // ----------------------------------------------------\n        if (network.getOutputsInfo().empty()) {\n            std::cerr << \"Network outputs info is empty\" << std::endl;\n            return EXIT_FAILURE;\n        }\n        DataPtr output_info = network.getOutputsInfo().begin()->second;\n        std::string output_name = network.getOutputsInfo().begin()->first;\n\n        output_info->setPrecision(Precision::FP32);\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 4. Loading a model to the device\n        // ------------------------------------------\n        ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 5. Create an infer request\n        // -------------------------------------------------\n        InferRequest infer_request = executable_network.CreateInferRequest();\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 6. Prepare input\n        // --------------------------------------------------------\n        /* Read input image to a blob and set it to an infer request without resize\n         * and layout conversions. 
*/\n        cv::Mat image = imread_t(input_image_path);\n\t    cv::Mat pr_img = static_resize(image);\n        Blob::Ptr imgBlob = infer_request.GetBlob(input_name);     // just wrap Mat data by Blob::Ptr\n\t    blobFromImage(pr_img, imgBlob);\n\n        // infer_request.SetBlob(input_name, imgBlob);  // infer_request accepts input blob of any size\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 7. Do inference\n        // --------------------------------------------------------\n        /* Running the request synchronously */\n        infer_request.Infer();\n        // -----------------------------------------------------------------------------------------------------\n\n        // --------------------------- Step 8. Process output\n        // ------------------------------------------------------\n        const Blob::Ptr output_blob = infer_request.GetBlob(output_name);\n        MemoryBlob::CPtr moutput = as<MemoryBlob>(output_blob);\n        if (!moutput) {\n            throw std::logic_error(\"We expect output to be inherited from MemoryBlob, \"\n                                   \"but by fact we were not able to cast output to MemoryBlob\");\n        }\n        // locked memory holder should be alive all time while access to its buffer\n        // happens\n        auto moutputHolder = moutput->rmap();\n        const float* net_pred = moutputHolder.as<const PrecisionTrait<Precision::FP32>::value_type*>();\n        \n        const int image_size = 416;\n\t    int img_w = image.cols;\n        int img_h = image.rows;\n\t    float scale = std::min(INPUT_W / (image.cols*1.0), INPUT_H / (image.rows*1.0));\n        std::vector<Object> objects;\n\n        decode_outputs(net_pred, objects, scale, img_w, img_h);\n        draw_objects(image, objects);\n\n            // 
-----------------------------------------------------------------------------------------------------\n    } catch (const std::exception& ex) {\n        std::cerr << ex.what() << std::endl;\n        return EXIT_FAILURE;\n    }\n    return EXIT_SUCCESS;\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/python/README.md",
    "content": "# YOLOX-OpenVINO in Python\n\nThis tutorial includes a Python demo for OpenVINO, as well as some converted models.\n\n### Download OpenVINO models.\n| Model | Parameters | GFLOPs | Test Size | mAP | Weights |\n|:------| :----: | :----: | :---: | :---: | :---: |\n|  [YOLOX-Nano](../../../exps/default/nano.py) |  0.91M  | 1.08 | 416x416 | 25.3 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EeWY57o5wQZFtXYd1KJw6Z8B4vxZru649XxQHYIFgio3Qw?e=ZS81ce)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_nano_openvino.tar.gz) |\n|  [YOLOX-Tiny](../../../exps/default/yolox_tiny.py) | 5.06M     | 6.45 | 416x416 |31.7 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ETfvOoCXdVZNinoSpKA_sEYBIQVqfjjF5_M6VvHRnLVcsA?e=STL1pi)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_tiny_openvino.tar.gz) |\n|  [YOLOX-S](../../../exps/default/yolox_s.py) | 9.0M | 26.8 | 640x640 |39.6 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EXUjf3PQnbBLrxNrXPueqaIBzVZOrYQOnJpLK1Fytj5ssA?e=GK0LOM)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s_openvino.tar.gz) |\n|  [YOLOX-M](../../../exps/default/yolox_m.py) | 25.3M | 73.8 | 640x640 |46.4 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EcoT1BPpeRpLvE_4c441zn8BVNCQ2naxDH3rho7WqdlgLQ?e=95VaM9)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_m_openvino.tar.gz) |\n|  [YOLOX-L](../../../exps/default/yolox_l.py) | 54.2M | 155.6 | 640x640 |50.0 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZvmn-YLRuVPh0GAP_w3xHMB2VGvrKqQXyK_Cv5yi_DXUg?e=YRh6Eq)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_l_openvino.tar.gz) |\n|  [YOLOX-Darknet53](../../../exps/default/yolov3.py) | 63.72M | 185.3 | 640x640 |47.3 
| [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EQP8LSroikFHuwX0jFRetmcBOCDWSFmylHxolV7ezUPXGw?e=bEw5iq)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_darknet53_openvino.tar.gz) | \n|  [YOLOX-X](../../../exps/default/yolox_x.py) | 99.1M | 281.9 | 640x640 |51.2 | [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZFPnLqiD-xIlt7rcZYDjQgB4YXE9wnq1qaSXQwJrsKbdg?e=83nwEz)/[github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_x_openvino.tar.gz) |\n\n## Install OpenVINO Toolkit\n\nPlease visit the [OpenVINO Homepage](https://docs.openvinotoolkit.org/latest/get_started_guides.html) for more details.\n\n## Set up the Environment\n\n### For Linux\n\n**Option1. Set up the environment temporarily. You need to run this command every time you start a new shell window.**\n\n```shell\nsource /opt/intel/openvino_2021/bin/setupvars.sh\n```\n\n**Option2. Set up the environment permanently.**\n\n*Step1.* For Linux:\n```shell\nvim ~/.bashrc\n```\n\n*Step2.* Add the following line into your file:\n\n```shell\nsource /opt/intel/openvino_2021/bin/setupvars.sh\n```\n\n*Step3.* Save and exit the file, then run:\n\n```shell\nsource ~/.bashrc\n```\n\n\n## Convert model\n\n1. Export ONNX model\n\n   Please refer to the [ONNX tutorial](../../ONNXRuntime). **Note that you should set --opset to 10, otherwise your next step will fail.**\n\n2. 
Convert ONNX to OpenVINO\n\n   ```shell\n   cd <INSTALL_DIR>/openvino_2021/deployment_tools/model_optimizer\n   ```\n\n   Install the requirements for the conversion tool:\n\n   ```shell\n   sudo ./install_prerequisites/install_prerequisites_onnx.sh\n   ```\n\n   Then convert the model:\n   ```shell\n   python3 mo.py --input_model <ONNX_MODEL> --input_shape <INPUT_SHAPE> [--data_type FP16]\n   ```\n   For example:\n   ```shell\n   python3 mo.py --input_model yolox.onnx --input_shape [1,3,640,640] --data_type FP16 --output_dir converted_output\n   ```\n\n## Demo\n\n### Python\n\n```shell\npython openvino_inference.py -m <XML_MODEL_PATH> -i <IMAGE_PATH>\n```\nor\n```shell\npython openvino_inference.py -m <XML_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s <SCORE_THR> -d <DEVICE>\n```\n\n"
  },
  {
    "path": "detector/YOLOX/demo/OpenVINO/python/openvino_inference.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (C) 2018-2021 Intel Corporation\n# SPDX-License-Identifier: Apache-2.0\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport logging as log\nimport os\nimport sys\n\nimport cv2\nimport numpy as np\n\nfrom openvino.inference_engine import IECore\n\nfrom yolox.data.data_augment import preproc as preprocess\nfrom yolox.data.datasets import COCO_CLASSES\nfrom yolox.utils import mkdir, multiclass_nms, demo_postprocess, vis\n\n\ndef parse_args() -> argparse.Namespace:\n    \"\"\"Parse and return command line arguments\"\"\"\n    parser = argparse.ArgumentParser(add_help=False)\n    args = parser.add_argument_group('Options')\n    args.add_argument(\n        '-h',\n        '--help',\n        action='help',\n        help='Show this help message and exit.')\n    args.add_argument(\n        '-m',\n        '--model',\n        required=True,\n        type=str,\n        help='Required. Path to an .xml or .onnx file with a trained model.')\n    args.add_argument(\n        '-i',\n        '--input',\n        required=True,\n        type=str,\n        help='Required. Path to an image file.')\n    args.add_argument(\n        '-o',\n        '--output_dir',\n        type=str,\n        default='demo_output',\n        help='Path to your output dir.')\n    args.add_argument(\n        '-s',\n        '--score_thr',\n        type=float,\n        default=0.3,\n        help=\"Score threshold to visualize the result.\")\n    args.add_argument(\n        '-d',\n        '--device',\n        default='CPU',\n        type=str,\n        help='Optional. Specify the target device to infer on; CPU, GPU, \\\n              MYRIAD, HDDL or HETERO: is acceptable. The sample will look \\\n              for a suitable plugin for device specified. Default value \\\n              is CPU.')\n    args.add_argument(\n        '--labels',\n        default=None,\n        type=str,\n        help='Optional. 
Path to a labels mapping file.')\n    args.add_argument(\n        '-nt',\n        '--number_top',\n        default=10,\n        type=int,\n        help='Optional. Number of top results.')\n    return parser.parse_args()\n\n\ndef main():\n    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)\n    args = parse_args()\n\n    # ---------------------------Step 1. Initialize inference engine core--------------------------------------------------\n    log.info('Creating Inference Engine')\n    ie = IECore()\n\n    # ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation or ONNX format---------------\n    log.info(f'Reading the network: {args.model}')\n    # (.xml and .bin files) or (.onnx file)\n    net = ie.read_network(model=args.model)\n\n    if len(net.input_info) != 1:\n        log.error('Sample supports only single input topologies')\n        return -1\n    if len(net.outputs) != 1:\n        log.error('Sample supports only single output topologies')\n        return -1\n\n    # ---------------------------Step 3. Configure input & output----------------------------------------------------------\n    log.info('Configuring input and output blobs')\n    # Get names of input and output blobs\n    input_blob = next(iter(net.input_info))\n    out_blob = next(iter(net.outputs))\n\n    # Set input and output precision manually\n    net.input_info[input_blob].precision = 'FP32'\n    net.outputs[out_blob].precision = 'FP16'\n\n    # Get a number of classes recognized by a model\n    num_of_classes = max(net.outputs[out_blob].shape)\n\n    # ---------------------------Step 4. Loading model to the device-------------------------------------------------------\n    log.info('Loading the model to the plugin')\n    exec_net = ie.load_network(network=net, device_name=args.device)\n\n    # ---------------------------Step 5. 
Create infer request--------------------------------------------------------------\n    # load_network() method of the IECore class with a specified number of requests (default 1) returns an ExecutableNetwork\n    # instance which stores infer requests. So you already created Infer requests in the previous step.\n\n    # ---------------------------Step 6. Prepare input---------------------------------------------------------------------\n    origin_img = cv2.imread(args.input)\n    _, _, h, w = net.input_info[input_blob].input_data.shape\n    mean = (0.485, 0.456, 0.406)\n    std = (0.229, 0.224, 0.225)\n    image, ratio = preprocess(origin_img, (h, w), mean, std)\n\n    # ---------------------------Step 7. Do inference----------------------------------------------------------------------\n    log.info('Starting inference in synchronous mode')\n    res = exec_net.infer(inputs={input_blob: image})\n\n    # ---------------------------Step 8. Process output--------------------------------------------------------------------\n    res = res[out_blob]\n\n    predictions = demo_postprocess(res, (h, w), p6=False)[0]\n\n    boxes = predictions[:, :4]\n    scores = predictions[:, 4, None] * predictions[:, 5:]\n\n    boxes_xyxy = np.ones_like(boxes)\n    boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2]/2.\n    boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3]/2.\n    boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2]/2.\n    boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3]/2.\n    boxes_xyxy /= ratio\n    dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.65, score_thr=0.1)\n\n    if dets is not None:\n        final_boxes = dets[:, :4]\n        final_scores, final_cls_inds = dets[:, 4], dets[:, 5]\n        origin_img = vis(origin_img, final_boxes, final_scores, final_cls_inds,\n                         conf=args.score_thr, class_names=COCO_CLASSES)\n\n    mkdir(args.output_dir)\n    # use the input image's filename for the output; parse_args() defines args.input, not args.image_path\n    output_path = os.path.join(args.output_dir, os.path.basename(args.input))\n    cv2.imwrite(output_path, 
origin_img)\n\n\nif __name__ == '__main__':\n    sys.exit(main())\n"
  },
  {
    "path": "detector/YOLOX/demo/TensorRT/cpp/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.6)\n\nproject(yolox)\n\nadd_definitions(-std=c++11)\n\noption(CUDA_USE_STATIC_CUDA_RUNTIME OFF)\nset(CMAKE_CXX_STANDARD 11)\nset(CMAKE_BUILD_TYPE Debug)\n\nfind_package(CUDA REQUIRED)\n\ninclude_directories(${PROJECT_SOURCE_DIR}/include)\n# include and link dirs of cuda and tensorrt, you need to adapt them if yours are different\n# cuda\ninclude_directories(/data/cuda/cuda-10.2/cuda/include)\nlink_directories(/data/cuda/cuda-10.2/cuda/lib64)\n# cudnn\ninclude_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/include)\nlink_directories(/data/cuda/cuda-10.2/cudnn/v8.0.4/lib64)\n# tensorrt\ninclude_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/include)\nlink_directories(/data/cuda/cuda-10.2/TensorRT/v7.2.1.6/lib)\n\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Ofast -Wfatal-errors -D_MWAITXINTRIN_H_INCLUDED\")\n\nfind_package(OpenCV)\ninclude_directories(${OpenCV_INCLUDE_DIRS})\n\nadd_executable(yolox ${PROJECT_SOURCE_DIR}/yolox.cpp)\ntarget_link_libraries(yolox nvinfer)\ntarget_link_libraries(yolox cudart)\ntarget_link_libraries(yolox ${OpenCV_LIBS})\n\nadd_definitions(-O2 -pthread)\n\n"
  },
  {
    "path": "detector/YOLOX/demo/TensorRT/cpp/README.md",
    "content": "# YOLOX-TensorRT in C++\n\nSince YOLOX models are easy to convert to TensorRT using the [torch2trt repo](https://github.com/NVIDIA-AI-IOT/torch2trt),\nthis C++ demo does not include model conversion or network construction, unlike other TensorRT demos.\n\n\n## Step 1: Prepare serialized engine file\n\nFollow the TensorRT [python demo README](../python/README.md) to convert and save the serialized engine file.\n\nCheck the 'model_trt.engine' file generated in Step 1, which is automatically saved in the current demo dir.\n\n\n## Step 2: Build the demo\n\nPlease follow the [TensorRT Installation Guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) to install TensorRT.\n\nInstall OpenCV with ```sudo apt-get install libopencv-dev```.\n\nBuild the demo:\n\n```shell\nmkdir build\ncd build\ncmake ..\nmake\n```\n\nThen run the demo:\n\n```shell\n./yolox ../model_trt.engine -i ../../../../assets/dog.jpg\n```\n\nor\n\n```shell\n./yolox <path/to/your/engine_file> -i <path/to/image>\n```\n"
  },
  {
    "path": "detector/YOLOX/demo/TensorRT/cpp/logging.h",
    "content": "/*\n * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef TENSORRT_LOGGING_H\n#define TENSORRT_LOGGING_H\n\n#include \"NvInferRuntimeCommon.h\"\n#include <cassert>\n#include <ctime>\n#include <iomanip>\n#include <iostream>\n#include <ostream>\n#include <sstream>\n#include <string>\n\nusing Severity = nvinfer1::ILogger::Severity;\n\nclass LogStreamConsumerBuffer : public std::stringbuf\n{\npublic:\n    LogStreamConsumerBuffer(std::ostream& stream, const std::string& prefix, bool shouldLog)\n        : mOutput(stream)\n        , mPrefix(prefix)\n        , mShouldLog(shouldLog)\n    {\n    }\n\n    LogStreamConsumerBuffer(LogStreamConsumerBuffer&& other)\n        : mOutput(other.mOutput)\n    {\n    }\n\n    ~LogStreamConsumerBuffer()\n    {\n        // std::streambuf::pbase() gives a pointer to the beginning of the buffered part of the output sequence\n        // std::streambuf::pptr() gives a pointer to the current position of the output sequence\n        // if the pointer to the beginning is not equal to the pointer to the current position,\n        // call putOutput() to log the output to the stream\n        if (pbase() != pptr())\n        {\n            putOutput();\n        }\n    }\n\n    // synchronizes the stream buffer and returns 0 on success\n    // synchronizing the stream buffer consists of inserting the buffer contents into the stream,\n    // 
resetting the buffer and flushing the stream\n    virtual int sync()\n    {\n        putOutput();\n        return 0;\n    }\n\n    void putOutput()\n    {\n        if (mShouldLog)\n        {\n            // prepend timestamp\n            std::time_t timestamp = std::time(nullptr);\n            tm* tm_local = std::localtime(&timestamp);\n            std::cout << \"[\";\n            std::cout << std::setw(2) << std::setfill('0') << 1 + tm_local->tm_mon << \"/\";\n            std::cout << std::setw(2) << std::setfill('0') << tm_local->tm_mday << \"/\";\n            std::cout << std::setw(4) << std::setfill('0') << 1900 + tm_local->tm_year << \"-\";\n            std::cout << std::setw(2) << std::setfill('0') << tm_local->tm_hour << \":\";\n            std::cout << std::setw(2) << std::setfill('0') << tm_local->tm_min << \":\";\n            std::cout << std::setw(2) << std::setfill('0') << tm_local->tm_sec << \"] \";\n            // std::stringbuf::str() gets the string contents of the buffer\n            // insert the buffer contents pre-appended by the appropriate prefix into the stream\n            mOutput << mPrefix << str();\n            // set the buffer to empty\n            str(\"\");\n            // flush the stream\n            mOutput.flush();\n        }\n    }\n\n    void setShouldLog(bool shouldLog)\n    {\n        mShouldLog = shouldLog;\n    }\n\nprivate:\n    std::ostream& mOutput;\n    std::string mPrefix;\n    bool mShouldLog;\n};\n\n//!\n//! \\class LogStreamConsumerBase\n//! \\brief Convenience object used to initialize LogStreamConsumerBuffer before std::ostream in LogStreamConsumer\n//!\nclass LogStreamConsumerBase\n{\npublic:\n    LogStreamConsumerBase(std::ostream& stream, const std::string& prefix, bool shouldLog)\n        : mBuffer(stream, prefix, shouldLog)\n    {\n    }\n\nprotected:\n    LogStreamConsumerBuffer mBuffer;\n};\n\n//!\n//! \\class LogStreamConsumer\n//! 
\\brief Convenience object used to facilitate use of C++ stream syntax when logging messages.\n//!  Order of base classes is LogStreamConsumerBase and then std::ostream.\n//!  This is because the LogStreamConsumerBase class is used to initialize the LogStreamConsumerBuffer member field\n//!  in LogStreamConsumer and then the address of the buffer is passed to std::ostream.\n//!  This is necessary to prevent the address of an uninitialized buffer from being passed to std::ostream.\n//!  Please do not change the order of the parent classes.\n//!\nclass LogStreamConsumer : protected LogStreamConsumerBase, public std::ostream\n{\npublic:\n    //! \\brief Creates a LogStreamConsumer which logs messages with level severity.\n    //!  Reportable severity determines if the messages are severe enough to be logged.\n    LogStreamConsumer(Severity reportableSeverity, Severity severity)\n        : LogStreamConsumerBase(severityOstream(severity), severityPrefix(severity), severity <= reportableSeverity)\n        , std::ostream(&mBuffer) // links the stream buffer with the stream\n        , mShouldLog(severity <= reportableSeverity)\n        , mSeverity(severity)\n    {\n    }\n\n    LogStreamConsumer(LogStreamConsumer&& other)\n        : LogStreamConsumerBase(severityOstream(other.mSeverity), severityPrefix(other.mSeverity), other.mShouldLog)\n        , std::ostream(&mBuffer) // links the stream buffer with the stream\n        , mShouldLog(other.mShouldLog)\n        , mSeverity(other.mSeverity)\n    {\n    }\n\n    void setReportableSeverity(Severity reportableSeverity)\n    {\n        mShouldLog = mSeverity <= reportableSeverity;\n        mBuffer.setShouldLog(mShouldLog);\n    }\n\nprivate:\n    static std::ostream& severityOstream(Severity severity)\n    {\n        return severity >= Severity::kINFO ? 
std::cout : std::cerr;\n    }\n\n    static std::string severityPrefix(Severity severity)\n    {\n        switch (severity)\n        {\n        case Severity::kINTERNAL_ERROR: return \"[F] \";\n        case Severity::kERROR: return \"[E] \";\n        case Severity::kWARNING: return \"[W] \";\n        case Severity::kINFO: return \"[I] \";\n        case Severity::kVERBOSE: return \"[V] \";\n        default: assert(0); return \"\";\n        }\n    }\n\n    bool mShouldLog;\n    Severity mSeverity;\n};\n\n//! \\class Logger\n//!\n//! \\brief Class which manages logging of TensorRT tools and samples\n//!\n//! \\details This class provides a common interface for TensorRT tools and samples to log information to the console,\n//! and supports logging two types of messages:\n//!\n//! - Debugging messages with an associated severity (info, warning, error, or internal error/fatal)\n//! - Test pass/fail messages\n//!\n//! The advantage of having all samples use this class for logging as opposed to emitting directly to stdout/stderr is\n//! that the logic for controlling the verbosity and formatting of sample output is centralized in one location.\n//!\n//! In the future, this class could be extended to support dumping test results to a file in some standard format\n//! (for example, JUnit XML), and providing additional metadata (e.g. timing the duration of a test run).\n//!\n//! TODO: For backwards compatibility with existing samples, this class inherits directly from the nvinfer1::ILogger\n//! interface, which is problematic since there isn't a clean separation between messages coming from the TensorRT\n//! library and messages coming from the sample.\n//!\n//! In the future (once all samples are updated to use Logger::getTRTLogger() to access the ILogger) we can refactor the\n//! class to eliminate the inheritance and instead make the nvinfer1::ILogger implementation a member of the Logger\n//! 
object.\n\nclass Logger : public nvinfer1::ILogger\n{\npublic:\n    Logger(Severity severity = Severity::kWARNING)\n        : mReportableSeverity(severity)\n    {\n    }\n\n    //!\n    //! \\enum TestResult\n    //! \\brief Represents the state of a given test\n    //!\n    enum class TestResult\n    {\n        kRUNNING, //!< The test is running\n        kPASSED,  //!< The test passed\n        kFAILED,  //!< The test failed\n        kWAIVED   //!< The test was waived\n    };\n\n    //!\n    //! \\brief Forward-compatible method for retrieving the nvinfer::ILogger associated with this Logger\n    //! \\return The nvinfer1::ILogger associated with this Logger\n    //!\n    //! TODO Once all samples are updated to use this method to register the logger with TensorRT,\n    //! we can eliminate the inheritance of Logger from ILogger\n    //!\n    nvinfer1::ILogger& getTRTLogger()\n    {\n        return *this;\n    }\n\n    //!\n    //! \\brief Implementation of the nvinfer1::ILogger::log() virtual method\n    //!\n    //! Note samples should not be calling this function directly; it will eventually go away once we eliminate the\n    //! inheritance from nvinfer1::ILogger\n    //!\n    void log(Severity severity, const char* msg) override\n    {\n        LogStreamConsumer(mReportableSeverity, severity) << \"[TRT] \" << std::string(msg) << std::endl;\n    }\n\n    //!\n    //! \\brief Method for controlling the verbosity of logging output\n    //!\n    //! \\param severity The logger will only emit messages that have severity of this level or higher.\n    //!\n    void setReportableSeverity(Severity severity)\n    {\n        mReportableSeverity = severity;\n    }\n\n    //!\n    //! \\brief Opaque handle that holds logging information for a particular test\n    //!\n    //! This object is an opaque handle to information used by the Logger to print test results.\n    //! The sample must call Logger::defineTest() in order to obtain a TestAtom that can be used\n    //! 
with Logger::reportTest{Start,End}().\n    //!\n    class TestAtom\n    {\n    public:\n        TestAtom(TestAtom&&) = default;\n\n    private:\n        friend class Logger;\n\n        TestAtom(bool started, const std::string& name, const std::string& cmdline)\n            : mStarted(started)\n            , mName(name)\n            , mCmdline(cmdline)\n        {\n        }\n\n        bool mStarted;\n        std::string mName;\n        std::string mCmdline;\n    };\n\n    //!\n    //! \\brief Define a test for logging\n    //!\n    //! \\param[in] name The name of the test.  This should be a string starting with\n    //!                  \"TensorRT\" and containing dot-separated strings containing\n    //!                  the characters [A-Za-z0-9_].\n    //!                  For example, \"TensorRT.sample_googlenet\"\n    //! \\param[in] cmdline The command line used to reproduce the test\n    //\n    //! \\return a TestAtom that can be used in Logger::reportTest{Start,End}().\n    //!\n    static TestAtom defineTest(const std::string& name, const std::string& cmdline)\n    {\n        return TestAtom(false, name, cmdline);\n    }\n\n    //!\n    //! \\brief A convenience overloaded version of defineTest() that accepts an array of command-line arguments\n    //!        as input\n    //!\n    //! \\param[in] name The name of the test\n    //! \\param[in] argc The number of command-line arguments\n    //! \\param[in] argv The array of command-line arguments (given as C strings)\n    //!\n    //! \\return a TestAtom that can be used in Logger::reportTest{Start,End}().\n    static TestAtom defineTest(const std::string& name, int argc, char const* const* argv)\n    {\n        auto cmdline = genCmdlineString(argc, argv);\n        return defineTest(name, cmdline);\n    }\n\n    //!\n    //! \\brief Report that a test has started.\n    //!\n    //! \\pre reportTestStart() has not been called yet for the given testAtom\n    //!\n    //! 
\\param[in] testAtom The handle to the test that has started\n    //!\n    static void reportTestStart(TestAtom& testAtom)\n    {\n        reportTestResult(testAtom, TestResult::kRUNNING);\n        assert(!testAtom.mStarted);\n        testAtom.mStarted = true;\n    }\n\n    //!\n    //! \\brief Report that a test has ended.\n    //!\n    //! \\pre reportTestStart() has been called for the given testAtom\n    //!\n    //! \\param[in] testAtom The handle to the test that has ended\n    //! \\param[in] result The result of the test. Should be one of TestResult::kPASSED,\n    //!                   TestResult::kFAILED, TestResult::kWAIVED\n    //!\n    static void reportTestEnd(const TestAtom& testAtom, TestResult result)\n    {\n        assert(result != TestResult::kRUNNING);\n        assert(testAtom.mStarted);\n        reportTestResult(testAtom, result);\n    }\n\n    static int reportPass(const TestAtom& testAtom)\n    {\n        reportTestEnd(testAtom, TestResult::kPASSED);\n        return EXIT_SUCCESS;\n    }\n\n    static int reportFail(const TestAtom& testAtom)\n    {\n        reportTestEnd(testAtom, TestResult::kFAILED);\n        return EXIT_FAILURE;\n    }\n\n    static int reportWaive(const TestAtom& testAtom)\n    {\n        reportTestEnd(testAtom, TestResult::kWAIVED);\n        return EXIT_SUCCESS;\n    }\n\n    static int reportTest(const TestAtom& testAtom, bool pass)\n    {\n        return pass ? reportPass(testAtom) : reportFail(testAtom);\n    }\n\n    Severity getReportableSeverity() const\n    {\n        return mReportableSeverity;\n    }\n\nprivate:\n    //!\n    //! 
\\brief returns an appropriate string for prefixing a log message with the given severity\n    //!\n    static const char* severityPrefix(Severity severity)\n    {\n        switch (severity)\n        {\n        case Severity::kINTERNAL_ERROR: return \"[F] \";\n        case Severity::kERROR: return \"[E] \";\n        case Severity::kWARNING: return \"[W] \";\n        case Severity::kINFO: return \"[I] \";\n        case Severity::kVERBOSE: return \"[V] \";\n        default: assert(0); return \"\";\n        }\n    }\n\n    //!\n    //! \\brief returns an appropriate string for prefixing a test result message with the given result\n    //!\n    static const char* testResultString(TestResult result)\n    {\n        switch (result)\n        {\n        case TestResult::kRUNNING: return \"RUNNING\";\n        case TestResult::kPASSED: return \"PASSED\";\n        case TestResult::kFAILED: return \"FAILED\";\n        case TestResult::kWAIVED: return \"WAIVED\";\n        default: assert(0); return \"\";\n        }\n    }\n\n    //!\n    //! \\brief returns an appropriate output stream (cout or cerr) to use with the given severity\n    //!\n    static std::ostream& severityOstream(Severity severity)\n    {\n        return severity >= Severity::kINFO ? std::cout : std::cerr;\n    }\n\n    //!\n    //! \\brief method that implements logging test results\n    //!\n    static void reportTestResult(const TestAtom& testAtom, TestResult result)\n    {\n        severityOstream(Severity::kINFO) << \"&&&& \" << testResultString(result) << \" \" << testAtom.mName << \" # \"\n                                         << testAtom.mCmdline << std::endl;\n    }\n\n    //!\n    //! 
\\brief generate a command line string from the given (argc, argv) values\n    //!\n    static std::string genCmdlineString(int argc, char const* const* argv)\n    {\n        std::stringstream ss;\n        for (int i = 0; i < argc; i++)\n        {\n            if (i > 0)\n                ss << \" \";\n            ss << argv[i];\n        }\n        return ss.str();\n    }\n\n    Severity mReportableSeverity;\n};\n\nnamespace\n{\n\n//!\n//! \\brief produces a LogStreamConsumer object that can be used to log messages of severity kVERBOSE\n//!\n//! Example usage:\n//!\n//!     LOG_VERBOSE(logger) << \"hello world\" << std::endl;\n//!\ninline LogStreamConsumer LOG_VERBOSE(const Logger& logger)\n{\n    return LogStreamConsumer(logger.getReportableSeverity(), Severity::kVERBOSE);\n}\n\n//!\n//! \\brief produces a LogStreamConsumer object that can be used to log messages of severity kINFO\n//!\n//! Example usage:\n//!\n//!     LOG_INFO(logger) << \"hello world\" << std::endl;\n//!\ninline LogStreamConsumer LOG_INFO(const Logger& logger)\n{\n    return LogStreamConsumer(logger.getReportableSeverity(), Severity::kINFO);\n}\n\n//!\n//! \\brief produces a LogStreamConsumer object that can be used to log messages of severity kWARNING\n//!\n//! Example usage:\n//!\n//!     LOG_WARN(logger) << \"hello world\" << std::endl;\n//!\ninline LogStreamConsumer LOG_WARN(const Logger& logger)\n{\n    return LogStreamConsumer(logger.getReportableSeverity(), Severity::kWARNING);\n}\n\n//!\n//! \\brief produces a LogStreamConsumer object that can be used to log messages of severity kERROR\n//!\n//! Example usage:\n//!\n//!     LOG_ERROR(logger) << \"hello world\" << std::endl;\n//!\ninline LogStreamConsumer LOG_ERROR(const Logger& logger)\n{\n    return LogStreamConsumer(logger.getReportableSeverity(), Severity::kERROR);\n}\n\n//!\n//! \\brief produces a LogStreamConsumer object that can be used to log messages of severity kINTERNAL_ERROR\n//         (\"fatal\" severity)\n//!\n//! 
Example usage:\n//!\n//!     LOG_FATAL(logger) << \"hello world\" << std::endl;\n//!\ninline LogStreamConsumer LOG_FATAL(const Logger& logger)\n{\n    return LogStreamConsumer(logger.getReportableSeverity(), Severity::kINTERNAL_ERROR);\n}\n\n} // anonymous namespace\n\n#endif // TENSORRT_LOGGING_H\n"
  },
  {
    "path": "detector/YOLOX/demo/TensorRT/cpp/yolox.cpp",
    "content": "#include <fstream>\n#include <iostream>\n#include <sstream>\n#include <numeric>\n#include <chrono>\n#include <vector>\n#include <opencv2/opencv.hpp>\n#include <dirent.h>\n#include \"NvInfer.h\"\n#include \"cuda_runtime_api.h\"\n#include \"logging.h\"\n\n#define CHECK(status) \\\n    do\\\n    {\\\n        auto ret = (status);\\\n        if (ret != 0)\\\n        {\\\n            std::cerr << \"Cuda failure: \" << ret << std::endl;\\\n            abort();\\\n        }\\\n    } while (0)\n\n#define DEVICE 0  // GPU id\n#define NMS_THRESH 0.65\n#define BBOX_CONF_THRESH 0.3\n\nusing namespace nvinfer1;\n\n// stuff we know about the network and the input/output blobs\nstatic const int INPUT_W = 640;\nstatic const int INPUT_H = 640;\nconst char* INPUT_BLOB_NAME = \"input_0\";\nconst char* OUTPUT_BLOB_NAME = \"output_0\";\nstatic Logger gLogger;\n\ncv::Mat static_resize(cv::Mat& img) {\n    float r = std::min(INPUT_W / (img.cols*1.0), INPUT_H / (img.rows*1.0));\n    // r = std::min(r, 1.0f);\n    int unpad_w = r * img.cols;\n    int unpad_h = r * img.rows;\n    cv::Mat re(unpad_h, unpad_w, CV_8UC3);\n    cv::resize(img, re, re.size());\n    cv::Mat out(INPUT_W, INPUT_H, CV_8UC3, cv::Scalar(114, 114, 114));\n    re.copyTo(out(cv::Rect(0, 0, re.cols, re.rows)));\n    return out;\n}\n\nstruct Object\n{\n    cv::Rect_<float> rect;\n    int label;\n    float prob;\n};\n\nstruct GridAndStride\n{\n    int grid0;\n    int grid1;\n    int stride;\n};\n\nstatic void generate_grids_and_stride(const int target_size, std::vector<int>& strides, std::vector<GridAndStride>& grid_strides)\n{\n    for (auto stride : strides)\n    {\n        int num_grid = target_size / stride;\n        for (int g1 = 0; g1 < num_grid; g1++)\n        {\n            for (int g0 = 0; g0 < num_grid; g0++)\n            {\n                grid_strides.push_back((GridAndStride){g0, g1, stride});\n            }\n        }\n    }\n}\n\nstatic inline float intersection_area(const Object& a, const 
Object& b)\n{\n    cv::Rect_<float> inter = a.rect & b.rect;\n    return inter.area();\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)\n{\n    int i = left;\n    int j = right;\n    float p = faceobjects[(left + right) / 2].prob;\n\n    while (i <= j)\n    {\n        while (faceobjects[i].prob > p)\n            i++;\n\n        while (faceobjects[j].prob < p)\n            j--;\n\n        if (i <= j)\n        {\n            // swap\n            std::swap(faceobjects[i], faceobjects[j]);\n\n            i++;\n            j--;\n        }\n    }\n\n    #pragma omp parallel sections\n    {\n        #pragma omp section\n        {\n            if (left < j) qsort_descent_inplace(faceobjects, left, j);\n        }\n        #pragma omp section\n        {\n            if (i < right) qsort_descent_inplace(faceobjects, i, right);\n        }\n    }\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& objects)\n{\n    if (objects.empty())\n        return;\n\n    qsort_descent_inplace(objects, 0, objects.size() - 1);\n}\n\nstatic void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)\n{\n    picked.clear();\n\n    const int n = faceobjects.size();\n\n    std::vector<float> areas(n);\n    for (int i = 0; i < n; i++)\n    {\n        areas[i] = faceobjects[i].rect.area();\n    }\n\n    for (int i = 0; i < n; i++)\n    {\n        const Object& a = faceobjects[i];\n\n        int keep = 1;\n        for (int j = 0; j < (int)picked.size(); j++)\n        {\n            const Object& b = faceobjects[picked[j]];\n\n            // intersection over union\n            float inter_area = intersection_area(a, b);\n            float union_area = areas[i] + areas[picked[j]] - inter_area;\n            // float IoU = inter_area / union_area\n            if (inter_area / union_area > nms_threshold)\n                keep = 0;\n        }\n\n        if (keep)\n            picked.push_back(i);\n    
}\n}\n\n\nstatic void generate_yolox_proposals(std::vector<GridAndStride> grid_strides, float* feat_blob, float prob_threshold, std::vector<Object>& objects)\n{\n    const int num_class = 80;\n\n    const int num_anchors = grid_strides.size();\n\n    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)\n    {\n        const int grid0 = grid_strides[anchor_idx].grid0;\n        const int grid1 = grid_strides[anchor_idx].grid1;\n        const int stride = grid_strides[anchor_idx].stride;\n\n        const int basic_pos = anchor_idx * 85;\n\n        // yolox/models/yolo_head.py decode logic\n        float x_center = (feat_blob[basic_pos+0] + grid0) * stride;\n        float y_center = (feat_blob[basic_pos+1] + grid1) * stride;\n        float w = exp(feat_blob[basic_pos+2]) * stride;\n        float h = exp(feat_blob[basic_pos+3]) * stride;\n        float x0 = x_center - w * 0.5f;\n        float y0 = y_center - h * 0.5f;\n\n        float box_objectness = feat_blob[basic_pos+4];\n        for (int class_idx = 0; class_idx < num_class; class_idx++)\n        {\n            float box_cls_score = feat_blob[basic_pos + 5 + class_idx];\n            float box_prob = box_objectness * box_cls_score;\n            if (box_prob > prob_threshold)\n            {\n                Object obj;\n                obj.rect.x = x0;\n                obj.rect.y = y0;\n                obj.rect.width = w;\n                obj.rect.height = h;\n                obj.label = class_idx;\n                obj.prob = box_prob;\n\n                objects.push_back(obj);\n            }\n\n        } // class loop\n\n    } // point anchor loop\n}\n\nfloat* blobFromImage(cv::Mat& img){\n    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);\n\n    float* blob = new float[img.total()*3];\n    int channels = 3;\n    int img_h = 640;\n    int img_w = 640;\n    std::vector<float> mean = {0.485, 0.456, 0.406};\n    std::vector<float> std = {0.229, 0.224, 0.225};\n    for (size_t c = 0; c < channels; c++) \n    {\n 
       for (size_t  h = 0; h < img_h; h++) \n        {\n            for (size_t w = 0; w < img_w; w++) \n            {\n                blob[c * img_w * img_h + h * img_w + w] =\n                    (((float)img.at<cv::Vec3b>(h, w)[c]) / 255.0f - mean[c]) / std[c];\n            }\n        }\n    }\n    return blob;\n}\n\n\nstatic void decode_outputs(float* prob, std::vector<Object>& objects, float scale, const int img_w, const int img_h) {\n        std::vector<Object> proposals;\n        std::vector<int> strides = {8, 16, 32};\n        std::vector<GridAndStride> grid_strides;\n        generate_grids_and_stride(INPUT_W, strides, grid_strides);\n        generate_yolox_proposals(grid_strides, prob,  BBOX_CONF_THRESH, proposals);\n        std::cout << \"num of boxes before nms: \" << proposals.size() << std::endl;\n\n        qsort_descent_inplace(proposals);\n\n        std::vector<int> picked;\n        nms_sorted_bboxes(proposals, picked, NMS_THRESH);\n\n\n        int count = picked.size();\n\n        std::cout << \"num of boxes: \" << count << std::endl;\n\n        objects.resize(count);\n        for (int i = 0; i < count; i++)\n        {\n            objects[i] = proposals[picked[i]];\n\n            // adjust offset to original unpadded\n            float x0 = (objects[i].rect.x) / scale;\n            float y0 = (objects[i].rect.y) / scale;\n            float x1 = (objects[i].rect.x + objects[i].rect.width) / scale;\n            float y1 = (objects[i].rect.y + objects[i].rect.height) / scale;\n\n            // clip\n            x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f);\n            y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f);\n            x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f);\n            y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f);\n\n            objects[i].rect.x = x0;\n            objects[i].rect.y = y0;\n            objects[i].rect.width = x1 - x0;\n            objects[i].rect.height = y1 - y0;\n        }\n}\n\nconst 
float color_list[80][3] =\n{\n    {0.000, 0.447, 0.741},\n    {0.850, 0.325, 0.098},\n    {0.929, 0.694, 0.125},\n    {0.494, 0.184, 0.556},\n    {0.466, 0.674, 0.188},\n    {0.301, 0.745, 0.933},\n    {0.635, 0.078, 0.184},\n    {0.300, 0.300, 0.300},\n    {0.600, 0.600, 0.600},\n    {1.000, 0.000, 0.000},\n    {1.000, 0.500, 0.000},\n    {0.749, 0.749, 0.000},\n    {0.000, 1.000, 0.000},\n    {0.000, 0.000, 1.000},\n    {0.667, 0.000, 1.000},\n    {0.333, 0.333, 0.000},\n    {0.333, 0.667, 0.000},\n    {0.333, 1.000, 0.000},\n    {0.667, 0.333, 0.000},\n    {0.667, 0.667, 0.000},\n    {0.667, 1.000, 0.000},\n    {1.000, 0.333, 0.000},\n    {1.000, 0.667, 0.000},\n    {1.000, 1.000, 0.000},\n    {0.000, 0.333, 0.500},\n    {0.000, 0.667, 0.500},\n    {0.000, 1.000, 0.500},\n    {0.333, 0.000, 0.500},\n    {0.333, 0.333, 0.500},\n    {0.333, 0.667, 0.500},\n    {0.333, 1.000, 0.500},\n    {0.667, 0.000, 0.500},\n    {0.667, 0.333, 0.500},\n    {0.667, 0.667, 0.500},\n    {0.667, 1.000, 0.500},\n    {1.000, 0.000, 0.500},\n    {1.000, 0.333, 0.500},\n    {1.000, 0.667, 0.500},\n    {1.000, 1.000, 0.500},\n    {0.000, 0.333, 1.000},\n    {0.000, 0.667, 1.000},\n    {0.000, 1.000, 1.000},\n    {0.333, 0.000, 1.000},\n    {0.333, 0.333, 1.000},\n    {0.333, 0.667, 1.000},\n    {0.333, 1.000, 1.000},\n    {0.667, 0.000, 1.000},\n    {0.667, 0.333, 1.000},\n    {0.667, 0.667, 1.000},\n    {0.667, 1.000, 1.000},\n    {1.000, 0.000, 1.000},\n    {1.000, 0.333, 1.000},\n    {1.000, 0.667, 1.000},\n    {0.333, 0.000, 0.000},\n    {0.500, 0.000, 0.000},\n    {0.667, 0.000, 0.000},\n    {0.833, 0.000, 0.000},\n    {1.000, 0.000, 0.000},\n    {0.000, 0.167, 0.000},\n    {0.000, 0.333, 0.000},\n    {0.000, 0.500, 0.000},\n    {0.000, 0.667, 0.000},\n    {0.000, 0.833, 0.000},\n    {0.000, 1.000, 0.000},\n    {0.000, 0.000, 0.167},\n    {0.000, 0.000, 0.333},\n    {0.000, 0.000, 0.500},\n    {0.000, 0.000, 0.667},\n    {0.000, 0.000, 0.833},\n    {0.000, 0.000, 1.000},\n    
{0.000, 0.000, 0.000},\n    {0.143, 0.143, 0.143},\n    {0.286, 0.286, 0.286},\n    {0.429, 0.429, 0.429},\n    {0.571, 0.571, 0.571},\n    {0.714, 0.714, 0.714},\n    {0.857, 0.857, 0.857},\n    {0.000, 0.447, 0.741},\n    {0.314, 0.717, 0.741},\n    {0.50, 0.5, 0}\n};\n\nstatic void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects, std::string f)\n{\n    static const char* class_names[] = {\n        \"person\", \"bicycle\", \"car\", \"motorcycle\", \"airplane\", \"bus\", \"train\", \"truck\", \"boat\", \"traffic light\",\n        \"fire hydrant\", \"stop sign\", \"parking meter\", \"bench\", \"bird\", \"cat\", \"dog\", \"horse\", \"sheep\", \"cow\",\n        \"elephant\", \"bear\", \"zebra\", \"giraffe\", \"backpack\", \"umbrella\", \"handbag\", \"tie\", \"suitcase\", \"frisbee\",\n        \"skis\", \"snowboard\", \"sports ball\", \"kite\", \"baseball bat\", \"baseball glove\", \"skateboard\", \"surfboard\",\n        \"tennis racket\", \"bottle\", \"wine glass\", \"cup\", \"fork\", \"knife\", \"spoon\", \"bowl\", \"banana\", \"apple\",\n        \"sandwich\", \"orange\", \"broccoli\", \"carrot\", \"hot dog\", \"pizza\", \"donut\", \"cake\", \"chair\", \"couch\",\n        \"potted plant\", \"bed\", \"dining table\", \"toilet\", \"tv\", \"laptop\", \"mouse\", \"remote\", \"keyboard\", \"cell phone\",\n        \"microwave\", \"oven\", \"toaster\", \"sink\", \"refrigerator\", \"book\", \"clock\", \"vase\", \"scissors\", \"teddy bear\",\n        \"hair drier\", \"toothbrush\"\n    };\n\n    cv::Mat image = bgr.clone();\n\n    for (size_t i = 0; i < objects.size(); i++)\n    {\n        const Object& obj = objects[i];\n\n        fprintf(stderr, \"%d = %.5f at %.2f %.2f %.2f x %.2f\\n\", obj.label, obj.prob,\n                obj.rect.x, obj.rect.y, obj.rect.width, obj.rect.height);\n\n        cv::Scalar color = cv::Scalar(color_list[obj.label][0], color_list[obj.label][1], color_list[obj.label][2]);\n        float c_mean = cv::mean(color)[0];\n        
cv::Scalar txt_color;\n        if (c_mean > 0.5){\n            txt_color = cv::Scalar(0, 0, 0);\n        }else{\n            txt_color = cv::Scalar(255, 255, 255);\n        }\n\n        cv::rectangle(image, obj.rect, color * 255, 2);\n\n        char text[256];\n        sprintf(text, \"%s %.1f%%\", class_names[obj.label], obj.prob * 100);\n\n        int baseLine = 0;\n        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.4, 1, &baseLine);\n\n        cv::Scalar txt_bk_color = color * 0.7 * 255;\n\n        int x = obj.rect.x;\n        int y = obj.rect.y + 1;\n        //int y = obj.rect.y - label_size.height - baseLine;\n        if (y > image.rows)\n            y = image.rows;\n        //if (x + label_size.width > image.cols)\n            //x = image.cols - label_size.width;\n\n        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),\n                      txt_bk_color, -1);\n\n        cv::putText(image, text, cv::Point(x, y + label_size.height),\n                    cv::FONT_HERSHEY_SIMPLEX, 0.4, txt_color, 1);\n    }\n\n    cv::imwrite(\"det_res.jpg\", image);\n    fprintf(stderr, \"save vis file\\n\");\n    /* cv::imshow(\"image\", image); */\n    /* cv::waitKey(0); */\n}\n\n\nvoid doInference(IExecutionContext& context, float* input, float* output, const int output_size, cv::Size input_shape) {\n    const ICudaEngine& engine = context.getEngine();\n\n    // Pointers to input and output device buffers to pass to engine.\n    // Engine requires exactly IEngine::getNbBindings() number of buffers.\n    assert(engine.getNbBindings() == 2);\n    void* buffers[2];\n\n    // In order to bind the buffers, we need to know the names of the input and output tensors.\n    // Note that indices are guaranteed to be less than IEngine::getNbBindings()\n    const int inputIndex = engine.getBindingIndex(INPUT_BLOB_NAME);\n\n    assert(engine.getBindingDataType(inputIndex) == 
nvinfer1::DataType::kFLOAT);\n    const int outputIndex = engine.getBindingIndex(OUTPUT_BLOB_NAME);\n    assert(engine.getBindingDataType(outputIndex) == nvinfer1::DataType::kFLOAT);\n    int mBatchSize = engine.getMaxBatchSize();\n\n    // Create GPU buffers on device\n    CHECK(cudaMalloc(&buffers[inputIndex], 3 * input_shape.height * input_shape.width * sizeof(float)));\n    CHECK(cudaMalloc(&buffers[outputIndex], output_size*sizeof(float)));\n\n    // Create stream\n    cudaStream_t stream;\n    CHECK(cudaStreamCreate(&stream));\n\n    // DMA input batch data to device, infer on the batch asynchronously, and DMA output back to host\n    CHECK(cudaMemcpyAsync(buffers[inputIndex], input, 3 * input_shape.height * input_shape.width * sizeof(float), cudaMemcpyHostToDevice, stream));\n    context.enqueue(1, buffers, stream, nullptr);\n    CHECK(cudaMemcpyAsync(output, buffers[outputIndex], output_size * sizeof(float), cudaMemcpyDeviceToHost, stream));\n    cudaStreamSynchronize(stream);\n\n    // Release stream and buffers\n    cudaStreamDestroy(stream);\n    CHECK(cudaFree(buffers[inputIndex]));\n    CHECK(cudaFree(buffers[outputIndex]));\n}\n\nint main(int argc, char** argv) {\n    cudaSetDevice(DEVICE);\n    // create a model using the API directly and serialize it to a stream\n    char *trtModelStream{nullptr};\n    size_t size{0};\n\n    if (argc == 4 && std::string(argv[2]) == \"-i\") {\n        const std::string engine_file_path {argv[1]};\n        std::ifstream file(engine_file_path, std::ios::binary);\n        if (file.good()) {\n            file.seekg(0, file.end);\n            size = file.tellg();\n            file.seekg(0, file.beg);\n            trtModelStream = new char[size];\n            assert(trtModelStream);\n            file.read(trtModelStream, size);\n            file.close();\n        }\n    } else {\n        std::cerr << \"arguments not right!\" << std::endl;\n        std::cerr << \"run 'python3 yolox/deploy/trt.py -n yolox-{tiny, s, m, l, x}' 
to serialize model first!\" << std::endl;\n        std::cerr << \"Then use the following command:\" << std::endl;\n        std::cerr << \"./yolox ../model_trt.engine -i ../../../assets/dog.jpg  // deserialize file and run inference\" << std::endl;\n        return -1;\n    }\n    const std::string input_image_path {argv[3]};\n\n    //std::vector<std::string> file_names;\n    //if (read_files_in_dir(argv[2], file_names) < 0) {\n        //std::cout << \"read_files_in_dir failed.\" << std::endl;\n        //return -1;\n    //}\n\n    IRuntime* runtime = createInferRuntime(gLogger);\n    assert(runtime != nullptr);\n    ICudaEngine* engine = runtime->deserializeCudaEngine(trtModelStream, size);\n    assert(engine != nullptr); \n    IExecutionContext* context = engine->createExecutionContext();\n    assert(context != nullptr);\n    delete[] trtModelStream;\n    auto out_dims = engine->getBindingDimensions(1);\n    auto output_size = 1;\n    for(int j=0;j<out_dims.nbDims;j++) {\n        output_size *= out_dims.d[j];\n    }\n    static float* prob = new float[output_size];\n\n    cv::Mat img = cv::imread(input_image_path);\n    int img_w = img.cols;\n    int img_h = img.rows;\n    cv::Mat pr_img = static_resize(img);\n    std::cout << \"blob image\" << std::endl;\n\n    float* blob;\n    blob = blobFromImage(pr_img);\n    float scale = std::min(INPUT_W / (img.cols*1.0), INPUT_H / (img.rows*1.0));\n\n    // run inference\n    auto start = std::chrono::system_clock::now();\n    doInference(*context, blob, prob, output_size, pr_img.size());\n    auto end = std::chrono::system_clock::now();\n    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count() << \"ms\" << std::endl;\n\n    std::vector<Object> objects;\n    decode_outputs(prob, objects, scale, img_w, img_h);\n    draw_objects(img, objects, input_image_path);\n\n    // destroy the engine\n    context->destroy();\n    engine->destroy();\n    runtime->destroy();\n    return 0;\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/TensorRT/python/README.md",
    "content": "# YOLOX-TensorRT in Python\n\nThis toturial includes a Python demo for TensorRT.\n\n## Install TensorRT Toolkit\n\nPlease follow the [TensorRT Installation Guide](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) and [torch2trt gitrepo](https://github.com/NVIDIA-AI-IOT/torch2trt) to install TensorRT and torch2trt.\n\n## Convert model\n\nYOLOX models can be easily conveted to TensorRT models using torch2trt\n\n   If you want to convert our model, use the flag -n to specify a model name:\n   ```shell\n   python tools/trt.py -n <YOLOX_MODEL_NAME> -c <YOLOX_CHECKPOINT>\n   ```\n   For example:\n   ```shell\n   python tools/trt.py -n yolox-s -c your_ckpt.pth.tar\n   ```\n   <YOLOX_MODEL_NAME> can be: yolox-nano, yolox-tiny. yolox-s, yolox-m, yolox-l, yolox-x.\n\n   If you want to convert your customized model, use the flag -f to specify you exp file:\n   ```shell\n   python tools/trt.py -f <YOLOX_EXP_FILE> -c <YOLOX_CHECKPOINT>\n   ```\n   For example:\n   ```shell\n   python tools/trt.py -f /path/to/your/yolox/exps/yolox_s.py -c your_ckpt.pth.tar\n   ```\n   *yolox_s.py* can be any exp file modified by you.\n\nThe converted model and the serialized engine file (for C++ demo) will be saved on your experiment output dir.  \n\n## Demo\n\nThe TensorRT python demo is merged on our pytorch demo file, so you can run the pytorch demo command with ```--trt```.\n\n```shell\npython tools/demo.py image -n yolox-s --trt --save_result\n```\nor\n```shell\npython tools/demo.py image -f exps/default/yolox_s.py --trt --save_result\n```\n\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/README.md",
    "content": "# YOLOX-Android-ncnn\n\nAndoird app of YOLOX object detection base on [ncnn](https://github.com/Tencent/ncnn)\n\n\n## Tutorial\n\n### Step1\n\nDownload ncnn-android-vulkan.zip from [releases of ncnn](https://github.com/Tencent/ncnn/releases). This repo us\n[20210525 release](https://github.com/Tencent/ncnn/releases/download/20210525/ncnn-20210525-android-vulkan.zip) for building.\n\n### Step2\n\nAfter downloading, please extract your zip file. Then, there are two ways to finish this step:\n* put your extracted directory into app/src/main/jni\n* change the ncnn_DIR path in app/src/main/jni/CMakeLists.txt to your extracted directory.\n\n### Step3\nDownload example param and bin file from [onedrive](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ESXBH_GSSmFMszWJ6YG2VkQB5cWDfqVWXgk0D996jH0rpQ?e=qzEqUh) or [github](https://github.com/Megvii-BaseDetection/storage/releases/download/0.0.1/yolox_s_ncnn.tar.gz). Unzip the file to app/src/main/assets.\n\n### Step4\nOpen this project with Android Studio, build it and enjoy!\n\n## Reference\n\n* [ncnn-android-yolov5](https://github.com/nihui/ncnn-android-yolov5)\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/build.gradle",
    "content": "apply plugin: 'com.android.application'\n\nandroid {\n    compileSdkVersion 24\n    buildToolsVersion \"29.0.2\"\n\n    defaultConfig {\n        applicationId \"com.megvii.yoloXncnn\"\n        archivesBaseName = \"$applicationId\"\n\n        ndk {\n            moduleName \"ncnn\"\n            abiFilters \"armeabi-v7a\", \"arm64-v8a\"\n        }\n        minSdkVersion 24\n    }\n\n    externalNativeBuild {\n        cmake {\n            version \"3.10.2\"\n            path file('src/main/jni/CMakeLists.txt')\n        }\n    }\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/AndroidManifest.xml",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n      package=\"com.megvii.yoloXncnn\"\n      android:versionCode=\"1\"\n      android:versionName=\"1.1\">\n    <application android:label=\"@string/app_name\" >\n        <activity android:name=\"MainActivity\"\n                  android:label=\"@string/app_name\">\n            <intent-filter>\n                <action android:name=\"android.intent.action.MAIN\" />\n                <category android:name=\"android.intent.category.LAUNCHER\" />\n            </intent-filter>\n        </activity>\n    </application>\n</manifest> \n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/assets/yolox.param",
    "content": "7767517\n220 250\nInput                    images                   0 1 images\nYoloV5Focus              focus                    1 1 images 503\nConvolution              Conv_41                  1 1 503 877 0=32 1=3 4=1 5=1 6=3456\nSwish                    Mul_43                   1 1 877 507\nConvolution              Conv_44                  1 1 507 880 0=64 1=3 3=2 4=1 5=1 6=18432\nSwish                    Mul_46                   1 1 880 511\nSplit                    splitncnn_0              1 2 511 511_splitncnn_0 511_splitncnn_1\nConvolution              Conv_47                  1 1 511_splitncnn_1 883 0=32 1=1 5=1 6=2048\nSwish                    Mul_49                   1 1 883 515\nSplit                    splitncnn_1              1 2 515 515_splitncnn_0 515_splitncnn_1\nConvolution              Conv_50                  1 1 511_splitncnn_0 886 0=32 1=1 5=1 6=2048\nSwish                    Mul_52                   1 1 886 519\nConvolution              Conv_53                  1 1 515_splitncnn_1 889 0=32 1=1 5=1 6=1024\nSwish                    Mul_55                   1 1 889 523\nConvolution              Conv_56                  1 1 523 892 0=32 1=3 4=1 5=1 6=9216\nSwish                    Mul_58                   1 1 892 527\nBinaryOp                 Add_59                   2 1 527 515_splitncnn_0 528\nConcat                   Concat_60                2 1 528 519 529\nConvolution              Conv_61                  1 1 529 895 0=64 1=1 5=1 6=4096\nSwish                    Mul_63                   1 1 895 533\nConvolution              Conv_64                  1 1 533 898 0=128 1=3 3=2 4=1 5=1 6=73728\nSwish                    Mul_66                   1 1 898 537\nSplit                    splitncnn_2              1 2 537 537_splitncnn_0 537_splitncnn_1\nConvolution              Conv_67                  1 1 537_splitncnn_1 901 0=64 1=1 5=1 6=8192\nSwish                    Mul_69                   1 1 901 541\nSplit                    
splitncnn_3              1 2 541 541_splitncnn_0 541_splitncnn_1\nConvolution              Conv_70                  1 1 537_splitncnn_0 904 0=64 1=1 5=1 6=8192\nSwish                    Mul_72                   1 1 904 545\nConvolution              Conv_73                  1 1 541_splitncnn_1 907 0=64 1=1 5=1 6=4096\nSwish                    Mul_75                   1 1 907 549\nConvolution              Conv_76                  1 1 549 910 0=64 1=3 4=1 5=1 6=36864\nSwish                    Mul_78                   1 1 910 553\nBinaryOp                 Add_79                   2 1 553 541_splitncnn_0 554\nSplit                    splitncnn_4              1 2 554 554_splitncnn_0 554_splitncnn_1\nConvolution              Conv_80                  1 1 554_splitncnn_1 913 0=64 1=1 5=1 6=4096\nSwish                    Mul_82                   1 1 913 558\nConvolution              Conv_83                  1 1 558 916 0=64 1=3 4=1 5=1 6=36864\nSwish                    Mul_85                   1 1 916 562\nBinaryOp                 Add_86                   2 1 562 554_splitncnn_0 563\nSplit                    splitncnn_5              1 2 563 563_splitncnn_0 563_splitncnn_1\nConvolution              Conv_87                  1 1 563_splitncnn_1 919 0=64 1=1 5=1 6=4096\nSwish                    Mul_89                   1 1 919 567\nConvolution              Conv_90                  1 1 567 922 0=64 1=3 4=1 5=1 6=36864\nSwish                    Mul_92                   1 1 922 571\nBinaryOp                 Add_93                   2 1 571 563_splitncnn_0 572\nConcat                   Concat_94                2 1 572 545 573\nConvolution              Conv_95                  1 1 573 925 0=128 1=1 5=1 6=16384\nSwish                    Mul_97                   1 1 925 577\nSplit                    splitncnn_6              1 2 577 577_splitncnn_0 577_splitncnn_1\nConvolution              Conv_98                  1 1 577_splitncnn_1 928 0=256 1=3 3=2 4=1 5=1 6=294912\nSwish             
       Mul_100                  1 1 928 581\nSplit                    splitncnn_7              1 2 581 581_splitncnn_0 581_splitncnn_1\nConvolution              Conv_101                 1 1 581_splitncnn_1 931 0=128 1=1 5=1 6=32768\nSwish                    Mul_103                  1 1 931 585\nSplit                    splitncnn_8              1 2 585 585_splitncnn_0 585_splitncnn_1\nConvolution              Conv_104                 1 1 581_splitncnn_0 934 0=128 1=1 5=1 6=32768\nSwish                    Mul_106                  1 1 934 589\nConvolution              Conv_107                 1 1 585_splitncnn_1 937 0=128 1=1 5=1 6=16384\nSwish                    Mul_109                  1 1 937 593\nConvolution              Conv_110                 1 1 593 940 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_112                  1 1 940 597\nBinaryOp                 Add_113                  2 1 597 585_splitncnn_0 598\nSplit                    splitncnn_9              1 2 598 598_splitncnn_0 598_splitncnn_1\nConvolution              Conv_114                 1 1 598_splitncnn_1 943 0=128 1=1 5=1 6=16384\nSwish                    Mul_116                  1 1 943 602\nConvolution              Conv_117                 1 1 602 946 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_119                  1 1 946 606\nBinaryOp                 Add_120                  2 1 606 598_splitncnn_0 607\nSplit                    splitncnn_10             1 2 607 607_splitncnn_0 607_splitncnn_1\nConvolution              Conv_121                 1 1 607_splitncnn_1 949 0=128 1=1 5=1 6=16384\nSwish                    Mul_123                  1 1 949 611\nConvolution              Conv_124                 1 1 611 952 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_126                  1 1 952 615\nBinaryOp                 Add_127                  2 1 615 607_splitncnn_0 616\nConcat                   Concat_128               2 1 616 589 617\nConvolution              
Conv_129                 1 1 617 955 0=256 1=1 5=1 6=65536\nSwish                    Mul_131                  1 1 955 621\nSplit                    splitncnn_11             1 2 621 621_splitncnn_0 621_splitncnn_1\nConvolution              Conv_132                 1 1 621_splitncnn_1 958 0=512 1=3 3=2 4=1 5=1 6=1179648\nSwish                    Mul_134                  1 1 958 625\nConvolution              Conv_135                 1 1 625 961 0=256 1=1 5=1 6=131072\nSwish                    Mul_137                  1 1 961 629\nSplit                    splitncnn_12             1 4 629 629_splitncnn_0 629_splitncnn_1 629_splitncnn_2 629_splitncnn_3\nPooling                  MaxPool_138              1 1 629_splitncnn_3 630 1=5 3=2 5=1\nPooling                  MaxPool_139              1 1 629_splitncnn_2 631 1=9 3=4 5=1\nPooling                  MaxPool_140              1 1 629_splitncnn_1 632 1=13 3=6 5=1\nConcat                   Concat_141               4 1 629_splitncnn_0 630 631 632 633\nConvolution              Conv_142                 1 1 633 964 0=512 1=1 5=1 6=524288\nSwish                    Mul_144                  1 1 964 637\nSplit                    splitncnn_13             1 2 637 637_splitncnn_0 637_splitncnn_1\nConvolution              Conv_145                 1 1 637_splitncnn_1 967 0=256 1=1 5=1 6=131072\nSwish                    Mul_147                  1 1 967 641\nConvolution              Conv_148                 1 1 637_splitncnn_0 970 0=256 1=1 5=1 6=131072\nSwish                    Mul_150                  1 1 970 645\nConvolution              Conv_151                 1 1 641 973 0=256 1=1 5=1 6=65536\nSwish                    Mul_153                  1 1 973 649\nConvolution              Conv_154                 1 1 649 976 0=256 1=3 4=1 5=1 6=589824\nSwish                    Mul_156                  1 1 976 653\nConcat                   Concat_157               2 1 653 645 654\nConvolution              Conv_158                 1 1 654 979 
0=512 1=1 5=1 6=262144\nSwish                    Mul_160                  1 1 979 658\nConvolution              Conv_161                 1 1 658 982 0=256 1=1 5=1 6=131072\nSwish                    Mul_163                  1 1 982 662\nSplit                    splitncnn_14             1 2 662 662_splitncnn_0 662_splitncnn_1\nInterp                   Resize_165               1 1 662_splitncnn_1 667 0=1 1=2.000000e+00 2=2.000000e+00\nConcat                   Concat_166               2 1 667 621_splitncnn_0 668\nSplit                    splitncnn_15             1 2 668 668_splitncnn_0 668_splitncnn_1\nConvolution              Conv_167                 1 1 668_splitncnn_1 985 0=128 1=1 5=1 6=65536\nSwish                    Mul_169                  1 1 985 672\nConvolution              Conv_170                 1 1 668_splitncnn_0 988 0=128 1=1 5=1 6=65536\nSwish                    Mul_172                  1 1 988 676\nConvolution              Conv_173                 1 1 672 991 0=128 1=1 5=1 6=16384\nSwish                    Mul_175                  1 1 991 680\nConvolution              Conv_176                 1 1 680 994 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_178                  1 1 994 684\nConcat                   Concat_179               2 1 684 676 685\nConvolution              Conv_180                 1 1 685 997 0=256 1=1 5=1 6=65536\nSwish                    Mul_182                  1 1 997 689\nConvolution              Conv_183                 1 1 689 1000 0=128 1=1 5=1 6=32768\nSwish                    Mul_185                  1 1 1000 693\nSplit                    splitncnn_16             1 2 693 693_splitncnn_0 693_splitncnn_1\nInterp                   Resize_187               1 1 693_splitncnn_1 698 0=1 1=2.000000e+00 2=2.000000e+00\nConcat                   Concat_188               2 1 698 577_splitncnn_0 699\nSplit                    splitncnn_17             1 2 699 699_splitncnn_0 699_splitncnn_1\nConvolution              Conv_189     
            1 1 699_splitncnn_1 1003 0=64 1=1 5=1 6=16384\nSwish                    Mul_191                  1 1 1003 703\nConvolution              Conv_192                 1 1 699_splitncnn_0 1006 0=64 1=1 5=1 6=16384\nSwish                    Mul_194                  1 1 1006 707\nConvolution              Conv_195                 1 1 703 1009 0=64 1=1 5=1 6=4096\nSwish                    Mul_197                  1 1 1009 711\nConvolution              Conv_198                 1 1 711 1012 0=64 1=3 4=1 5=1 6=36864\nSwish                    Mul_200                  1 1 1012 715\nConcat                   Concat_201               2 1 715 707 716\nConvolution              Conv_202                 1 1 716 1015 0=128 1=1 5=1 6=16384\nSwish                    Mul_204                  1 1 1015 720\nSplit                    splitncnn_18             1 2 720 720_splitncnn_0 720_splitncnn_1\nConvolution              Conv_205                 1 1 720_splitncnn_1 1018 0=128 1=3 3=2 4=1 5=1 6=147456\nSwish                    Mul_207                  1 1 1018 724\nConcat                   Concat_208               2 1 724 693_splitncnn_0 725\nSplit                    splitncnn_19             1 2 725 725_splitncnn_0 725_splitncnn_1\nConvolution              Conv_209                 1 1 725_splitncnn_1 1021 0=128 1=1 5=1 6=32768\nSwish                    Mul_211                  1 1 1021 729\nConvolution              Conv_212                 1 1 725_splitncnn_0 1024 0=128 1=1 5=1 6=32768\nSwish                    Mul_214                  1 1 1024 733\nConvolution              Conv_215                 1 1 729 1027 0=128 1=1 5=1 6=16384\nSwish                    Mul_217                  1 1 1027 737\nConvolution              Conv_218                 1 1 737 1030 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_220                  1 1 1030 741\nConcat                   Concat_221               2 1 741 733 742\nConvolution              Conv_222                 1 1 742 1033 0=256 
1=1 5=1 6=65536\nSwish                    Mul_224                  1 1 1033 746\nSplit                    splitncnn_20             1 2 746 746_splitncnn_0 746_splitncnn_1\nConvolution              Conv_225                 1 1 746_splitncnn_1 1036 0=256 1=3 3=2 4=1 5=1 6=589824\nSwish                    Mul_227                  1 1 1036 750\nConcat                   Concat_228               2 1 750 662_splitncnn_0 751\nSplit                    splitncnn_21             1 2 751 751_splitncnn_0 751_splitncnn_1\nConvolution              Conv_229                 1 1 751_splitncnn_1 1039 0=256 1=1 5=1 6=131072\nSwish                    Mul_231                  1 1 1039 755\nConvolution              Conv_232                 1 1 751_splitncnn_0 1042 0=256 1=1 5=1 6=131072\nSwish                    Mul_234                  1 1 1042 759\nConvolution              Conv_235                 1 1 755 1045 0=256 1=1 5=1 6=65536\nSwish                    Mul_237                  1 1 1045 763\nConvolution              Conv_238                 1 1 763 1048 0=256 1=3 4=1 5=1 6=589824\nSwish                    Mul_240                  1 1 1048 767\nConcat                   Concat_241               2 1 767 759 768\nConvolution              Conv_242                 1 1 768 1051 0=512 1=1 5=1 6=262144\nSwish                    Mul_244                  1 1 1051 772\nConvolution              Conv_245                 1 1 720_splitncnn_0 1054 0=128 1=1 5=1 6=16384\nSwish                    Mul_247                  1 1 1054 776\nSplit                    splitncnn_22             1 2 776 776_splitncnn_0 776_splitncnn_1\nConvolution              Conv_248                 1 1 776_splitncnn_1 1057 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_250                  1 1 1057 780\nConvolution              Conv_251                 1 1 780 1060 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_253                  1 1 1060 784\nConvolution              Conv_254                 1 1 784 797 
0=80 1=1 5=1 6=10240 9=4\nConvolution              Conv_255                 1 1 776_splitncnn_0 1063 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_257                  1 1 1063 789\nConvolution              Conv_258                 1 1 789 1066 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_260                  1 1 1066 793\nSplit                    splitncnn_23             1 2 793 793_splitncnn_0 793_splitncnn_1\nConvolution              Conv_261                 1 1 793_splitncnn_1 794 0=4 1=1 5=1 6=512\nConvolution              Conv_262                 1 1 793_splitncnn_0 796 0=1 1=1 5=1 6=128 9=4\nConcat                   Concat_265               3 1 794 796 797 798\nConvolution              Conv_266                 1 1 746_splitncnn_0 1069 0=128 1=1 5=1 6=32768\nSwish                    Mul_268                  1 1 1069 802\nSplit                    splitncnn_24             1 2 802 802_splitncnn_0 802_splitncnn_1\nConvolution              Conv_269                 1 1 802_splitncnn_1 1072 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_271                  1 1 1072 806\nConvolution              Conv_272                 1 1 806 1075 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_274                  1 1 1075 810\nConvolution              Conv_275                 1 1 810 823 0=80 1=1 5=1 6=10240 9=4\nConvolution              Conv_276                 1 1 802_splitncnn_0 1078 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_278                  1 1 1078 815\nConvolution              Conv_279                 1 1 815 1081 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_281                  1 1 1081 819\nSplit                    splitncnn_25             1 2 819 819_splitncnn_0 819_splitncnn_1\nConvolution              Conv_282                 1 1 819_splitncnn_1 820 0=4 1=1 5=1 6=512\nConvolution              Conv_283                 1 1 819_splitncnn_0 822 0=1 1=1 5=1 6=128 9=4\nConcat                   
Concat_286               3 1 820 822 823 824\nConvolution              Conv_287                 1 1 772 1084 0=128 1=1 5=1 6=65536\nSwish                    Mul_289                  1 1 1084 828\nSplit                    splitncnn_26             1 2 828 828_splitncnn_0 828_splitncnn_1\nConvolution              Conv_290                 1 1 828_splitncnn_1 1087 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_292                  1 1 1087 832\nConvolution              Conv_293                 1 1 832 1090 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_295                  1 1 1090 836\nConvolution              Conv_296                 1 1 836 849 0=80 1=1 5=1 6=10240 9=4\nConvolution              Conv_297                 1 1 828_splitncnn_0 1093 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_299                  1 1 1093 841\nConvolution              Conv_300                 1 1 841 1096 0=128 1=3 4=1 5=1 6=147456\nSwish                    Mul_302                  1 1 1096 845\nSplit                    splitncnn_27             1 2 845 845_splitncnn_0 845_splitncnn_1\nConvolution              Conv_303                 1 1 845_splitncnn_1 846 0=4 1=1 5=1 6=512\nConvolution              Conv_304                 1 1 845_splitncnn_0 848 0=1 1=1 5=1 6=128 9=4\nConcat                   Concat_307               3 1 846 848 849 850\nReshape                  Reshape_315              1 1 798 858 0=-1 1=85\nReshape                  Reshape_323              1 1 824 866 0=-1 1=85\nReshape                  Reshape_331              1 1 850 874 0=-1 1=85\nConcat                   Concat_332               3 1 858 866 874 875 0=1\nPermute                  Transpose_333            1 1 875 output 0=1\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/java/com/megvii/yoloXncnn/MainActivity.java",
    "content": "// Some code in this file is based on:\n// https://github.com/nihui/ncnn-android-yolov5/blob/master/app/src/main/java/com/tencent/yolov5ncnn/MainActivity.java\n// Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.\n// Copyright (C) Megvii, Inc. and its affiliates. All rights reserved.\n\npackage com.megvii.yoloXncnn;\n\nimport android.app.Activity;\nimport android.content.Intent;\nimport android.graphics.Bitmap;\nimport android.graphics.BitmapFactory;\nimport android.graphics.Canvas;\nimport android.graphics.Color;\nimport android.graphics.Paint;\nimport android.media.ExifInterface;\nimport android.graphics.Matrix;\nimport android.net.Uri;\nimport android.os.Bundle;\nimport android.util.Log;\nimport android.view.View;\nimport android.widget.Button;\nimport android.widget.ImageView;\n\nimport java.io.FileNotFoundException;\nimport java.io.InputStream;\nimport java.io.IOException;\n\npublic class MainActivity extends Activity\n{\n    private static final int SELECT_IMAGE = 1;\n\n    private ImageView imageView;\n    private Bitmap bitmap = null;\n    private Bitmap yourSelectedImage = null;\n\n    private YOLOXncnn yoloX = new YOLOXncnn();\n\n    /** Called when the activity is first created. 
*/\n    @Override\n    public void onCreate(Bundle savedInstanceState)\n    {\n        super.onCreate(savedInstanceState);\n        setContentView(R.layout.main);\n\n        boolean ret_init = yoloX.Init(getAssets());\n        if (!ret_init)\n        {\n            Log.e(\"MainActivity\", \"yoloXncnn Init failed\");\n        }\n\n        imageView = (ImageView) findViewById(R.id.imageView);\n\n        Button buttonImage = (Button) findViewById(R.id.buttonImage);\n        buttonImage.setOnClickListener(new View.OnClickListener() {\n            @Override\n            public void onClick(View arg0) {\n                Intent i = new Intent(Intent.ACTION_PICK);\n                i.setType(\"image/*\");\n                startActivityForResult(i, SELECT_IMAGE);\n            }\n        });\n\n        Button buttonDetect = (Button) findViewById(R.id.buttonDetect);\n        buttonDetect.setOnClickListener(new View.OnClickListener() {\n            @Override\n            public void onClick(View arg0) {\n                if (yourSelectedImage == null)\n                    return;\n                YOLOXncnn.Obj[] objects = yoloX.Detect(yourSelectedImage, false);\n\n                showObjects(objects);\n            }\n        });\n\n        Button buttonDetectGPU = (Button) findViewById(R.id.buttonDetectGPU);\n        buttonDetectGPU.setOnClickListener(new View.OnClickListener() {\n            @Override\n            public void onClick(View arg0) {\n                if (yourSelectedImage == null)\n                    return;\n\n                YOLOXncnn.Obj[] objects = yoloX.Detect(yourSelectedImage, true);\n\n                showObjects(objects);\n            }\n        });\n    }\n\n    private void showObjects(YOLOXncnn.Obj[] objects)\n    {\n        if (objects == null)\n        {\n            imageView.setImageBitmap(bitmap);\n            return;\n        }\n\n        // draw objects on bitmap\n        Bitmap rgba = bitmap.copy(Bitmap.Config.ARGB_8888, true);\n\n        final 
int[] colors = new int[] {\n            Color.rgb( 54,  67, 244),\n            Color.rgb( 99,  30, 233),\n            Color.rgb(176,  39, 156),\n            Color.rgb(183,  58, 103),\n            Color.rgb(181,  81,  63),\n            Color.rgb(243, 150,  33),\n            Color.rgb(244, 169,   3),\n            Color.rgb(212, 188,   0),\n            Color.rgb(136, 150,   0),\n            Color.rgb( 80, 175,  76),\n            Color.rgb( 74, 195, 139),\n            Color.rgb( 57, 220, 205),\n            Color.rgb( 59, 235, 255),\n            Color.rgb(  7, 193, 255),\n            Color.rgb(  0, 152, 255),\n            Color.rgb( 34,  87, 255),\n            Color.rgb( 72,  85, 121),\n            Color.rgb(158, 158, 158),\n            Color.rgb(139, 125,  96)\n        };\n\n        Canvas canvas = new Canvas(rgba);\n\n        Paint paint = new Paint();\n        paint.setStyle(Paint.Style.STROKE);\n        paint.setStrokeWidth(4);\n\n        Paint textbgpaint = new Paint();\n        textbgpaint.setColor(Color.WHITE);\n        textbgpaint.setStyle(Paint.Style.FILL);\n\n        Paint textpaint = new Paint();\n        textpaint.setColor(Color.BLACK);\n        textpaint.setTextSize(26);\n        textpaint.setTextAlign(Paint.Align.LEFT);\n\n        for (int i = 0; i < objects.length; i++)\n        {\n            paint.setColor(colors[i % 19]);\n\n            canvas.drawRect(objects[i].x, objects[i].y, objects[i].x + objects[i].w, objects[i].y + objects[i].h, paint);\n\n            // draw filled text inside image\n            {\n                String text = objects[i].label + \" = \" + String.format(\"%.1f\", objects[i].prob * 100) + \"%\";\n\n                float text_width = textpaint.measureText(text);\n                float text_height = - textpaint.ascent() + textpaint.descent();\n\n                float x = objects[i].x;\n                float y = objects[i].y - text_height;\n                if (y < 0)\n                    y = 0;\n                if (x + text_width 
> rgba.getWidth())\n                    x = rgba.getWidth() - text_width;\n\n                canvas.drawRect(x, y, x + text_width, y + text_height, textbgpaint);\n\n                canvas.drawText(text, x, y - textpaint.ascent(), textpaint);\n            }\n        }\n\n        imageView.setImageBitmap(rgba);\n    }\n\n    @Override\n    protected void onActivityResult(int requestCode, int resultCode, Intent data)\n    {\n        super.onActivityResult(requestCode, resultCode, data);\n\n        if (resultCode == RESULT_OK && null != data) {\n            Uri selectedImage = data.getData();\n\n            try\n            {\n                if (requestCode == SELECT_IMAGE) {\n                    bitmap = decodeUri(selectedImage);\n\n                    yourSelectedImage = bitmap.copy(Bitmap.Config.ARGB_8888, true);\n\n                    imageView.setImageBitmap(bitmap);\n                }\n            }\n            catch (FileNotFoundException e)\n            {\n                Log.e(\"MainActivity\", \"FileNotFoundException\");\n                return;\n            }\n        }\n    }\n\n    private Bitmap decodeUri(Uri selectedImage) throws FileNotFoundException\n    {\n        // Decode image size\n        BitmapFactory.Options o = new BitmapFactory.Options();\n        o.inJustDecodeBounds = true;\n        BitmapFactory.decodeStream(getContentResolver().openInputStream(selectedImage), null, o);\n\n        // The new size we want to scale to\n        final int REQUIRED_SIZE = 640;\n\n        // Find the correct scale value. 
It should be the power of 2.\n        int width_tmp = o.outWidth, height_tmp = o.outHeight;\n        int scale = 1;\n        while (true) {\n            if (width_tmp / 2 < REQUIRED_SIZE || height_tmp / 2 < REQUIRED_SIZE) {\n                break;\n            }\n            width_tmp /= 2;\n            height_tmp /= 2;\n            scale *= 2;\n        }\n\n        // Decode with inSampleSize\n        BitmapFactory.Options o2 = new BitmapFactory.Options();\n        o2.inSampleSize = scale;\n        Bitmap bitmap = BitmapFactory.decodeStream(getContentResolver().openInputStream(selectedImage), null, o2);\n\n        // Rotate according to EXIF\n        int rotate = 0;\n        try\n        {\n            ExifInterface exif = new ExifInterface(getContentResolver().openInputStream(selectedImage));\n            int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);\n            switch (orientation) {\n                case ExifInterface.ORIENTATION_ROTATE_270:\n                    rotate = 270;\n                    break;\n                case ExifInterface.ORIENTATION_ROTATE_180:\n                    rotate = 180;\n                    break;\n                case ExifInterface.ORIENTATION_ROTATE_90:\n                    rotate = 90;\n                    break;\n            }\n        }\n        catch (IOException e)\n        {\n            Log.e(\"MainActivity\", \"ExifInterface IOException\");\n        }\n\n        Matrix matrix = new Matrix();\n        matrix.postRotate(rotate);\n        return Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);\n    }\n\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/java/com/megvii/yoloXncnn/YOLOXncnn.java",
    "content": "// Copyright (C) Megvii, Inc. and its affiliates. All rights reserved.\n\npackage com.megvii.yoloXncnn;\n\nimport android.content.res.AssetManager;\nimport android.graphics.Bitmap;\n\npublic class YOLOXncnn\n{\n    public native boolean Init(AssetManager mgr);\n\n    public class Obj\n    {\n        public float x;\n        public float y;\n        public float w;\n        public float h;\n        public String label;\n        public float prob;\n    }\n\n    public native Obj[] Detect(Bitmap bitmap, boolean use_gpu);\n\n    static {\n        System.loadLibrary(\"yoloXncnn\");\n    }\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/jni/CMakeLists.txt",
    "content": "project(yoloXncnn)\n\ncmake_minimum_required(VERSION 3.4.1)\n\nset(ncnn_DIR ${CMAKE_SOURCE_DIR}/ncnn-20210525-android-vulkan/${ANDROID_ABI}/lib/cmake/ncnn)\nfind_package(ncnn REQUIRED)\n\nadd_library(yoloXncnn SHARED yoloXncnn_jni.cpp)\n\ntarget_link_libraries(yoloXncnn\n    ncnn\n\n    jnigraphics\n)\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/jni/yoloXncnn_jni.cpp",
    "content": "// Some code in this file is based on:\n// https://github.com/nihui/ncnn-android-yolov5/blob/master/app/src/main/jni/yolov5ncnn_jni.cpp\n// Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.\n// Copyright (C) Megvii, Inc. and its affiliates. All rights reserved.\n\n#include <android/asset_manager_jni.h>\n#include <android/bitmap.h>\n#include <android/log.h>\n\n#include <jni.h>\n\n#include <string>\n#include <vector>\n\n// ncnn\n#include \"layer.h\"\n#include \"net.h\"\n#include \"benchmark.h\"\n\nstatic ncnn::UnlockedPoolAllocator g_blob_pool_allocator;\nstatic ncnn::PoolAllocator g_workspace_pool_allocator;\n\nstatic ncnn::Net yoloX;\n\nclass YoloV5Focus : public ncnn::Layer\n{\npublic:\n    YoloV5Focus()\n    {\n        one_blob_only = true;\n    }\n\n    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const\n    {\n        int w = bottom_blob.w;\n        int h = bottom_blob.h;\n        int channels = bottom_blob.c;\n\n        int outw = w / 2;\n        int outh = h / 2;\n        int outc = channels * 4;\n\n        top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator);\n        if (top_blob.empty())\n            return -100;\n\n        #pragma omp parallel for num_threads(opt.num_threads)\n        for (int p = 0; p < outc; p++)\n        {\n            const float* ptr = bottom_blob.channel(p % channels).row((p / channels) % 2) + ((p / channels) / 2);\n            float* outptr = top_blob.channel(p);\n\n            for (int i = 0; i < outh; i++)\n            {\n                for (int j = 0; j < outw; j++)\n                {\n                    *outptr = *ptr;\n\n                    outptr += 1;\n                    ptr += 2;\n                }\n\n                ptr += w;\n            }\n        }\n\n        return 0;\n    }\n};\n\nDEFINE_LAYER_CREATOR(YoloV5Focus)\n\nstruct Object\n{\n    float x;\n    float y;\n    float w;\n    float h;\n    int label;\n    float 
prob;\n};\n\nstruct GridAndStride\n{\n    int grid0;\n    int grid1;\n    int stride;\n};\n\nstatic inline float intersection_area(const Object& a, const Object& b)\n{\n    if (a.x > b.x + b.w || a.x + a.w < b.x || a.y > b.y + b.h || a.y + a.h < b.y)\n    {\n        // no intersection\n        return 0.f;\n    }\n\n    float inter_width = std::min(a.x + a.w, b.x + b.w) - std::max(a.x, b.x);\n    float inter_height = std::min(a.y + a.h, b.y + b.h) - std::max(a.y, b.y);\n\n    return inter_width * inter_height;\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)\n{\n    int i = left;\n    int j = right;\n    float p = faceobjects[(left + right) / 2].prob;\n\n    while (i <= j)\n    {\n        while (faceobjects[i].prob > p)\n            i++;\n\n        while (faceobjects[j].prob < p)\n            j--;\n\n        if (i <= j)\n        {\n            // swap\n            std::swap(faceobjects[i], faceobjects[j]);\n\n            i++;\n            j--;\n        }\n    }\n\n    #pragma omp parallel sections\n    {\n        #pragma omp section\n        {\n            if (left < j) qsort_descent_inplace(faceobjects, left, j);\n        }\n        #pragma omp section\n        {\n            if (i < right) qsort_descent_inplace(faceobjects, i, right);\n        }\n    }\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& faceobjects)\n{\n    if (faceobjects.empty())\n        return;\n\n    qsort_descent_inplace(faceobjects, 0, faceobjects.size() - 1);\n}\n\nstatic void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)\n{\n    picked.clear();\n\n    const int n = faceobjects.size();\n\n    std::vector<float> areas(n);\n    for (int i = 0; i < n; i++)\n    {\n        areas[i] = faceobjects[i].w * faceobjects[i].h;\n    }\n\n    for (int i = 0; i < n; i++)\n    {\n        const Object& a = faceobjects[i];\n\n        int keep = 1;\n        for (int j = 0; j < 
(int)picked.size(); j++)\n        {\n            const Object& b = faceobjects[picked[j]];\n\n            // intersection over union\n            float inter_area = intersection_area(a, b);\n            float union_area = areas[i] + areas[picked[j]] - inter_area;\n            // float IoU = inter_area / union_area\n            if (inter_area / union_area > nms_threshold)\n                keep = 0;\n        }\n\n        if (keep)\n            picked.push_back(i);\n    }\n}\n\nstatic void generate_grids_and_stride(const int target_size, std::vector<int>& strides, std::vector<GridAndStride>& grid_strides)\n{\n    for (auto stride : strides)\n    {\n        int num_grid = target_size / stride;\n        for (int g1 = 0; g1 < num_grid; g1++)\n        {\n            for (int g0 = 0; g0 < num_grid; g0++)\n            {\n                grid_strides.push_back((GridAndStride){g0, g1, stride});\n            }\n        }\n    }\n}\n\nstatic void generate_yolox_proposals(std::vector<GridAndStride> grid_strides, const ncnn::Mat& feat_blob, float prob_threshold, std::vector<Object>& objects)\n{\n    const int num_grid = feat_blob.h;\n    fprintf(stderr, \"output height: %d, width: %d, channels: %d, dims:%d\\n\", feat_blob.h, feat_blob.w, feat_blob.c, feat_blob.dims);\n\n    const int num_class = feat_blob.w - 5;\n\n    const int num_anchors = grid_strides.size();\n\n    const float* feat_ptr = feat_blob.channel(0);\n    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)\n    {\n        const int grid0 = grid_strides[anchor_idx].grid0;\n        const int grid1 = grid_strides[anchor_idx].grid1;\n        const int stride = grid_strides[anchor_idx].stride;\n\n        // yolox/models/yolo_head.py decode logic\n        //  outputs[..., :2] = (outputs[..., :2] + grids) * strides\n        //  outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides\n        float x_center = (feat_ptr[0] + grid0) * stride;\n        float y_center = (feat_ptr[1] + grid1) * stride;\n        
float w = exp(feat_ptr[2]) * stride;\n        float h = exp(feat_ptr[3]) * stride;\n        float x0 = x_center - w * 0.5f;\n        float y0 = y_center - h * 0.5f;\n\n        float box_objectness = feat_ptr[4];\n        for (int class_idx = 0; class_idx < num_class; class_idx++)\n        {\n            float box_cls_score = feat_ptr[5 + class_idx];\n            float box_prob = box_objectness * box_cls_score;\n            if (box_prob > prob_threshold)\n            {\n                Object obj;\n                obj.x = x0;\n                obj.y = y0;\n                obj.w = w;\n                obj.h = h;\n                obj.label = class_idx;\n                obj.prob = box_prob;\n\n                objects.push_back(obj);\n            }\n\n        } // class loop\n        feat_ptr += feat_blob.w;\n\n    } // point anchor loop\n}\n\n\nextern \"C\" {\n\n// FIXME DeleteGlobalRef is missing for objCls\nstatic jclass objCls = NULL;\nstatic jmethodID constructortorId;\nstatic jfieldID xId;\nstatic jfieldID yId;\nstatic jfieldID wId;\nstatic jfieldID hId;\nstatic jfieldID labelId;\nstatic jfieldID probId;\n\nJNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* reserved)\n{\n    __android_log_print(ANDROID_LOG_DEBUG, \"YOLOXncnn\", \"JNI_OnLoad\");\n\n    ncnn::create_gpu_instance();\n\n    return JNI_VERSION_1_4;\n}\n\nJNIEXPORT void JNI_OnUnload(JavaVM* vm, void* reserved)\n{\n    __android_log_print(ANDROID_LOG_DEBUG, \"YOLOXncnn\", \"JNI_OnUnload\");\n\n    ncnn::destroy_gpu_instance();\n}\n\n// public native boolean Init(AssetManager mgr);\nJNIEXPORT jboolean JNICALL Java_com_megvii_yoloXncnn_YOLOXncnn_Init(JNIEnv* env, jobject thiz, jobject assetManager)\n{\n    ncnn::Option opt;\n    opt.lightmode = true;\n    opt.num_threads = 4;\n    opt.blob_allocator = &g_blob_pool_allocator;\n    opt.workspace_allocator = &g_workspace_pool_allocator;\n    opt.use_packing_layout = true;\n\n    // use vulkan compute\n    if (ncnn::get_gpu_count() != 0)\n        opt.use_vulkan_compute 
= true;\n\n    AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);\n\n    yoloX.opt = opt;\n\n    yoloX.register_custom_layer(\"YoloV5Focus\", YoloV5Focus_layer_creator);\n\n    // init param\n    {\n        int ret = yoloX.load_param(mgr, \"yolox.param\");\n        if (ret != 0)\n        {\n            __android_log_print(ANDROID_LOG_DEBUG, \"YOLOXncnn\", \"load_param failed\");\n            return JNI_FALSE;\n        }\n    }\n\n    // init bin\n    {\n        int ret = yoloX.load_model(mgr, \"yolox.bin\");\n        if (ret != 0)\n        {\n            __android_log_print(ANDROID_LOG_DEBUG, \"YOLOXncnn\", \"load_model failed\");\n            return JNI_FALSE;\n        }\n    }\n\n    // init jni glue\n    jclass localObjCls = env->FindClass(\"com/megvii/yoloXncnn/YOLOXncnn$Obj\");\n    objCls = reinterpret_cast<jclass>(env->NewGlobalRef(localObjCls));\n\n    constructortorId = env->GetMethodID(objCls, \"<init>\", \"(Lcom/megvii/yoloXncnn/YOLOXncnn;)V\");\n\n    xId = env->GetFieldID(objCls, \"x\", \"F\");\n    yId = env->GetFieldID(objCls, \"y\", \"F\");\n    wId = env->GetFieldID(objCls, \"w\", \"F\");\n    hId = env->GetFieldID(objCls, \"h\", \"F\");\n    labelId = env->GetFieldID(objCls, \"label\", \"Ljava/lang/String;\");\n    probId = env->GetFieldID(objCls, \"prob\", \"F\");\n\n    return JNI_TRUE;\n}\n\n// public native Obj[] Detect(Bitmap bitmap, boolean use_gpu);\nJNIEXPORT jobjectArray JNICALL Java_com_megvii_yoloXncnn_YOLOXncnn_Detect(JNIEnv* env, jobject thiz, jobject bitmap, jboolean use_gpu)\n{\n    if (use_gpu == JNI_TRUE && ncnn::get_gpu_count() == 0)\n    {\n        return NULL;\n        //return env->NewStringUTF(\"no vulkan capable gpu\");\n    }\n\n    double start_time = ncnn::get_current_time();\n\n    AndroidBitmapInfo info;\n    AndroidBitmap_getInfo(env, bitmap, &info);\n    const int width = info.width;\n    const int height = info.height;\n    if (info.format != ANDROID_BITMAP_FORMAT_RGBA_8888)\n        return NULL;\n\n    
// parameters which might change for different models\n    const int target_size = 640;\n    const float prob_threshold = 0.3f;\n    const float nms_threshold = 0.65f;\n    std::vector<int> strides = {8, 16, 32}; // might have stride=64\n    // the python side feeds a 0-1 input tensor with rgb_means = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225)\n    // so for a 0-255 input image, the rgb mean should be multiplied by 255 and the norm divided by std.\n    const float mean_vals[3] = {255.f * 0.485f, 255.f * 0.456f, 255.f * 0.406f};\n    const float norm_vals[3] = {1 / (255.f * 0.229f), 1 / (255.f * 0.224f), 1 / (255.f * 0.225f)};\n\n    int w = width;\n    int h = height;\n    float scale = 1.f;\n    if (w > h)\n    {\n        scale = (float)target_size / w;\n        w = target_size;\n        h = h * scale;\n    }\n    else\n    {\n        scale = (float)target_size / h;\n        h = target_size;\n        w = w * scale;\n    }\n\n    ncnn::Mat in = ncnn::Mat::from_android_bitmap_resize(env, bitmap, ncnn::Mat::PIXEL_RGB, w, h);\n\n    // pad to target_size rectangle\n    int wpad = target_size - w;\n    int hpad = target_size - h;\n    ncnn::Mat in_pad;\n    // different from yolov5, yolox only pads on the bottom and right side,\n    // which means users don't need extra padding info to decode box coordinates.\n    ncnn::copy_make_border(in, in_pad, 0, hpad, 0, wpad, ncnn::BORDER_CONSTANT, 114.f);\n\n    // yolox\n    std::vector<Object> objects;\n    {\n\n        in_pad.substract_mean_normalize(mean_vals, norm_vals);\n\n        ncnn::Extractor ex = yoloX.create_extractor();\n\n        ex.set_vulkan_compute(use_gpu);\n\n        ex.input(\"images\", in_pad);\n\n        std::vector<Object> proposals;\n\n        // yolox decode and generate proposal logic\n        {\n            ncnn::Mat out;\n            ex.extract(\"output\", out);\n\n            std::vector<GridAndStride> grid_strides;\n            generate_grids_and_stride(target_size, strides, grid_strides);\n            
generate_yolox_proposals(grid_strides, out, prob_threshold, proposals);\n\n        }\n\n        // sort all proposals by score from highest to lowest\n        qsort_descent_inplace(proposals);\n\n        // apply nms with nms_threshold\n        std::vector<int> picked;\n        nms_sorted_bboxes(proposals, picked, nms_threshold);\n\n        int count = picked.size();\n\n        objects.resize(count);\n        for (int i = 0; i < count; i++)\n        {\n            objects[i] = proposals[picked[i]];\n\n            // adjust offset to original unpadded\n            float x0 = (objects[i].x) / scale;\n            float y0 = (objects[i].y) / scale;\n            float x1 = (objects[i].x + objects[i].w) / scale;\n            float y1 = (objects[i].y + objects[i].h) / scale;\n\n            // clip\n            x0 = std::max(std::min(x0, (float)(width - 1)), 0.f);\n            y0 = std::max(std::min(y0, (float)(height - 1)), 0.f);\n            x1 = std::max(std::min(x1, (float)(width - 1)), 0.f);\n            y1 = std::max(std::min(y1, (float)(height - 1)), 0.f);\n\n            objects[i].x = x0;\n            objects[i].y = y0;\n            objects[i].w = x1 - x0;\n            objects[i].h = y1 - y0;\n        }\n    }\n\n    // objects to Obj[]\n    static const char* class_names[] = {\n        \"person\", \"bicycle\", \"car\", \"motorcycle\", \"airplane\", \"bus\", \"train\", \"truck\", \"boat\", \"traffic light\",\n        \"fire hydrant\", \"stop sign\", \"parking meter\", \"bench\", \"bird\", \"cat\", \"dog\", \"horse\", \"sheep\", \"cow\",\n        \"elephant\", \"bear\", \"zebra\", \"giraffe\", \"backpack\", \"umbrella\", \"handbag\", \"tie\", \"suitcase\", \"frisbee\",\n        \"skis\", \"snowboard\", \"sports ball\", \"kite\", \"baseball bat\", \"baseball glove\", \"skateboard\", \"surfboard\",\n        \"tennis racket\", \"bottle\", \"wine glass\", \"cup\", \"fork\", \"knife\", \"spoon\", \"bowl\", \"banana\", \"apple\",\n        \"sandwich\", \"orange\", 
\"broccoli\", \"carrot\", \"hot dog\", \"pizza\", \"donut\", \"cake\", \"chair\", \"couch\",\n        \"potted plant\", \"bed\", \"dining table\", \"toilet\", \"tv\", \"laptop\", \"mouse\", \"remote\", \"keyboard\", \"cell phone\",\n        \"microwave\", \"oven\", \"toaster\", \"sink\", \"refrigerator\", \"book\", \"clock\", \"vase\", \"scissors\", \"teddy bear\",\n        \"hair drier\", \"toothbrush\"\n    };\n\n    jobjectArray jObjArray = env->NewObjectArray(objects.size(), objCls, NULL);\n\n    for (size_t i=0; i<objects.size(); i++)\n    {\n        jobject jObj = env->NewObject(objCls, constructortorId, thiz);\n\n        env->SetFloatField(jObj, xId, objects[i].x);\n        env->SetFloatField(jObj, yId, objects[i].y);\n        env->SetFloatField(jObj, wId, objects[i].w);\n        env->SetFloatField(jObj, hId, objects[i].h);\n        env->SetObjectField(jObj, labelId, env->NewStringUTF(class_names[objects[i].label]));\n        env->SetFloatField(jObj, probId, objects[i].prob);\n\n        env->SetObjectArrayElement(jObjArray, i, jObj);\n    }\n\n    double elasped = ncnn::get_current_time() - start_time;\n    __android_log_print(ANDROID_LOG_DEBUG, \"YOLOXncnn\", \"%.2fms   detect\", elasped);\n\n    return jObjArray;\n}\n\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/res/layout/main.xml",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n    android:orientation=\"vertical\"\n    android:layout_width=\"fill_parent\"\n    android:layout_height=\"fill_parent\">\n\n    <LinearLayout\n        android:orientation=\"horizontal\"\n        android:layout_width=\"fill_parent\"\n        android:layout_height=\"wrap_content\">\n\n    <Button\n        android:id=\"@+id/buttonImage\"\n        android:layout_width=\"wrap_content\"\n        android:layout_height=\"wrap_content\"\n        android:text=\"image\" />\n    <Button\n        android:id=\"@+id/buttonDetect\"\n        android:layout_width=\"wrap_content\"\n        android:layout_height=\"wrap_content\"\n        android:text=\"infer-cpu\" />\n    <Button\n        android:id=\"@+id/buttonDetectGPU\"\n        android:layout_width=\"wrap_content\"\n        android:layout_height=\"wrap_content\"\n        android:text=\"infer-gpu\" />\n    </LinearLayout>\n\n    <ImageView\n        android:id=\"@+id/imageView\"\n        android:layout_width=\"fill_parent\"\n        android:layout_height=\"fill_parent\"\n        android:layout_weight=\"1\" />\n\n</LinearLayout>\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/app/src/main/res/values/strings.xml",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<resources>\n    <string name=\"app_name\">yoloXncnn</string>\n</resources>\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/build.gradle",
    "content": "// Top-level build file where you can add configuration options common to all sub-projects/modules.\nbuildscript {\n    repositories {\n        jcenter()\n        google()\n    }\n    dependencies {\n        classpath 'com.android.tools.build:gradle:3.5.0'\n    }\n}\n\nallprojects {\n    repositories {\n        jcenter()\n        google()\n    }\n}\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/gradle/wrapper/gradle-wrapper.properties",
    "content": "#Sun Aug 25 10:34:48 CST 2019\ndistributionBase=GRADLE_USER_HOME\ndistributionPath=wrapper/dists\nzipStoreBase=GRADLE_USER_HOME\nzipStorePath=wrapper/dists\ndistributionUrl=https\\://services.gradle.org/distributions/gradle-5.4.1-all.zip\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/gradlew",
    "content": "#!/usr/bin/env sh\n\n##############################################################################\n##\n##  Gradle start up script for UN*X\n##\n##############################################################################\n\n# Attempt to set APP_HOME\n# Resolve links: $0 may be a link\nPRG=\"$0\"\n# Need this for relative symlinks.\nwhile [ -h \"$PRG\" ] ; do\n    ls=`ls -ld \"$PRG\"`\n    link=`expr \"$ls\" : '.*-> \\(.*\\)$'`\n    if expr \"$link\" : '/.*' > /dev/null; then\n        PRG=\"$link\"\n    else\n        PRG=`dirname \"$PRG\"`\"/$link\"\n    fi\ndone\nSAVED=\"`pwd`\"\ncd \"`dirname \\\"$PRG\\\"`/\" >/dev/null\nAPP_HOME=\"`pwd -P`\"\ncd \"$SAVED\" >/dev/null\n\nAPP_NAME=\"Gradle\"\nAPP_BASE_NAME=`basename \"$0\"`\n\n# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.\nDEFAULT_JVM_OPTS=\"\"\n\n# Use the maximum available, or set MAX_FD != -1 to use that value.\nMAX_FD=\"maximum\"\n\nwarn () {\n    echo \"$*\"\n}\n\ndie () {\n    echo\n    echo \"$*\"\n    echo\n    exit 1\n}\n\n# OS specific support (must be 'true' or 'false').\ncygwin=false\nmsys=false\ndarwin=false\nnonstop=false\ncase \"`uname`\" in\n  CYGWIN* )\n    cygwin=true\n    ;;\n  Darwin* )\n    darwin=true\n    ;;\n  MINGW* )\n    msys=true\n    ;;\n  NONSTOP* )\n    nonstop=true\n    ;;\nesac\n\nCLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar\n\n# Determine the Java command to use to start the JVM.\nif [ -n \"$JAVA_HOME\" ] ; then\n    if [ -x \"$JAVA_HOME/jre/sh/java\" ] ; then\n        # IBM's JDK on AIX uses strange locations for the executables\n        JAVACMD=\"$JAVA_HOME/jre/sh/java\"\n    else\n        JAVACMD=\"$JAVA_HOME/bin/java\"\n    fi\n    if [ ! 
-x \"$JAVACMD\" ] ; then\n        die \"ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME\n\nPlease set the JAVA_HOME variable in your environment to match the\nlocation of your Java installation.\"\n    fi\nelse\n    JAVACMD=\"java\"\n    which java >/dev/null 2>&1 || die \"ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.\n\nPlease set the JAVA_HOME variable in your environment to match the\nlocation of your Java installation.\"\nfi\n\n# Increase the maximum file descriptors if we can.\nif [ \"$cygwin\" = \"false\" -a \"$darwin\" = \"false\" -a \"$nonstop\" = \"false\" ] ; then\n    MAX_FD_LIMIT=`ulimit -H -n`\n    if [ $? -eq 0 ] ; then\n        if [ \"$MAX_FD\" = \"maximum\" -o \"$MAX_FD\" = \"max\" ] ; then\n            MAX_FD=\"$MAX_FD_LIMIT\"\n        fi\n        ulimit -n $MAX_FD\n        if [ $? -ne 0 ] ; then\n            warn \"Could not set maximum file descriptor limit: $MAX_FD\"\n        fi\n    else\n        warn \"Could not query maximum file descriptor limit: $MAX_FD_LIMIT\"\n    fi\nfi\n\n# For Darwin, add options to specify how the application appears in the dock\nif $darwin; then\n    GRADLE_OPTS=\"$GRADLE_OPTS \\\"-Xdock:name=$APP_NAME\\\" \\\"-Xdock:icon=$APP_HOME/media/gradle.icns\\\"\"\nfi\n\n# For Cygwin, switch paths to Windows format before running java\nif $cygwin ; then\n    APP_HOME=`cygpath --path --mixed \"$APP_HOME\"`\n    CLASSPATH=`cygpath --path --mixed \"$CLASSPATH\"`\n    JAVACMD=`cygpath --unix \"$JAVACMD\"`\n\n    # We build the pattern for arguments to be converted via cygpath\n    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`\n    SEP=\"\"\n    for dir in $ROOTDIRSRAW ; do\n        ROOTDIRS=\"$ROOTDIRS$SEP$dir\"\n        SEP=\"|\"\n    done\n    OURCYGPATTERN=\"(^($ROOTDIRS))\"\n    # Add a user-defined pattern to the cygpath arguments\n    if [ \"$GRADLE_CYGPATTERN\" != \"\" ] ; then\n        OURCYGPATTERN=\"$OURCYGPATTERN|($GRADLE_CYGPATTERN)\"\n    fi\n    # 
Now convert the arguments - kludge to limit ourselves to /bin/sh\n    i=0\n    for arg in \"$@\" ; do\n        CHECK=`echo \"$arg\"|egrep -c \"$OURCYGPATTERN\" -`\n        CHECK2=`echo \"$arg\"|egrep -c \"^-\"`                                 ### Determine if an option\n\n        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then                    ### Added a condition\n            eval `echo args$i`=`cygpath --path --ignore --mixed \"$arg\"`\n        else\n            eval `echo args$i`=\"\\\"$arg\\\"\"\n        fi\n        i=$((i+1))\n    done\n    case $i in\n        (0) set -- ;;\n        (1) set -- \"$args0\" ;;\n        (2) set -- \"$args0\" \"$args1\" ;;\n        (3) set -- \"$args0\" \"$args1\" \"$args2\" ;;\n        (4) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" ;;\n        (5) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" ;;\n        (6) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" ;;\n        (7) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" ;;\n        (8) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" \"$args7\" ;;\n        (9) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" \"$args7\" \"$args8\" ;;\n    esac\nfi\n\n# Escape application args\nsave () {\n    for i do printf %s\\\\n \"$i\" | sed \"s/'/'\\\\\\\\''/g;1s/^/'/;\\$s/\\$/' \\\\\\\\/\" ; done\n    echo \" \"\n}\nAPP_ARGS=$(save \"$@\")\n\n# Collect all arguments for the java command, following the shell quoting and substitution rules\neval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS \"\\\"-Dorg.gradle.appname=$APP_BASE_NAME\\\"\" -classpath \"\\\"$CLASSPATH\\\"\" org.gradle.wrapper.GradleWrapperMain \"$APP_ARGS\"\n\n# by default we should be in the correct project dir, but when run from Finder on Mac, the cwd is wrong\nif [ \"$(uname)\" = \"Darwin\" ] && [ \"$HOME\" = \"$PWD\" ]; then\n  cd \"$(dirname \"$0\")\"\nfi\n\nexec 
\"$JAVACMD\" \"$@\"\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/gradlew.bat",
    "content": "@if \"%DEBUG%\" == \"\" @echo off\n@rem ##########################################################################\n@rem\n@rem  Gradle startup script for Windows\n@rem\n@rem ##########################################################################\n\n@rem Set local scope for the variables with windows NT shell\nif \"%OS%\"==\"Windows_NT\" setlocal\n\nset DIRNAME=%~dp0\nif \"%DIRNAME%\" == \"\" set DIRNAME=.\nset APP_BASE_NAME=%~n0\nset APP_HOME=%DIRNAME%\n\n@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.\nset DEFAULT_JVM_OPTS=\n\n@rem Find java.exe\nif defined JAVA_HOME goto findJavaFromJavaHome\n\nset JAVA_EXE=java.exe\n%JAVA_EXE% -version >NUL 2>&1\nif \"%ERRORLEVEL%\" == \"0\" goto init\n\necho.\necho ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.\necho.\necho Please set the JAVA_HOME variable in your environment to match the\necho location of your Java installation.\n\ngoto fail\n\n:findJavaFromJavaHome\nset JAVA_HOME=%JAVA_HOME:\"=%\nset JAVA_EXE=%JAVA_HOME%/bin/java.exe\n\nif exist \"%JAVA_EXE%\" goto init\n\necho.\necho ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%\necho.\necho Please set the JAVA_HOME variable in your environment to match the\necho location of your Java installation.\n\ngoto fail\n\n:init\n@rem Get command-line arguments, handling Windows variants\n\nif not \"%OS%\" == \"Windows_NT\" goto win9xME_args\n\n:win9xME_args\n@rem Slurp the command line arguments.\nset CMD_LINE_ARGS=\nset _SKIP=2\n\n:win9xME_args_slurp\nif \"x%~1\" == \"x\" goto execute\n\nset CMD_LINE_ARGS=%*\n\n:execute\n@rem Setup the command line\n\nset CLASSPATH=%APP_HOME%\\gradle\\wrapper\\gradle-wrapper.jar\n\n@rem Execute Gradle\n\"%JAVA_EXE%\" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% \"-Dorg.gradle.appname=%APP_BASE_NAME%\" -classpath \"%CLASSPATH%\" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%\n\n:end\n@rem End local scope for the 
variables with windows NT shell\nif \"%ERRORLEVEL%\"==\"0\" goto mainEnd\n\n:fail\nrem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of\nrem the _cmd.exe /c_ return code!\nif  not \"\" == \"%GRADLE_EXIT_CONSOLE%\" exit 1\nexit /b 1\n\n:mainEnd\nif \"%OS%\"==\"Windows_NT\" endlocal\n\n:omega\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/android/settings.gradle",
    "content": "include ':app'\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/cpp/README.md",
    "content": "# YOLOX-CPP-ncnn\n\nA C++ demo of YOLOX object detection based on [ncnn](https://github.com/Tencent/ncnn).\n\n## Tutorial\n\n### Step1\nClone [ncnn](https://github.com/Tencent/ncnn) first, then please follow the [build tutorial of ncnn](https://github.com/Tencent/ncnn/wiki/how-to-build) to build it on your own device.\n\n### Step2\nUse the provided tools to generate an onnx file.\nFor example, if you want to generate the onnx file of yolox-s, please run the following command:\n```shell\ncd <path of yolox>\npython3 tools/export_onnx.py -n yolox-s\n```\nThen, a yolox.onnx file is generated.\n\n### Step3\nGenerate the ncnn param and bin files.\n```shell\ncd <path of ncnn>\ncd build/tools/ncnn\n./onnx2ncnn yolox.onnx yolox.param yolox.bin\n```\n\nSince the Focus module is not supported by ncnn, warnings like:\n```shell\nUnsupported slice step ! \n```\nwill be printed. However, don't worry! A C++ version of the Focus layer is already implemented in yolox.cpp.\n\n### Step4\nOpen **yolox.param** and modify it.\nBefore (just an example):\n```\n295 328\nInput            images                   0 1 images\nSplit            splitncnn_input0         1 4 images images_splitncnn_0 images_splitncnn_1 images_splitncnn_2 images_splitncnn_3\nCrop             Slice_4                  1 1 images_splitncnn_3 647 -23309=1,0 -23310=1,2147483647 -23311=1,1\nCrop             Slice_9                  1 1 647 652 -23309=1,0 -23310=1,2147483647 -23311=1,2\nCrop             Slice_14                 1 1 images_splitncnn_2 657 -23309=1,0 -23310=1,2147483647 -23311=1,1\nCrop             Slice_19                 1 1 657 662 -23309=1,1 -23310=1,2147483647 -23311=1,2\nCrop             Slice_24                 1 1 images_splitncnn_1 667 -23309=1,1 -23310=1,2147483647 -23311=1,1\nCrop             Slice_29                 1 1 667 672 -23309=1,0 -23310=1,2147483647 -23311=1,2\nCrop             Slice_34                 1 1 images_splitncnn_0 677 -23309=1,1 -23310=1,2147483647 -23311=1,1\nCrop             
Slice_39                 1 1 677 682 -23309=1,1 -23310=1,2147483647 -23311=1,2\nConcat           Concat_40                4 1 652 672 662 682 683 0=0\n...\n```\n* Change the first number from 295 to 295 - 9 = 286 (since we will remove 10 layers and add 1 layer, the total layer count should decrease by 9).\n* Then remove the 10 lines from Split to Concat, but remember the second-to-last number: 683.\n* Add a YoloV5Focus layer after Input (using the previous number 683):\n```\nYoloV5Focus      focus                    1 1 images 683\n```\nAfter (just an example):\n```\n286 328\nInput            images                   0 1 images\nYoloV5Focus      focus                    1 1 images 683\n...\n```\n\n### Step5\nUse ncnnoptimize to generate the new param and bin:\n```shell\n# suppose you are still under the ncnn/build/tools/ncnn dir.\n../ncnnoptimize model.param model.bin yolox.param yolox.bin 65536\n```\n\n### Step6\nCopy or move the yolox.cpp file into ncnn/examples, modify the CMakeLists.txt accordingly, then build yolox.\n\n### Step7\nRun inference on an image with the yolox executable and enjoy the detection result:\n```shell\n./yolox demo.jpg\n```\n\n## Acknowledgement\n\n* [ncnn](https://github.com/Tencent/ncnn)\n"
  },
  {
    "path": "detector/YOLOX/demo/ncnn/cpp/yolox.cpp",
    "content": "// This file is written based on the following file:\n// https://github.com/Tencent/ncnn/blob/master/examples/yolov5.cpp\n// Copyright (C) 2020 THL A29 Limited, a Tencent company. All rights reserved.\n// Licensed under the BSD 3-Clause License (the \"License\"); you may not use this file except\n// in compliance with the License. You may obtain a copy of the License at\n//\n// https://opensource.org/licenses/BSD-3-Clause\n//\n// Unless required by applicable law or agreed to in writing, software distributed\n// under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR\n// CONDITIONS OF ANY KIND, either express or implied. See the License for the\n// specific language governing permissions and limitations under the License.\n// ------------------------------------------------------------------------------\n// Copyright (C) 2020-2021, Megvii Inc. All rights reserved.\n\n#include \"layer.h\"\n#include \"net.h\"\n\n#if defined(USE_NCNN_SIMPLEOCV)\n#include \"simpleocv.h\"\n#else\n#include <opencv2/core/core.hpp>\n#include <opencv2/highgui/highgui.hpp>\n#include <opencv2/imgproc/imgproc.hpp>\n#endif\n#include <float.h>\n#include <stdio.h>\n#include <vector>\n\n// YOLOX uses the same Focus module as yolov5\nclass YoloV5Focus : public ncnn::Layer\n{\npublic:\n    YoloV5Focus()\n    {\n        one_blob_only = true;\n    }\n\n    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const\n    {\n        int w = bottom_blob.w;\n        int h = bottom_blob.h;\n        int channels = bottom_blob.c;\n\n        int outw = w / 2;\n        int outh = h / 2;\n        int outc = channels * 4;\n\n        top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator);\n        if (top_blob.empty())\n            return -100;\n\n        #pragma omp parallel for num_threads(opt.num_threads)\n        for (int p = 0; p < outc; p++)\n        {\n            const float* ptr = bottom_blob.channel(p % channels).row((p / 
channels) % 2) + ((p / channels) / 2);\n            float* outptr = top_blob.channel(p);\n\n            for (int i = 0; i < outh; i++)\n            {\n                for (int j = 0; j < outw; j++)\n                {\n                    *outptr = *ptr;\n\n                    outptr += 1;\n                    ptr += 2;\n                }\n\n                ptr += w;\n            }\n        }\n\n        return 0;\n    }\n};\n\nDEFINE_LAYER_CREATOR(YoloV5Focus)\n\nstruct Object\n{\n    cv::Rect_<float> rect;\n    int label;\n    float prob;\n};\n\nstruct GridAndStride\n{\n    int grid0;\n    int grid1;\n    int stride;\n};\n\nstatic inline float intersection_area(const Object& a, const Object& b)\n{\n    cv::Rect_<float> inter = a.rect & b.rect;\n    return inter.area();\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& faceobjects, int left, int right)\n{\n    int i = left;\n    int j = right;\n    float p = faceobjects[(left + right) / 2].prob;\n\n    while (i <= j)\n    {\n        while (faceobjects[i].prob > p)\n            i++;\n\n        while (faceobjects[j].prob < p)\n            j--;\n\n        if (i <= j)\n        {\n            // swap\n            std::swap(faceobjects[i], faceobjects[j]);\n\n            i++;\n            j--;\n        }\n    }\n\n    #pragma omp parallel sections\n    {\n        #pragma omp section\n        {\n            if (left < j) qsort_descent_inplace(faceobjects, left, j);\n        }\n        #pragma omp section\n        {\n            if (i < right) qsort_descent_inplace(faceobjects, i, right);\n        }\n    }\n}\n\nstatic void qsort_descent_inplace(std::vector<Object>& objects)\n{\n    if (objects.empty())\n        return;\n\n    qsort_descent_inplace(objects, 0, objects.size() - 1);\n}\n\nstatic void nms_sorted_bboxes(const std::vector<Object>& faceobjects, std::vector<int>& picked, float nms_threshold)\n{\n    picked.clear();\n\n    const int n = faceobjects.size();\n\n    std::vector<float> areas(n);\n    for 
(int i = 0; i < n; i++)\n    {\n        areas[i] = faceobjects[i].rect.area();\n    }\n\n    for (int i = 0; i < n; i++)\n    {\n        const Object& a = faceobjects[i];\n\n        int keep = 1;\n        for (int j = 0; j < (int)picked.size(); j++)\n        {\n            const Object& b = faceobjects[picked[j]];\n\n            // intersection over union\n            float inter_area = intersection_area(a, b);\n            float union_area = areas[i] + areas[picked[j]] - inter_area;\n            // float IoU = inter_area / union_area\n            if (inter_area / union_area > nms_threshold)\n                keep = 0;\n        }\n\n        if (keep)\n            picked.push_back(i);\n    }\n}\n\nstatic void generate_grids_and_stride(const int target_size, std::vector<int>& strides, std::vector<GridAndStride>& grid_strides)\n{\n    for (auto stride : strides)\n    {\n        int num_grid = target_size / stride;\n        for (int g1 = 0; g1 < num_grid; g1++)\n        {\n            for (int g0 = 0; g0 < num_grid; g0++)\n            {\n                grid_strides.push_back((GridAndStride){g0, g1, stride});\n            }\n        }\n    }\n}\n\nstatic void generate_yolox_proposals(std::vector<GridAndStride> grid_strides, const ncnn::Mat& feat_blob, float prob_threshold, std::vector<Object>& objects)\n{\n    const int num_grid = feat_blob.h;\n    fprintf(stderr, \"output height: %d, width: %d, channels: %d, dims:%d\\n\", feat_blob.h, feat_blob.w, feat_blob.c, feat_blob.dims);\n\n    const int num_class = feat_blob.w - 5;\n\n    const int num_anchors = grid_strides.size();\n\n    const float* feat_ptr = feat_blob.channel(0);\n    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)\n    {\n        const int grid0 = grid_strides[anchor_idx].grid0;\n        const int grid1 = grid_strides[anchor_idx].grid1;\n        const int stride = grid_strides[anchor_idx].stride;\n\n        // yolox/models/yolo_head.py decode logic\n        //  outputs[..., :2] = 
(outputs[..., :2] + grids) * strides\n        //  outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides\n        float x_center = (feat_ptr[0] + grid0) * stride;\n        float y_center = (feat_ptr[1] + grid1) * stride;\n        float w = exp(feat_ptr[2]) * stride;\n        float h = exp(feat_ptr[3]) * stride;\n        float x0 = x_center - w * 0.5f;\n        float y0 = y_center - h * 0.5f;\n\n        float box_objectness = feat_ptr[4];\n        for (int class_idx = 0; class_idx < num_class; class_idx++)\n        {\n            float box_cls_score = feat_ptr[5 + class_idx];\n            float box_prob = box_objectness * box_cls_score;\n            if (box_prob > prob_threshold)\n            {\n                Object obj;\n                obj.rect.x = x0;\n                obj.rect.y = y0;\n                obj.rect.width = w;\n                obj.rect.height = h;\n                obj.label = class_idx;\n                obj.prob = box_prob;\n\n                objects.push_back(obj);\n            }\n\n        } // class loop\n        feat_ptr += feat_blob.w;\n\n    } // point anchor loop\n}\n \nstatic int detect_yolox(const cv::Mat& bgr, std::vector<Object>& objects)\n{\n    ncnn::Net yolox;\n\n    yolox.opt.use_vulkan_compute = true;\n    // yolox.opt.use_bf16_storage = true;\n\n    yolox.register_custom_layer(\"YoloV5Focus\", YoloV5Focus_layer_creator);\n\n    // original pretrained model from https://github.com/yolox\n    // TODO ncnn model https://github.com/nihui/ncnn-assets/tree/master/models\n    yolox.load_param(\"yolox.param\");\n    yolox.load_model(\"yolox.bin\");\n\n    const int target_size = 416;\n    const float prob_threshold = 0.3f;\n    const float nms_threshold = 0.65f;\n\n    int img_w = bgr.cols;\n    int img_h = bgr.rows;\n\n    int w = img_w;\n    int h = img_h;\n    float scale = 1.f;\n    if (w > h)\n    {\n        scale = (float)target_size / w;\n        w = target_size;\n        h = h * scale;\n    }\n    else\n    {\n        scale = 
(float)target_size / h;\n        h = target_size;\n        w = w * scale;\n    }\n    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR2RGB, img_w, img_h, w, h);\n\n    // pad to target_size rectangle\n    int wpad = target_size - w;\n    int hpad = target_size - h;\n    ncnn::Mat in_pad;\n    // different from yolov5, yolox only pad on bottom and right side,\n    // which means users don't need to extra padding info to decode boxes coordinate.\n    ncnn::copy_make_border(in, in_pad, 0, hpad, 0, wpad, ncnn::BORDER_CONSTANT, 114.f);\n\n    // python 0-1 input tensor with rgb_means = (0.485, 0.456, 0.406), std = (0.229, 0.224, 0.225)\n    // so for 0-255 input image, rgb_mean should multiply 255 and norm should div by std.\n    const float mean_vals[3] = {255.f * 0.485f, 255.f * 0.456, 255.f * 0.406f};\n    const float norm_vals[3] = {1 / (255.f * 0.229f), 1 / (255.f * 0.224f), 1 / (255.f * 0.225f)};\n\n    in_pad.substract_mean_normalize(mean_vals, norm_vals);\n\n    ncnn::Extractor ex = yolox.create_extractor();\n\n    ex.input(\"images\", in_pad);\n\n    std::vector<Object> proposals;\n\n    {\n        ncnn::Mat out;\n        ex.extract(\"output\", out);\n\n        std::vector<int> strides = {8, 16, 32}; // might have stride=64\n        std::vector<GridAndStride> grid_strides;\n        generate_grids_and_stride(target_size, strides, grid_strides);\n        generate_yolox_proposals(grid_strides, out, prob_threshold, proposals);\n    }\n\n    // sort all proposals by score from highest to lowest\n    qsort_descent_inplace(proposals);\n\n    // apply nms with nms_threshold\n    std::vector<int> picked;\n    nms_sorted_bboxes(proposals, picked, nms_threshold);\n\n    int count = picked.size();\n\n    objects.resize(count);\n    for (int i = 0; i < count; i++)\n    {\n        objects[i] = proposals[picked[i]];\n\n        // adjust offset to original unpadded\n        float x0 = (objects[i].rect.x) / scale;\n        float y0 = 
(objects[i].rect.y) / scale;\n        float x1 = (objects[i].rect.x + objects[i].rect.width) / scale;\n        float y1 = (objects[i].rect.y + objects[i].rect.height) / scale;\n\n        // clip\n        x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f);\n        y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f);\n        x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f);\n        y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f);\n\n        objects[i].rect.x = x0;\n        objects[i].rect.y = y0;\n        objects[i].rect.width = x1 - x0;\n        objects[i].rect.height = y1 - y0;\n    }\n\n    return 0;\n}\n\nstatic void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects)\n{\n    static const char* class_names[] = {\n        \"person\", \"bicycle\", \"car\", \"motorcycle\", \"airplane\", \"bus\", \"train\", \"truck\", \"boat\", \"traffic light\",\n        \"fire hydrant\", \"stop sign\", \"parking meter\", \"bench\", \"bird\", \"cat\", \"dog\", \"horse\", \"sheep\", \"cow\",\n        \"elephant\", \"bear\", \"zebra\", \"giraffe\", \"backpack\", \"umbrella\", \"handbag\", \"tie\", \"suitcase\", \"frisbee\",\n        \"skis\", \"snowboard\", \"sports ball\", \"kite\", \"baseball bat\", \"baseball glove\", \"skateboard\", \"surfboard\",\n        \"tennis racket\", \"bottle\", \"wine glass\", \"cup\", \"fork\", \"knife\", \"spoon\", \"bowl\", \"banana\", \"apple\",\n        \"sandwich\", \"orange\", \"broccoli\", \"carrot\", \"hot dog\", \"pizza\", \"donut\", \"cake\", \"chair\", \"couch\",\n        \"potted plant\", \"bed\", \"dining table\", \"toilet\", \"tv\", \"laptop\", \"mouse\", \"remote\", \"keyboard\", \"cell phone\",\n        \"microwave\", \"oven\", \"toaster\", \"sink\", \"refrigerator\", \"book\", \"clock\", \"vase\", \"scissors\", \"teddy bear\",\n        \"hair drier\", \"toothbrush\"\n    };\n\n    cv::Mat image = bgr.clone();\n\n    for (size_t i = 0; i < objects.size(); i++)\n    {\n        const Object& obj = 
objects[i];\n\n        fprintf(stderr, \"%d = %.5f at %.2f %.2f %.2f x %.2f\\n\", obj.label, obj.prob,\n                obj.rect.x, obj.rect.y, obj.rect.width, obj.rect.height);\n\n        cv::rectangle(image, obj.rect, cv::Scalar(255, 0, 0));\n\n        char text[256];\n        sprintf(text, \"%s %.1f%%\", class_names[obj.label], obj.prob * 100);\n\n        int baseLine = 0;\n        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);\n\n        int x = obj.rect.x;\n        int y = obj.rect.y - label_size.height - baseLine;\n        if (y < 0)\n            y = 0;\n        if (x + label_size.width > image.cols)\n            x = image.cols - label_size.width;\n\n        cv::rectangle(image, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),\n                      cv::Scalar(255, 255, 255), -1);\n\n        cv::putText(image, text, cv::Point(x, y + label_size.height),\n                    cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));\n    }\n\n    cv::imshow(\"image\", image);\n    cv::waitKey(0);\n}\n\nint main(int argc, char** argv)\n{\n    if (argc != 2)\n    {\n        fprintf(stderr, \"Usage: %s [imagepath]\\n\", argv[0]);\n        return -1;\n    }\n\n    const char* imagepath = argv[1];\n\n    cv::Mat m = cv::imread(imagepath, 1);\n    if (m.empty())\n    {\n        fprintf(stderr, \"cv::imread %s failed\\n\", imagepath);\n        return -1;\n    }\n\n    std::vector<Object> objects;\n    detect_yolox(m, objects);\n\n    draw_objects(m, objects);\n\n    return 0;\n}\n"
  },
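The decode step in `generate_yolox_proposals` above pairs each output row with a precomputed (grid0, grid1, stride) cell, offsets the predicted center by the cell index, and passes the size through `exp()`. As a rough cross-check, the same math can be sketched in Python — the function name and the flat list-of-rows input are hypothetical, not part of this repo:

```python
import math

def decode_yolox(preds, target_size=416, strides=(8, 16, 32), prob_threshold=0.3):
    """Sketch of the YOLOX decode in generate_yolox_proposals.

    `preds` is a hypothetical list of rows [tx, ty, tw, th, obj, cls0, ...],
    one row per grid cell, laid out stride-by-stride like the ncnn output.
    """
    # build (grid0, grid1, stride) triples exactly like generate_grids_and_stride
    grid_strides = []
    for stride in strides:
        num_grid = target_size // stride
        for g1 in range(num_grid):
            for g0 in range(num_grid):
                grid_strides.append((g0, g1, stride))

    objects = []
    for (g0, g1, stride), row in zip(grid_strides, preds):
        # centers are offset by the cell index, sizes go through exp()
        x_center = (row[0] + g0) * stride
        y_center = (row[1] + g1) * stride
        w = math.exp(row[2]) * stride
        h = math.exp(row[3]) * stride
        x0, y0 = x_center - w / 2, y_center - h / 2
        objectness = row[4]
        for cls_idx, cls_score in enumerate(row[5:]):
            prob = objectness * cls_score
            if prob > prob_threshold:
                objects.append((x0, y0, w, h, cls_idx, prob))
    return objects
```

Like the C++ version, a detection is kept per class whenever objectness times class score clears the threshold; NMS then prunes the overlaps.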
  {
    "path": "detector/YOLOX/demo.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\nimport time\nfrom loguru import logger\n\nimport cv2\n\nimport torch\n\nfrom yolox.data.data_augment import preproc\nfrom yolox.data.datasets import COCO_CLASSES\nfrom yolox.exp import get_exp\nfrom yolox.utils import fuse_model, get_model_info, postprocess, vis\n\nIMAGE_EXT = ['.jpg', '.jpeg', '.webp', '.bmp', '.png']\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX Demo!\")\n    parser.add_argument('demo', default='image', help='demo type, e.g. image, video or webcam')\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n\n    parser.add_argument('--path', default='./assets/dog.jpg', help='path to images or video')\n    parser.add_argument('--camid', type=int, default=0, help='webcam demo camera id')\n    parser.add_argument(\n        '--save_result', action='store_true',\n        help='whether to save the inference result of image/video'\n    )\n\n    # exp file\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"please input your experiment description file\",\n    )\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"ckpt for eval\")\n    parser.add_argument(\"--device\", default=\"cpu\", type=str, help=\"device to run our model, can either be cpu or gpu\")\n    parser.add_argument(\"--conf\", default=None, type=float, help=\"test conf\")\n    parser.add_argument(\"--nms\", default=None, type=float, help=\"test nms threshold\")\n    parser.add_argument(\"--tsize\", default=None, type=int, help=\"test img size\")\n    parser.add_argument(\n        \"--fp16\",\n        dest=\"fp16\",\n        default=False,\n        action=\"store_true\",\n        help=\"Adopting mixed precision evaluating.\",\n    )\n    parser.add_argument(\n        \"--fuse\",\n        dest=\"fuse\",\n        default=False,\n        action=\"store_true\",\n        help=\"Fuse conv and bn for testing.\",\n    )\n    parser.add_argument(\n        \"--trt\",\n        dest=\"trt\",\n        default=False,\n        action=\"store_true\",\n        help=\"Using TensorRT model for testing.\",\n    )\n    return parser\n\n\ndef get_image_list(path):\n    image_names = []\n    for maindir, subdir, file_name_list in os.walk(path):\n        for filename in file_name_list:\n            apath = os.path.join(maindir, filename)\n            ext = os.path.splitext(apath)[1]\n            if ext in IMAGE_EXT:\n                image_names.append(apath)\n    return image_names\n\n\nclass Predictor(object):\n    def __init__(self, model, exp, cls_names=COCO_CLASSES, trt_file=None, decoder=None, device=\"cpu\"):\n        self.model = model\n        self.cls_names = cls_names\n        self.decoder = decoder\n        self.num_classes = exp.num_classes\n        self.confthre = exp.test_conf\n        self.nmsthre = exp.nmsthre\n        self.test_size = exp.test_size\n        self.device = device\n        if trt_file is not None:\n            from torch2trt import TRTModule\n            model_trt = TRTModule()\n            model_trt.load_state_dict(torch.load(trt_file))\n\n            x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()\n            self.model(x)\n            self.model = model_trt\n        self.rgb_means = (0.485, 0.456, 0.406)\n        self.std = (0.229, 0.224, 0.225)\n\n    def inference(self, img):\n        img_info = {'id': 0}\n        if isinstance(img, str):\n            img_info['file_name'] = os.path.basename(img)\n            img = cv2.imread(img)\n        else:\n            img_info['file_name'] = None\n\n        height, width = img.shape[:2]\n        img_info['height'] = height\n        img_info['width'] = width\n        img_info['raw_img'] = img\n\n        img, ratio = preproc(img, self.test_size, self.rgb_means, self.std)\n        img_info['ratio'] = ratio\n        img = torch.from_numpy(img).unsqueeze(0)\n        if self.device == \"gpu\":\n            img = img.cuda()\n\n        with torch.no_grad():\n            t0 = time.time()\n            outputs = self.model(img)\n            if self.decoder is not None:\n                outputs = self.decoder(outputs, dtype=outputs.type())\n            outputs = postprocess(\n                        outputs, self.num_classes, self.confthre, self.nmsthre\n                    )\n            logger.info('Infer time: {:.4f}s'.format(time.time()-t0))\n        return outputs, img_info\n\n    def visual(self, output, img_info, cls_conf=0.35):\n        ratio = img_info['ratio']\n        img = img_info['raw_img']\n        if output is None:\n            return img\n        output = output.cpu()\n\n        bboxes = output[:, 0:4]\n\n        # rescale boxes back to the original image resolution\n        bboxes /= ratio\n\n        cls = output[:, 6]\n        scores = output[:, 4] * output[:, 5]\n\n        vis_res = vis(img, bboxes, scores, cls, cls_conf, self.cls_names)\n        return vis_res\n\n\ndef image_demo(predictor, vis_folder, path, current_time, save_result):\n    if os.path.isdir(path):\n        files = get_image_list(path)\n    else:\n        files = [path]\n    files.sort()\n    for image_name in files:\n        outputs, img_info = predictor.inference(image_name)\n        result_image = predictor.visual(outputs[0], img_info)\n        if save_result:\n            save_folder = os.path.join(\n                vis_folder, time.strftime(\"%Y_%m_%d_%H_%M_%S\", current_time)\n            )\n            os.makedirs(save_folder, exist_ok=True)\n            save_file_name = os.path.join(save_folder, os.path.basename(image_name))\n            logger.info(\"Saving detection result in {}\".format(save_file_name))\n            cv2.imwrite(save_file_name, result_image)\n        ch = cv2.waitKey(0)\n        if ch == 27 or ch == ord('q') or ch == ord('Q'):\n            break\n\n\ndef imageflow_demo(predictor, vis_folder, current_time, args):\n    cap = cv2.VideoCapture(args.path if args.demo == 'video' else args.camid)\n    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)  # float\n    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)  # float\n    fps = cap.get(cv2.CAP_PROP_FPS)\n    save_folder = os.path.join(vis_folder, time.strftime(\"%Y_%m_%d_%H_%M_%S\", current_time))\n    os.makedirs(save_folder, exist_ok=True)\n    if args.demo == \"video\":\n        save_path = os.path.join(save_folder, args.path.split('/')[-1])\n    else:\n        save_path = os.path.join(save_folder, 'camera.mp4')\n    logger.info(f'video save_path is {save_path}')\n    vid_writer = cv2.VideoWriter(\n        save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (int(width), int(height))\n    )\n    while True:\n        ret_val, frame = cap.read()\n        if ret_val:\n            outputs, img_info = predictor.inference(frame)\n            result_frame = predictor.visual(outputs[0], img_info)\n            if args.save_result:\n                vid_writer.write(result_frame)\n            ch = cv2.waitKey(1)\n            if ch == 27 or ch == ord('q') or ch == ord('Q'):\n                break\n        else:\n            break\n\n\ndef main(exp, args):\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n    file_name = os.path.join(exp.output_dir, args.experiment_name)\n    os.makedirs(file_name, exist_ok=True)\n\n    # always define vis_folder; image_demo/imageflow_demo reference it\n    vis_folder = os.path.join(file_name, 'vis_res')\n    if args.save_result:\n        os.makedirs(vis_folder, exist_ok=True)\n\n    if args.trt:\n        args.device = \"gpu\"\n\n    logger.info(\"Args: {}\".format(args))\n\n    if args.conf is not None:\n        exp.test_conf = args.conf\n    if args.nms is not None:\n        exp.nmsthre = args.nms\n    if args.tsize is not None:\n        exp.test_size = (args.tsize, args.tsize)\n\n    model = exp.get_model()\n    logger.info(\"Model Summary: {}\".format(get_model_info(model, exp.test_size)))\n\n    if args.device == \"gpu\":\n        model.cuda()\n    model.eval()\n\n    if not args.trt:\n        if args.ckpt is None:\n            ckpt_file = os.path.join(file_name, \"best_ckpt.pth.tar\")\n        else:\n            ckpt_file = args.ckpt\n        logger.info(\"loading checkpoint\")\n        ckpt = torch.load(ckpt_file, map_location=\"cpu\")\n        # load the model state dict\n        model.load_state_dict(ckpt[\"model\"])\n        logger.info(\"loaded checkpoint done.\")\n\n    if args.fuse:\n        logger.info(\"\\tFusing model...\")\n        model = fuse_model(model)\n\n    if args.trt:\n        assert (not args.fuse),\\\n            \"TensorRT model does not support model fusing!\"\n        trt_file = os.path.join(file_name, \"model_trt.pth\")\n        assert os.path.exists(trt_file), (\n            \"TensorRT model is not found!\\n Run python3 tools/trt.py first!\"\n        )\n        model.head.decode_in_inference = False\n        decoder = model.head.decode_outputs\n        logger.info(\"Using TensorRT for inference\")\n    else:\n        trt_file = None\n        decoder = None\n\n    predictor = Predictor(model, exp, COCO_CLASSES, trt_file, decoder, args.device)\n    current_time = time.localtime()\n    if args.demo == 'image':\n        image_demo(predictor, vis_folder, args.path, current_time, args.save_result)\n    elif args.demo == 'video' or args.demo == 'webcam':\n        imageflow_demo(predictor, vis_folder, current_time, args)\n\n\nif __name__ == \"__main__\":\n    args = make_parser().parse_args()\n    exp = get_exp(args.exp_file, args.name)\n\n    main(exp, args)\n"
  },
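In demo.py, `preproc` resizes the image to `test_size` while keeping the aspect ratio and returns the scalar `ratio`, which `Predictor.visual` later divides the boxes by to map them back to the original image. The ratio arithmetic can be sketched with two hypothetical helpers (the real `yolox.data.data_augment.preproc` also normalizes and pads, which these helpers omit):

```python
def letterbox_ratio(img_h, img_w, test_h, test_w):
    # aspect-preserving resize with bottom/right padding, so one scalar
    # ratio is enough to map network coordinates back to the image
    return min(test_h / img_h, test_w / img_w)

def boxes_to_original(bboxes, ratio):
    # mirrors `bboxes /= ratio` in Predictor.visual
    return [[coord / ratio for coord in box] for box in bboxes]
```

For a 640x480 image and a (640, 640) test size the ratio is 1.0, so the boxes pass through unchanged; a smaller image would be scaled up and its boxes scaled back down accordingly.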
  {
    "path": "detector/YOLOX/docs/train_custom_data.md",
"content": "# Train Custom Data\nThis page explains how to train YOLOX on your own custom data.\n\nWe take fine-tuning the YOLOX-S model on the VOC dataset as an example to give a clearer guide.\n\n## 0. Before you start\nClone this repo and follow the [README](../README.md) to install YOLOX.\n\n## 1. Create your own dataset\n**Step 1** Prepare your own dataset with images and labels first. For labeling images, you may use a tool like [Labelme](https://github.com/wkentaro/labelme) or [CVAT](https://github.com/openvinotoolkit/cvat).\n\n**Step 2** Then, write the corresponding Dataset class, which loads images and labels through the \"\\_\\_getitem\\_\\_\" method. We currently support the COCO format and the VOC format.\n\nYou can also write the Dataset on your own. Let's take the [VOC](../yolox/data/datasets/voc.py#L151) Dataset file as an example:\n```python\n    @Dataset.resize_getitem\n    def __getitem__(self, index):\n        img, target, img_info, img_id = self.pull_item(index)\n\n        if self.preproc is not None:\n            img, target = self.preproc(img, target, self.input_dim)\n\n        return img, target, img_info, img_id\n```\n\nOne more thing worth noting is that you should also implement the \"[pull_item](../yolox/data/datasets/voc.py#L129)\" and \"[load_anno](../yolox/data/datasets/voc.py#L121)\" methods for the Mosaic and MixUp augmentations.\n\n**Step 3** Prepare the evaluator. We currently have a [COCO evaluator](../yolox/evaluators/coco_evaluator.py) and a [VOC evaluator](../yolox/evaluators/voc_evaluator.py).\nIf your data uses another format or evaluation metric, you may write your own evaluator.\n\n**Step 4** Put your dataset under $YOLOX_DIR/datasets, for VOC:\n```shell\nln -s /path/to/your/VOCdevkit ./datasets/VOCdevkit\n```\n* The path \"VOCdevkit\" will be used in your Exp file described in the next section, specifically in the \"get_data_loader\" and \"get_eval_loader\" functions.\n\n## 2. Create your Exp file to control everything\n
We put everything involved in a model into a single Exp file, including the model settings, training settings, and testing settings.\n\n**A complete Exp file is at [yolox_base.py](../yolox/exp/yolox_base.py).** It may be too long to write for every experiment, but you can inherit the base Exp file and only overwrite the parts that change.\n\nLet's still take the [VOC Exp file](../exps/example/yolox_voc/yolox_voc_s.py) as an example.\n\nWe select the YOLOX-S model here, so we should change the network depth and width. VOC has only 20 classes, so we should also change num_classes.\n\nThese configs are changed in the \"\\_\\_init\\_\\_\" method:\n```python\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.num_classes = 20\n        self.depth = 0.33\n        self.width = 0.50\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n```\n\nBesides, you should also overwrite the dataset and evaluator prepared above to train the model on your own data.\n\nPlease see \"[get_data_loader](../exps/example/yolox_voc/yolox_voc_s.py#L20)\", \"[get_eval_loader](../exps/example/yolox_voc/yolox_voc_s.py#L82)\", and \"[get_evaluator](../exps/example/yolox_voc/yolox_voc_s.py#L113)\" for more details.\n\n## 3. Train\nExcept for special cases, we always recommend using our [COCO pretrained weights](../README.md) for initialization.\n\nOnce you have the Exp file and the COCO pretrained weights we provided, you can train your own model with the following command:\n```bash\npython tools/train.py -f /path/to/your/Exp/file -d 8 -b 64 --fp16 -o -c /path/to/the/pretrained/weights\n```\n\nor take the YOLOX-S VOC training as an example:\n```bash\npython tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 8 -b 64 --fp16 -o -c /path/to/yolox_s.pth.tar\n```\n\n(Don't worry about the different detection head shapes between the pretrained weights and your own model; we will handle it.)\n\n## 4. Tips for Best Training Results\n\n
As YOLOX is an anchor-free detector with only a few hyper-parameters, most of the time good results can be obtained with no changes to the models or training settings.\nWe thus always recommend that you first train with all default training settings.\n\nIf you don't get good results at first, there are steps you can take to improve.\n\n**Model Selection** We provide YOLOX-Nano, YOLOX-Tiny, and YOLOX-S for mobile deployments, and YOLOX-M/L/X for cloud or high-performance GPU deployments.\n\nIf your deployment runs into compatibility trouble, we recommend YOLOX-DarkNet53.\n\n**Training Configs** If your training overfits early, you can reduce max\\_epochs or decrease base\\_lr and min\\_lr\\_ratio in your Exp file:\n```python\n# --------------  training config --------------------- #\n    self.warmup_epochs = 5\n    self.max_epoch = 300\n    self.warmup_lr = 0\n    self.basic_lr_per_img = 0.01 / 64.0\n    self.scheduler = \"yoloxwarmcos\"\n    self.no_aug_epochs = 15\n    self.min_lr_ratio = 0.05\n    self.ema = True\n\n    self.weight_decay = 5e-4\n    self.momentum = 0.9\n```\n\n**Aug Configs** You may also change the degree of the augmentations.\n\nGenerally, for small models you should weaken the augmentation, while for large models or small datasets you may enhance the augmentation in your Exp file:\n```python\n# --------------- transform config ----------------- #\n    self.degrees = 10.0\n    self.translate = 0.1\n    self.scale = (0.1, 2)\n    self.mscale = (0.8, 1.6)\n    self.shear = 2.0\n    self.perspective = 0.0\n    self.enable_mixup = True\n```\n\n**Design your own detector** You may refer to our [arXiv](https://arxiv.org/abs/2107.08430) paper for details and suggestions on designing your own detector.\n"
  },
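The guide above recommends inheriting the base Exp file and overriding only the parts that change. That pattern can be shown with a stand-in base class (`BaseExp`/`VOCExp` are illustrative names with made-up defaults; the real base is `yolox.exp.Exp`):

```python
class BaseExp:
    """Stand-in for yolox.exp.Exp with a few representative defaults."""
    def __init__(self):
        self.num_classes = 80   # COCO default
        self.depth = 1.0
        self.width = 1.0

class VOCExp(BaseExp):
    """Override only what changes, as the guide recommends."""
    def __init__(self):
        super().__init__()
        self.num_classes = 20   # VOC has 20 classes
        self.depth = 0.33       # YOLOX-S depth multiplier
        self.width = 0.50       # YOLOX-S width multiplier
```

Everything not overridden (data loaders, schedules, and so on in the real Exp) is inherited unchanged from the base, which keeps each experiment file short.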
  {
    "path": "detector/YOLOX/exps/default/nano.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport torch.nn as nn\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 0.33\n        self.width = 0.25\n        self.scale = (0.5, 1.5)\n        self.random_size = (10, 20)\n        self.test_size = (416, 416)\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n        self.enable_mixup = False\n\n    def get_model(self, sublinear=False):\n\n        def init_yolo(M):\n            for m in M.modules():\n                if isinstance(m, nn.BatchNorm2d):\n                    m.eps = 1e-3\n                    m.momentum = 0.03\n        if \"model\" not in self.__dict__:\n            from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead\n            in_channels = [256, 512, 1024]\n            # NANO model use depthwise = True, which is main difference.\n            backbone = YOLOPAFPN(self.depth, self.width, in_channels=in_channels, depthwise=True)\n            head = YOLOXHead(self.num_classes, self.width, in_channels=in_channels, depthwise=True)\n            self.model = YOLOX(backbone, head)\n\n        self.model.apply(init_yolo)\n        self.model.head.initialize_biases(1e-2)\n        return self.model\n"
  },
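nano.py customizes every BatchNorm2d via `self.model.apply(init_yolo)`, relying on `apply` visiting each submodule recursively. A torch-free sketch of that traversal (`Module` and `BatchNorm` here are simplified stand-ins for `nn.Module` and `nn.BatchNorm2d`, not the PyTorch classes):

```python
class Module:
    """Minimal stand-in for nn.Module's recursive modules()/apply()."""
    def __init__(self, children=()):
        self.children = list(children)

    def modules(self):
        # yield self first, then every descendant, like nn.Module.modules()
        yield self
        for child in self.children:
            yield from child.modules()

    def apply(self, fn):
        for m in self.modules():
            fn(m)
        return self

class BatchNorm(Module):
    def __init__(self):
        super().__init__()
        self.eps, self.momentum = 1e-5, 0.1  # PyTorch BatchNorm2d defaults

def init_yolo(m):
    # the same tweak nano.py applies to every BatchNorm2d
    if isinstance(m, BatchNorm):
        m.eps = 1e-3
        m.momentum = 0.03
```

Because `apply` reaches every node in the tree, the override lands on batch-norm layers buried anywhere inside the backbone or head.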
  {
    "path": "detector/YOLOX/exps/default/yolov3.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport torch\nimport torch.nn as nn\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 1.0\n        self.width = 1.0\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n\n    def get_model(self, sublinear=False):\n        def init_yolo(M):\n            for m in M.modules():\n                if isinstance(m, nn.BatchNorm2d):\n                    m.eps = 1e-3\n                    m.momentum = 0.03\n        if \"model\" not in self.__dict__:\n            from yolox.models import YOLOX, YOLOFPN, YOLOXHead\n            backbone = YOLOFPN()\n            head = YOLOXHead(self.num_classes, self.width, in_channels=[128, 256, 512], act=\"lrelu\")\n            self.model = YOLOX(backbone, head)\n        self.model.apply(init_yolo)\n        self.model.head.initialize_biases(1e-2)\n\n        return self.model\n\n    def get_data_loader(self, batch_size, is_distributed, no_aug=False):\n        from data.datasets.cocodataset import COCODataset\n        from data.datasets.mosaicdetection import MosaicDetection\n        from data.datasets.data_augment import TrainTransform\n        from data.datasets.dataloading import YoloBatchSampler, DataLoader, InfiniteSampler\n        import torch.distributed as dist\n\n        dataset = COCODataset(\n                data_dir='data/COCO/',\n                json_file=self.train_ann,\n                img_size=self.input_size,\n                preproc=TrainTransform(\n                    rgb_means=(0.485, 0.456, 0.406),\n                    std=(0.229, 0.224, 0.225),\n                    max_labels=50\n                ),\n        )\n\n        dataset = MosaicDetection(\n            dataset,\n            mosaic=not no_aug,\n            img_size=self.input_size,\n            
preproc=TrainTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n                max_labels=120\n            ),\n            degrees=self.degrees,\n            translate=self.translate,\n            scale=self.scale,\n            shear=self.shear,\n            perspective=self.perspective,\n        )\n\n        self.dataset = dataset\n\n        if is_distributed:\n            batch_size = batch_size // dist.get_world_size()\n            sampler = InfiniteSampler(len(self.dataset), seed=self.seed if self.seed else 0)\n        else:\n            sampler = torch.utils.data.RandomSampler(self.dataset)\n\n        batch_sampler = YoloBatchSampler(\n            sampler=sampler,\n            batch_size=batch_size,\n            drop_last=False,\n            input_dimension=self.input_size,\n            mosaic=not no_aug\n        )\n\n        dataloader_kwargs = {\"num_workers\": self.data_num_workers, \"pin_memory\": True}\n        dataloader_kwargs[\"batch_sampler\"] = batch_sampler\n        train_loader = DataLoader(self.dataset, **dataloader_kwargs)\n\n        return train_loader\n"
  },
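In the distributed branch above, `get_data_loader` divides the global batch size by the world size and drives training with an `InfiniteSampler`, which never raises StopIteration between epochs. A minimal generator-based sketch of such a sampler (illustrative only, not the `yolox.data` implementation, which also handles distributed sharding):

```python
import random

def infinite_sampler(n, seed=0):
    """Endless stream of indices: reshuffle 0..n-1 each time it is exhausted."""
    rng = random.Random(seed)
    while True:
        order = list(range(n))
        rng.shuffle(order)
        yield from order
```

Each block of `n` consecutive draws is a full permutation of the dataset, so the training loop can simply pull batches forever instead of restarting a DataLoader every epoch.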
  {
    "path": "detector/YOLOX/exps/default/yolox_l.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 1.0\n        self.width = 1.0\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n"
  },
  {
    "path": "detector/YOLOX/exps/default/yolox_m.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 0.67\n        self.width = 0.75\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n"
  },
  {
    "path": "detector/YOLOX/exps/default/yolox_s.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 0.33\n        self.width = 0.50\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n"
  },
  {
    "path": "detector/YOLOX/exps/default/yolox_tiny.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 0.33\n        self.width = 0.375\n        self.scale = (0.5, 1.5)\n        self.random_size = (10, 20)\n        self.test_size = (416, 416)\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n        self.enable_mixup = False\n"
  },
  {
    "path": "detector/YOLOX/exps/default/yolox_x.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nfrom yolox.exp import Exp as MyExp\n\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.depth = 1.33\n        self.width = 1.25\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n"
  },
  {
    "path": "detector/YOLOX/exps/example/yolox_voc/yolox_voc_s.py",
    "content": "# encoding: utf-8\nimport os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom yolox.exp import Exp as MyExp\nfrom yolox.data import get_yolox_datadir\n\nclass Exp(MyExp):\n    def __init__(self):\n        super(Exp, self).__init__()\n        self.num_classes = 20\n        self.depth = 0.33\n        self.width = 0.50\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n\n    def get_data_loader(self, batch_size, is_distributed, no_aug=False):\n        from yolox.data import (\n            VOCDetection,\n            TrainTransform,\n            YoloBatchSampler,\n            DataLoader,\n            InfiniteSampler,\n            MosaicDetection,\n        )\n\n        dataset = VOCDetection(\n            data_dir=os.path.join(get_yolox_datadir(), \"VOCdevkit\"),\n            image_sets=[('2007', 'trainval'), ('2012', 'trainval')],\n            img_size=self.input_size,\n            preproc=TrainTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n                max_labels=50,\n            ),\n        )\n\n        dataset = MosaicDetection(\n            dataset,\n            mosaic=not no_aug,\n            img_size=self.input_size,\n            preproc=TrainTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n                max_labels=120,\n            ),\n            degrees=self.degrees,\n            translate=self.translate,\n            scale=self.scale,\n            shear=self.shear,\n            perspective=self.perspective,\n            enable_mixup=self.enable_mixup,\n        )\n\n        self.dataset = dataset\n\n        if is_distributed:\n            batch_size = batch_size // dist.get_world_size()\n\n        sampler = InfiniteSampler(\n            len(self.dataset), seed=self.seed if self.seed else 0\n        )\n\n        batch_sampler = YoloBatchSampler(\n  
          sampler=sampler,\n            batch_size=batch_size,\n            drop_last=False,\n            input_dimension=self.input_size,\n            mosaic=not no_aug,\n        )\n\n        dataloader_kwargs = {\"num_workers\": self.data_num_workers, \"pin_memory\": True}\n        dataloader_kwargs[\"batch_sampler\"] = batch_sampler\n        train_loader = DataLoader(self.dataset, **dataloader_kwargs)\n\n        return train_loader\n\n    def get_eval_loader(self, batch_size, is_distributed, testdev=False):\n        from yolox.data import VOCDetection, ValTransform\n\n        valdataset = VOCDetection(\n            data_dir=os.path.join(get_yolox_datadir(), \"VOCdevkit\"),\n            image_sets=[('2007', 'test')],\n            img_size=self.test_size,\n            preproc=ValTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n            ),\n        )\n\n        if is_distributed:\n            batch_size = batch_size // dist.get_world_size()\n            sampler = torch.utils.data.distributed.DistributedSampler(\n                valdataset, shuffle=False\n            )\n        else:\n            sampler = torch.utils.data.SequentialSampler(valdataset)\n\n        dataloader_kwargs = {\n            \"num_workers\": self.data_num_workers,\n            \"pin_memory\": True,\n            \"sampler\": sampler,\n        }\n        dataloader_kwargs[\"batch_size\"] = batch_size\n        val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs)\n\n        return val_loader\n\n    def get_evaluator(self, batch_size, is_distributed, testdev=False):\n        from yolox.evaluators import VOCEvaluator\n\n        val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev)\n        evaluator = VOCEvaluator(\n            dataloader=val_loader,\n            img_size=self.test_size,\n            confthre=self.test_conf,\n            nmsthre=self.nmsthre,\n            
num_classes=self.num_classes,\n        )\n        return evaluator\n"
  },
  {
    "path": "detector/YOLOX/requirements.txt",
    "content": "numpy\ntorch>=1.7\nopencv_python\nloguru\nscikit-image\ntqdm\ntorchvision\nPillow\nthop\nninja\ntabulate\ntensorboard\nonnxruntime\n\n\n"
  },
  {
    "path": "detector/YOLOX/setup.cfg",
    "content": "[isort]\nline_length = 100\nmulti_line_output = 3\nbalanced_wrapping = True\nknown_standard_library = setuptools\nknown_third_party = tqdm,loguru\nknown_data_processing = cv2,numpy,scipy,PIL,matplotlib,scikit_image\nknown_datasets = pycocotools\nknown_deeplearning = torch,torchvision,caffe2,onnx,apex,timm,thop,torch2trt,tensorrt,openvino,onnxruntime\nknown_myself = yolox\nsections = FUTURE,STDLIB,THIRDPARTY,data_processing,datasets,deeplearning,myself,FIRSTPARTY,LOCALFOLDER\nno_lines_before=STDLIB,THIRDPARTY,datasets\ndefault_section = FIRSTPARTY\n\n[flake8]\nmax-line-length = 100\nmax-complexity = 18\nexclude = __init__.py\n"
  },
  {
    "path": "detector/YOLOX/setup.py",
    "content": "#!/usr/bin/env python\n# Copyright (c) Megvii, Inc. and its affiliates. All Rights Reserved\n\nimport re\nimport setuptools\nimport glob\nfrom os import path\nimport torch\nfrom torch.utils.cpp_extension import CppExtension\n\ntorch_ver = [int(x) for x in torch.__version__.split(\".\")[:2]]\nassert torch_ver >= [1, 3], \"Requires PyTorch >= 1.3\"\n\n\ndef get_extensions():\n    this_dir = path.dirname(path.abspath(__file__))\n    extensions_dir = path.join(this_dir, \"yolox\", \"layers\", \"csrc\")\n\n    main_source = path.join(extensions_dir, \"vision.cpp\")\n    sources = glob.glob(path.join(extensions_dir, \"**\", \"*.cpp\"))\n\n    sources = [main_source] + sources\n    extension = CppExtension\n\n    extra_compile_args = {\"cxx\": [\"-O3\"]}\n    define_macros = []\n\n    include_dirs = [extensions_dir]\n\n    ext_modules = [\n        extension(\n            \"yolox._C\",\n            sources,\n            include_dirs=include_dirs,\n            define_macros=define_macros,\n            extra_compile_args=extra_compile_args,\n        )\n    ]\n\n    return ext_modules\n\n\nwith open(\"yolox/__init__.py\", \"r\") as f:\n    version = re.search(\n        r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]',\n        f.read(), re.MULTILINE\n    ).group(1)\n\n\nwith open(\"README.md\", \"r\") as f:\n    long_description = f.read()\n\n\nsetuptools.setup(\n    name=\"yolox\",\n    version=version,\n    author=\"basedet team\",\n    python_requires=\">=3.6\",\n    long_description=long_description,\n    ext_modules=None,\n    classifiers=[\"Programming Language :: Python :: 3\", \"Operating System :: OS Independent\"],\n    cmdclass={\"build_ext\": torch.utils.cpp_extension.BuildExtension},\n    packages=setuptools.find_packages(),\n)\n"
  },
  {
    "path": "detector/YOLOX/tools/__init__.py",
    "content": "###################################################################\n# File Name: __init__.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Sun Jul 25 17:14:12 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n"
  },
  {
    "path": "detector/YOLOX/tools/demo.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\nimport time\nfrom loguru import logger\n\nimport cv2\n\nimport torch\n\nfrom yolox.data.data_augment import preproc\nfrom yolox.data.datasets import COCO_CLASSES\nfrom yolox.exp import get_exp\nfrom yolox.utils import fuse_model, get_model_info, postprocess, vis\n\nIMAGE_EXT = ['.jpg', '.jpeg', '.webp', '.bmp', '.png']\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX Demo!\")\n    parser.add_argument('demo', default='image', help='demo type, eg. image, video and webcam')\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n\n    parser.add_argument('--path', default='./assets/dog.jpg', help='path to images or video')\n    parser.add_argument('--camid', type=int, default=0, help='webcam demo camera id')\n    parser.add_argument(\n        '--save_result', action='store_true',\n        help='whether to save the inference result of image/video'\n    )\n\n    # exp file\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"pls input your expriment description file\",\n    )\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"ckpt for eval\")\n    parser.add_argument(\"--device\", default=\"cpu\", type=str, help=\"device to run our model, can either be cpu or gpu\")\n    parser.add_argument(\"--conf\", default=None, type=float, help=\"test conf\")\n    parser.add_argument(\"--nms\", default=None, type=float, help=\"test nms threshold\")\n    parser.add_argument(\"--tsize\", default=None, type=int, help=\"test img size\")\n    parser.add_argument(\n        \"--fp16\",\n        dest=\"fp16\",\n        default=False,\n        action=\"store_true\",\n        help=\"Adopting mix precision 
evaluating.\",\n    )\n    parser.add_argument(\n        \"--fuse\",\n        dest=\"fuse\",\n        default=False,\n        action=\"store_true\",\n        help=\"Fuse conv and bn for testing.\",\n    )\n    parser.add_argument(\n        \"--trt\",\n        dest=\"trt\",\n        default=False,\n        action=\"store_true\",\n        help=\"Using TensorRT model for testing.\",\n    )\n    return parser\n\n\ndef get_image_list(path):\n    image_names = []\n    for maindir, subdir, file_name_list in os.walk(path):\n        for filename in file_name_list:\n            apath = os.path.join(maindir, filename)\n            ext = os.path.splitext(apath)[1]\n            if ext in IMAGE_EXT:\n                image_names.append(apath)\n    return image_names\n\n\nclass Predictor(object):\n    def __init__(self, model, exp, cls_names=COCO_CLASSES, trt_file=None, decoder=None, device=\"cpu\"):\n        self.model = model\n        self.cls_names = cls_names\n        self.decoder = decoder\n        self.num_classes = exp.num_classes\n        self.confthre = exp.test_conf\n        self.nmsthre = exp.nmsthre\n        self.test_size = exp.test_size\n        self.device = device\n        if trt_file is not None:\n            from torch2trt import TRTModule\n            model_trt = TRTModule()\n            model_trt.load_state_dict(torch.load(trt_file))\n\n            x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()\n            self.model(x)\n            self.model = model_trt\n        self.rgb_means = (0.485, 0.456, 0.406)\n        self.std = (0.229, 0.224, 0.225)\n\n    def inference(self, img):\n        img_info = {'id': 0}\n        if isinstance(img, str):\n            img_info['file_name'] = os.path.basename(img)\n            img = cv2.imread(img)\n        else:\n            img_info['file_name'] = None\n\n        height, width = img.shape[:2]\n        img_info['height'] = height\n        img_info['width'] = width\n        img_info['raw_img'] = img\n        
img, ratio = preproc(img, self.test_size, self.rgb_means, self.std)\n        img_info['ratio'] = ratio\n        img = torch.from_numpy(img).unsqueeze(0)\n        if self.device == \"gpu\":\n            img = img.cuda()\n\n        with torch.no_grad():\n            t0 = time.time()\n            outputs = self.model(img)\n            if self.decoder is not None:\n                outputs = self.decoder(outputs, dtype=outputs.type())\n            outputs = postprocess(\n                        outputs, self.num_classes, self.confthre, self.nmsthre\n                    )\n            logger.info('Infer time: {:.4f}s'.format(time.time()-t0))\n        return outputs, img_info\n\n    def visual(self, output, img_info, cls_conf=0.35):\n        ratio = img_info['ratio']\n        img = img_info['raw_img']\n        if output is None:\n            return img\n        output = output.cpu()\n\n        bboxes = output[:, 0:4]\n\n        # preprocessing: resize\n        bboxes /= ratio\n\n        cls = output[:, 6]\n        scores = output[:, 4] * output[:, 5]\n\n        vis_res = vis(img, bboxes, scores, cls, cls_conf, self.cls_names)\n        return vis_res\n\n\ndef image_demo(predictor, vis_folder, path, current_time, save_result):\n    if os.path.isdir(path):\n        files = get_image_list(path)\n    else:\n        files = [path]\n    files.sort()\n    for image_name in files:\n        outputs, img_info = predictor.inference(image_name)\n        result_image = predictor.visual(outputs[0], img_info)\n        if save_result:\n            save_folder = os.path.join(\n                vis_folder, time.strftime(\"%Y_%m_%d_%H_%M_%S\", current_time)\n            )\n            os.makedirs(save_folder, exist_ok=True)\n            save_file_name = os.path.join(save_folder, os.path.basename(image_name))\n            logger.info(\"Saving detection result in {}\".format(save_file_name))\n            cv2.imwrite(save_file_name, result_image)\n        ch = cv2.waitKey(0)\n        if ch == 27 
or ch == ord('q') or ch == ord('Q'):\n            break\n\n\ndef imageflow_demo(predictor, vis_folder, current_time, args):\n    cap = cv2.VideoCapture(args.path if args.demo == 'video' else args.camid)\n    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)  # float\n    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)  # float\n    fps = cap.get(cv2.CAP_PROP_FPS)\n
    save_folder = os.path.join(vis_folder, time.strftime(\"%Y_%m_%d_%H_%M_%S\", current_time))\n    os.makedirs(save_folder, exist_ok=True)\n    if args.demo == \"video\":\n        save_path = os.path.join(save_folder, args.path.split('/')[-1])\n    else:\n        save_path = os.path.join(save_folder, 'camera.mp4')\n    logger.info(f'video save_path is {save_path}')\n    vid_writer = cv2.VideoWriter(\n        save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (int(width), int(height))\n    )\n
    while True:\n        ret_val, frame = cap.read()\n        if ret_val:\n            outputs, img_info = predictor.inference(frame)\n            result_frame = predictor.visual(outputs[0], img_info)\n            if args.save_result:\n                vid_writer.write(result_frame)\n            ch = cv2.waitKey(1)\n            if ch == 27 or ch == ord('q') or ch == ord('Q'):\n                break\n        else:\n            break\n\n\ndef main(exp, args):\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n    file_name = os.path.join(exp.output_dir, args.experiment_name)\n    os.makedirs(file_name, exist_ok=True)\n\n
    # define vis_folder unconditionally: the demo functions below receive it\n    # even when --save_result is not set\n    vis_folder = os.path.join(file_name, 'vis_res')\n    if args.save_result:\n        os.makedirs(vis_folder, exist_ok=True)\n\n    if args.trt:\n        args.device = \"gpu\"\n\n    logger.info(\"Args: {}\".format(args))\n\n    if args.conf is not None:\n        exp.test_conf = args.conf\n    if args.nms is not None:\n        exp.nmsthre = args.nms\n    if args.tsize is not None:\n        exp.test_size = (args.tsize, args.tsize)\n\n
    model = exp.get_model()\n    logger.info(\"Model Summary: {}\".format(get_model_info(model, exp.test_size)))\n\n    if args.device == \"gpu\":\n        model.cuda()\n    model.eval()\n\n    if not args.trt:\n        if args.ckpt is None:\n            ckpt_file = os.path.join(file_name, \"best_ckpt.pth.tar\")\n        else:\n            ckpt_file = args.ckpt\n        logger.info(\"loading checkpoint\")\n        ckpt = torch.load(ckpt_file, map_location=\"cpu\")\n        # load the model state dict\n        model.load_state_dict(ckpt[\"model\"])\n        logger.info(\"loaded checkpoint done.\")\n\n    if args.fuse:\n        logger.info(\"\\tFusing model...\")\n        model = fuse_model(model)\n\n
    if args.trt:\n        assert (not args.fuse),\\\n            \"TensorRT model does not support model fusing!\"\n        trt_file = os.path.join(file_name, \"model_trt.pth\")\n        assert os.path.exists(trt_file), (\n            \"TensorRT model is not found!\\n Run python3 tools/trt.py first!\"\n        )\n        model.head.decode_in_inference = False\n        decoder = model.head.decode_outputs\n        logger.info(\"Using TensorRT for inference\")\n    else:\n        trt_file = None\n        decoder = None\n\n
    predictor = Predictor(model, exp, COCO_CLASSES, trt_file, decoder, args.device)\n    current_time = time.localtime()\n    if args.demo == 'image':\n        image_demo(predictor, vis_folder, args.path, current_time, args.save_result)\n    elif args.demo == 'video' or args.demo == 'webcam':\n        imageflow_demo(predictor, vis_folder, current_time, args)\n\n\nif __name__ == \"__main__\":\n    args = make_parser().parse_args()\n    exp = get_exp(args.exp_file, args.name)\n\n    main(exp, args)\n"
  },
  {
    "path": "detector/YOLOX/tools/eval.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\nimport random\nimport warnings\nfrom loguru import logger\n\nimport torch\nimport torch.backends.cudnn as cudnn\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\nfrom yolox.core import launch\nfrom yolox.exp import get_exp\nfrom yolox.utils import configure_nccl, fuse_model, get_local_rank, get_model_info, setup_logger\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX Eval\")\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n\n    # distributed\n    parser.add_argument(\n        \"--dist-backend\", default=\"nccl\", type=str, help=\"distributed backend\"\n    )\n    parser.add_argument(\n        \"--dist-url\", default=None, type=str, help=\"url used to set up distributed training\"\n    )\n    parser.add_argument(\"-b\", \"--batch-size\", type=int, default=64, help=\"batch size\")\n    parser.add_argument(\n        \"-d\", \"--devices\", default=None, type=int, help=\"device for training\"\n    )\n    parser.add_argument(\n        \"--local_rank\", default=0, type=int, help=\"local rank for dist training\"\n    )\n    parser.add_argument(\n        \"--num_machine\", default=1, type=int, help=\"num of node for training\"\n    )\n    parser.add_argument(\n        \"--machine_rank\", default=0, type=int, help=\"node rank for multi-node training\"\n    )\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"pls input your expriment description file\",\n    )\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"ckpt for eval\")\n    parser.add_argument(\"--conf\", default=None, type=float, help=\"test conf\")\n    parser.add_argument(\"--nms\", default=None, type=float, 
help=\"test nms threshold\")\n    parser.add_argument(\"--tsize\", default=None, type=int, help=\"test img size\")\n    parser.add_argument(\"--seed\", default=None, type=int, help=\"eval seed\")\n    parser.add_argument(\n        \"--fp16\",\n        dest=\"fp16\",\n        default=False,\n        action=\"store_true\",\n        help=\"Adopting mix precision evaluating.\",\n    )\n    parser.add_argument(\n        \"--fuse\",\n        dest=\"fuse\",\n        default=False,\n        action=\"store_true\",\n        help=\"Fuse conv and bn for testing.\",\n    )\n    parser.add_argument(\n        \"--trt\",\n        dest=\"trt\",\n        default=False,\n        action=\"store_true\",\n        help=\"Using TensorRT model for testing.\",\n    )\n    parser.add_argument(\n        \"--test\",\n        dest=\"test\",\n        default=False,\n        action=\"store_true\",\n        help=\"Evaluating on test-dev set.\",\n    )\n    parser.add_argument(\n        \"--speed\", dest=\"speed\", default=False, action=\"store_true\", help=\"speed test only.\"\n    )\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n    return parser\n\n\n@logger.catch\ndef main(exp, num_gpu, args):\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n    if args.seed is not None:\n        random.seed(args.seed)\n        torch.manual_seed(args.seed)\n        cudnn.deterministic = True\n        warnings.warn(\n            \"You have chosen to seed testing. 
This will turn on the CUDNN deterministic setting, \"\n            \"which can slow down your testing considerably!\"\n        )\n\n    is_distributed = num_gpu > 1\n\n    # set environment variables for distributed training\n    configure_nccl()\n    cudnn.benchmark = True\n\n    # rank = args.local_rank\n    rank = get_local_rank()\n\n
    if rank == 0:\n        if os.path.exists(\"./\" + args.experiment_name + \"ip_add.txt\"):\n            os.remove(\"./\" + args.experiment_name + \"ip_add.txt\")\n\n    file_name = os.path.join(exp.output_dir, args.experiment_name)\n\n    if rank == 0:\n        os.makedirs(file_name, exist_ok=True)\n\n    setup_logger(\n        file_name, distributed_rank=rank, filename=\"val_log.txt\", mode=\"a\"\n    )\n    logger.info(\"Args: {}\".format(args))\n\n
    if args.conf is not None:\n        exp.test_conf = args.conf\n    if args.nms is not None:\n        exp.nmsthre = args.nms\n    if args.tsize is not None:\n        exp.test_size = (args.tsize, args.tsize)\n\n    model = exp.get_model()\n    logger.info(\"Model Summary: {}\".format(get_model_info(model, exp.test_size)))\n    logger.info(\"Model Structure:\\n{}\".format(str(model)))\n\n    evaluator = exp.get_evaluator(args.batch_size, is_distributed, args.test)\n\n    torch.cuda.set_device(rank)\n    model.cuda(rank)\n    model.eval()\n\n
    if not args.speed and not args.trt:\n        if args.ckpt is None:\n            ckpt_file = os.path.join(file_name, \"best_ckpt.pth.tar\")\n        else:\n            ckpt_file = args.ckpt\n        logger.info(\"loading checkpoint\")\n        loc = \"cuda:{}\".format(rank)\n        ckpt = torch.load(ckpt_file, map_location=loc)\n        # load the model state dict\n        model.load_state_dict(ckpt[\"model\"])\n        logger.info(\"loaded checkpoint done.\")\n\n    if is_distributed:\n        model = DDP(model, device_ids=[rank])\n\n    if args.fuse:\n        logger.info(\"\\tFusing model...\")\n        model = fuse_model(model)\n\n
    if args.trt:\n        assert (not args.fuse and not is_distributed and args.batch_size == 1),\\\n            \"TensorRT model does not support model fusing and distributed inference!\"\n        trt_file = os.path.join(file_name, \"model_trt.pth\")\n        assert os.path.exists(trt_file), \"TensorRT model is not found!\\n Run tools/trt.py first!\"\n        model.head.decode_in_inference = False\n        decoder = model.head.decode_outputs\n    else:\n        trt_file = None\n        decoder = None\n\n
    # start evaluation\n    *_, summary = evaluator.evaluate(\n        model, is_distributed, args.fp16, trt_file, decoder, exp.test_size\n    )\n    logger.info(\"\\n\" + summary)\n\n\nif __name__ == \"__main__\":\n    args = make_parser().parse_args()\n    exp = get_exp(args.exp_file, args.name)\n    exp.merge(args.opts)\n\n    num_gpu = torch.cuda.device_count() if args.devices is None else args.devices\n    assert num_gpu <= torch.cuda.device_count()\n\n    dist_url = \"auto\" if args.dist_url is None else args.dist_url\n    launch(\n        main, num_gpu, args.num_machine, backend=args.dist_backend,\n        dist_url=dist_url, args=(exp, num_gpu, args)\n    )\n"
  },
  {
    "path": "detector/YOLOX/tools/export_onnx.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\nfrom loguru import logger\n\nimport torch\nfrom torch import nn\n\nfrom yolox.exp import get_exp\nfrom yolox.models.network_blocks import SiLU\nfrom yolox.utils import replace_module\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX onnx deploy\")\n    parser.add_argument(\n        \"--output-name\", type=str, default=\"yolox.onnx\", help=\"output name of models\"\n    )\n    parser.add_argument(\"--input\", default=\"images\", type=str, help=\"input name of onnx model\")\n    parser.add_argument(\"--output\", default=\"output\", type=str, help=\"output name of onnx model\")\n    parser.add_argument(\"-o\", \"--opset\", default=11, type=int, help=\"onnx opset version\")\n    parser.add_argument(\"--no-onnxsim\", action=\"store_true\", help=\"use onnxsim or not\")\n\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"expriment description file\",\n    )\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"ckpt path\")\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n\n    return parser\n\n\n@logger.catch\ndef main():\n    args = make_parser().parse_args()\n    logger.info(\"args value: {}\".format(args))\n    exp = get_exp(args.exp_file, args.name)\n    exp.merge(args.opts)\n\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n    model = exp.get_model()\n    if args.ckpt is None:\n        file_name = os.path.join(exp.output_dir, args.experiment_name)\n        ckpt_file = 
os.path.join(file_name, \"best_ckpt.pth.tar\")\n    else:\n        ckpt_file = args.ckpt\n\n    ckpt = torch.load(ckpt_file, map_location=\"cpu\")\n    # load the model state dict\n\n    model.eval()\n    if \"model\" in ckpt:\n        ckpt = ckpt[\"model\"]\n    model.load_state_dict(ckpt)\n    model = replace_module(model, nn.SiLU, SiLU)\n    model.head.decode_in_inference = False\n\n    logger.info(\"loaded checkpoint done.\")\n    dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])\n    torch.onnx._export(\n        model,\n        dummy_input,\n        args.output_name,\n        input_names=[args.input],\n        output_names=[args.output],\n        opset_version=args.opset,\n    )\n    logger.info(\"generate onnx named {}\".format(args.output_name))\n\n    if not args.no_onnxsim:\n        # use onnxsimplify to reduce reduent model.\n        os.system(\"python3 -m onnxsim {} {}\".format(args.output_name, args.output_name))\n        logger.info(\"generate simplify onnx named {}\".format(args.output_name))\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "detector/YOLOX/tools/train.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport random\nimport warnings\nfrom loguru import logger\n\nimport torch\nimport torch.backends.cudnn as cudnn\n\nfrom yolox.core import Trainer, launch\nfrom yolox.exp import get_exp\nfrom yolox.utils import configure_nccl\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX train parser\")\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n\n    # distributed\n    parser.add_argument(\n        \"--dist-backend\", default=\"nccl\", type=str, help=\"distributed backend\"\n    )\n    parser.add_argument(\n        \"--dist-url\", default=None, type=str, help=\"url used to set up distributed training\"\n    )\n    parser.add_argument(\"-b\", \"--batch-size\", type=int, default=64, help=\"batch size\")\n    parser.add_argument(\n        \"-d\", \"--devices\", default=None, type=int, help=\"device for training\"\n    )\n    parser.add_argument(\n        \"--local_rank\", default=0, type=int, help=\"local rank for dist training\"\n    )\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"plz input your expriment description file\",\n    )\n    parser.add_argument(\n        \"--resume\", default=False, action=\"store_true\", help=\"resume training\"\n    )\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"checkpoint file\")\n    parser.add_argument(\n        \"-e\", \"--start_epoch\", default=None, type=int, help=\"resume training start epoch\"\n    )\n    parser.add_argument(\n        \"--num_machine\", default=1, type=int, help=\"num of node for training\"\n    )\n    parser.add_argument(\n        \"--machine_rank\", default=0, type=int, help=\"node rank for multi-node training\"\n    )\n    
parser.add_argument(\n        \"--fp16\",\n        dest=\"fp16\",\n        default=True,\n        action=\"store_true\",\n        help=\"Adopting mixed precision training.\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--occumpy\",\n        dest=\"occumpy\",\n        default=False,\n        action=\"store_true\",\n        help=\"occupy GPU memory first for training.\",\n    )\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n    return parser\n\n\n@logger.catch\ndef main(exp, args):\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n
    if exp.seed is not None:\n        random.seed(exp.seed)\n        torch.manual_seed(exp.seed)\n        cudnn.deterministic = True\n        warnings.warn(\n            \"You have chosen to seed training. This will turn on the CUDNN deterministic setting, \"\n            \"which can slow down your training considerably! You may see unexpected behavior \"\n            \"when restarting from checkpoints.\"\n        )\n\n    # set environment variables for distributed training\n    configure_nccl()\n    cudnn.benchmark = True\n\n    trainer = Trainer(exp, args)\n    trainer.train()\n\n\nif __name__ == \"__main__\":\n    args = make_parser().parse_args()\n    exp = get_exp(args.exp_file, args.name)\n    exp.merge(args.opts)\n\n
    num_gpu = torch.cuda.device_count() if args.devices is None else args.devices\n    assert num_gpu <= torch.cuda.device_count()\n\n    dist_url = \"auto\" if args.dist_url is None else args.dist_url\n    launch(\n        main, num_gpu, args.num_machine, backend=args.dist_backend,\n        dist_url=dist_url, args=(exp, args)\n    )\n"
  },
  {
    "path": "detector/YOLOX/tools/trt.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport argparse\nimport os\nimport shutil\nfrom loguru import logger\n\nimport tensorrt as trt\nimport torch\nfrom torch2trt import torch2trt\n\nfrom yolox.exp import get_exp\n\n\ndef make_parser():\n    parser = argparse.ArgumentParser(\"YOLOX ncnn deploy\")\n    parser.add_argument(\"-expn\", \"--experiment-name\", type=str, default=None)\n    parser.add_argument(\"-n\", \"--name\", type=str, default=None, help=\"model name\")\n\n    parser.add_argument(\n        \"-f\",\n        \"--exp_file\",\n        default=None,\n        type=str,\n        help=\"pls input your expriment description file\",\n    )\n    parser.add_argument(\"-c\", \"--ckpt\", default=None, type=str, help=\"ckpt path\")\n    return parser\n\n\n@logger.catch\ndef main():\n    args = make_parser().parse_args()\n    exp = get_exp(args.exp_file, args.name)\n    if not args.experiment_name:\n        args.experiment_name = exp.exp_name\n\n    model = exp.get_model()\n    file_name = os.path.join(exp.output_dir, args.experiment_name)\n    os.makedirs(file_name, exist_ok=True)\n    if args.ckpt is None:\n        ckpt_file = os.path.join(file_name, \"best_ckpt.pth.tar\")\n    else:\n        ckpt_file = args.ckpt\n\n    ckpt = torch.load(ckpt_file, map_location=\"cpu\")\n    # load the model state dict\n\n    model.load_state_dict(ckpt[\"model\"])\n    logger.info(\"loaded checkpoint done.\")\n    model.eval()\n    model.cuda()\n    model.head.decode_in_inference = False\n    x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()\n    model_trt = torch2trt(\n        model,\n        [x],\n        fp16_mode=True,\n        log_level=trt.Logger.INFO,\n        max_workspace_size=(1 << 32),\n    )\n    torch.save(model_trt.state_dict(), os.path.join(file_name, 'model_trt.pth'))\n    logger.info(\"Converted TensorRT model done.\")\n    engine_file = os.path.join(file_name, 
'model_trt.engine')\n    engine_file_demo = os.path.join('demo', 'TensorRT', 'cpp', 'model_trt.engine')\n    with open(engine_file, 'wb') as f:\n        f.write(model_trt.engine.serialize())\n\n    shutil.copyfile(engine_file, engine_file_demo)\n\n    logger.info(\"Converted TensorRT model engine file is saved for C++ inference.\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "detector/YOLOX/yolox/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\nfrom .utils import configure_module\n\nconfigure_module()\n\n__version__ = \"0.1.0\"\n"
  },
  {
    "path": "detector/YOLOX/yolox/core/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nfrom .launch import launch\nfrom .trainer import Trainer\n"
  },
  {
    "path": "detector/YOLOX/yolox/core/launch.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Code are based on\n# https://github.com/facebookresearch/detectron2/blob/master/detectron2/engine/launch.py\n# Copyright (c) Facebook, Inc. and its affiliates.\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nfrom loguru import logger\n\nimport torch\nimport torch.distributed as dist\nimport torch.multiprocessing as mp\n\nimport yolox.utils.dist as comm\n\n__all__ = [\"launch\"]\n\n\ndef _find_free_port():\n    \"\"\"\n    Find an available port of current machine / node.\n    \"\"\"\n    import socket\n\n    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n    # Binding to port 0 will cause the OS to find an available port for us\n    sock.bind((\"\", 0))\n    port = sock.getsockname()[1]\n    sock.close()\n    # NOTE: there is still a chance the port could be taken by other processes.\n    return port\n\n\ndef launch(\n    main_func, num_gpus_per_machine, num_machines=1, machine_rank=0,\n    backend=\"nccl\", dist_url=None, args=()\n):\n    \"\"\"\n    Args:\n        main_func: a function that will be called by `main_func(*args)`\n        num_machines (int): the total number of machines\n        machine_rank (int): the rank of this machine (one per machine)\n        dist_url (str): url to connect to for distributed training, including protocol\n                       e.g. 
\"tcp://127.0.0.1:8686\".\n                       Can be set to auto to automatically select a free port on localhost\n        args (tuple): arguments passed to main_func\n    \"\"\"\n    world_size = num_machines * num_gpus_per_machine\n    if world_size > 1:\n        # https://github.com/pytorch/pytorch/pull/14391\n        # TODO prctl in spawned processes\n\n        if dist_url == \"auto\":\n            assert num_machines == 1, \"dist_url=auto cannot work with distributed training.\"\n            port = _find_free_port()\n            dist_url = f\"tcp://127.0.0.1:{port}\"\n\n        mp.spawn(\n            _distributed_worker,\n            nprocs=num_gpus_per_machine,\n            args=(\n                main_func, world_size, num_gpus_per_machine,\n                machine_rank, backend, dist_url, args\n            ),\n            daemon=False,\n        )\n    else:\n        main_func(*args)\n\n\ndef _distributed_worker(\n    local_rank, main_func, world_size, num_gpus_per_machine,\n    machine_rank, backend, dist_url, args\n):\n    assert torch.cuda.is_available(), \"cuda is not available. 
Please check your installation.\"\n    global_rank = machine_rank * num_gpus_per_machine + local_rank\n    logger.info(\"Rank {} initialization finished.\".format(global_rank))\n    try:\n        dist.init_process_group(\n            backend=backend,\n            init_method=dist_url,\n            world_size=world_size,\n            rank=global_rank,\n        )\n    except Exception:\n        logger.error(\"Process group URL: {}\".format(dist_url))\n        raise\n    # synchronize is needed here to prevent a possible timeout after calling init_process_group\n    # See: https://github.com/facebookresearch/maskrcnn-benchmark/issues/172\n    comm.synchronize()\n\n    assert num_gpus_per_machine <= torch.cuda.device_count()\n    torch.cuda.set_device(local_rank)\n\n    # Setup the local process group (which contains ranks within the same machine)\n    assert comm._LOCAL_PROCESS_GROUP is None\n    num_machines = world_size // num_gpus_per_machine\n    for i in range(num_machines):\n        ranks_on_i = list(range(i * num_gpus_per_machine, (i + 1) * num_gpus_per_machine))\n        pg = dist.new_group(ranks_on_i)\n        if i == machine_rank:\n            comm._LOCAL_PROCESS_GROUP = pg\n\n    main_func(*args)\n"
  },
  {
    "path": "detector/YOLOX/yolox/core/trainer.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport datetime\nimport os\nimport time\nfrom loguru import logger\n\nimport apex\nimport torch\nfrom apex import amp\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom yolox.data import DataPrefetcher\nfrom yolox.utils import (\n    MeterBuffer,\n    ModelEMA,\n    all_reduce_norm,\n    get_local_rank,\n    get_model_info,\n    get_rank,\n    get_world_size,\n    gpu_mem_usage,\n    load_ckpt,\n    occumpy_mem,\n    save_checkpoint,\n    setup_logger,\n    synchronize\n)\n\n\nclass Trainer:\n\n    def __init__(self, exp, args):\n        # init function only defines some basic attr, other attrs like model, optimizer are built in\n        # before_train methods.\n        self.exp = exp\n        self.args = args\n\n        # training related attr\n        self.max_epoch = exp.max_epoch\n        self.amp_training = args.fp16\n        self.is_distributed = get_world_size() > 1\n        self.rank = get_rank()\n        self.local_rank = get_local_rank()\n        self.device = \"cuda:{}\".format(self.local_rank)\n        self.use_model_ema = exp.ema\n\n        # data/dataloader related attr\n        self.data_type = torch.float16 if args.fp16 else torch.float32\n        self.input_size = exp.input_size\n        self.best_ap = 0\n\n        # metric record\n        self.meter = MeterBuffer(window_size=exp.print_interval)\n        self.file_name = os.path.join(exp.output_dir, args.experiment_name)\n\n        if self.rank == 0 and os.path.exists(\"./\" + args.experiment_name + \"ip_add.txt\"):\n            os.remove(\"./\" + args.experiment_name + \"ip_add.txt\")\n\n        if self.rank == 0:\n            os.makedirs(self.file_name, exist_ok=True)\n\n        setup_logger(self.file_name, distributed_rank=self.rank, filename=\"train_log.txt\", mode=\"a\")\n\n    def train(self):\n        self.before_train()\n        try:\n            
self.train_in_epoch()\n        except Exception:\n            raise\n        finally:\n            self.after_train()\n\n    def train_in_epoch(self):\n        for self.epoch in range(self.start_epoch, self.max_epoch):\n            self.before_epoch()\n            self.train_in_iter()\n            self.after_epoch()\n\n    def train_in_iter(self):\n        for self.iter in range(self.max_iter):\n            self.before_iter()\n            self.train_one_iter()\n            self.after_iter()\n\n    def train_one_iter(self):\n        iter_start_time = time.time()\n\n        inps, targets = self.prefetcher.next()\n        inps = inps.to(self.data_type)\n        targets = targets.to(self.data_type)\n        targets.requires_grad = False\n        data_end_time = time.time()\n\n        outputs = self.model(inps, targets)\n        loss = outputs[\"total_loss\"]\n\n        self.optimizer.zero_grad()\n        if self.amp_training:\n            with amp.scale_loss(loss, self.optimizer) as scaled_loss:\n                scaled_loss.backward()\n        else:\n            loss.backward()\n        self.optimizer.step()\n\n        if self.use_model_ema:\n            self.ema_model.update(self.model)\n\n        lr = self.lr_scheduler.update_lr(self.progress_in_iter + 1)\n        for param_group in self.optimizer.param_groups:\n            param_group[\"lr\"] = lr\n\n        iter_end_time = time.time()\n        self.meter.update(\n            iter_time=iter_end_time - iter_start_time,\n            data_time=data_end_time - iter_start_time,\n            lr=lr,\n            **outputs,\n        )\n\n    def before_train(self):\n        logger.info(\"args: {}\".format(self.args))\n        logger.info(\"exp value:\\n{}\".format(self.exp))\n\n        # model related init\n        torch.cuda.set_device(self.local_rank)\n        model = self.exp.get_model()\n        logger.info(\"Model Summary: {}\".format(get_model_info(model, self.exp.test_size)))\n        model.to(self.device)\n\n        
# solver related init\n        self.optimizer = self.exp.get_optimizer(self.args.batch_size)\n\n        if self.amp_training:\n            model, optimizer = amp.initialize(model, self.optimizer, opt_level=\"O1\")\n\n        # value of epoch will be set in `resume_train`\n        model = self.resume_train(model)\n\n        # data related init\n        self.no_aug = self.start_epoch >= self.max_epoch - self.exp.no_aug_epochs\n        self.train_loader = self.exp.get_data_loader(\n            batch_size=self.args.batch_size,\n            is_distributed=self.is_distributed,\n            no_aug=self.no_aug\n        )\n        logger.info(\"init prefetcher, this might take one minute or less...\")\n        self.prefetcher = DataPrefetcher(self.train_loader)\n        # max_iter means iters per epoch\n        self.max_iter = len(self.train_loader)\n\n        self.lr_scheduler = self.exp.get_lr_scheduler(\n            self.exp.basic_lr_per_img * self.args.batch_size, self.max_iter\n        )\n        if self.args.occumpy:\n            occumpy_mem(self.local_rank)\n\n        if self.is_distributed:\n            model = apex.parallel.DistributedDataParallel(model)\n            # from torch.nn.parallel import DistributedDataParallel as DDP\n            # model = DDP(model, device_ids=[self.local_rank], broadcast_buffers=False)\n\n        if self.use_model_ema:\n            self.ema_model = ModelEMA(model, 0.9998)\n            self.ema_model.updates = self.max_iter * self.start_epoch\n\n        self.model = model\n        self.model.train()\n\n        self.evaluator = self.exp.get_evaluator(\n            batch_size=self.args.batch_size, is_distributed=self.is_distributed\n        )\n        # Tensorboard logger\n        if self.rank == 0:\n            self.tblogger = SummaryWriter(self.file_name)\n\n        logger.info(\"Training start...\")\n        logger.info(\"\\n{}\".format(model))\n\n    def after_train(self):\n        logger.info(\n            \"Training of experiment 
is done and the best AP is {:.2f}\".format(self.best_ap * 100)\n        )\n\n    def before_epoch(self):\n        logger.info(\"---> start train epoch{}\".format(self.epoch + 1))\n\n        if self.epoch + 1 == self.max_epoch - self.exp.no_aug_epochs or self.no_aug:\n            logger.info(\"--->No mosaic aug now!\")\n            self.train_loader.close_mosaic()\n            logger.info(\"--->Add additional L1 loss now!\")\n            if self.is_distributed:\n                self.model.module.head.use_l1 = True\n            else:\n                self.model.head.use_l1 = True\n            self.exp.eval_interval = 1\n            if not self.no_aug:\n                self.save_ckpt(ckpt_name=\"last_mosaic_epoch\")\n\n    def after_epoch(self):\n        if self.use_model_ema:\n            self.ema_model.update_attr(self.model)\n\n        self.save_ckpt(ckpt_name=\"latest\")\n\n        if (self.epoch + 1) % self.exp.eval_interval == 0:\n            all_reduce_norm(self.model)\n            self.evaluate_and_save_model()\n\n    def before_iter(self):\n        pass\n\n    def after_iter(self):\n        \"\"\"\n        `after_iter` contains two parts of logic:\n            * log information\n            * reset setting of resize\n        \"\"\"\n        # log needed information\n        if (self.iter + 1) % self.exp.print_interval == 0:\n            # TODO check ETA logic\n            left_iters = self.max_iter * self.max_epoch - (self.progress_in_iter + 1)\n            eta_seconds = self.meter[\"iter_time\"].global_avg * left_iters\n            eta_str = \"ETA: {}\".format(datetime.timedelta(seconds=int(eta_seconds)))\n\n            progress_str = \"epoch: {}/{}, iter: {}/{}\".format(\n                self.epoch + 1, self.max_epoch, self.iter + 1, self.max_iter\n            )\n            loss_meter = self.meter.get_filtered_meter(\"loss\")\n            loss_str = \", \".join([\"{}: {:.1f}\".format(k, v.latest) for k, v in loss_meter.items()])\n\n            time_meter = 
self.meter.get_filtered_meter(\"time\")\n            time_str = \", \".join([\"{}: {:.3f}s\".format(k, v.avg) for k, v in time_meter.items()])\n\n            logger.info(\n                \"{}, mem: {:.0f}Mb, {}, {}, lr: {:.3e}\".format(\n                    progress_str,\n                    gpu_mem_usage(),\n                    time_str,\n                    loss_str,\n                    self.meter[\"lr\"].latest,\n                )\n                + (\", size: {:d}, {}\".format(self.input_size[0], eta_str))\n            )\n            self.meter.clear_meters()\n\n        # random resizing\n        if self.exp.random_size is not None and (self.progress_in_iter + 1) % 10 == 0:\n            self.input_size = self.exp.random_resize(\n                self.train_loader, self.epoch, self.rank, self.is_distributed\n            )\n\n    @property\n    def progress_in_iter(self):\n        return self.epoch * self.max_iter + self.iter\n\n    def resume_train(self, model):\n        if self.args.resume:\n            logger.info(\"resume training\")\n            if self.args.ckpt is None:\n                ckpt_file = os.path.join(self.file_name, \"latest\" + \"_ckpt.pth.tar\")\n            else:\n                ckpt_file = self.args.ckpt\n\n            ckpt = torch.load(ckpt_file, map_location=self.device)\n            # resume the model/optimizer state dict\n            model.load_state_dict(ckpt[\"model\"])\n            self.optimizer.load_state_dict(ckpt[\"optimizer\"])\n            # resume the training states variables\n            if self.amp_training and \"amp\" in ckpt:\n                amp.load_state_dict(ckpt[\"amp\"])\n            start_epoch = (\n                self.args.start_epoch - 1\n                if self.args.start_epoch is not None\n                else ckpt[\"start_epoch\"]\n            )\n            self.start_epoch = start_epoch\n            logger.info(\"loaded checkpoint '{}' (epoch {})\".format(self.args.resume, self.start_epoch))  # noqa\n      
  else:\n            if self.args.ckpt is not None:\n                logger.info(\"loading checkpoint for fine tuning\")\n                ckpt_file = self.args.ckpt\n                ckpt = torch.load(ckpt_file, map_location=self.device)[\"model\"]\n                model = load_ckpt(model, ckpt)\n            self.start_epoch = 0\n\n        return model\n\n    def evaluate_and_save_model(self):\n        evalmodel = self.ema_model.ema if self.use_model_ema else self.model\n        ap50_95, ap50, summary = self.exp.eval(evalmodel, self.evaluator, self.is_distributed)\n        self.model.train()\n        if self.rank == 0:\n            self.tblogger.add_scalar(\"val/COCOAP50\", ap50, self.epoch + 1)\n            self.tblogger.add_scalar(\"val/COCOAP50_95\", ap50_95, self.epoch + 1)\n            logger.info(\"\\n\" + summary)\n        synchronize()\n\n        self.save_ckpt(\"last_epoch\", ap50_95 > self.best_ap)\n        self.best_ap = max(self.best_ap, ap50_95)\n\n    def save_ckpt(self, ckpt_name, update_best_ckpt=False):\n        if self.rank == 0:\n            save_model = self.ema_model.ema if self.use_model_ema else self.model\n            logger.info(\"Save weights to {}\".format(self.file_name))\n            ckpt_state = {\n                \"start_epoch\": self.epoch + 1,\n                \"model\": save_model.state_dict(),\n                \"optimizer\": self.optimizer.state_dict(),\n            }\n            if self.amp_training:\n                # save amp state according to\n                # https://nvidia.github.io/apex/amp.html#checkpointing\n                ckpt_state[\"amp\"] = amp.state_dict()\n            save_checkpoint(\n                ckpt_state,\n                update_best_ckpt,\n                self.file_name,\n                ckpt_name,\n            )\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nfrom .data_augment import TrainTransform, ValTransform\nfrom .data_prefetcher import DataPrefetcher\nfrom .dataloading import DataLoader, get_yolox_datadir\nfrom .datasets import *\nfrom .samplers import InfiniteSampler, YoloBatchSampler\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/data_augment.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\"\"\"\nData augmentation functionality. Passed as callable transformations to\nDataset classes.\n\nThe data augmentation procedures were interpreted from @weiliu89's SSD paper\nhttp://arxiv.org/abs/1512.02325\n\"\"\"\n\nimport math\nimport random\n\nimport cv2\nimport numpy as np\n\nimport torch\n\n\ndef augment_hsv(img, hgain=0.015, sgain=0.7, vgain=0.4):\n    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1  # random gains\n    hue, sat, val = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))\n    dtype = img.dtype  # uint8\n\n    x = np.arange(0, 256, dtype=np.int16)\n    lut_hue = ((x * r[0]) % 180).astype(dtype)\n    lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)\n    lut_val = np.clip(x * r[2], 0, 255).astype(dtype)\n\n    img_hsv = cv2.merge(\n        (cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))\n    ).astype(dtype)\n    cv2.cvtColor(img_hsv, cv2.COLOR_HSV2BGR, dst=img)  # no return needed\n\n\ndef box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.2):\n    # box1(4,n), box2(4,n)\n    # Compute candidate boxes which include follwing 5 things:\n    # box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio\n    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]\n    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]\n    ar = np.maximum(w2 / (h2 + 1e-16), h2 / (w2 + 1e-16))  # aspect ratio\n    return (\n        (w2 > wh_thr)\n        & (h2 > wh_thr)\n        & (w2 * h2 / (w1 * h1 + 1e-16) > area_thr)\n        & (ar < ar_thr)\n    )  # candidates\n\n\ndef random_perspective(\n    img, targets=(), degrees=10, translate=0.1, scale=0.1, shear=10, perspective=0.0, border=(0, 0),\n):\n    # targets = [cls, xyxy]\n    height = img.shape[0] + border[0] * 2  # shape(h,w,c)\n    width = img.shape[1] + border[1] * 2\n\n    # Center\n    C = np.eye(3)\n    C[0, 2] = -img.shape[1] / 2  # x 
translation (pixels)\n    C[1, 2] = -img.shape[0] / 2  # y translation (pixels)\n\n    # Rotation and Scale\n    R = np.eye(3)\n    a = random.uniform(-degrees, degrees)\n    # a += random.choice([-180, -90, 0, 90])  # add 90deg rotations to small rotations\n    s = random.uniform(scale[0], scale[1])\n    # s = 2 ** random.uniform(-scale, scale)\n    R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)\n\n    # Shear\n    S = np.eye(3)\n    S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180)  # x shear (deg)\n    S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180)  # y shear (deg)\n\n    # Translation\n    T = np.eye(3)\n    T[0, 2] = (random.uniform(0.5 - translate, 0.5 + translate) * width)  # x translation (pixels)\n    T[1, 2] = (random.uniform(0.5 - translate, 0.5 + translate) * height)  # y translation (pixels)\n\n    # Combined rotation matrix\n    M = T @ S @ R @ C  # order of operations (right to left) is IMPORTANT\n\n    ###########################\n    # For Aug out of Mosaic\n    # s = 1.\n    # M = np.eye(3)\n    ###########################\n\n    if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any():  # image changed\n        if perspective:\n            img = cv2.warpPerspective(img, M, dsize=(width, height), borderValue=(114, 114, 114))\n        else:  # affine\n            img = cv2.warpAffine(img, M[:2], dsize=(width, height), borderValue=(114, 114, 114))\n\n    # Transform label coordinates\n    n = len(targets)\n    if n:\n        # warp points\n        xy = np.ones((n * 4, 3))\n        xy[:, :2] = targets[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(n * 4, 2)  # x1y1, x2y2, x1y2, x2y1\n        xy = xy @ M.T  # transform\n        if perspective:\n            xy = (xy[:, :2] / xy[:, 2:3]).reshape(n, 8)  # rescale\n        else:  # affine\n            xy = xy[:, :2].reshape(n, 8)\n\n        # create new boxes\n        x = xy[:, [0, 2, 4, 6]]\n        y = xy[:, [1, 3, 5, 7]]\n        xy = 
np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T\n\n        # clip boxes\n        xy[:, [0, 2]] = xy[:, [0, 2]].clip(0, width)\n        xy[:, [1, 3]] = xy[:, [1, 3]].clip(0, height)\n\n        # filter candidates\n        i = box_candidates(box1=targets[:, :4].T * s, box2=xy.T)\n        targets = targets[i]\n        targets[:, :4] = xy[i]\n\n    return img, targets\n\n\ndef _distort(image):\n    def _convert(image, alpha=1, beta=0):\n        tmp = image.astype(float) * alpha + beta\n        tmp[tmp < 0] = 0\n        tmp[tmp > 255] = 255\n        image[:] = tmp\n\n    image = image.copy()\n\n    if random.randrange(2):\n        _convert(image, beta=random.uniform(-32, 32))\n\n    if random.randrange(2):\n        _convert(image, alpha=random.uniform(0.5, 1.5))\n\n    image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n    if random.randrange(2):\n        tmp = image[:, :, 0].astype(int) + random.randint(-18, 18)\n        tmp %= 180\n        image[:, :, 0] = tmp\n\n    if random.randrange(2):\n        _convert(image[:, :, 1], alpha=random.uniform(0.5, 1.5))\n\n    image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)\n\n    return image\n\n\ndef _mirror(image, boxes):\n    _, width, _ = image.shape\n    if random.randrange(2):\n        image = image[:, ::-1]\n        boxes = boxes.copy()\n        boxes[:, 0::2] = width - boxes[:, 2::-2]\n    return image, boxes\n\n\ndef preproc(image, input_size, mean, std, swap=(2, 0, 1)):\n    if len(image.shape) == 3:\n        padded_img = np.ones((input_size[0], input_size[1], 3)) * 114.0\n    else:\n        padded_img = np.ones(input_size) * 114.0\n    img = np.array(image)\n    r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])\n    resized_img = cv2.resize(\n        img, (int(img.shape[1] * r), int(img.shape[0] * r)), interpolation=cv2.INTER_LINEAR\n    ).astype(np.float32)\n    padded_img[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized_img\n    image = padded_img\n\n    image = 
image.astype(np.float32)\n    image = image[:, :, ::-1]\n    image /= 255.0\n    if mean is not None:\n        image -= mean\n    if std is not None:\n        image /= std\n    image = image.transpose(swap)\n    image = np.ascontiguousarray(image, dtype=np.float32)\n    return image, r\n\n\nclass TrainTransform:\n    def __init__(self, p=0.5, rgb_means=None, std=None, max_labels=50):\n        self.means = rgb_means\n        self.std = std\n        self.p = p\n        self.max_labels = max_labels\n\n    def __call__(self, image, targets, input_dim):\n        boxes = targets[:, :4].copy()\n        labels = targets[:, 4].copy()\n        if targets.shape[1] > 5:\n            mixup = True\n            ratios = targets[:, -1].copy()\n            ratios_o = targets[:, -1].copy()\n        else:\n            mixup = False\n            ratios = None\n            ratios_o = None\n        lshape = 6 if mixup else 5\n        if len(boxes) == 0:\n            targets = np.zeros((self.max_labels, lshape), dtype=np.float32)\n            image, r_o = preproc(image, input_dim, self.means, self.std)\n            image = np.ascontiguousarray(image, dtype=np.float32)\n            return image, targets\n\n        image_o = image.copy()\n        targets_o = targets.copy()\n        height_o, width_o, _ = image_o.shape\n        boxes_o = targets_o[:, :4]\n        labels_o = targets_o[:, 4]\n        # bbox_o: [xyxy] to [c_x,c_y,w,h]\n        b_x_o = (boxes_o[:, 2] + boxes_o[:, 0]) * 0.5\n        b_y_o = (boxes_o[:, 3] + boxes_o[:, 1]) * 0.5\n        b_w_o = (boxes_o[:, 2] - boxes_o[:, 0]) * 1.0\n        b_h_o = (boxes_o[:, 3] - boxes_o[:, 1]) * 1.0\n        boxes_o[:, 0] = b_x_o\n        boxes_o[:, 1] = b_y_o\n        boxes_o[:, 2] = b_w_o\n        boxes_o[:, 3] = b_h_o\n\n        image_t = _distort(image)\n        image_t, boxes = _mirror(image_t, boxes)\n        height, width, _ = image_t.shape\n        image_t, r_ = preproc(image_t, input_dim, self.means, self.std)\n        boxes = 
boxes.copy()\n        # boxes [xyxy] 2 [cx,cy,w,h]\n        b_x = (boxes[:, 2] + boxes[:, 0]) * 0.5\n        b_y = (boxes[:, 3] + boxes[:, 1]) * 0.5\n        b_w = (boxes[:, 2] - boxes[:, 0]) * 1.0\n        b_h = (boxes[:, 3] - boxes[:, 1]) * 1.0\n        boxes[:, 0] = b_x\n        boxes[:, 1] = b_y\n        boxes[:, 2] = b_w\n        boxes[:, 3] = b_h\n\n        boxes *= r_\n\n        mask_b = np.minimum(boxes[:, 2], boxes[:, 3]) > 8\n        boxes_t = boxes[mask_b]\n        labels_t = labels[mask_b].copy()\n        if mixup:\n            ratios_t = ratios[mask_b].copy()\n\n        if len(boxes_t) == 0:\n            image_t, r_o = preproc(image_o, input_dim, self.means, self.std)\n            boxes_o *= r_o\n            boxes_t = boxes_o\n            labels_t = labels_o\n            ratios_t = ratios_o\n\n        labels_t = np.expand_dims(labels_t, 1)\n        if mixup:\n            ratios_t = np.expand_dims(ratios_t, 1)\n            targets_t = np.hstack((labels_t, boxes_t, ratios_t))\n        else:\n            targets_t = np.hstack((labels_t, boxes_t))\n        padded_labels = np.zeros((self.max_labels, lshape))\n        padded_labels[range(len(targets_t))[: self.max_labels]] = targets_t[\n            : self.max_labels\n        ]\n        padded_labels = np.ascontiguousarray(padded_labels, dtype=np.float32)\n        image_t = np.ascontiguousarray(image_t, dtype=np.float32)\n        return image_t, padded_labels\n\n\nclass ValTransform:\n    \"\"\"\n    Defines the transformations that should be applied to test PIL image\n    for input into the network\n\n    dimension -> tensorize -> color adj\n\n    Arguments:\n        resize (int): input dimension to SSD\n        rgb_means ((int,int,int)): average RGB of the dataset\n            (104,117,123)\n        swap ((int,int,int)): final order of channels\n\n    Returns:\n        transform (transform) : callable transform to be applied to test/val\n        data\n    \"\"\"\n\n    def __init__(self, rgb_means=None, 
std=None, swap=(2, 0, 1)):\n        self.means = rgb_means\n        self.swap = swap\n        self.std = std\n\n    # assume input is cv2 img for now\n    def __call__(self, img, res, input_size):\n        img, _ = preproc(img, input_size, self.means, self.std, self.swap)\n        return torch.from_numpy(img), torch.zeros(1, 5)\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/data_prefetcher.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport random\n\nimport torch\nimport torch.distributed as dist\n\nfrom ..utils import synchronize\n\n\nclass DataPrefetcher:\n    \"\"\"\n    DataPrefetcher is inspired by code of following file:\n    https://github.com/NVIDIA/apex/blob/master/examples/imagenet/main_amp.py\n    It could speedup your pytorch dataloader. For more information, please check\n    https://github.com/NVIDIA/apex/issues/304#issuecomment-493562789.\n    \"\"\"\n\n    def __init__(self, loader):\n        self.loader = iter(loader)\n        self.stream = torch.cuda.Stream()\n        self.input_cuda = self._input_cuda_for_image\n        self.record_stream = DataPrefetcher._record_stream_for_image\n        self.preload()\n\n    def preload(self):\n        try:\n            self.next_input, self.next_target, _, _ = next(self.loader)\n        except StopIteration:\n            self.next_input = None\n            self.next_target = None\n            return\n\n        with torch.cuda.stream(self.stream):\n            self.input_cuda()\n            self.next_target = self.next_target.cuda(non_blocking=True)\n\n    def next(self):\n        torch.cuda.current_stream().wait_stream(self.stream)\n        input = self.next_input\n        target = self.next_target\n        if input is not None:\n            self.record_stream(input)\n        if target is not None:\n            target.record_stream(torch.cuda.current_stream())\n        self.preload()\n        return input, target\n\n    def _input_cuda_for_image(self):\n        self.next_input = self.next_input.cuda(non_blocking=True)\n\n    @staticmethod\n    def _record_stream_for_image(input):\n        input.record_stream(torch.cuda.current_stream())\n\n\ndef random_resize(data_loader, exp, epoch, rank, is_distributed):\n    tensor = torch.LongTensor(1).cuda()\n    if is_distributed:\n        synchronize()\n\n    if rank == 0:\n        if 
epoch > exp.max_epoch - 10:\n            size = exp.input_size\n        else:\n            size = random.randint(*exp.random_size)\n            size = int(32 * size)\n        tensor.fill_(size)\n\n    if is_distributed:\n        synchronize()\n        dist.broadcast(tensor, 0)\n\n    input_size = data_loader.change_input_dim(multiple=tensor.item(), random_range=None)\n    return input_size\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/dataloading.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport random\n\nimport torch\nfrom torch.utils.data.dataloader import DataLoader as torchDataLoader\nfrom torch.utils.data.dataloader import default_collate\n\nfrom .samplers import YoloBatchSampler\n\n\ndef get_yolox_datadir():\n    \"\"\"\n    get dataset dir of YOLOX. If environment variable named `YOLOX_DATADIR` is set,\n    this function will return value of the environment variable. Otherwise, use data\n    \"\"\"\n    yolox_datadir = os.getenv(\"YOLOX_DATADIR\", None)\n    if yolox_datadir is None:\n        import yolox\n        yolox_path = os.path.dirname(os.path.dirname(yolox.__file__))\n        yolox_datadir = os.path.join(yolox_path, \"datasets\")\n    return yolox_datadir\n\n\nclass DataLoader(torchDataLoader):\n    \"\"\"\n    Lightnet dataloader that enables on the fly resizing of the images.\n    See :class:`torch.utils.data.DataLoader` for more information on the arguments.\n    Check more on the following website:\n    https://gitlab.com/EAVISE/lightnet/-/blob/master/lightnet/data/_dataloading.py\n\n    Note:\n        This dataloader only works with :class:`lightnet.data.Dataset` based datasets.\n\n    Example:\n        >>> class CustomSet(ln.data.Dataset):\n        ...     def __len__(self):\n        ...         return 4\n        ...     @ln.data.Dataset.resize_getitem\n        ...     def __getitem__(self, index):\n        ...         # Should return (image, anno) but here we return (input_dim,)\n        ...         return (self.input_dim,)\n        >>> dl = ln.data.DataLoader(\n        ...     CustomSet((200,200)),\n        ...     batch_size = 2,\n        ...     collate_fn = ln.data.list_collate   # We want the data to be grouped as a list\n        ... )\n        >>> dl.dataset.input_dim    # Default input_dim\n        (200, 200)\n        >>> for d in dl:\n        ...     
d\n        [[(200, 200), (200, 200)]]\n        [[(200, 200), (200, 200)]]\n        >>> dl.change_input_dim(320, random_range=None)\n        (320, 320)\n        >>> for d in dl:\n        ...     d\n        [[(320, 320), (320, 320)]]\n        [[(320, 320), (320, 320)]]\n        >>> dl.change_input_dim((480, 320), random_range=None)\n        (480, 320)\n        >>> for d in dl:\n        ...     d\n        [[(480, 320), (480, 320)]]\n        [[(480, 320), (480, 320)]]\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.__initialized = False\n        shuffle = False\n        batch_sampler = None\n        if len(args) > 5:\n            shuffle = args[2]\n            sampler = args[3]\n            batch_sampler = args[4]\n        elif len(args) > 4:\n            shuffle = args[2]\n            sampler = args[3]\n            if \"batch_sampler\" in kwargs:\n                batch_sampler = kwargs[\"batch_sampler\"]\n        elif len(args) > 3:\n            shuffle = args[2]\n            if \"sampler\" in kwargs:\n                sampler = kwargs[\"sampler\"]\n            if \"batch_sampler\" in kwargs:\n                batch_sampler = kwargs[\"batch_sampler\"]\n        else:\n            if \"shuffle\" in kwargs:\n                shuffle = kwargs[\"shuffle\"]\n            if \"sampler\" in kwargs:\n                sampler = kwargs[\"sampler\"]\n            if \"batch_sampler\" in kwargs:\n                batch_sampler = kwargs[\"batch_sampler\"]\n\n        # Use custom BatchSampler\n        if batch_sampler is None:\n            if sampler is None:\n                if shuffle:\n                    sampler = torch.utils.data.sampler.RandomSampler(self.dataset)\n                    # sampler = torch.utils.data.DistributedSampler(self.dataset)\n                else:\n                    sampler = torch.utils.data.sampler.SequentialSampler(self.dataset)\n            batch_sampler = YoloBatchSampler(\n                
sampler,\n                self.batch_size,\n                self.drop_last,\n                input_dimension=self.dataset.input_dim,\n            )\n            # batch_sampler = IterationBasedBatchSampler(batch_sampler, num_iterations =\n\n        self.batch_sampler = batch_sampler\n\n        self.__initialized = True\n\n    def close_mosaic(self):\n        self.batch_sampler.mosaic = False\n\n    def change_input_dim(self, multiple=32, random_range=(10, 19)):\n        \"\"\" This function will compute a new size and update it on the next mini_batch.\n\n        Args:\n            multiple (int or tuple, optional): values to multiply the randomly generated range by.\n                Default **32**\n            random_range (tuple, optional): This (min, max) tuple sets the range\n                for the randomisation; Default **(10, 19)**\n\n        Return:\n            tuple: width, height tuple with new dimension\n\n        Note:\n            The new size is generated as follows: |br|\n            First we compute a random integer inside ``[random_range]``.\n            We then multiply that number with the ``multiple`` argument,\n            which gives our final new input size. |br|\n            If ``multiple`` is an integer we generate a square size. If you give a tuple\n            of **(width, height)**, the size is computed\n            as :math:`rng * multiple[0], rng * multiple[1]`.\n\n        Note:\n            You can set the ``random_range`` argument to **None** to set\n            an exact size of multiply. 
|br|\n            See the example above for how this works.\n        \"\"\"\n        if random_range is None:\n            size = 1\n        else:\n            size = random.randint(*random_range)\n\n        if isinstance(multiple, int):\n            size = (size * multiple, size * multiple)\n        else:\n            size = (size * multiple[0], size * multiple[1])\n\n        self.batch_sampler.new_input_dim = size\n\n        return size\n\n\ndef list_collate(batch):\n    \"\"\"\n    Function that collates lists or tuples together into one list (of lists/tuples).\n    Use this as the collate function in a Dataloader, if you want to have a list of\n    items as an output, as opposed to tensors (eg. Brambox.boxes).\n    \"\"\"\n    items = list(zip(*batch))\n\n    for i in range(len(items)):\n        if isinstance(items[i][0], (list, tuple)):\n            items[i] = list(items[i])\n        else:\n            items[i] = default_collate(items[i])\n\n    return items\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nfrom .coco import COCODataset\nfrom .coco_classes import COCO_CLASSES\nfrom .datasets_wrapper import ConcatDataset, Dataset, MixConcatDataset\n#from .mosaicdetection import MosaicDetection\n#from .voc import VOCDetection\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/coco.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\n\nimport cv2\nimport numpy as np\nfrom pycocotools.coco import COCO\n\nfrom ..dataloading import get_yolox_datadir\nfrom .datasets_wrapper import Dataset\n\n\nclass COCODataset(Dataset):\n    \"\"\"\n    COCO dataset class.\n    \"\"\"\n\n    def __init__(\n        self,\n        data_dir=None,\n        json_file=\"instances_train2017.json\",\n        name=\"train2017\",\n        img_size=(416, 416),\n        preproc=None,\n    ):\n        \"\"\"\n        COCO dataset initialization. Annotation data are read into memory by COCO API.\n        Args:\n            data_dir (str): dataset root directory\n            json_file (str): COCO json file name\n            name (str): COCO data name (e.g. 'train2017' or 'val2017')\n            img_size (int): target image size after pre-processing\n            preproc: data augmentation strategy\n        \"\"\"\n        super().__init__(img_size)\n        if data_dir is None:\n            data_dir = os.path.join(get_yolox_datadir(), \"COCO\")\n        self.data_dir = data_dir\n        self.json_file = json_file\n\n        self.coco = COCO(os.path.join(self.data_dir, \"annotations\", self.json_file))\n        self.ids = self.coco.getImgIds()\n        self.class_ids = sorted(self.coco.getCatIds())\n        cats = self.coco.loadCats(self.coco.getCatIds())\n        self._classes = tuple([c[\"name\"] for c in cats])\n        self.name = name\n        self.img_size = img_size\n        self.preproc = preproc\n\n    def __len__(self):\n        return len(self.ids)\n\n    def load_anno(self, index):\n        id_ = self.ids[index]\n        anno_ids = self.coco.getAnnIds(imgIds=[int(id_)], iscrowd=False)\n        annotations = self.coco.loadAnns(anno_ids)\n\n        im_ann = self.coco.loadImgs(id_)[0]\n        width = im_ann[\"width\"]\n        height = im_ann[\"height\"]\n\n        # load labels\n        
valid_objs = []\n        for obj in annotations:\n            x1 = np.max((0, obj[\"bbox\"][0]))\n            y1 = np.max((0, obj[\"bbox\"][1]))\n            x2 = np.min((width - 1, x1 + np.max((0, obj[\"bbox\"][2] - 1))))\n            y2 = np.min((height - 1, y1 + np.max((0, obj[\"bbox\"][3] - 1))))\n            if obj[\"area\"] > 0 and x2 >= x1 and y2 >= y1:\n                obj[\"clean_bbox\"] = [x1, y1, x2, y2]\n                valid_objs.append(obj)\n        objs = valid_objs\n        num_objs = len(objs)\n\n        res = np.zeros((num_objs, 5))\n\n        for ix, obj in enumerate(objs):\n            cls = self.class_ids.index(obj[\"category_id\"])\n            res[ix, 0:4] = obj[\"clean_bbox\"]\n            res[ix, 4] = cls\n\n        return res\n\n    def pull_item(self, index):\n        id_ = self.ids[index]\n\n        im_ann = self.coco.loadImgs(id_)[0]\n        width = im_ann[\"width\"]\n        height = im_ann[\"height\"]\n\n        # load image and preprocess\n        img_file = os.path.join(\n            self.data_dir, self.name, \"{:012}\".format(id_) + \".jpg\"\n        )\n\n        img = cv2.imread(img_file)\n        assert img is not None\n\n        # load anno\n        res = self.load_anno(index)\n        img_info = (height, width)\n\n        return img, res, img_info, id_\n\n    @Dataset.resize_getitem\n    def __getitem__(self, index):\n        \"\"\"\n        One image / label pair for the given index is picked up and pre-processed.\n\n        Args:\n            index (int): data index\n\n        Returns:\n            img (numpy.ndarray): pre-processed image\n            padded_labels (torch.Tensor): pre-processed label data.\n                The shape is :math:`[max_labels, 5]`.\n                each label consists of [class, xc, yc, w, h]:\n                    class (float): class index.\n                    xc, yc (float) : center of bbox whose values range from 0 to 1.\n                    w, h (float) : size of bbox whose values range from 
0 to 1.\n            info_img : tuple of h, w, nh, nw, dx, dy.\n                h, w (int): original shape of the image\n                nh, nw (int): shape of the resized image without padding\n                dx, dy (int): pad size\n            img_id (int): same as the input index. Used for evaluation.\n        \"\"\"\n        img, res, img_info, img_id = self.pull_item(index)\n\n        if self.preproc is not None:\n            img, target = self.preproc(img, res, self.input_dim)\n        else:\n            # without a preproc transform, return the raw annotations\n            target = res\n        return img, target, img_info, img_id\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/coco_classes.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nCOCO_CLASSES = (\n    \"person\",\n    \"bicycle\",\n    \"car\",\n    \"motorcycle\",\n    \"airplane\",\n    \"bus\",\n    \"train\",\n    \"truck\",\n    \"boat\",\n    \"traffic light\",\n    \"fire hydrant\",\n    \"stop sign\",\n    \"parking meter\",\n    \"bench\",\n    \"bird\",\n    \"cat\",\n    \"dog\",\n    \"horse\",\n    \"sheep\",\n    \"cow\",\n    \"elephant\",\n    \"bear\",\n    \"zebra\",\n    \"giraffe\",\n    \"backpack\",\n    \"umbrella\",\n    \"handbag\",\n    \"tie\",\n    \"suitcase\",\n    \"frisbee\",\n    \"skis\",\n    \"snowboard\",\n    \"sports ball\",\n    \"kite\",\n    \"baseball bat\",\n    \"baseball glove\",\n    \"skateboard\",\n    \"surfboard\",\n    \"tennis racket\",\n    \"bottle\",\n    \"wine glass\",\n    \"cup\",\n    \"fork\",\n    \"knife\",\n    \"spoon\",\n    \"bowl\",\n    \"banana\",\n    \"apple\",\n    \"sandwich\",\n    \"orange\",\n    \"broccoli\",\n    \"carrot\",\n    \"hot dog\",\n    \"pizza\",\n    \"donut\",\n    \"cake\",\n    \"chair\",\n    \"couch\",\n    \"potted plant\",\n    \"bed\",\n    \"dining table\",\n    \"toilet\",\n    \"tv\",\n    \"laptop\",\n    \"mouse\",\n    \"remote\",\n    \"keyboard\",\n    \"cell phone\",\n    \"microwave\",\n    \"oven\",\n    \"toaster\",\n    \"sink\",\n    \"refrigerator\",\n    \"book\",\n    \"clock\",\n    \"vase\",\n    \"scissors\",\n    \"teddy bear\",\n    \"hair drier\",\n    \"toothbrush\",\n)\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/datasets_wrapper.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport bisect\nfrom functools import wraps\n\nfrom torch.utils.data.dataset import ConcatDataset as torchConcatDataset\nfrom torch.utils.data.dataset import Dataset as torchDataset\n\n\nclass ConcatDataset(torchConcatDataset):\n    def __init__(self, datasets):\n        super(ConcatDataset, self).__init__(datasets)\n        if hasattr(self.datasets[0], \"input_dim\"):\n            self._input_dim = self.datasets[0].input_dim\n            self.input_dim = self.datasets[0].input_dim\n\n    def pull_item(self, idx):\n        if idx < 0:\n            if -idx > len(self):\n                raise ValueError(\n                    \"absolute value of index should not exceed dataset length\"\n                )\n            idx = len(self) + idx\n        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)\n        if dataset_idx == 0:\n            sample_idx = idx\n        else:\n            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]\n        return self.datasets[dataset_idx].pull_item(sample_idx)\n\n\nclass MixConcatDataset(torchConcatDataset):\n    def __init__(self, datasets):\n        super(MixConcatDataset, self).__init__(datasets)\n        if hasattr(self.datasets[0], \"input_dim\"):\n            self._input_dim = self.datasets[0].input_dim\n            self.input_dim = self.datasets[0].input_dim\n\n    def __getitem__(self, index):\n        # a tuple index comes from YoloBatchSampler as (input_dim, idx, mosaic)\n        if not isinstance(index, int):\n            idx = index[1]\n        else:\n            idx = index\n        if idx < 0:\n            if -idx > len(self):\n                raise ValueError(\n                    \"absolute value of index should not exceed dataset length\"\n                )\n            idx = len(self) + idx\n        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)\n        if dataset_idx == 0:\n            sample_idx = idx\n        else:\n            sample_idx = idx - self.cumulative_sizes[dataset_idx 
- 1]\n        if not isinstance(index, int):\n            index = (index[0], sample_idx, index[2])\n\n        return self.datasets[dataset_idx][index]\n\n\nclass Dataset(torchDataset):\n    \"\"\" This class is a subclass of the base :class:`torch.utils.data.Dataset`,\n    that enables on the fly resizing of the ``input_dim``.\n\n    Args:\n        input_dimension (tuple): (width,height) tuple with default dimensions of the network\n    \"\"\"\n\n    def __init__(self, input_dimension, mosaic=True):\n        super().__init__()\n        self.__input_dim = input_dimension[:2]\n        self._mosaic = mosaic\n\n    @property\n    def input_dim(self):\n        \"\"\"\n        Dimension that can be used by transforms to set the correct image size, etc.\n        This allows transforms to have a single source of truth\n        for the input dimension of the network.\n\n        Return:\n            list: Tuple containing the current width,height\n        \"\"\"\n        if hasattr(self, \"_input_dim\"):\n            return self._input_dim\n        return self.__input_dim\n\n    @staticmethod\n    def resize_getitem(getitem_fn):\n        \"\"\"\n        Decorator method that needs to be used around the ``__getitem__`` method. |br|\n        This decorator enables the on the fly resizing of\n        the ``input_dim`` with our :class:`~lightnet.data.DataLoader` class.\n\n        Example:\n            >>> class CustomSet(ln.data.Dataset):\n            ...     def __len__(self):\n            ...         return 10\n            ...     @ln.data.Dataset.resize_getitem\n            ...     def __getitem__(self, index):\n            ...         # Should return (image, anno) but here we return input_dim\n            ...         
return self.input_dim\n            >>> data = CustomSet((200,200))\n            >>> data[0]\n            (200, 200)\n            >>> data[(480,320), 0]\n            (480, 320)\n        \"\"\"\n\n        @wraps(getitem_fn)\n        def wrapper(self, index):\n            if not isinstance(index, int):\n                has_dim = True\n                self._input_dim = index[0]\n                self._mosaic = index[2]\n                index = index[1]\n            else:\n                has_dim = False\n\n            ret_val = getitem_fn(self, index)\n\n            if has_dim:\n                del self._input_dim\n\n            return ret_val\n\n        return wrapper\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/mosaicdetection.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport random\n\nimport cv2\nimport numpy as np\n\nfrom yolox.utils import adjust_box_anns\n\nfrom ..data_augment import box_candidates, random_perspective\nfrom .datasets_wrapper import Dataset\n\n\ndef get_mosaic_coordinate(mosaic_image, mosaic_index, xc, yc, w, h, input_h, input_w):\n    # TODO update doc\n    # index0 to top left part of image\n    if mosaic_index == 0:\n        x1, y1, x2, y2 = max(xc - w, 0), max(yc - h, 0), xc, yc\n        small_coord = w - (x2 - x1), h - (y2 - y1), w, h\n    # index1 to top right part of image\n    elif mosaic_index == 1:\n        x1, y1, x2, y2 = xc, max(yc - h, 0), min(xc + w, input_w * 2), yc\n        small_coord = 0, h - (y2 - y1), min(w, x2 - x1), h\n    # index2 to bottom left part of image\n    elif mosaic_index == 2:\n        x1, y1, x2, y2 = max(xc - w, 0), yc, xc, min(input_h * 2, yc + h)\n        small_coord = w - (x2 - x1), 0, w, min(y2 - y1, h)\n    # index3 to bottom right part of image\n    elif mosaic_index == 3:\n        x1, y1, x2, y2 = xc, yc, min(xc + w, input_w * 2), min(input_h * 2, yc + h)  # noqa\n        small_coord = 0, 0, min(w, x2 - x1), min(y2 - y1, h)\n    return (x1, y1, x2, y2), small_coord\n\n\nclass MosaicDetection(Dataset):\n    \"\"\"Detection dataset wrapper that performs mosaic and mixup augmentation for a normal dataset.\"\"\"\n\n    def __init__(\n        self, dataset, img_size, mosaic=True, preproc=None,\n        degrees=10.0, translate=0.1, scale=(0.5, 1.5), mscale=(0.5, 1.5),\n        shear=2.0, perspective=0.0, enable_mixup=True, *args\n    ):\n        \"\"\"\n\n        Args:\n            dataset(Dataset) : Pytorch dataset object.\n            img_size (tuple):\n            mosaic (bool): enable mosaic augmentation or not.\n            preproc (func):\n            degrees (float):\n            translate (float):\n            scale (tuple):\n            mscale (tuple):\n            shear 
(float):\n            perspective (float):\n            enable_mixup (bool):\n            *args(tuple) : Additional arguments for mixup random sampler.\n        \"\"\"\n        super().__init__(img_size, mosaic=mosaic)\n        self._dataset = dataset\n        self.preproc = preproc\n        self.degrees = degrees\n        self.translate = translate\n        self.scale = scale\n        self.shear = shear\n        self.perspective = perspective\n        self.mixup_scale = mscale\n        self.enable_mosaic = mosaic\n        self.enable_mixup = enable_mixup\n\n    def __len__(self):\n        return len(self._dataset)\n\n    @Dataset.resize_getitem\n    def __getitem__(self, idx):\n        if self.enable_mosaic:\n            mosaic_labels = []\n            input_dim = self._dataset.input_dim\n            input_h, input_w = input_dim[0], input_dim[1]\n\n            # yc, xc = s, s  # mosaic center x, y\n            yc = int(random.uniform(0.5 * input_h, 1.5 * input_h))\n            xc = int(random.uniform(0.5 * input_w, 1.5 * input_w))\n\n            # 3 additional image indices\n            indices = [idx] + [random.randint(0, len(self._dataset) - 1) for _ in range(3)]\n\n            for i_mosaic, index in enumerate(indices):\n                img, _labels, _, _ = self._dataset.pull_item(index)\n                h0, w0 = img.shape[:2]  # orig hw\n                scale = min(1. * input_h / h0, 1. 
* input_w / w0)\n                img = cv2.resize(\n                    img, (int(w0 * scale), int(h0 * scale)), interpolation=cv2.INTER_LINEAR\n                )\n                # generate output mosaic image\n                (h, w, c) = img.shape[:3]\n                if i_mosaic == 0:\n                    mosaic_img = np.full((input_h * 2, input_w * 2, c), 114, dtype=np.uint8)\n\n                # suffix l means large image, while s means small image in mosaic aug.\n                (l_x1, l_y1, l_x2, l_y2), (s_x1, s_y1, s_x2, s_y2) = get_mosaic_coordinate(\n                    mosaic_img, i_mosaic, xc, yc, w, h, input_h, input_w\n                )\n\n                mosaic_img[l_y1:l_y2, l_x1:l_x2] = img[s_y1:s_y2, s_x1:s_x2]\n                padw, padh = l_x1 - s_x1, l_y1 - s_y1\n\n                labels = _labels.copy()\n                # Normalized xywh to pixel xyxy format\n                if _labels.size > 0:\n                    labels[:, 0] = scale * _labels[:, 0] + padw\n                    labels[:, 1] = scale * _labels[:, 1] + padh\n                    labels[:, 2] = scale * _labels[:, 2] + padw\n                    labels[:, 3] = scale * _labels[:, 3] + padh\n                mosaic_labels.append(labels)\n\n            if len(mosaic_labels):\n                mosaic_labels = np.concatenate(mosaic_labels, 0)\n                np.clip(mosaic_labels[:, 0], 0, 2 * input_w, out=mosaic_labels[:, 0])\n                np.clip(mosaic_labels[:, 1], 0, 2 * input_h, out=mosaic_labels[:, 1])\n                np.clip(mosaic_labels[:, 2], 0, 2 * input_w, out=mosaic_labels[:, 2])\n                np.clip(mosaic_labels[:, 3], 0, 2 * input_h, out=mosaic_labels[:, 3])\n\n            mosaic_img, mosaic_labels = random_perspective(\n                mosaic_img,\n                mosaic_labels,\n                degrees=self.degrees,\n                translate=self.translate,\n                scale=self.scale,\n                shear=self.shear,\n                
perspective=self.perspective,\n                border=[-input_h // 2, -input_w // 2],\n            )  # border to remove\n\n            # -----------------------------------------------------------------\n            # CopyPaste: https://arxiv.org/abs/2012.07177\n            # -----------------------------------------------------------------\n            if self.enable_mixup and not len(mosaic_labels) == 0:\n                mosaic_img, mosaic_labels = self.mixup(mosaic_img, mosaic_labels, self.input_dim)\n            mix_img, padded_labels = self.preproc(mosaic_img, mosaic_labels, self.input_dim)\n            img_info = (mix_img.shape[1], mix_img.shape[0])\n\n            return mix_img, padded_labels, img_info, int(idx)\n\n        else:\n            self._dataset._input_dim = self.input_dim\n            img, label, img_info, idx = self._dataset.pull_item(idx)\n            img, label = self.preproc(img, label, self.input_dim)\n            return img, label, img_info, int(idx)\n\n    def mixup(self, origin_img, origin_labels, input_dim):\n        jit_factor = random.uniform(*self.mixup_scale)\n        FLIP = random.uniform(0, 1) > 0.5\n        cp_labels = []\n        while len(cp_labels) == 0:\n            cp_index = random.randint(0, self.__len__() - 1)\n            cp_labels = self._dataset.load_anno(cp_index)\n        img, cp_labels, _, _ = self._dataset.pull_item(cp_index)\n\n        if len(img.shape) == 3:\n            cp_img = np.ones((input_dim[0], input_dim[1], 3)) * 114.0\n        else:\n            cp_img = np.ones(input_dim) * 114.0\n        cp_scale_ratio = min(input_dim[0] / img.shape[0], input_dim[1] / img.shape[1])\n        resized_img = cv2.resize(\n            img,\n            (int(img.shape[1] * cp_scale_ratio), int(img.shape[0] * cp_scale_ratio)),\n            interpolation=cv2.INTER_LINEAR,\n        ).astype(np.float32)\n        cp_img[\n            : int(img.shape[0] * cp_scale_ratio), : int(img.shape[1] * cp_scale_ratio)\n        ] = 
resized_img\n        cp_img = cv2.resize(\n            cp_img,\n            (int(cp_img.shape[1] * jit_factor), int(cp_img.shape[0] * jit_factor)),\n        )\n        cp_scale_ratio *= jit_factor\n        if FLIP:\n            cp_img = cp_img[:, ::-1, :]\n\n        origin_h, origin_w = cp_img.shape[:2]\n        target_h, target_w = origin_img.shape[:2]\n        padded_img = np.zeros(\n            (max(origin_h, target_h), max(origin_w, target_w), 3)\n        ).astype(np.uint8)\n        padded_img[:origin_h, :origin_w] = cp_img\n\n        x_offset, y_offset = 0, 0\n        if padded_img.shape[0] > target_h:\n            y_offset = random.randint(0, padded_img.shape[0] - target_h - 1)\n        if padded_img.shape[1] > target_w:\n            x_offset = random.randint(0, padded_img.shape[1] - target_w - 1)\n        padded_cropped_img = padded_img[\n            y_offset: y_offset + target_h, x_offset: x_offset + target_w\n        ]\n\n        cp_bboxes_origin_np = adjust_box_anns(\n            cp_labels[:, :4], cp_scale_ratio, 0, 0, origin_w, origin_h\n        )\n        if FLIP:\n            cp_bboxes_origin_np[:, 0::2] = (\n                origin_w - cp_bboxes_origin_np[:, 0::2][:, ::-1]\n            )\n        cp_bboxes_transformed_np = cp_bboxes_origin_np.copy()\n        cp_bboxes_transformed_np[:, 0::2] = np.clip(\n            cp_bboxes_transformed_np[:, 0::2] - x_offset, 0, target_w\n        )\n        cp_bboxes_transformed_np[:, 1::2] = np.clip(\n            cp_bboxes_transformed_np[:, 1::2] - y_offset, 0, target_h\n        )\n        keep_list = box_candidates(cp_bboxes_origin_np.T, cp_bboxes_transformed_np.T, 5)\n\n        if keep_list.sum() >= 1.0:\n            cls_labels = cp_labels[keep_list, 4:5]\n            box_labels = cp_bboxes_transformed_np[keep_list]\n            labels = np.hstack((box_labels, cls_labels))\n            origin_labels = np.vstack((origin_labels, labels))\n            origin_img = origin_img.astype(np.float32)\n            origin_img 
= 0.5 * origin_img + 0.5 * padded_cropped_img.astype(np.float32)\n\n        return origin_img.astype(np.uint8), origin_labels\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/voc.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Code is based on\n# https://github.com/fmassa/vision/blob/voc_dataset/torchvision/datasets/voc.py\n# Copyright (c) Francisco Massa.\n# Copyright (c) Ellis Brown, Max deGroot.\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport os.path\nimport pickle\nimport xml.etree.ElementTree as ET\n\nimport cv2\nimport numpy as np\n\nfrom yolox.evaluators.voc_eval import voc_eval\n\nfrom .datasets_wrapper import Dataset\nfrom .voc_classes import VOC_CLASSES\n\n\nclass AnnotationTransform(object):\n\n    \"\"\"Transforms a VOC annotation into a Tensor of bbox coords and label index\n    Initialized with a dictionary lookup of classnames to indexes\n\n    Arguments:\n        class_to_ind (dict, optional): dictionary lookup of classnames -> indexes\n            (default: alphabetic indexing of VOC's 20 classes)\n        keep_difficult (bool, optional): keep difficult instances or not\n            (default: True)\n    \"\"\"\n\n    def __init__(self, class_to_ind=None, keep_difficult=True):\n        self.class_to_ind = class_to_ind or dict(zip(VOC_CLASSES, range(len(VOC_CLASSES))))\n        self.keep_difficult = keep_difficult\n\n    def __call__(self, target):\n        \"\"\"\n        Arguments:\n            target (annotation) : the target annotation to be made usable\n                will be an ET.Element\n        Returns:\n            a numpy array of bounding boxes [bbox coords, class index]\n        \"\"\"\n        res = np.empty((0, 5))\n        for obj in target.iter(\"object\"):\n            difficult = int(obj.find(\"difficult\").text) == 1\n            if not self.keep_difficult and difficult:\n                continue\n            name = obj.find(\"name\").text.lower().strip()\n            bbox = obj.find(\"bndbox\")\n\n            pts = [\"xmin\", \"ymin\", \"xmax\", \"ymax\"]\n            bndbox = []\n            for 
i, pt in enumerate(pts):\n                cur_pt = int(bbox.find(pt).text) - 1\n                # scale height or width\n                # cur_pt = cur_pt / width if i % 2 == 0 else cur_pt / height\n                bndbox.append(cur_pt)\n            label_idx = self.class_to_ind[name]\n            bndbox.append(label_idx)\n            res = np.vstack((res, bndbox))  # [xmin, ymin, xmax, ymax, label_ind]\n            # img_id = target.find('filename').text[:-4]\n\n        return res  # [[xmin, ymin, xmax, ymax, label_ind], ... ]\n\n\nclass VOCDetection(Dataset):\n\n    \"\"\"\n    VOC Detection Dataset Object\n\n    input is image, target is annotation\n\n    Args:\n        root (string): filepath to VOCdevkit folder.\n        image_set (string): imageset to use (eg. 'train', 'val', 'test')\n        transform (callable, optional): transformation to perform on the\n            input image\n        target_transform (callable, optional): transformation to perform on the\n            target `annotation`\n            (eg: take in caption string, return tensor of word indices)\n        dataset_name (string, optional): which dataset to load\n            (default: 'VOC2007')\n    \"\"\"\n\n    def __init__(\n        self,\n        data_dir,\n        image_sets=[('2007', 'trainval'), ('2012', 'trainval')],\n        img_size=(416, 416),\n        preproc=None,\n        target_transform=AnnotationTransform(),\n        dataset_name=\"VOC0712\",\n    ):\n        super().__init__(img_size)\n        self.root = data_dir\n        self.image_set = image_sets\n        self.img_size = img_size\n        self.preproc = preproc\n        self.target_transform = target_transform\n        self.name = dataset_name\n        self._annopath = os.path.join(\"%s\", \"Annotations\", \"%s.xml\")\n        self._imgpath = os.path.join(\"%s\", \"JPEGImages\", \"%s.jpg\")\n        self._classes = VOC_CLASSES\n        self.ids = list()\n        for (year, name) in image_sets:\n            self._year = 
year\n            rootpath = os.path.join(self.root, \"VOC\" + year)\n            for line in open(\n                os.path.join(rootpath, \"ImageSets\", \"Main\", name + \".txt\")\n            ):\n                self.ids.append((rootpath, line.strip()))\n\n    def __len__(self):\n        return len(self.ids)\n\n    def load_anno(self, index):\n        img_id = self.ids[index]\n        target = ET.parse(self._annopath % img_id).getroot()\n        if self.target_transform is not None:\n            target = self.target_transform(target)\n\n        return target\n\n    def pull_item(self, index):\n        \"\"\"Returns the original image and target at an index for mixup\n\n        Note: not using self.__getitem__(), as any transformations passed in\n        could mess up this functionality.\n\n        Argument:\n            index (int): index of img to show\n        Return:\n            img, target\n        \"\"\"\n        img_id = self.ids[index]\n        img = cv2.imread(self._imgpath % img_id, cv2.IMREAD_COLOR)\n        height, width, _ = img.shape\n\n        target = self.load_anno(index)\n\n        img_info = (width, height)\n\n        return img, target, img_info, index\n\n    @Dataset.resize_getitem\n    def __getitem__(self, index):\n        img, target, img_info, img_id = self.pull_item(index)\n\n        if self.preproc is not None:\n            img, target = self.preproc(img, target, self.input_dim)\n\n        return img, target, img_info, img_id\n\n    def evaluate_detections(self, all_boxes, output_dir=None):\n        \"\"\"\n        all_boxes is a list of length number-of-classes.\n        Each list element is a list of length number-of-images.\n        Each of those list elements is either an empty list []\n        or a numpy array of detection.\n\n        all_boxes[class][image] = [] or np.array of shape #dets x 5\n        \"\"\"\n        self._write_voc_results_file(all_boxes)\n        IouTh = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) 
+ 1, endpoint=True)\n        mAPs = []\n        for iou in IouTh:\n            mAP = self._do_python_eval(output_dir, iou)\n            mAPs.append(mAP)\n\n        print(\"--------------------------------------------------------------\")\n        print(\"map_5095:\", np.mean(mAPs))\n        print(\"map_50:\", mAPs[0])\n        print(\"--------------------------------------------------------------\")\n        return np.mean(mAPs), mAPs[0]\n\n    def _get_voc_results_file_template(self):\n        filename = \"comp4_det_test\" + \"_{:s}.txt\"\n        filedir = os.path.join(self.root, \"results\", \"VOC\" + self._year, \"Main\")\n        if not os.path.exists(filedir):\n            os.makedirs(filedir)\n        path = os.path.join(filedir, filename)\n        return path\n\n    def _write_voc_results_file(self, all_boxes):\n        for cls_ind, cls in enumerate(VOC_CLASSES):\n            if cls == \"__background__\":\n                continue\n            print(\"Writing {} VOC results file\".format(cls))\n            filename = self._get_voc_results_file_template().format(cls)\n            with open(filename, \"wt\") as f:\n                for im_ind, index in enumerate(self.ids):\n                    index = index[1]\n                    dets = all_boxes[cls_ind][im_ind]\n                    if len(dets) == 0:\n                        continue\n                    for k in range(dets.shape[0]):\n                        f.write(\n                            \"{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\\n\".format(\n                                index,\n                                dets[k, -1],\n                                dets[k, 0] + 1,\n                                dets[k, 1] + 1,\n                                dets[k, 2] + 1,\n                                dets[k, 3] + 1,\n                            )\n                        )\n\n    def _do_python_eval(self, output_dir=\"output\", iou=0.5):\n        rootpath = 
os.path.join(self.root, \"VOC\" + self._year)\n        name = self.image_set[0][1]\n        annopath = os.path.join(rootpath, \"Annotations\", \"{:s}.xml\")\n        imagesetfile = os.path.join(rootpath, \"ImageSets\", \"Main\", name + \".txt\")\n        cachedir = os.path.join(\n            self.root, \"annotations_cache\", \"VOC\" + self._year, name\n        )\n        if not os.path.exists(cachedir):\n            os.makedirs(cachedir)\n        aps = []\n        # The PASCAL VOC metric changed in 2010\n        use_07_metric = True if int(self._year) < 2010 else False\n        print(\"Eval IoU : {:.2f}\".format(iou))\n        if output_dir is not None and not os.path.isdir(output_dir):\n            os.mkdir(output_dir)\n        for i, cls in enumerate(VOC_CLASSES):\n\n            if cls == \"__background__\":\n                continue\n\n            filename = self._get_voc_results_file_template().format(cls)\n            rec, prec, ap = voc_eval(\n                filename,\n                annopath,\n                imagesetfile,\n                cls,\n                cachedir,\n                ovthresh=iou,\n                use_07_metric=use_07_metric,\n            )\n            aps += [ap]\n            if iou == 0.5:\n                print(\"AP for {} = {:.4f}\".format(cls, ap))\n            if output_dir is not None:\n                with open(os.path.join(output_dir, cls + \"_pr.pkl\"), \"wb\") as f:\n                    pickle.dump({\"rec\": rec, \"prec\": prec, \"ap\": ap}, f)\n        if iou == 0.5:\n            print(\"Mean AP = {:.4f}\".format(np.mean(aps)))\n            print(\"~~~~~~~~\")\n            print(\"Results:\")\n            for ap in aps:\n                print(\"{:.3f}\".format(ap))\n            print(\"{:.3f}\".format(np.mean(aps)))\n            print(\"~~~~~~~~\")\n            print(\"\")\n            print(\"--------------------------------------------------------------\")\n            print(\"Results computed with the **unofficial** 
Python eval code.\")\n            print(\"Results should be very close to the official MATLAB eval code.\")\n            print(\"Recompute with `./tools/reval.py --matlab ...` for your paper.\")\n            print(\"-- Thanks, The Management\")\n            print(\"--------------------------------------------------------------\")\n\n        return np.mean(aps)\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/datasets/voc_classes.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\n# VOC_CLASSES = ( '__background__', # always index 0\nVOC_CLASSES = (\n    \"aeroplane\",\n    \"bicycle\",\n    \"bird\",\n    \"boat\",\n    \"bottle\",\n    \"bus\",\n    \"car\",\n    \"cat\",\n    \"chair\",\n    \"cow\",\n    \"diningtable\",\n    \"dog\",\n    \"horse\",\n    \"motorbike\",\n    \"person\",\n    \"pottedplant\",\n    \"sheep\",\n    \"sofa\",\n    \"train\",\n    \"tvmonitor\",\n)\n"
  },
  {
    "path": "detector/YOLOX/yolox/data/samplers.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport itertools\nfrom typing import Optional\n\nimport torch\nimport torch.distributed as dist\nfrom torch.utils.data.sampler import BatchSampler as torchBatchSampler\nfrom torch.utils.data.sampler import Sampler\n\n\nclass YoloBatchSampler(torchBatchSampler):\n    \"\"\"\n    This batch sampler will generate mini-batches of (input_dim, index, mosaic) tuples from another sampler.\n    It works just like the :class:`torch.utils.data.sampler.BatchSampler`,\n    but it will prepend a dimension, whilst ensuring it stays the same across one mini-batch.\n    \"\"\"\n\n    def __init__(self, *args, input_dimension=None, mosaic=True, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.input_dim = input_dimension\n        self.new_input_dim = None\n        self.mosaic = mosaic\n\n    def __iter__(self):\n        self.__set_input_dim()\n        for batch in super().__iter__():\n            yield [(self.input_dim, idx, self.mosaic) for idx in batch]\n            self.__set_input_dim()\n\n    def __set_input_dim(self):\n        \"\"\" This function updates the input dimension of the dataset if a new one has been requested. 
\"\"\"\n        if self.new_input_dim is not None:\n            self.input_dim = (self.new_input_dim[0], self.new_input_dim[1])\n            self.new_input_dim = None\n\n\nclass InfiniteSampler(Sampler):\n    \"\"\"\n    In training, we only care about the \"infinite stream\" of training data.\n    So this sampler produces an infinite stream of indices and\n    all workers cooperate to correctly shuffle the indices and sample different indices.\n    The samplers in each worker effectively produces `indices[worker_id::num_workers]`\n    where `indices` is an infinite stream of indices consisting of\n    `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True)\n    or `range(size) + range(size) + ...` (if shuffle is False)\n    \"\"\"\n\n    def __init__(\n        self,\n        size: int,\n        shuffle: bool = True,\n        seed: Optional[int] = 0,\n        rank=0,\n        world_size=1,\n    ):\n        \"\"\"\n        Args:\n            size (int): the total number of data of the underlying dataset to sample from\n            shuffle (bool): whether to shuffle the indices or not\n            seed (int): the initial seed of the shuffle. Must be the same\n                across all workers. 
If None, will use a random seed shared\n                among workers (requires synchronization among all workers).\n        \"\"\"\n        self._size = size\n        assert size > 0\n        self._shuffle = shuffle\n        self._seed = int(seed)\n\n        if dist.is_available() and dist.is_initialized():\n            self._rank = dist.get_rank()\n            self._world_size = dist.get_world_size()\n        else:\n            self._rank = rank\n            self._world_size = world_size\n\n    def __iter__(self):\n        start = self._rank\n        yield from itertools.islice(\n            self._infinite_indices(), start, None, self._world_size\n        )\n\n    def _infinite_indices(self):\n        g = torch.Generator()\n        g.manual_seed(self._seed)\n        while True:\n            if self._shuffle:\n                yield from torch.randperm(self._size, generator=g)\n            else:\n                yield from torch.arange(self._size)\n\n    def __len__(self):\n        return self._size // self._world_size\n"
  },
  {
    "path": "detector/YOLOX/yolox/evaluators/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nfrom .coco_evaluator import COCOEvaluator\nfrom .voc_evaluator import VOCEvaluator\n"
  },
  {
    "path": "detector/YOLOX/yolox/evaluators/coco_evaluator.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport contextlib\nimport io\nimport itertools\nimport json\nimport tempfile\nimport time\nfrom loguru import logger\nfrom tqdm import tqdm\n\nimport torch\n\nfrom yolox.utils import (\n    gather,\n    is_main_process,\n    postprocess,\n    synchronize,\n    time_synchronized,\n    xyxy2xywh\n)\n\n\nclass COCOEvaluator:\n    \"\"\"\n    COCO AP Evaluation class.  All the data in the val2017 dataset are processed\n    and evaluated by COCO API.\n    \"\"\"\n\n    def __init__(\n        self, dataloader, img_size, confthre, nmsthre, num_classes, testdev=False\n    ):\n        \"\"\"\n        Args:\n            dataloader (Dataloader): evaluate dataloader.\n            img_size (int): image size after preprocess. images are resized\n                to squares whose shape is (img_size, img_size).\n            confthre (float): confidence threshold ranging from 0 to 1, which\n                is defined in the config file.\n            nmsthre (float): IoU threshold of non-max supression ranging from 0 to 1.\n        \"\"\"\n        self.dataloader = dataloader\n        self.img_size = img_size\n        self.confthre = confthre\n        self.nmsthre = nmsthre\n        self.num_classes = num_classes\n        self.testdev = testdev\n\n    def evaluate(\n        self,\n        model,\n        distributed=False,\n        half=False,\n        trt_file=None,\n        decoder=None,\n        test_size=None,\n    ):\n        \"\"\"\n        COCO average precision (AP) Evaluation. 
Iterates inference over the test dataset;\n        the results are evaluated by the COCO API.\n\n        NOTE: This function will change training mode to False, please save states if needed.\n\n        Args:\n            model : model to evaluate.\n\n        Returns:\n            ap50_95 (float) : COCO AP of IoU=50:95\n            ap50 (float) : COCO AP of IoU=50\n            summary (str): summary info of evaluation.\n        \"\"\"\n        # TODO half to amp_test\n        tensor_type = torch.cuda.HalfTensor if half else torch.cuda.FloatTensor\n        model = model.eval()\n        if half:\n            model = model.half()\n        ids = []\n        data_list = []\n        progress_bar = tqdm if is_main_process() else iter\n\n        inference_time = 0\n        nms_time = 0\n        n_samples = len(self.dataloader) - 1\n\n        if trt_file is not None:\n            from torch2trt import TRTModule\n\n            model_trt = TRTModule()\n            model_trt.load_state_dict(torch.load(trt_file))\n\n            x = torch.ones(1, 3, test_size[0], test_size[1]).cuda()\n            model(x)\n            model = model_trt\n\n        for cur_iter, (imgs, _, info_imgs, ids) in enumerate(\n            progress_bar(self.dataloader)\n        ):\n            with torch.no_grad():\n                imgs = imgs.type(tensor_type)\n\n                # skip the last iters since the batch size might not be enough for batch inference\n                is_time_record = cur_iter < len(self.dataloader) - 1\n                if is_time_record:\n                    start = time.time()\n\n                outputs = model(imgs)\n                if decoder is not None:\n                    outputs = decoder(outputs, dtype=outputs.type())\n\n                if is_time_record:\n                    infer_end = time_synchronized()\n                    inference_time += infer_end - start\n\n                outputs = postprocess(\n                    outputs, self.num_classes, self.confthre, 
self.nmsthre\n                )\n                if is_time_record:\n                    nms_end = time_synchronized()\n                    nms_time += nms_end - infer_end\n\n            data_list.extend(self.convert_to_coco_format(outputs, info_imgs, ids))\n\n        statistics = torch.cuda.FloatTensor([inference_time, nms_time, n_samples])\n        if distributed:\n            data_list = gather(data_list, dst=0)\n            data_list = list(itertools.chain(*data_list))\n            torch.distributed.reduce(statistics, dst=0)\n\n        eval_results = self.evaluate_prediction(data_list, statistics)\n        synchronize()\n        return eval_results\n\n    def convert_to_coco_format(self, outputs, info_imgs, ids):\n        data_list = []\n        for (output, img_h, img_w, img_id) in zip(\n            outputs, info_imgs[0], info_imgs[1], ids\n        ):\n            if output is None:\n                continue\n            output = output.cpu()\n\n            bboxes = output[:, 0:4]\n\n            # preprocessing: resize\n            scale = min(\n                self.img_size[0] / float(img_h), self.img_size[1] / float(img_w)\n            )\n            bboxes /= scale\n            bboxes = xyxy2xywh(bboxes)\n\n            cls = output[:, 6]\n            scores = output[:, 4] * output[:, 5]\n            for ind in range(bboxes.shape[0]):\n                label = self.dataloader.dataset.class_ids[int(cls[ind])]\n                pred_data = {\n                    \"image_id\": int(img_id),\n                    \"category_id\": label,\n                    \"bbox\": bboxes[ind].numpy().tolist(),\n                    \"score\": scores[ind].numpy().item(),\n                    \"segmentation\": [],\n                }  # COCO json format\n                data_list.append(pred_data)\n        return data_list\n\n    def evaluate_prediction(self, data_dict, statistics):\n        if not is_main_process():\n            return 0, 0, None\n\n        logger.info(\"Evaluate in 
main process...\")\n\n        annType = [\"segm\", \"bbox\", \"keypoints\"]\n\n        inference_time = statistics[0].item()\n        nms_time = statistics[1].item()\n        n_samples = statistics[2].item()\n\n        a_infer_time = 1000 * inference_time / (n_samples * self.dataloader.batch_size)\n        a_nms_time = 1000 * nms_time / (n_samples * self.dataloader.batch_size)\n\n        time_info = \", \".join(\n            [\n                \"Average {} time: {:.2f} ms\".format(k, v)\n                for k, v in zip(\n                    [\"forward\", \"NMS\", \"inference\"],\n                    [a_infer_time, a_nms_time, (a_infer_time + a_nms_time)],\n                )\n            ]\n        )\n\n        info = time_info + \"\\n\"\n\n        # Evaluate the Dt (detection) json comparing with the ground truth\n        if len(data_dict) > 0:\n            cocoGt = self.dataloader.dataset.coco\n            # TODO: since pycocotools can't process dict in py36, write data to json file.\n            if self.testdev:\n                json.dump(data_dict, open(\"./yolox_testdev_2017.json\", \"w\"))\n                cocoDt = cocoGt.loadRes(\"./yolox_testdev_2017.json\")\n            else:\n                _, tmp = tempfile.mkstemp()\n                json.dump(data_dict, open(tmp, \"w\"))\n                cocoDt = cocoGt.loadRes(tmp)\n            try:\n                from yolox.layers import COCOeval_opt as COCOeval\n            except ImportError:\n                from .cocoeval_mr import COCOeval\n\n                logger.warning(\"Use standard COCOeval.\")\n\n            cocoEval = COCOeval(cocoGt, cocoDt, annType[1])\n            cocoEval.evaluate()\n            cocoEval.accumulate()\n            redirect_string = io.StringIO()\n            with contextlib.redirect_stdout(redirect_string):\n                cocoEval.summarize()\n            info += redirect_string.getvalue()\n            return cocoEval.stats[0], cocoEval.stats[1], info\n        else:\n            
return 0, 0, info\n"
  },
  {
    "path": "detector/YOLOX/yolox/evaluators/voc_eval.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Code are based on\n# https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/voc_eval.py\n# Copyright (c) Bharath Hariharan.\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport os\nimport pickle\nimport xml.etree.ElementTree as ET\n\nimport numpy as np\n\n\ndef parse_rec(filename):\n    \"\"\" Parse a PASCAL VOC xml file \"\"\"\n    tree = ET.parse(filename)\n    objects = []\n    for obj in tree.findall(\"object\"):\n        obj_struct = {}\n        obj_struct[\"name\"] = obj.find(\"name\").text\n        obj_struct[\"pose\"] = obj.find(\"pose\").text\n        obj_struct[\"truncated\"] = int(obj.find(\"truncated\").text)\n        obj_struct[\"difficult\"] = int(obj.find(\"difficult\").text)\n        bbox = obj.find(\"bndbox\")\n        obj_struct[\"bbox\"] = [\n            int(bbox.find(\"xmin\").text),\n            int(bbox.find(\"ymin\").text),\n            int(bbox.find(\"xmax\").text),\n            int(bbox.find(\"ymax\").text),\n        ]\n        objects.append(obj_struct)\n\n    return objects\n\n\ndef voc_ap(rec, prec, use_07_metric=False):\n    \"\"\" ap = voc_ap(rec, prec, [use_07_metric])\n    Compute VOC AP given precision and recall.\n    If use_07_metric is true, uses the\n    VOC 07 11 point method (default:False).\n    \"\"\"\n    if use_07_metric:\n        # 11 point metric\n        ap = 0.0\n        for t in np.arange(0.0, 1.1, 0.1):\n            if np.sum(rec >= t) == 0:\n                p = 0\n            else:\n                p = np.max(prec[rec >= t])\n            ap = ap + p / 11.0\n    else:\n        # correct AP calculation\n        # first append sentinel values at the end\n        mrec = np.concatenate(([0.0], rec, [1.0]))\n        mpre = np.concatenate(([0.0], prec, [0.0]))\n\n        # compute the precision envelope\n        for i in range(mpre.size - 1, 0, -1):\n            mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])\n\n        # to calculate 
area under PR curve, look for points\n        # where X axis (recall) changes value\n        i = np.where(mrec[1:] != mrec[:-1])[0]\n\n        # and sum (\\Delta recall) * prec\n        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])\n    return ap\n\n\ndef voc_eval(\n    detpath,\n    annopath,\n    imagesetfile,\n    classname,\n    cachedir,\n    ovthresh=0.5,\n    use_07_metric=False,\n):\n    # first load gt\n    if not os.path.isdir(cachedir):\n        os.mkdir(cachedir)\n    cachefile = os.path.join(cachedir, \"annots.pkl\")\n    # read list of images\n    with open(imagesetfile, \"r\") as f:\n        lines = f.readlines()\n    imagenames = [x.strip() for x in lines]\n\n    if not os.path.isfile(cachefile):\n        # load annots\n        recs = {}\n        for i, imagename in enumerate(imagenames):\n            recs[imagename] = parse_rec(annopath.format(imagename))\n            if i % 100 == 0:\n                print(\"Reading annotation for {:d}/{:d}\".format(i + 1, len(imagenames)))\n        # save\n        print(\"Saving cached annotations to {:s}\".format(cachefile))\n        with open(cachefile, \"wb\") as f:\n            pickle.dump(recs, f)\n    else:\n        # load\n        with open(cachefile, \"rb\") as f:\n            recs = pickle.load(f)\n\n    # extract gt objects for this class\n    class_recs = {}\n    npos = 0\n    for imagename in imagenames:\n        R = [obj for obj in recs[imagename] if obj[\"name\"] == classname]\n        bbox = np.array([x[\"bbox\"] for x in R])\n        # np.bool was removed in NumPy 1.24; use the builtin bool instead\n        difficult = np.array([x[\"difficult\"] for x in R]).astype(bool)\n        det = [False] * len(R)\n        npos = npos + sum(~difficult)\n        class_recs[imagename] = {\"bbox\": bbox, \"difficult\": difficult, \"det\": det}\n\n    # read dets\n    detfile = detpath.format(classname)\n    with open(detfile, \"r\") as f:\n        lines = f.readlines()\n\n    if len(lines) == 0:\n        return 0, 0, 0\n\n    splitlines = [x.strip().split(\" \") for x in 
lines]\n    image_ids = [x[0] for x in splitlines]\n    confidence = np.array([float(x[1]) for x in splitlines])\n    BB = np.array([[float(z) for z in x[2:]] for x in splitlines])\n\n    # sort by confidence\n    sorted_ind = np.argsort(-confidence)\n    BB = BB[sorted_ind, :]\n    image_ids = [image_ids[x] for x in sorted_ind]\n\n    # go down dets and mark TPs and FPs\n    nd = len(image_ids)\n    tp = np.zeros(nd)\n    fp = np.zeros(nd)\n    for d in range(nd):\n        R = class_recs[image_ids[d]]\n        bb = BB[d, :].astype(float)\n        ovmax = -np.inf\n        BBGT = R[\"bbox\"].astype(float)\n\n        if BBGT.size > 0:\n            # compute overlaps\n            # intersection\n            ixmin = np.maximum(BBGT[:, 0], bb[0])\n            iymin = np.maximum(BBGT[:, 1], bb[1])\n            ixmax = np.minimum(BBGT[:, 2], bb[2])\n            iymax = np.minimum(BBGT[:, 3], bb[3])\n            iw = np.maximum(ixmax - ixmin + 1.0, 0.0)\n            ih = np.maximum(iymax - iymin + 1.0, 0.0)\n            inters = iw * ih\n\n            # union\n            uni = (\n                (bb[2] - bb[0] + 1.0) * (bb[3] - bb[1] + 1.0)\n                + (BBGT[:, 2] - BBGT[:, 0] + 1.0) * (BBGT[:, 3] - BBGT[:, 1] + 1.0)\n                - inters\n            )\n\n            overlaps = inters / uni\n            ovmax = np.max(overlaps)\n            jmax = np.argmax(overlaps)\n\n        if ovmax > ovthresh:\n            if not R[\"difficult\"][jmax]:\n                if not R[\"det\"][jmax]:\n                    tp[d] = 1.0\n                    R[\"det\"][jmax] = 1\n                else:\n                    fp[d] = 1.0\n        else:\n            fp[d] = 1.0\n\n        # compute precision recall\n    fp = np.cumsum(fp)\n    tp = np.cumsum(tp)\n    rec = tp / float(npos)\n    # avoid divide by zero in case the first detection matches a difficult\n    # ground truth\n    prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)\n    ap = voc_ap(rec, prec, 
use_07_metric)\n\n    return rec, prec, ap\n"
  },
  {
    "path": "detector/YOLOX/yolox/evaluators/voc_evaluator.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) Megvii, Inc. and its affiliates.\n\nimport sys\nimport tempfile\nimport time\nfrom collections import ChainMap\nfrom loguru import logger\nfrom tqdm import tqdm\n\nimport numpy as np\n\nimport torch\n\nfrom yolox.utils import gather, is_main_process, postprocess, synchronize, time_synchronized\n\n\nclass VOCEvaluator:\n    \"\"\"\n    VOC AP Evaluation class.\n    \"\"\"\n\n    def __init__(\n        self, dataloader, img_size, confthre, nmsthre, num_classes,\n    ):\n        \"\"\"\n        Args:\n            dataloader (Dataloader): evaluate dataloader.\n            img_size (int): image size after preprocess. images are resized\n                to squares whose shape is (img_size, img_size).\n            confthre (float): confidence threshold ranging from 0 to 1, which\n                is defined in the config file.\n            nmsthre (float): IoU threshold of non-max supression ranging from 0 to 1.\n        \"\"\"\n        self.dataloader = dataloader\n        self.img_size = img_size\n        self.confthre = confthre\n        self.nmsthre = nmsthre\n        self.num_classes = num_classes\n        self.num_images = len(dataloader.dataset)\n\n    def evaluate(\n        self, model, distributed=False, half=False, trt_file=None, decoder=None, test_size=None\n    ):\n        \"\"\"\n        VOC average precision (AP) Evaluation. 
Iterates inference over the test dataset;\n        the results are evaluated by the VOC eval code.\n\n        NOTE: This function will change training mode to False, please save states if needed.\n\n        Args:\n            model : model to evaluate.\n\n        Returns:\n            mAP50 (float) : VOC 2007 metric AP at IoU=0.5\n            mAP70 (float) : VOC 2007 metric AP at IoU=0.7\n            summary (str): summary info of evaluation.\n        \"\"\"\n        # TODO half to amp_test\n        tensor_type = torch.cuda.HalfTensor if half else torch.cuda.FloatTensor\n        model = model.eval()\n        if half:\n            model = model.half()\n        ids = []\n        data_list = {}\n        progress_bar = tqdm if is_main_process() else iter\n\n        inference_time = 0\n        nms_time = 0\n        n_samples = len(self.dataloader) - 1\n\n        if trt_file is not None:\n            from torch2trt import TRTModule\n            model_trt = TRTModule()\n            model_trt.load_state_dict(torch.load(trt_file))\n\n            x = torch.ones(1, 3, test_size[0], test_size[1]).cuda()\n            model(x)\n            model = model_trt\n\n        for cur_iter, (imgs, _, info_imgs, ids) in enumerate(progress_bar(self.dataloader)):\n            with torch.no_grad():\n                imgs = imgs.type(tensor_type)\n\n                # skip the last iters since the batch size might not be enough for batch inference\n                is_time_record = cur_iter < len(self.dataloader) - 1\n                if is_time_record:\n                    start = time.time()\n\n                outputs = model(imgs)\n                if decoder is not None:\n                    outputs = decoder(outputs, dtype=outputs.type())\n\n                if is_time_record:\n                    infer_end = time_synchronized()\n                    inference_time += infer_end - start\n\n                outputs = postprocess(\n                    outputs, self.num_classes, self.confthre, self.nmsthre\n      
          )\n                if is_time_record:\n                    nms_end = time_synchronized()\n                    nms_time += nms_end - infer_end\n\n            data_list.update(self.convert_to_voc_format(outputs, info_imgs, ids))\n\n        statistics = torch.cuda.FloatTensor([inference_time, nms_time, n_samples])\n        if distributed:\n            data_list = gather(data_list, dst=0)\n            data_list = ChainMap(*data_list)\n            torch.distributed.reduce(statistics, dst=0)\n\n        eval_results = self.evaluate_prediction(data_list, statistics)\n        synchronize()\n        return eval_results\n\n    def convert_to_voc_format(self, outputs, info_imgs, ids):\n        predictions = {}\n        for (output, img_h, img_w, img_id) in zip(outputs, info_imgs[0], info_imgs[1], ids):\n            if output is None:\n                predictions[int(img_id)] = (None, None, None)\n                continue\n            output = output.cpu()\n\n            bboxes = output[:, 0:4]\n\n            # preprocessing: resize\n            scale = min(self.img_size[0] / float(img_h), self.img_size[1] / float(img_w))\n            bboxes /= scale\n\n            cls = output[:, 6]\n            scores = output[:, 4] * output[:, 5]\n\n            predictions[int(img_id)] = (bboxes, cls, scores)\n        return predictions\n\n    def evaluate_prediction(self, data_dict, statistics):\n        if not is_main_process():\n            return 0, 0, None\n\n        logger.info(\"Evaluate in main process...\")\n\n        inference_time = statistics[0].item()\n        nms_time = statistics[1].item()\n        n_samples = statistics[2].item()\n\n        a_infer_time = 1000 * inference_time / (n_samples * self.dataloader.batch_size)\n        a_nms_time = 1000 * nms_time / (n_samples * self.dataloader.batch_size)\n\n        time_info = \", \".join(\n            [\"Average {} time: {:.2f} ms\".format(k, v) for k, v in zip(\n                [\"forward\", \"NMS\", \"inference\"],\n   
             [a_infer_time, a_nms_time, (a_infer_time + a_nms_time)]\n            )]\n        )\n\n        info = time_info + \"\\n\"\n\n        all_boxes = [[[] for _ in range(self.num_images)] for _ in range(self.num_classes)]\n        for img_num in range(self.num_images):\n            bboxes, cls, scores = data_dict[img_num]\n            if bboxes is None:\n                for j in range(self.num_classes):\n                    all_boxes[j][img_num] = np.empty([0, 5], dtype=np.float32)\n                continue\n            for j in range(self.num_classes):\n                mask_c = cls == j\n                if sum(mask_c) == 0:\n                    all_boxes[j][img_num] = np.empty([0, 5], dtype=np.float32)\n                    continue\n\n                c_dets = torch.cat((bboxes, scores.unsqueeze(1)), dim=1)\n                all_boxes[j][img_num] = c_dets[mask_c].numpy()\n\n            sys.stdout.write(\n                \"im_eval: {:d}/{:d} \\r\".format(img_num + 1, self.num_images)\n            )\n            sys.stdout.flush()\n\n        with tempfile.TemporaryDirectory() as tempdir:\n            mAP50, mAP70 = self.dataloader.dataset.evaluate_detections(all_boxes, tempdir)\n            return mAP50, mAP70, info\n"
  },
  {
    "path": "detector/YOLOX/yolox/exp/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom .base_exp import BaseExp\nfrom .build import get_exp\nfrom .yolox_base import Exp\n"
  },
  {
    "path": "detector/YOLOX/yolox/exp/base_exp.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport ast\nimport pprint\nfrom abc import ABCMeta, abstractmethod\nfrom typing import Dict\nfrom tabulate import tabulate\n\nimport torch\nfrom torch.nn import Module\n\nfrom ..utils import LRScheduler\n\n\nclass BaseExp(metaclass=ABCMeta):\n    \"\"\"Basic class for any experiment.\n    \"\"\"\n\n    def __init__(self):\n        self.seed = None\n        self.output_dir = \"./YOLOX_outputs\"\n        self.print_interval = 100\n        self.eval_interval = 10\n\n    @abstractmethod\n    def get_model(self) -> Module:\n        pass\n\n    @abstractmethod\n    def get_data_loader(\n        self, batch_size: int, is_distributed: bool\n    ) -> Dict[str, torch.utils.data.DataLoader]:\n        pass\n\n    @abstractmethod\n    def get_optimizer(self, batch_size: int) -> torch.optim.Optimizer:\n        pass\n\n    @abstractmethod\n    def get_lr_scheduler(\n        self, lr: float, iters_per_epoch: int, **kwargs\n    ) -> LRScheduler:\n        pass\n\n    @abstractmethod\n    def get_evaluator(self):\n        pass\n\n    @abstractmethod\n    def eval(self, model, evaluator, weights):\n        pass\n\n    def __repr__(self):\n        table_header = [\"keys\", \"values\"]\n        exp_table = [\n            (str(k), pprint.pformat(v)) for k, v in vars(self).items() if not k.startswith(\"_\")\n        ]\n        return tabulate(exp_table, headers=table_header, tablefmt=\"fancy_grid\")\n\n    def merge(self, cfg_list):\n        assert len(cfg_list) % 2 == 0\n        for k, v in zip(cfg_list[0::2], cfg_list[1::2]):\n            # only update value with same key\n            if hasattr(self, k):\n                src_value = getattr(self, k)\n                src_type = type(src_value)\n                if src_value is not None and src_type != type(v):\n                    try:\n                        v = src_type(v)\n                    except 
Exception:\n                        v = ast.literal_eval(v)\n                setattr(self, k, v)\n"
  },
  {
    "path": "detector/YOLOX/yolox/exp/build.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport importlib\nimport os\nimport sys\n\n\ndef get_exp_by_file(exp_file):\n    sys.path.append(os.path.dirname(exp_file))\n    current_exp = importlib.import_module(os.path.basename(exp_file).split(\".\")[0])\n    exp = current_exp.Exp()\n    return exp\n\n\ndef get_exp_by_name(exp_name):\n    import yolox\n    yolox_path = os.path.dirname(os.path.dirname(yolox.__file__))\n    filedict = {\n        \"yolox-s\": \"yolox_s.py\",\n        \"yolox-m\": \"yolox_m.py\",\n        \"yolox-l\": \"yolox_l.py\",\n        \"yolox-x\": \"yolox_x.py\",\n        \"yolox-tiny\": \"yolox_tiny.py\",\n        \"yolox-nano\": \"nano.py\",\n        \"yolov3\": \"yolov3.py\",\n    }\n    filename = filedict[exp_name]\n    exp_path = os.path.join(yolox_path, \"exps\", \"default\", filename)\n    return get_exp_by_file(exp_path)\n\n\ndef get_exp(exp_file, exp_name):\n    \"\"\"\n    get Exp object by file or name. If exp_file and exp_name\n    are both provided, get Exp by exp_file.\n\n    Args:\n        exp_file (str): file path of experiment.\n        exp_name (str): name of experiment. \"yolo-s\",\n    \"\"\"\n    assert exp_file is not None or exp_name is not None, \"plz provide exp file or exp name.\"\n    if exp_file is not None:\n        return get_exp_by_file(exp_file)\n    else:\n        return get_exp_by_name(exp_name)\n"
  },
  {
    "path": "detector/YOLOX/yolox/exp/yolox_base.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport os\nimport random\n\nimport torch\nimport torch.distributed as dist\nimport torch.nn as nn\n\nfrom .base_exp import BaseExp\n\n\nclass Exp(BaseExp):\n\n    def __init__(self):\n        super().__init__()\n\n        # ---------------- model config ---------------- #\n        self.num_classes = 80\n        self.depth = 1.00\n        self.width = 1.00\n\n        # ---------------- dataloader config ---------------- #\n        # set worker to 4 for shorter dataloader init time\n        self.data_num_workers = 4\n        self.input_size = (640, 640)\n        self.random_size = (14, 26)\n        self.train_ann = \"instances_train2017.json\"\n        self.val_ann = \"instances_val2017.json\"\n\n        # --------------- transform config ----------------- #\n        self.degrees = 10.0\n        self.translate = 0.1\n        self.scale = (0.1, 2)\n        self.mscale = (0.8, 1.6)\n        self.shear = 2.0\n        self.perspective = 0.0\n        self.enable_mixup = True\n\n        # --------------  training config --------------------- #\n        self.warmup_epochs = 5\n        self.max_epoch = 300\n        self.warmup_lr = 0\n        self.basic_lr_per_img = 0.01 / 64.0\n        self.scheduler = \"yoloxwarmcos\"\n        self.no_aug_epochs = 15\n        self.min_lr_ratio = 0.05\n        self.ema = True\n\n        self.weight_decay = 5e-4\n        self.momentum = 0.9\n        self.print_interval = 10\n        self.eval_interval = 10\n        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(\".\")[0]\n\n        # -----------------  testing config ------------------ #\n        self.test_size = (640, 640)\n        self.test_conf = 0.01\n        self.nmsthre = 0.65\n\n    def get_model(self):\n        from yolox.models import YOLOX, YOLOPAFPN, YOLOXHead\n\n        def init_yolo(M):\n            for m in M.modules():\n              
  if isinstance(m, nn.BatchNorm2d):\n                    m.eps = 1e-3\n                    m.momentum = 0.03\n\n        if getattr(self, \"model\", None) is None:\n            in_channels = [256, 512, 1024]\n            backbone = YOLOPAFPN(self.depth, self.width, in_channels=in_channels)\n            head = YOLOXHead(self.num_classes, self.width, in_channels=in_channels)\n            self.model = YOLOX(backbone, head)\n\n        self.model.apply(init_yolo)\n        self.model.head.initialize_biases(1e-2)\n        return self.model\n\n    def get_data_loader(self, batch_size, is_distributed, no_aug=False):\n        from yolox.data import (\n            COCODataset,\n            TrainTransform,\n            YoloBatchSampler,\n            DataLoader,\n            InfiniteSampler,\n            MosaicDetection,\n        )\n\n        dataset = COCODataset(\n            data_dir=None,\n            json_file=self.train_ann,\n            img_size=self.input_size,\n            preproc=TrainTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n                max_labels=50,\n            ),\n        )\n\n        dataset = MosaicDetection(\n            dataset,\n            mosaic=not no_aug,\n            img_size=self.input_size,\n            preproc=TrainTransform(\n                rgb_means=(0.485, 0.456, 0.406),\n                std=(0.229, 0.224, 0.225),\n                max_labels=120,\n            ),\n            degrees=self.degrees,\n            translate=self.translate,\n            scale=self.scale,\n            shear=self.shear,\n            perspective=self.perspective,\n            enable_mixup=self.enable_mixup,\n        )\n\n        self.dataset = dataset\n\n        if is_distributed:\n            batch_size = batch_size // dist.get_world_size()\n\n        sampler = InfiniteSampler(\n            len(self.dataset), seed=self.seed if self.seed else 0\n        )\n\n        batch_sampler = YoloBatchSampler(\n      
      sampler=sampler,\n            batch_size=batch_size,\n            drop_last=False,\n            input_dimension=self.input_size,\n            mosaic=not no_aug,\n        )\n\n        dataloader_kwargs = {\"num_workers\": self.data_num_workers, \"pin_memory\": True}\n        dataloader_kwargs[\"batch_sampler\"] = batch_sampler\n        train_loader = DataLoader(self.dataset, **dataloader_kwargs)\n\n        return train_loader\n\n    def random_resize(self, data_loader, epoch, rank, is_distributed):\n        tensor = torch.LongTensor(2).cuda()\n\n        if rank == 0:\n            size_factor = self.input_size[1] * 1. / self.input_size[0]\n            size = random.randint(*self.random_size)\n            size = (int(32 * size), 32 * int(size * size_factor))\n            tensor[0] = size[0]\n            tensor[1] = size[1]\n\n        if is_distributed:\n            dist.barrier()\n            dist.broadcast(tensor, 0)\n\n        input_size = data_loader.change_input_dim(\n            multiple=(tensor[0].item(), tensor[1].item()), random_range=None\n        )\n        return input_size\n\n    def get_optimizer(self, batch_size):\n        if \"optimizer\" not in self.__dict__:\n            if self.warmup_epochs > 0:\n                lr = self.warmup_lr\n            else:\n                lr = self.basic_lr_per_img * batch_size\n\n            pg0, pg1, pg2 = [], [], []  # optimizer parameter groups\n\n            for k, v in self.model.named_modules():\n                if hasattr(v, \"bias\") and isinstance(v.bias, nn.Parameter):\n                    pg2.append(v.bias)  # biases\n                if isinstance(v, nn.BatchNorm2d) or \"bn\" in k:\n                    pg0.append(v.weight)  # no decay\n                elif hasattr(v, \"weight\") and isinstance(v.weight, nn.Parameter):\n                    pg1.append(v.weight)  # apply decay\n\n            optimizer = torch.optim.SGD(\n                pg0, lr=lr, momentum=self.momentum, nesterov=True\n            )\n     
       optimizer.add_param_group(\n                {\"params\": pg1, \"weight_decay\": self.weight_decay}\n            )  # add pg1 with weight_decay\n            optimizer.add_param_group({\"params\": pg2})\n            self.optimizer = optimizer\n\n        return self.optimizer\n\n    def get_lr_scheduler(self, lr, iters_per_epoch):\n        from yolox.utils import LRScheduler\n        scheduler = LRScheduler(\n            self.scheduler,\n            lr,\n            iters_per_epoch,\n            self.max_epoch,\n            warmup_epochs=self.warmup_epochs,\n            warmup_lr_start=self.warmup_lr,\n            no_aug_epochs=self.no_aug_epochs,\n            min_lr_ratio=self.min_lr_ratio,\n        )\n        return scheduler\n\n    def get_eval_loader(self, batch_size, is_distributed, testdev=False):\n        from yolox.data import COCODataset, ValTransform\n\n        valdataset = COCODataset(\n            data_dir=None,\n            json_file=self.val_ann if not testdev else \"image_info_test-dev2017.json\",\n            name=\"val2017\" if not testdev else \"test2017\",\n            img_size=self.test_size,\n            preproc=ValTransform(\n                rgb_means=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)\n            ),\n        )\n\n        if is_distributed:\n            batch_size = batch_size // dist.get_world_size()\n            sampler = torch.utils.data.distributed.DistributedSampler(\n                valdataset, shuffle=False\n            )\n        else:\n            sampler = torch.utils.data.SequentialSampler(valdataset)\n\n        dataloader_kwargs = {\n            \"num_workers\": self.data_num_workers,\n            \"pin_memory\": True,\n            \"sampler\": sampler,\n        }\n        dataloader_kwargs[\"batch_size\"] = batch_size\n        val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs)\n\n        return val_loader\n\n    def get_evaluator(self, batch_size, is_distributed, testdev=False):\n        
from yolox.evaluators import COCOEvaluator\n\n        val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev)\n        evaluator = COCOEvaluator(\n            dataloader=val_loader,\n            img_size=self.test_size,\n            confthre=self.test_conf,\n            nmsthre=self.nmsthre,\n            num_classes=self.num_classes,\n            testdev=testdev,\n        )\n        return evaluator\n\n    def eval(self, model, evaluator, is_distributed, half=False):\n        return evaluator.evaluate(model, is_distributed, half)\n"
  },
  {
    "path": "detector/YOLOX/yolox/layers/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom .fast_coco_eval_api import COCOeval_opt\n"
  },
  {
    "path": "detector/YOLOX/yolox/layers/csrc/cocoeval/cocoeval.cpp",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#include \"cocoeval.h\"\n#include <time.h>\n#include <algorithm>\n#include <cstdint>\n#include <numeric>\n\nusing namespace pybind11::literals;\n\nnamespace COCOeval {\n\n// Sort detections from highest score to lowest, such that\n// detection_instances[detection_sorted_indices[t]] >=\n// detection_instances[detection_sorted_indices[t+1]].  Use stable_sort to match\n// original COCO API\nvoid SortInstancesByDetectionScore(\n    const std::vector<InstanceAnnotation>& detection_instances,\n    std::vector<uint64_t>* detection_sorted_indices) {\n  detection_sorted_indices->resize(detection_instances.size());\n  std::iota(\n      detection_sorted_indices->begin(), detection_sorted_indices->end(), 0);\n  std::stable_sort(\n      detection_sorted_indices->begin(),\n      detection_sorted_indices->end(),\n      [&detection_instances](size_t j1, size_t j2) {\n        return detection_instances[j1].score > detection_instances[j2].score;\n      });\n}\n\n// Partition the ground truth objects based on whether or not to ignore them\n// based on area\nvoid SortInstancesByIgnore(\n    const std::array<double, 2>& area_range,\n    const std::vector<InstanceAnnotation>& ground_truth_instances,\n    std::vector<uint64_t>* ground_truth_sorted_indices,\n    std::vector<bool>* ignores) {\n  ignores->clear();\n  ignores->reserve(ground_truth_instances.size());\n  for (auto o : ground_truth_instances) {\n    ignores->push_back(\n        o.ignore || o.area < area_range[0] || o.area > area_range[1]);\n  }\n\n  ground_truth_sorted_indices->resize(ground_truth_instances.size());\n  std::iota(\n      ground_truth_sorted_indices->begin(),\n      ground_truth_sorted_indices->end(),\n      0);\n  std::stable_sort(\n      ground_truth_sorted_indices->begin(),\n      ground_truth_sorted_indices->end(),\n      [&ignores](size_t j1, size_t j2) {\n        return (int)(*ignores)[j1] < (int)(*ignores)[j2];\n      
});\n}\n\n// For each IOU threshold, greedily match each detected instance to a ground\n// truth instance (if possible) and store the results\nvoid MatchDetectionsToGroundTruth(\n    const std::vector<InstanceAnnotation>& detection_instances,\n    const std::vector<uint64_t>& detection_sorted_indices,\n    const std::vector<InstanceAnnotation>& ground_truth_instances,\n    const std::vector<uint64_t>& ground_truth_sorted_indices,\n    const std::vector<bool>& ignores,\n    const std::vector<std::vector<double>>& ious,\n    const std::vector<double>& iou_thresholds,\n    const std::array<double, 2>& area_range,\n    ImageEvaluation* results) {\n  // Initialize memory to store return data matches and ignore\n  const int num_iou_thresholds = iou_thresholds.size();\n  const int num_ground_truth = ground_truth_sorted_indices.size();\n  const int num_detections = detection_sorted_indices.size();\n  std::vector<uint64_t> ground_truth_matches(\n      num_iou_thresholds * num_ground_truth, 0);\n  std::vector<uint64_t>& detection_matches = results->detection_matches;\n  std::vector<bool>& detection_ignores = results->detection_ignores;\n  std::vector<bool>& ground_truth_ignores = results->ground_truth_ignores;\n  detection_matches.resize(num_iou_thresholds * num_detections, 0);\n  detection_ignores.resize(num_iou_thresholds * num_detections, false);\n  ground_truth_ignores.resize(num_ground_truth);\n  for (auto g = 0; g < num_ground_truth; ++g) {\n    ground_truth_ignores[g] = ignores[ground_truth_sorted_indices[g]];\n  }\n\n  for (auto t = 0; t < num_iou_thresholds; ++t) {\n    for (auto d = 0; d < num_detections; ++d) {\n      // information about best match so far (match=-1 -> unmatched)\n      double best_iou = std::min(iou_thresholds[t], 1 - 1e-10);\n      int match = -1;\n      for (auto g = 0; g < num_ground_truth; ++g) {\n        // if this ground truth instance is already matched and not a\n        // crowd, it cannot be matched to another detection\n        if 
(ground_truth_matches[t * num_ground_truth + g] > 0 &&\n            !ground_truth_instances[ground_truth_sorted_indices[g]].is_crowd) {\n          continue;\n        }\n\n        // if detected instance matched to a regular ground truth\n        // instance, we can break on the first ground truth instance\n        // tagged as ignore (because they are sorted by the ignore tag)\n        if (match >= 0 && !ground_truth_ignores[match] &&\n            ground_truth_ignores[g]) {\n          break;\n        }\n\n        // if IOU overlap is the best so far, store the match appropriately\n        if (ious[d][ground_truth_sorted_indices[g]] >= best_iou) {\n          best_iou = ious[d][ground_truth_sorted_indices[g]];\n          match = g;\n        }\n      }\n      // if match was made, store id of match for both detection and\n      // ground truth\n      if (match >= 0) {\n        detection_ignores[t * num_detections + d] = ground_truth_ignores[match];\n        detection_matches[t * num_detections + d] =\n            ground_truth_instances[ground_truth_sorted_indices[match]].id;\n        ground_truth_matches[t * num_ground_truth + match] =\n            detection_instances[detection_sorted_indices[d]].id;\n      }\n\n      // set unmatched detections outside of area range to ignore\n      const InstanceAnnotation& detection =\n          detection_instances[detection_sorted_indices[d]];\n      detection_ignores[t * num_detections + d] =\n          detection_ignores[t * num_detections + d] ||\n          (detection_matches[t * num_detections + d] == 0 &&\n           (detection.area < area_range[0] || detection.area > area_range[1]));\n    }\n  }\n\n  // store detection score results\n  results->detection_scores.resize(detection_sorted_indices.size());\n  for (size_t d = 0; d < detection_sorted_indices.size(); ++d) {\n    results->detection_scores[d] =\n        detection_instances[detection_sorted_indices[d]].score;\n  }\n}\n\nstd::vector<ImageEvaluation> EvaluateImages(\n    
const std::vector<std::array<double, 2>>& area_ranges,\n    int max_detections,\n    const std::vector<double>& iou_thresholds,\n    const ImageCategoryInstances<std::vector<double>>& image_category_ious,\n    const ImageCategoryInstances<InstanceAnnotation>&\n        image_category_ground_truth_instances,\n    const ImageCategoryInstances<InstanceAnnotation>&\n        image_category_detection_instances) {\n  const int num_area_ranges = area_ranges.size();\n  const int num_images = image_category_ground_truth_instances.size();\n  const int num_categories =\n      image_category_ious.size() > 0 ? image_category_ious[0].size() : 0;\n  std::vector<uint64_t> detection_sorted_indices;\n  std::vector<uint64_t> ground_truth_sorted_indices;\n  std::vector<bool> ignores;\n  std::vector<ImageEvaluation> results_all(\n      num_images * num_area_ranges * num_categories);\n\n  // Store results for each image, category, and area range combination. Results\n  // for each IOU threshold are packed into the same ImageEvaluation object\n  for (auto i = 0; i < num_images; ++i) {\n    for (auto c = 0; c < num_categories; ++c) {\n      const std::vector<InstanceAnnotation>& ground_truth_instances =\n          image_category_ground_truth_instances[i][c];\n      const std::vector<InstanceAnnotation>& detection_instances =\n          image_category_detection_instances[i][c];\n\n      SortInstancesByDetectionScore(\n          detection_instances, &detection_sorted_indices);\n      if ((int)detection_sorted_indices.size() > max_detections) {\n        detection_sorted_indices.resize(max_detections);\n      }\n\n      for (size_t a = 0; a < area_ranges.size(); ++a) {\n        SortInstancesByIgnore(\n            area_ranges[a],\n            ground_truth_instances,\n            &ground_truth_sorted_indices,\n            &ignores);\n\n        MatchDetectionsToGroundTruth(\n            detection_instances,\n            detection_sorted_indices,\n            ground_truth_instances,\n            
ground_truth_sorted_indices,\n            ignores,\n            image_category_ious[i][c],\n            iou_thresholds,\n            area_ranges[a],\n            &results_all\n                [c * num_area_ranges * num_images + a * num_images + i]);\n      }\n    }\n  }\n\n  return results_all;\n}\n\n// Convert a python list to a vector\ntemplate <typename T>\nstd::vector<T> list_to_vec(const py::list& l) {\n  std::vector<T> v(py::len(l));\n  for (int i = 0; i < (int)py::len(l); ++i) {\n    v[i] = l[i].cast<T>();\n  }\n  return v;\n}\n\n// Helper function to Accumulate()\n// Considers the evaluation results applicable to a particular category, area\n// range, and max_detections parameter setting, which begin at\n// evaluations[evaluation_index].  Extracts a sorted list of length n of all\n// applicable detection instances concatenated across all images in the dataset,\n// which are represented by the outputs evaluation_indices, detection_scores,\n// image_detection_indices, and detection_sorted_indices--all of which are\n// length n. evaluation_indices[i] stores the applicable index into\n// evaluations[] for instance i, which has detection score detection_score[i],\n// and is the image_detection_indices[i]'th of the list of detections\n// for the image containing i.  
detection_sorted_indices[] defines a sorted\n// permutation of the 3 other outputs\nint BuildSortedDetectionList(\n    const std::vector<ImageEvaluation>& evaluations,\n    const int64_t evaluation_index,\n    const int64_t num_images,\n    const int max_detections,\n    std::vector<uint64_t>* evaluation_indices,\n    std::vector<double>* detection_scores,\n    std::vector<uint64_t>* detection_sorted_indices,\n    std::vector<uint64_t>* image_detection_indices) {\n  assert(evaluations.size() >= evaluation_index + num_images);\n\n  // Extract a list of object instances of the applicable category, area\n  // range, and max detections requirements such that they can be sorted\n  image_detection_indices->clear();\n  evaluation_indices->clear();\n  detection_scores->clear();\n  image_detection_indices->reserve(num_images * max_detections);\n  evaluation_indices->reserve(num_images * max_detections);\n  detection_scores->reserve(num_images * max_detections);\n  int num_valid_ground_truth = 0;\n  for (auto i = 0; i < num_images; ++i) {\n    const ImageEvaluation& evaluation = evaluations[evaluation_index + i];\n\n    for (int d = 0;\n         d < (int)evaluation.detection_scores.size() && d < max_detections;\n         ++d) { // detected instances\n      evaluation_indices->push_back(evaluation_index + i);\n      image_detection_indices->push_back(d);\n      detection_scores->push_back(evaluation.detection_scores[d]);\n    }\n    for (auto ground_truth_ignore : evaluation.ground_truth_ignores) {\n      if (!ground_truth_ignore) {\n        ++num_valid_ground_truth;\n      }\n    }\n  }\n\n  // Sort detections by decreasing score, using stable sort to match\n  // python implementation\n  detection_sorted_indices->resize(detection_scores->size());\n  std::iota(\n      detection_sorted_indices->begin(), detection_sorted_indices->end(), 0);\n  std::stable_sort(\n      detection_sorted_indices->begin(),\n      detection_sorted_indices->end(),\n      [&detection_scores](size_t 
j1, size_t j2) {\n        return (*detection_scores)[j1] > (*detection_scores)[j2];\n      });\n\n  return num_valid_ground_truth;\n}\n\n// Helper function to Accumulate()\n// Compute a precision recall curve given a sorted list of detected instances\n// encoded in evaluations, evaluation_indices, detection_scores,\n// detection_sorted_indices, image_detection_indices (see\n// BuildSortedDetectionList()). Using vectors precisions and recalls\n// and temporary storage, output the results into precisions_out, recalls_out,\n// and scores_out, which are large buffers containing many precision/recall curves\n// for all possible parameter settings, with precisions_out_index and\n// recalls_out_index defining the applicable indices to store results.\nvoid ComputePrecisionRecallCurve(\n    const int64_t precisions_out_index,\n    const int64_t precisions_out_stride,\n    const int64_t recalls_out_index,\n    const std::vector<double>& recall_thresholds,\n    const int iou_threshold_index,\n    const int num_iou_thresholds,\n    const int num_valid_ground_truth,\n    const std::vector<ImageEvaluation>& evaluations,\n    const std::vector<uint64_t>& evaluation_indices,\n    const std::vector<double>& detection_scores,\n    const std::vector<uint64_t>& detection_sorted_indices,\n    const std::vector<uint64_t>& image_detection_indices,\n    std::vector<double>* precisions,\n    std::vector<double>* recalls,\n    std::vector<double>* precisions_out,\n    std::vector<double>* scores_out,\n    std::vector<double>* recalls_out) {\n  assert(recalls_out->size() > recalls_out_index);\n\n  // Compute precision/recall for each instance in the sorted list of detections\n  int64_t true_positives_sum = 0, false_positives_sum = 0;\n  precisions->clear();\n  recalls->clear();\n  precisions->reserve(detection_sorted_indices.size());\n  recalls->reserve(detection_sorted_indices.size());\n  assert(!evaluations.empty() || detection_sorted_indices.empty());\n  for (auto detection_sorted_index : 
detection_sorted_indices) {\n    const ImageEvaluation& evaluation =\n        evaluations[evaluation_indices[detection_sorted_index]];\n    const auto num_detections =\n        evaluation.detection_matches.size() / num_iou_thresholds;\n    const auto detection_index = iou_threshold_index * num_detections +\n        image_detection_indices[detection_sorted_index];\n    assert(evaluation.detection_matches.size() > detection_index);\n    assert(evaluation.detection_ignores.size() > detection_index);\n    const int64_t detection_match =\n        evaluation.detection_matches[detection_index];\n    const bool detection_ignores =\n        evaluation.detection_ignores[detection_index];\n    const auto true_positive = detection_match > 0 && !detection_ignores;\n    const auto false_positive = detection_match == 0 && !detection_ignores;\n    if (true_positive) {\n      ++true_positives_sum;\n    }\n    if (false_positive) {\n      ++false_positives_sum;\n    }\n\n    const double recall =\n        static_cast<double>(true_positives_sum) / num_valid_ground_truth;\n    recalls->push_back(recall);\n    const int64_t num_valid_detections =\n        true_positives_sum + false_positives_sum;\n    const double precision = num_valid_detections > 0\n        ? static_cast<double>(true_positives_sum) / num_valid_detections\n        : 0.0;\n    precisions->push_back(precision);\n  }\n\n  (*recalls_out)[recalls_out_index] = !recalls->empty() ? 
recalls->back() : 0;\n\n  for (int64_t i = static_cast<int64_t>(precisions->size()) - 1; i > 0; --i) {\n    if ((*precisions)[i] > (*precisions)[i - 1]) {\n      (*precisions)[i - 1] = (*precisions)[i];\n    }\n  }\n\n  // Sample the per instance precision/recall list at each recall threshold\n  for (size_t r = 0; r < recall_thresholds.size(); ++r) {\n    // first index in recalls >= recall_thresholds[r]\n    std::vector<double>::iterator low = std::lower_bound(\n        recalls->begin(), recalls->end(), recall_thresholds[r]);\n    size_t precisions_index = low - recalls->begin();\n\n    const auto results_ind = precisions_out_index + r * precisions_out_stride;\n    assert(results_ind < precisions_out->size());\n    assert(results_ind < scores_out->size());\n    if (precisions_index < precisions->size()) {\n      (*precisions_out)[results_ind] = (*precisions)[precisions_index];\n      (*scores_out)[results_ind] =\n          detection_scores[detection_sorted_indices[precisions_index]];\n    } else {\n      (*precisions_out)[results_ind] = 0;\n      (*scores_out)[results_ind] = 0;\n    }\n  }\n}\npy::dict Accumulate(\n    const py::object& params,\n    const std::vector<ImageEvaluation>& evaluations) {\n  const std::vector<double> recall_thresholds =\n      list_to_vec<double>(params.attr(\"recThrs\"));\n  const std::vector<int> max_detections =\n      list_to_vec<int>(params.attr(\"maxDets\"));\n  const int num_iou_thresholds = py::len(params.attr(\"iouThrs\"));\n  const int num_recall_thresholds = py::len(params.attr(\"recThrs\"));\n  const int num_categories = params.attr(\"useCats\").cast<int>() == 1\n      ? 
py::len(params.attr(\"catIds\"))\n      : 1;\n  const int num_area_ranges = py::len(params.attr(\"areaRng\"));\n  const int num_max_detections = py::len(params.attr(\"maxDets\"));\n  const int num_images = py::len(params.attr(\"imgIds\"));\n\n  std::vector<double> precisions_out(\n      num_iou_thresholds * num_recall_thresholds * num_categories *\n          num_area_ranges * num_max_detections,\n      -1);\n  std::vector<double> recalls_out(\n      num_iou_thresholds * num_categories * num_area_ranges *\n          num_max_detections,\n      -1);\n  std::vector<double> scores_out(\n      num_iou_thresholds * num_recall_thresholds * num_categories *\n          num_area_ranges * num_max_detections,\n      -1);\n\n  // Consider the list of all detected instances in the entire dataset in one\n  // large list.  evaluation_indices, detection_scores,\n  // image_detection_indices, and detection_sorted_indices all have the same\n  // length as this list, such that each entry corresponds to one detected\n  // instance\n  std::vector<uint64_t> evaluation_indices; // indices into evaluations[]\n  std::vector<double> detection_scores; // detection scores of each instance\n  std::vector<uint64_t> detection_sorted_indices; // sorted indices of all\n                                                  // instances in the dataset\n  std::vector<uint64_t>\n      image_detection_indices; // indices into the list of detected instances in\n                               // the same image as each instance\n  std::vector<double> precisions, recalls;\n\n  for (auto c = 0; c < num_categories; ++c) {\n    for (auto a = 0; a < num_area_ranges; ++a) {\n      for (auto m = 0; m < num_max_detections; ++m) {\n        // The COCO PythonAPI assumes evaluations[] (the return value of\n        // COCOeval::EvaluateImages() is one long list storing results for each\n        // combination of category, area range, and image id, with categories in\n        // the outermost loop and images in the 
innermost loop.\n        const int64_t evaluations_index =\n            c * num_area_ranges * num_images + a * num_images;\n        int num_valid_ground_truth = BuildSortedDetectionList(\n            evaluations,\n            evaluations_index,\n            num_images,\n            max_detections[m],\n            &evaluation_indices,\n            &detection_scores,\n            &detection_sorted_indices,\n            &image_detection_indices);\n\n        if (num_valid_ground_truth == 0) {\n          continue;\n        }\n\n        for (auto t = 0; t < num_iou_thresholds; ++t) {\n          // recalls_out is a flattened vectors representing a\n          // num_iou_thresholds X num_categories X num_area_ranges X\n          // num_max_detections matrix\n          const int64_t recalls_out_index =\n              t * num_categories * num_area_ranges * num_max_detections +\n              c * num_area_ranges * num_max_detections +\n              a * num_max_detections + m;\n\n          // precisions_out and scores_out are flattened vectors\n          // representing a num_iou_thresholds X num_recall_thresholds X\n          // num_categories X num_area_ranges X num_max_detections matrix\n          const int64_t precisions_out_stride =\n              num_categories * num_area_ranges * num_max_detections;\n          const int64_t precisions_out_index = t * num_recall_thresholds *\n                  num_categories * num_area_ranges * num_max_detections +\n              c * num_area_ranges * num_max_detections +\n              a * num_max_detections + m;\n\n          ComputePrecisionRecallCurve(\n              precisions_out_index,\n              precisions_out_stride,\n              recalls_out_index,\n              recall_thresholds,\n              t,\n              num_iou_thresholds,\n              num_valid_ground_truth,\n              evaluations,\n              evaluation_indices,\n              detection_scores,\n              detection_sorted_indices,\n              
image_detection_indices,\n              &precisions,\n              &recalls,\n              &precisions_out,\n              &scores_out,\n              &recalls_out);\n        }\n      }\n    }\n  }\n\n  time_t rawtime;\n  struct tm local_time;\n  std::array<char, 200> buffer;\n  time(&rawtime);\n#ifdef _WIN32\n  localtime_s(&local_time, &rawtime);\n#else\n  localtime_r(&rawtime, &local_time);\n#endif\n  // Format the timestamp as hours:minutes:seconds (the format string was\n  // previously mangled to \"%H:%num_max_detections:%S\" by a variable rename)\n  strftime(\n      buffer.data(), 200, \"%Y-%m-%d %H:%M:%S\", &local_time);\n  return py::dict(\n      \"params\"_a = params,\n      \"counts\"_a = std::vector<int64_t>({num_iou_thresholds,\n                                         num_recall_thresholds,\n                                         num_categories,\n                                         num_area_ranges,\n                                         num_max_detections}),\n      \"date\"_a = buffer,\n      \"precision\"_a = precisions_out,\n      \"recall\"_a = recalls_out,\n      \"scores\"_a = scores_out);\n}\n\n} // namespace COCOeval\n"
  },
  {
    "path": "detector/YOLOX/yolox/layers/csrc/cocoeval/cocoeval.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n#pragma once\n\n#include <pybind11/numpy.h>\n#include <pybind11/pybind11.h>\n#include <pybind11/stl.h>\n#include <pybind11/stl_bind.h>\n#include <vector>\n\nnamespace py = pybind11;\n\nnamespace COCOeval {\n\n// Annotation data for a single object instance in an image\nstruct InstanceAnnotation {\n  InstanceAnnotation(\n      uint64_t id,\n      double score,\n      double area,\n      bool is_crowd,\n      bool ignore)\n      : id{id}, score{score}, area{area}, is_crowd{is_crowd}, ignore{ignore} {}\n  uint64_t id;\n  double score = 0.;\n  double area = 0.;\n  bool is_crowd = false;\n  bool ignore = false;\n};\n\n// Stores intermediate results for evaluating detection results for a single\n// image that has D detected instances and G ground truth instances. This stores\n// matches between detected and ground truth instances\nstruct ImageEvaluation {\n  // For each of the D detected instances, the id of the matched ground truth\n  // instance, or 0 if unmatched\n  std::vector<uint64_t> detection_matches;\n\n  // The detection score of each of the D detected instances\n  std::vector<double> detection_scores;\n\n  // Marks whether or not each of G instances was ignored from evaluation (e.g.,\n  // because it's outside area_range)\n  std::vector<bool> ground_truth_ignores;\n\n  // Marks whether or not each of D instances was ignored from evaluation (e.g.,\n  // because it's outside aRng)\n  std::vector<bool> detection_ignores;\n};\n\ntemplate <class T>\nusing ImageCategoryInstances = std::vector<std::vector<std::vector<T>>>;\n\n// C++ implementation of COCO API cocoeval.py::COCOeval.evaluateImg().  
For each\n// combination of image, category, area range settings, and IOU thresholds to\n// evaluate, it matches detected instances to ground truth instances and stores\n// the results into a vector of ImageEvaluation results, which will be\n// interpreted by the COCOeval::Accumulate() function to produce precision-recall\n// curves.  The parameters of nested vectors have the following semantics:\n//   image_category_ious[i][c][d][g] is the intersection over union of the d'th\n//     detected instance and g'th ground truth instance of\n//     category category_ids[c] in image image_ids[i]\n//   image_category_ground_truth_instances[i][c] is a vector of ground truth\n//     instances in image image_ids[i] of category category_ids[c]\n//   image_category_detection_instances[i][c] is a vector of detected\n//     instances in image image_ids[i] of category category_ids[c]\nstd::vector<ImageEvaluation> EvaluateImages(\n    const std::vector<std::array<double, 2>>& area_ranges, // vector of 2-tuples\n    int max_detections,\n    const std::vector<double>& iou_thresholds,\n    const ImageCategoryInstances<std::vector<double>>& image_category_ious,\n    const ImageCategoryInstances<InstanceAnnotation>&\n        image_category_ground_truth_instances,\n    const ImageCategoryInstances<InstanceAnnotation>&\n        image_category_detection_instances);\n\n// C++ implementation of COCOeval.accumulate(), which generates precision\n// recall curves for each set of category, IOU threshold, detection area range,\n// and max number of detections parameters.  It is assumed that the parameter\n// evaluations is the return value of the function COCOeval::EvaluateImages(),\n// which was called with the same parameter settings params\npy::dict Accumulate(\n    const py::object& params,\n    const std::vector<ImageEvaluation>& evaluations);\n\n} // namespace COCOeval\n"
  },
  {
    "path": "detector/YOLOX/yolox/layers/csrc/vision.cpp",
    "content": "#include \"cocoeval/cocoeval.h\"\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n    m.def(\"COCOevalAccumulate\", &COCOeval::Accumulate, \"COCOeval::Accumulate\");\n    m.def(\n        \"COCOevalEvaluateImages\",\n        &COCOeval::EvaluateImages,\n        \"COCOeval::EvaluateImages\");\n    pybind11::class_<COCOeval::InstanceAnnotation>(m, \"InstanceAnnotation\")\n        .def(pybind11::init<uint64_t, double, double, bool, bool>());\n    pybind11::class_<COCOeval::ImageEvaluation>(m, \"ImageEvaluation\")\n        .def(pybind11::init<>());\n}\n"
  },
  {
    "path": "detector/YOLOX/yolox/layers/fast_coco_eval_api.py",
"content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# This file comes from\n# https://github.com/facebookresearch/detectron2/blob/master/detectron2/evaluation/fast_eval_api.py\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport copy\nimport time\n\nimport numpy as np\nfrom pycocotools.cocoeval import COCOeval\n\n# import torch first to make yolox._C work without ImportError of libc10.so\n# in YOLOX, env is already set in __init__.py.\nfrom yolox import _C\n\n\nclass COCOeval_opt(COCOeval):\n    \"\"\"\n    This is a slightly modified version of the original COCO API, where the functions evaluateImg()\n    and accumulate() are implemented in C++ to speed up evaluation\n    \"\"\"\n\n    def evaluate(self):\n        \"\"\"\n        Run per image evaluation on given images and store results in self.evalImgs_cpp, a\n        datastructure that isn't readable from Python but is used by a C++ implementation of\n        accumulate().  Unlike the original COCO PythonAPI, we don't populate the datastructure\n        self.evalImgs because this datastructure is a computational bottleneck.\n        :return: None\n        \"\"\"\n        tic = time.time()\n\n        print(\"Running per image evaluation...\")\n        p = self.params\n        # add backward compatibility if useSegm is specified in params\n        if p.useSegm is not None:\n            p.iouType = \"segm\" if p.useSegm == 1 else \"bbox\"\n            print(\n                \"useSegm (deprecated) is not None. 
Running {} evaluation\".format(p.iouType)\n            )\n        print(\"Evaluate annotation type *{}*\".format(p.iouType))\n        p.imgIds = list(np.unique(p.imgIds))\n        if p.useCats:\n            p.catIds = list(np.unique(p.catIds))\n        p.maxDets = sorted(p.maxDets)\n        self.params = p\n\n        self._prepare()\n\n        # loop through images, area range, max detection number\n        catIds = p.catIds if p.useCats else [-1]\n\n        if p.iouType == \"segm\" or p.iouType == \"bbox\":\n            computeIoU = self.computeIoU\n        elif p.iouType == \"keypoints\":\n            computeIoU = self.computeOks\n        self.ious = {\n            (imgId, catId): computeIoU(imgId, catId)\n            for imgId in p.imgIds\n            for catId in catIds\n        }\n\n        maxDet = p.maxDets[-1]\n\n        # <<<< Beginning of code differences with original COCO API\n        def convert_instances_to_cpp(instances, is_det=False):\n            # Convert annotations for a list of instances in an image to a format that's fast\n            # to access in C++\n            instances_cpp = []\n            for instance in instances:\n                instance_cpp = _C.InstanceAnnotation(\n                    int(instance[\"id\"]),\n                    instance[\"score\"] if is_det else instance.get(\"score\", 0.0),\n                    instance[\"area\"],\n                    bool(instance.get(\"iscrowd\", 0)),\n                    bool(instance.get(\"ignore\", 0)),\n                )\n                instances_cpp.append(instance_cpp)\n            return instances_cpp\n\n        # Convert GT annotations, detections, and IOUs to a format that's fast to access in C++\n        ground_truth_instances = [\n            [convert_instances_to_cpp(self._gts[imgId, catId]) for catId in p.catIds]\n            for imgId in p.imgIds\n        ]\n        detected_instances = [\n            [\n                convert_instances_to_cpp(self._dts[imgId, catId], 
is_det=True)\n                for catId in p.catIds\n            ]\n            for imgId in p.imgIds\n        ]\n        ious = [[self.ious[imgId, catId] for catId in catIds] for imgId in p.imgIds]\n\n        if not p.useCats:\n            # For each image, flatten per-category lists into a single list\n            ground_truth_instances = [\n                [[o for c in i for o in c]] for i in ground_truth_instances\n            ]\n            detected_instances = [\n                [[o for c in i for o in c]] for i in detected_instances\n            ]\n\n        # Call C++ implementation of self.evaluateImgs()\n        self._evalImgs_cpp = _C.COCOevalEvaluateImages(\n            p.areaRng,\n            maxDet,\n            p.iouThrs,\n            ious,\n            ground_truth_instances,\n            detected_instances,\n        )\n        self._evalImgs = None\n\n        self._paramsEval = copy.deepcopy(self.params)\n        toc = time.time()\n        print(\"COCOeval_opt.evaluate() finished in {:0.2f} seconds.\".format(toc - tic))\n        # >>>> End of code differences with original COCO API\n\n    def accumulate(self):\n        \"\"\"\n        Accumulate per image evaluation results and store the result in self.eval.  
Does not\n        support changing parameter settings from those used by self.evaluate()\n        \"\"\"\n        print(\"Accumulating evaluation results...\")\n        tic = time.time()\n        if not hasattr(self, \"_evalImgs_cpp\"):\n            raise RuntimeError(\"evaluate() must be called before accumulate()\")\n\n        self.eval = _C.COCOevalAccumulate(self._paramsEval, self._evalImgs_cpp)\n\n        # recall is num_iou_thresholds X num_categories X num_area_ranges X num_max_detections\n        self.eval[\"recall\"] = np.array(self.eval[\"recall\"]).reshape(\n            self.eval[\"counts\"][:1] + self.eval[\"counts\"][2:]\n        )\n\n        # precision and scores are num_iou_thresholds X num_recall_thresholds X num_categories X\n        # num_area_ranges X num_max_detections\n        self.eval[\"precision\"] = np.array(self.eval[\"precision\"]).reshape(\n            self.eval[\"counts\"]\n        )\n        self.eval[\"scores\"] = np.array(self.eval[\"scores\"]).reshape(self.eval[\"counts\"])\n        toc = time.time()\n        print(\n            \"COCOeval_opt.accumulate() finished in {:0.2f} seconds.\".format(toc - tic)\n        )\n
  },
  {
    "path": "detector/YOLOX/yolox/models/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom .darknet import CSPDarknet, Darknet\nfrom .losses import IOUloss\nfrom .yolo_fpn import YOLOFPN\nfrom .yolo_head import YOLOXHead\nfrom .yolo_pafpn import YOLOPAFPN\nfrom .yolox import YOLOX\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/darknet.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom torch import nn\n\nfrom .network_blocks import BaseConv, CSPLayer, DWConv, Focus, ResLayer, SPPBottleneck\n\n\nclass Darknet(nn.Module):\n    # number of blocks from dark2 to dark5.\n    depth2blocks = {21: [1, 2, 2, 1], 53: [2, 8, 8, 4]}\n\n    def __init__(\n        self, depth, in_channels=3, stem_out_channels=32, out_features=(\"dark3\", \"dark4\", \"dark5\"),\n    ):\n        \"\"\"\n        Args:\n            depth (int): depth of darknet used in model, usually use [21, 53] for this param.\n            in_channels (int): number of input channels, for example, use 3 for RGB image.\n            stem_out_channels (int): number of output chanels of darknet stem.\n                It decides channels of darknet layer2 to layer5.\n            out_features (Tuple[str]): desired output layer name.\n        \"\"\"\n        super().__init__()\n        assert out_features, \"please provide output features of Darknet\"\n        self.out_features = out_features\n        self.stem = nn.Sequential(\n            BaseConv(in_channels, stem_out_channels, ksize=3, stride=1, act=\"lrelu\"),\n            *self.make_group_layer(stem_out_channels, num_blocks=1, stride=2),\n        )\n        in_channels = stem_out_channels * 2  # 64\n\n        num_blocks = Darknet.depth2blocks[depth]\n        # create darknet with `stem_out_channels` and `num_blocks` layers.\n        # to make model structure more clear, we don't use `for` statement in python.\n        self.dark2 = nn.Sequential(*self.make_group_layer(in_channels, num_blocks[0], stride=2))\n        in_channels *= 2  # 128\n        self.dark3 = nn.Sequential(*self.make_group_layer(in_channels, num_blocks[1], stride=2))\n        in_channels *= 2  # 256\n        self.dark4 = nn.Sequential(*self.make_group_layer(in_channels, num_blocks[2], stride=2))\n        in_channels *= 2  # 512\n\n        self.dark5 = 
nn.Sequential(\n            *self.make_group_layer(in_channels, num_blocks[3], stride=2),\n            *self.make_spp_block([in_channels, in_channels * 2], in_channels * 2),\n        )\n\n    def make_group_layer(self, in_channels: int, num_blocks: int, stride: int = 1):\n        \"starts with conv layer then has `num_blocks` `ResLayer`\"\n        return [\n            BaseConv(in_channels, in_channels * 2, ksize=3, stride=stride, act=\"lrelu\"),\n            *[(ResLayer(in_channels * 2)) for _ in range(num_blocks)]\n        ]\n\n    def make_spp_block(self, filters_list, in_filters):\n        m = nn.Sequential(\n            *[\n                BaseConv(in_filters, filters_list[0], 1, stride=1, act=\"lrelu\"),\n                BaseConv(filters_list[0], filters_list[1], 3, stride=1, act=\"lrelu\"),\n                SPPBottleneck(\n                    in_channels=filters_list[1],\n                    out_channels=filters_list[0],\n                    activation=\"lrelu\"\n                ),\n                BaseConv(filters_list[0], filters_list[1], 3, stride=1, act=\"lrelu\"),\n                BaseConv(filters_list[1], filters_list[0], 1, stride=1, act=\"lrelu\"),\n            ]\n        )\n        return m\n\n    def forward(self, x):\n        outputs = {}\n        x = self.stem(x)\n        outputs[\"stem\"] = x\n        x = self.dark2(x)\n        outputs[\"dark2\"] = x\n        x = self.dark3(x)\n        outputs[\"dark3\"] = x\n        x = self.dark4(x)\n        outputs[\"dark4\"] = x\n        x = self.dark5(x)\n        outputs[\"dark5\"] = x\n        return {k: v for k, v in outputs.items() if k in self.out_features}\n\n\nclass CSPDarknet(nn.Module):\n\n    def __init__(\n        self, dep_mul, wid_mul,\n        out_features=(\"dark3\", \"dark4\", \"dark5\"),\n        depthwise=False, act=\"silu\",\n    ):\n        super().__init__()\n        assert out_features, \"please provide output features of Darknet\"\n        self.out_features = out_features\n        Conv 
= DWConv if depthwise else BaseConv\n\n        base_channels = int(wid_mul * 64)  # 64\n        base_depth = max(round(dep_mul * 3), 1)  # 3\n\n        # stem\n        self.stem = Focus(3, base_channels, ksize=3, act=act)\n\n        # dark2\n        self.dark2 = nn.Sequential(\n            Conv(base_channels, base_channels * 2, 3, 2, act=act),\n            CSPLayer(\n                base_channels * 2, base_channels * 2,\n                n=base_depth, depthwise=depthwise, act=act\n            ),\n        )\n\n        # dark3\n        self.dark3 = nn.Sequential(\n            Conv(base_channels * 2, base_channels * 4, 3, 2, act=act),\n            CSPLayer(\n                base_channels * 4, base_channels * 4,\n                n=base_depth * 3, depthwise=depthwise, act=act,\n            ),\n        )\n\n        # dark4\n        self.dark4 = nn.Sequential(\n            Conv(base_channels * 4, base_channels * 8, 3, 2, act=act),\n            CSPLayer(\n                base_channels * 8, base_channels * 8,\n                n=base_depth * 3, depthwise=depthwise, act=act,\n            ),\n        )\n\n        # dark5\n        self.dark5 = nn.Sequential(\n            Conv(base_channels * 8, base_channels * 16, 3, 2, act=act),\n            SPPBottleneck(base_channels * 16, base_channels * 16, activation=act),\n            CSPLayer(\n                base_channels * 16, base_channels * 16, n=base_depth,\n                shortcut=False, depthwise=depthwise, act=act,\n            ),\n        )\n\n    def forward(self, x):\n        outputs = {}\n        x = self.stem(x)\n        outputs[\"stem\"] = x\n        x = self.dark2(x)\n        outputs[\"dark2\"] = x\n        x = self.dark3(x)\n        outputs[\"dark3\"] = x\n        x = self.dark4(x)\n        outputs[\"dark4\"] = x\n        x = self.dark5(x)\n        outputs[\"dark5\"] = x\n        return {k: v for k, v in outputs.items() if k in self.out_features}\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/losses.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport torch\nimport torch.nn as nn\n\n\nclass IOUloss(nn.Module):\n    def __init__(self, reduction=\"none\", loss_type=\"iou\"):\n        super(IOUloss, self).__init__()\n        self.reduction = reduction\n        self.loss_type = loss_type\n\n    def forward(self, pred, target):\n        assert pred.shape[0] == target.shape[0]\n\n        pred = pred.view(-1, 4)\n        target = target.view(-1, 4)\n        tl = torch.max(\n            (pred[:, :2] - pred[:, 2:] / 2), (target[:, :2] - target[:, 2:] / 2)\n        )\n        br = torch.min(\n            (pred[:, :2] + pred[:, 2:] / 2), (target[:, :2] + target[:, 2:] / 2)\n        )\n\n        area_p = torch.prod(pred[:, 2:], 1)\n        area_g = torch.prod(target[:, 2:], 1)\n\n        en = (tl < br).type(tl.type()).prod(dim=1)\n        area_i = torch.prod(br - tl, 1) * en\n        iou = (area_i) / (area_p + area_g - area_i + 1e-16)\n\n        if self.loss_type == \"iou\":\n            loss = 1 - iou ** 2\n        elif self.loss_type == \"giou\":\n            c_tl = torch.min(\n                (pred[:, :2] - pred[:, 2:] / 2), (target[:, :2] - target[:, 2:] / 2)\n            )\n            c_br = torch.max(\n                (pred[:, :2] + pred[:, 2:] / 2), (target[:, :2] + target[:, 2:] / 2)\n            )\n            area_c = torch.prod(c_br - c_tl, 1)\n            giou = iou - (area_c - area_i) / area_c.clamp(1e-16)\n            loss = 1 - giou.clamp(min=-1.0, max=1.0)\n\n        if self.reduction == \"mean\":\n            loss = loss.mean()\n        elif self.reduction == \"sum\":\n            loss = loss.sum()\n\n        return loss\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/network_blocks.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport torch\nimport torch.nn as nn\n\n\nclass SiLU(nn.Module):\n    \"\"\"export-friendly version of nn.SiLU()\"\"\"\n\n    @staticmethod\n    def forward(x):\n        return x * torch.sigmoid(x)\n\n\ndef get_activation(name=\"silu\", inplace=True):\n    if name == \"silu\":\n        module = nn.SiLU(inplace=inplace)\n    elif name == \"relu\":\n        module = nn.ReLU(inplace=inplace)\n    elif name == \"lrelu\":\n        module = nn.LeakyReLU(0.1, inplace=inplace)\n    else:\n        raise AttributeError(\"Unsupported act type: {}\".format(name))\n    return module\n\n\nclass BaseConv(nn.Module):\n    \"\"\"A Conv2d -> Batchnorm -> silu/leaky relu block\"\"\"\n\n    def __init__(self, in_channels, out_channels, ksize, stride, groups=1, bias=False, act=\"silu\"):\n        super().__init__()\n        # same padding\n        pad = (ksize - 1) // 2\n        self.conv = nn.Conv2d(\n            in_channels,\n            out_channels,\n            kernel_size=ksize,\n            stride=stride,\n            padding=pad,\n            groups=groups,\n            bias=bias,\n        )\n        self.bn = nn.BatchNorm2d(out_channels)\n        self.act = get_activation(act, inplace=True)\n\n    def forward(self, x):\n        return self.act(self.bn(self.conv(x)))\n\n    def fuseforward(self, x):\n        return self.act(self.conv(x))\n\n\nclass DWConv(nn.Module):\n    \"\"\"Depthwise Conv + Conv\"\"\"\n    def __init__(self, in_channels, out_channels, ksize, stride=1, act=\"silu\"):\n        super().__init__()\n        self.dconv = BaseConv(\n            in_channels, in_channels, ksize=ksize,\n            stride=stride, groups=in_channels, act=act\n        )\n        self.pconv = BaseConv(\n            in_channels, out_channels, ksize=1,\n            stride=1, groups=1, act=act\n        )\n\n    def forward(self, x):\n        x = self.dconv(x)\n     
   return self.pconv(x)\n\n\nclass Bottleneck(nn.Module):\n    # Standard bottleneck\n    def __init__(\n        self, in_channels, out_channels, shortcut=True,\n        expansion=0.5, depthwise=False, act=\"silu\"\n    ):\n        super().__init__()\n        hidden_channels = int(out_channels * expansion)\n        Conv = DWConv if depthwise else BaseConv\n        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)\n        self.conv2 = Conv(hidden_channels, out_channels, 3, stride=1, act=act)\n        self.use_add = shortcut and in_channels == out_channels\n\n    def forward(self, x):\n        y = self.conv2(self.conv1(x))\n        if self.use_add:\n            y = y + x\n        return y\n\n\nclass ResLayer(nn.Module):\n    \"Residual layer with `in_channels` inputs.\"\n    def __init__(self, in_channels: int):\n        super().__init__()\n        mid_channels = in_channels // 2\n        self.layer1 = BaseConv(in_channels, mid_channels, ksize=1, stride=1, act=\"lrelu\")\n        self.layer2 = BaseConv(mid_channels, in_channels, ksize=3, stride=1, act=\"lrelu\")\n\n    def forward(self, x):\n        out = self.layer2(self.layer1(x))\n        return x + out\n\n\nclass SPPBottleneck(nn.Module):\n    \"\"\"Spatial pyramid pooling layer used in YOLOv3-SPP\"\"\"\n    def __init__(self, in_channels, out_channels, kernel_sizes=(5, 9, 13), activation=\"silu\"):\n        super().__init__()\n        hidden_channels = in_channels // 2\n        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation)\n        self.m = nn.ModuleList(\n            [nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) for ks in kernel_sizes]\n        )\n        conv2_channels = hidden_channels * (len(kernel_sizes) + 1)\n        self.conv2 = BaseConv(conv2_channels, out_channels, 1, stride=1, act=activation)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = torch.cat([x] + [m(x) for m in self.m], dim=1)\n        x = 
self.conv2(x)\n        return x\n\n\nclass CSPLayer(nn.Module):\n    \"\"\"C3 in yolov5, CSP Bottleneck with 3 convolutions\"\"\"\n\n    def __init__(\n        self, in_channels, out_channels, n=1,\n        shortcut=True, expansion=0.5, depthwise=False, act=\"silu\"\n    ):\n        \"\"\"\n        Args:\n            in_channels (int): input channels.\n            out_channels (int): output channels.\n            n (int): number of Bottlenecks. Default value: 1.\n        \"\"\"\n        # ch_in, ch_out, number, shortcut, groups, expansion\n        super().__init__()\n        hidden_channels = int(out_channels * expansion)  # hidden channels\n        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)\n        self.conv2 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)\n        self.conv3 = BaseConv(2 * hidden_channels, out_channels, 1, stride=1, act=act)\n        module_list = [\n            Bottleneck(hidden_channels, hidden_channels, shortcut, 1.0, depthwise, act=act)\n            for _ in range(n)\n        ]\n        self.m = nn.Sequential(*module_list)\n\n    def forward(self, x):\n        x_1 = self.conv1(x)\n        x_2 = self.conv2(x)\n        x_1 = self.m(x_1)\n        x = torch.cat((x_1, x_2), dim=1)\n        return self.conv3(x)\n\n\nclass Focus(nn.Module):\n    \"\"\"Focus width and height information into channel space.\"\"\"\n\n    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act=\"silu\"):\n        super().__init__()\n        self.conv = BaseConv(in_channels * 4, out_channels, ksize, stride, act=act)\n\n    def forward(self, x):\n        # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2)\n        patch_top_left = x[..., ::2, ::2]\n        patch_top_right = x[..., ::2, 1::2]\n        patch_bot_left = x[..., 1::2, ::2]\n        patch_bot_right = x[..., 1::2, 1::2]\n        x = torch.cat(\n            (patch_top_left, patch_bot_left, patch_top_right, patch_bot_right,), dim=1,\n        )\n        return 
self.conv(x)\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/yolo_fpn.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport torch\nimport torch.nn as nn\n\nfrom .darknet import Darknet\nfrom .network_blocks import BaseConv\n\n\nclass YOLOFPN(nn.Module):\n    \"\"\"\n    YOLOFPN module. Darknet 53 is the default backbone of this model.\n    \"\"\"\n\n    def __init__(\n        self, depth=53, in_features=[\"dark3\", \"dark4\", \"dark5\"],\n    ):\n        super().__init__()\n\n        self.backbone = Darknet(depth)\n        self.in_features = in_features\n\n        # out 1\n        self.out1_cbl = self._make_cbl(512, 256, 1)\n        self.out1 = self._make_embedding([256, 512], 512 + 256)\n\n        # out 2\n        self.out2_cbl = self._make_cbl(256, 128, 1)\n        self.out2 = self._make_embedding([128, 256], 256 + 128)\n\n        # upsample\n        self.upsample = nn.Upsample(scale_factor=2, mode=\"nearest\")\n\n    def _make_cbl(self, _in, _out, ks):\n        return BaseConv(_in, _out, ks, stride=1, act=\"lrelu\")\n\n    def _make_embedding(self, filters_list, in_filters):\n        m = nn.Sequential(\n            *[\n                self._make_cbl(in_filters, filters_list[0], 1),\n                self._make_cbl(filters_list[0], filters_list[1], 3),\n\n                self._make_cbl(filters_list[1], filters_list[0], 1),\n\n                self._make_cbl(filters_list[0], filters_list[1], 3),\n                self._make_cbl(filters_list[1], filters_list[0], 1),\n            ]\n        )\n        return m\n\n    def load_pretrained_model(self, filename=\"./weights/darknet53.mix.pth\"):\n        with open(filename, \"rb\") as f:\n            state_dict = torch.load(f, map_location=\"cpu\")\n        print(\"loading pretrained weights...\")\n        self.backbone.load_state_dict(state_dict)\n\n    def forward(self, inputs):\n        \"\"\"\n        Args:\n            inputs (Tensor): input image.\n\n        Returns:\n            Tuple[Tensor]: FPN output 
features.\n        \"\"\"\n        #  backbone\n        out_features = self.backbone(inputs)\n        x2, x1, x0 = [out_features[f] for f in self.in_features]\n\n        #  yolo branch 1\n        x1_in = self.out1_cbl(x0)\n        x1_in = self.upsample(x1_in)\n        x1_in = torch.cat([x1_in, x1], 1)\n        out_dark4 = self.out1(x1_in)\n\n        #  yolo branch 2\n        x2_in = self.out2_cbl(out_dark4)\n        x2_in = self.upsample(x2_in)\n        x2_in = torch.cat([x2_in, x2], 1)\n        out_dark3 = self.out2(x2_in)\n\n        outputs = (out_dark3, out_dark4, x0)\n        return outputs\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/yolo_head.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport math\nfrom loguru import logger\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom yolox.utils import bboxes_iou\n\nfrom .losses import IOUloss\nfrom .network_blocks import BaseConv, DWConv\n\n\nclass YOLOXHead(nn.Module):\n    def __init__(\n        self, num_classes, width=1.0, strides=[8, 16, 32],\n        in_channels=[256, 512, 1024], act=\"silu\", depthwise=False\n    ):\n        \"\"\"\n        Args:\n            act (str): activation type of conv. Defalut value: \"silu\".\n            depthwise (bool): wheather apply depthwise conv in conv branch. Defalut value: False.\n        \"\"\"\n        super().__init__()\n\n        self.n_anchors = 1\n        self.num_classes = num_classes\n        self.decode_in_inference = True  # for deploy, set to False\n\n        self.cls_convs = nn.ModuleList()\n        self.reg_convs = nn.ModuleList()\n        self.cls_preds = nn.ModuleList()\n        self.reg_preds = nn.ModuleList()\n        self.obj_preds = nn.ModuleList()\n        self.stems = nn.ModuleList()\n        Conv = DWConv if depthwise else BaseConv\n\n        for i in range(len(in_channels)):\n            self.stems.append(\n                BaseConv(\n                    in_channels=int(in_channels[i] * width),\n                    out_channels=int(256 * width),\n                    ksize=1,\n                    stride=1,\n                    act=act,\n                )\n            )\n            self.cls_convs.append(\n                nn.Sequential(\n                    *[\n                        Conv(\n                            in_channels=int(256 * width),\n                            out_channels=int(256 * width),\n                            ksize=3,\n                            stride=1,\n                            act=act,\n                        ),\n                        Conv(\n            
                in_channels=int(256 * width),\n                            out_channels=int(256 * width),\n                            ksize=3,\n                            stride=1,\n                            act=act,\n                        ),\n                    ]\n                )\n            )\n            self.reg_convs.append(\n                nn.Sequential(\n                    *[\n                        Conv(\n                            in_channels=int(256 * width),\n                            out_channels=int(256 * width),\n                            ksize=3,\n                            stride=1,\n                            act=act,\n                        ),\n                        Conv(\n                            in_channels=int(256 * width),\n                            out_channels=int(256 * width),\n                            ksize=3,\n                            stride=1,\n                            act=act,\n                        ),\n                    ]\n                )\n            )\n            self.cls_preds.append(\n                nn.Conv2d(\n                    in_channels=int(256 * width),\n                    out_channels=self.n_anchors * self.num_classes,\n                    kernel_size=1,\n                    stride=1,\n                    padding=0,\n                )\n            )\n            self.reg_preds.append(\n                nn.Conv2d(\n                    in_channels=int(256 * width),\n                    out_channels=4,\n                    kernel_size=1,\n                    stride=1,\n                    padding=0,\n                )\n            )\n            self.obj_preds.append(\n                nn.Conv2d(\n                    in_channels=int(256 * width),\n                    out_channels=self.n_anchors * 1,\n                    kernel_size=1,\n                    stride=1,\n                    padding=0,\n                )\n            )\n\n        self.use_l1 = False\n        self.l1_loss = 
nn.L1Loss(reduction=\"none\")\n        self.bcewithlog_loss = nn.BCEWithLogitsLoss(reduction=\"none\")\n        self.iou_loss = IOUloss(reduction=\"none\")\n        self.strides = strides\n        self.grids = [torch.zeros(1)] * len(in_channels)\n        self.expanded_strides = [None] * len(in_channels)\n\n    def initialize_biases(self, prior_prob):\n        for conv in self.cls_preds:\n            b = conv.bias.view(self.n_anchors, -1)\n            b.data.fill_(-math.log((1 - prior_prob) / prior_prob))\n            conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)\n\n        for conv in self.obj_preds:\n            b = conv.bias.view(self.n_anchors, -1)\n            b.data.fill_(-math.log((1 - prior_prob) / prior_prob))\n            conv.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)\n\n    def forward(self, xin, labels=None, imgs=None):\n        outputs = []\n        origin_preds = []\n        x_shifts = []\n        y_shifts = []\n        expanded_strides = []\n\n        for k, (cls_conv, reg_conv, stride_this_level, x) in enumerate(\n            zip(self.cls_convs, self.reg_convs, self.strides, xin)\n        ):\n            x = self.stems[k](x)\n            cls_x = x\n            reg_x = x\n\n            cls_feat = cls_conv(cls_x)\n            cls_output = self.cls_preds[k](cls_feat)\n\n            reg_feat = reg_conv(reg_x)\n            reg_output = self.reg_preds[k](reg_feat)\n            obj_output = self.obj_preds[k](reg_feat)\n\n            if self.training:\n                output = torch.cat([reg_output, obj_output, cls_output], 1)\n                output, grid = self.get_output_and_grid(output, k, stride_this_level, xin[0].type())\n                x_shifts.append(grid[:, :, 0])\n                y_shifts.append(grid[:, :, 1])\n                expanded_strides.append(\n                    torch.zeros(1, grid.shape[1]).fill_(stride_this_level).type_as(xin[0])\n                )\n                if self.use_l1:\n                    
batch_size = reg_output.shape[0]\n                    hsize, wsize = reg_output.shape[-2:]\n                    reg_output = reg_output.view(batch_size, self.n_anchors, 4, hsize, wsize)\n                    reg_output = (\n                        reg_output.permute(0, 1, 3, 4, 2)\n                        .reshape(batch_size, -1, 4)\n                    )\n                    origin_preds.append(reg_output.clone())\n\n            else:\n                output = torch.cat([reg_output, obj_output.sigmoid(), cls_output.sigmoid()], 1)\n\n            outputs.append(output)\n\n        if self.training:\n            return self.get_losses(\n                imgs, x_shifts, y_shifts, expanded_strides, labels,\n                torch.cat(outputs, 1), origin_preds, dtype=xin[0].dtype\n            )\n        else:\n            self.hw = [x.shape[-2:] for x in outputs]\n            # [batch, n_anchors_all, 85]\n            outputs = torch.cat([x.flatten(start_dim=2) for x in outputs], dim=2).permute(0, 2, 1)\n            if self.decode_in_inference:\n                return self.decode_outputs(outputs, dtype=xin[0].type())\n            else:\n                return outputs\n\n    def get_output_and_grid(self, output, k, stride, dtype):\n        grid = self.grids[k]\n\n        batch_size = output.shape[0]\n        n_ch = 5 + self.num_classes\n        hsize, wsize = output.shape[-2:]\n        if grid.shape[2:4] != output.shape[2:4]:\n            yv, xv = torch.meshgrid([torch.arange(hsize), torch.arange(wsize)])\n            grid = torch.stack((xv, yv), 2).view(1, 1, hsize, wsize, 2).type(dtype)\n            self.grids[k] = grid\n\n        output = output.view(batch_size, self.n_anchors, n_ch, hsize, wsize)\n        output = (\n            output.permute(0, 1, 3, 4, 2)\n            .reshape(batch_size, self.n_anchors * hsize * wsize, -1)\n        )\n        grid = grid.view(1, -1, 2)\n        output[..., :2] = (output[..., :2] + grid) * stride\n        output[..., 2:4] = 
torch.exp(output[..., 2:4]) * stride\n        return output, grid\n\n    def decode_outputs(self, outputs, dtype):\n        grids = []\n        strides = []\n        for (hsize, wsize), stride in zip(self.hw, self.strides):\n            yv, xv = torch.meshgrid([torch.arange(hsize), torch.arange(wsize)])\n            grid = torch.stack((xv, yv), 2).view(1, -1, 2)\n            grids.append(grid)\n            shape = grid.shape[:2]\n            strides.append(torch.full((*shape, 1), stride))\n\n        grids = torch.cat(grids, dim=1).type(dtype)\n        strides = torch.cat(strides, dim=1).type(dtype)\n\n        outputs[..., :2] = (outputs[..., :2] + grids) * strides\n        outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides\n        return outputs\n\n    def get_losses(\n        self, imgs, x_shifts, y_shifts, expanded_strides, labels, outputs, origin_preds, dtype,\n    ):\n        bbox_preds = outputs[:, :, :4]  # [batch, n_anchors_all, 4]\n        obj_preds = outputs[:, :, 4].unsqueeze(-1)  # [batch, n_anchors_all, 1]\n        cls_preds = outputs[:, :, 5:]  # [batch, n_anchors_all, n_cls]\n\n        # calculate targets\n        mixup = labels.shape[2] > 5\n        if mixup:\n            label_cut = labels[..., :5]\n        else:\n            label_cut = labels\n        nlabel = (label_cut.sum(dim=2) > 0).sum(dim=1)  # number of objects\n\n        total_num_anchors = outputs.shape[1]\n        x_shifts = torch.cat(x_shifts, 1)  # [1, n_anchors_all]\n        y_shifts = torch.cat(y_shifts, 1)  # [1, n_anchors_all]\n        expanded_strides = torch.cat(expanded_strides, 1)\n        if self.use_l1:\n            origin_preds = torch.cat(origin_preds, 1)\n\n        cls_targets = []\n        reg_targets = []\n        l1_targets = []\n        obj_targets = []\n        fg_masks = []\n\n        num_fg = 0.0\n        num_gts = 0.0\n\n        for batch_idx in range(outputs.shape[0]):\n            num_gt = int(nlabel[batch_idx])\n            num_gts += num_gt\n           
 if num_gt == 0:\n                cls_target = outputs.new_zeros((0, self.num_classes))\n                reg_target = outputs.new_zeros((0, 4))\n                l1_target = outputs.new_zeros((0, 4))\n                obj_target = outputs.new_zeros((total_num_anchors, 1))\n                fg_mask = outputs.new_zeros(total_num_anchors).bool()\n            else:\n                gt_bboxes_per_image = labels[batch_idx, :num_gt, 1:5]\n                gt_classes = labels[batch_idx, :num_gt, 0]\n                bboxes_preds_per_image = bbox_preds[batch_idx]\n\n                try:\n                    gt_matched_classes, fg_mask, pred_ious_this_matching, matched_gt_inds, num_fg_img = self.get_assignments(  # noqa\n                        batch_idx, num_gt, total_num_anchors, gt_bboxes_per_image, gt_classes,\n                        bboxes_preds_per_image, expanded_strides, x_shifts, y_shifts,\n                        cls_preds, bbox_preds, obj_preds, labels, imgs,\n                    )\n                except RuntimeError:\n                    logger.error(\n                        \"OOM RuntimeError is raised due to the huge memory cost during label assignment. \\\n                           CPU mode is applied in this batch. 
If you want to avoid this issue, \\\n                           try to reduce the batch size or image size.\"\n                    )\n                    torch.cuda.empty_cache()\n                    gt_matched_classes, fg_mask, pred_ious_this_matching, matched_gt_inds, num_fg_img = self.get_assignments(  # noqa\n                        batch_idx, num_gt, total_num_anchors, gt_bboxes_per_image, gt_classes,\n                        bboxes_preds_per_image, expanded_strides, x_shifts, y_shifts,\n                        cls_preds, bbox_preds, obj_preds, labels, imgs, \"cpu\",\n                    )\n\n                torch.cuda.empty_cache()\n                num_fg += num_fg_img\n\n                cls_target = F.one_hot(\n                    gt_matched_classes.to(torch.int64), self.num_classes\n                ) * pred_ious_this_matching.unsqueeze(-1)\n                obj_target = fg_mask.unsqueeze(-1)\n                reg_target = gt_bboxes_per_image[matched_gt_inds]\n                if self.use_l1:\n                    l1_target = self.get_l1_target(\n                        outputs.new_zeros((num_fg_img, 4)),\n                        gt_bboxes_per_image[matched_gt_inds],\n                        expanded_strides[0][fg_mask],\n                        x_shifts=x_shifts[0][fg_mask],\n                        y_shifts=y_shifts[0][fg_mask],\n                    )\n\n            cls_targets.append(cls_target)\n            reg_targets.append(reg_target)\n            obj_targets.append(obj_target.to(dtype))\n            fg_masks.append(fg_mask)\n            if self.use_l1:\n                l1_targets.append(l1_target)\n\n        cls_targets = torch.cat(cls_targets, 0)\n        reg_targets = torch.cat(reg_targets, 0)\n        obj_targets = torch.cat(obj_targets, 0)\n        fg_masks = torch.cat(fg_masks, 0)\n        if self.use_l1:\n            l1_targets = torch.cat(l1_targets, 0)\n\n        num_fg = max(num_fg, 1)\n        loss_iou = (self.iou_loss(bbox_preds.view(-1, 
4)[fg_masks], reg_targets)).sum() / num_fg\n        loss_obj = (self.bcewithlog_loss(obj_preds.view(-1, 1), obj_targets)).sum() / num_fg\n        loss_cls = (\n            self.bcewithlog_loss(cls_preds.view(-1, self.num_classes)[fg_masks], cls_targets)\n        ).sum() / num_fg\n        if self.use_l1:\n            loss_l1 = (self.l1_loss(origin_preds.view(-1, 4)[fg_masks], l1_targets)).sum() / num_fg\n        else:\n            loss_l1 = 0.0\n\n        reg_weight = 5.0\n        loss = reg_weight * loss_iou + loss_obj + loss_cls + loss_l1\n\n        return loss, reg_weight * loss_iou, loss_obj, loss_cls, loss_l1, num_fg / max(num_gts, 1)\n\n    def get_l1_target(self, l1_target, gt, stride, x_shifts, y_shifts, eps=1e-8):\n        l1_target[:, 0] = gt[:, 0] / stride - x_shifts\n        l1_target[:, 1] = gt[:, 1] / stride - y_shifts\n        l1_target[:, 2] = torch.log(gt[:, 2] / stride + eps)\n        l1_target[:, 3] = torch.log(gt[:, 3] / stride + eps)\n        return l1_target\n\n    @torch.no_grad()\n    def get_assignments(\n        self, batch_idx, num_gt, total_num_anchors, gt_bboxes_per_image, gt_classes,\n        bboxes_preds_per_image, expanded_strides, x_shifts, y_shifts,\n        cls_preds, bbox_preds, obj_preds, labels, imgs, mode=\"gpu\",\n    ):\n\n        if mode == \"cpu\":\n            print(\"------------CPU Mode for This Batch-------------\")\n            gt_bboxes_per_image = gt_bboxes_per_image.cpu().float()\n            bboxes_preds_per_image = bboxes_preds_per_image.cpu().float()\n            gt_classes = gt_classes.cpu().float()\n            expanded_strides = expanded_strides.cpu().float()\n            x_shifts = x_shifts.cpu()\n            y_shifts = y_shifts.cpu()\n\n        fg_mask, is_in_boxes_and_center = self.get_in_boxes_info(\n            gt_bboxes_per_image, expanded_strides, x_shifts, y_shifts, total_num_anchors, num_gt,\n        )\n\n        bboxes_preds_per_image = bboxes_preds_per_image[fg_mask]\n        cls_preds_ = 
cls_preds[batch_idx][fg_mask]\n        obj_preds_ = obj_preds[batch_idx][fg_mask]\n        num_in_boxes_anchor = bboxes_preds_per_image.shape[0]\n\n        if mode == \"cpu\":\n            gt_bboxes_per_image = gt_bboxes_per_image.cpu()\n            bboxes_preds_per_image = bboxes_preds_per_image.cpu()\n\n        pair_wise_ious = bboxes_iou(\n            gt_bboxes_per_image, bboxes_preds_per_image, False\n        )\n\n        gt_cls_per_image = (\n            F.one_hot(gt_classes.to(torch.int64), self.num_classes).float()\n            .unsqueeze(1).repeat(1, num_in_boxes_anchor, 1)\n        )\n        pair_wise_ious_loss = -torch.log(pair_wise_ious + 1e-8)\n\n        if mode == \"cpu\":\n            cls_preds_, obj_preds_ = cls_preds_.cpu(), obj_preds_.cpu()\n\n        cls_preds_ = (\n            cls_preds_.float().unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()\n            * obj_preds_.unsqueeze(0).repeat(num_gt, 1, 1).sigmoid_()\n        )\n        pair_wise_cls_loss = F.binary_cross_entropy(\n            cls_preds_.sqrt_(), gt_cls_per_image, reduction=\"none\"\n        ).sum(-1)\n        del cls_preds_\n\n        cost = (\n            pair_wise_cls_loss\n            + 3.0 * pair_wise_ious_loss\n            + 100000.0 * (~is_in_boxes_and_center)\n        )\n\n        (\n            num_fg, gt_matched_classes, pred_ious_this_matching, matched_gt_inds\n        ) = self.dynamic_k_matching(cost, pair_wise_ious, gt_classes, num_gt, fg_mask)\n        del pair_wise_cls_loss, cost, pair_wise_ious, pair_wise_ious_loss\n\n        if mode == \"cpu\":\n            gt_matched_classes = gt_matched_classes.cuda()\n            fg_mask = fg_mask.cuda()\n            pred_ious_this_matching = pred_ious_this_matching.cuda()\n            matched_gt_inds = matched_gt_inds.cuda()\n\n        return gt_matched_classes, fg_mask, pred_ious_this_matching, matched_gt_inds, num_fg\n\n    def get_in_boxes_info(\n        self, gt_bboxes_per_image, expanded_strides, x_shifts, y_shifts, 
total_num_anchors, num_gt,\n    ):\n        expanded_strides_per_image = expanded_strides[0]\n        x_shifts_per_image = x_shifts[0] * expanded_strides_per_image\n        y_shifts_per_image = y_shifts[0] * expanded_strides_per_image\n        x_centers_per_image = (\n            (x_shifts_per_image + 0.5 * expanded_strides_per_image)\n            .unsqueeze(0)\n            .repeat(num_gt, 1)\n        )  # [n_anchor] -> [n_gt, n_anchor]\n        y_centers_per_image = (\n            (y_shifts_per_image + 0.5 * expanded_strides_per_image)\n            .unsqueeze(0)\n            .repeat(num_gt, 1)\n        )\n\n        gt_bboxes_per_image_l = (\n            (gt_bboxes_per_image[:, 0] - 0.5 * gt_bboxes_per_image[:, 2])\n            .unsqueeze(1)\n            .repeat(1, total_num_anchors)\n        )\n        gt_bboxes_per_image_r = (\n            (gt_bboxes_per_image[:, 0] + 0.5 * gt_bboxes_per_image[:, 2])\n            .unsqueeze(1)\n            .repeat(1, total_num_anchors)\n        )\n        gt_bboxes_per_image_t = (\n            (gt_bboxes_per_image[:, 1] - 0.5 * gt_bboxes_per_image[:, 3])\n            .unsqueeze(1)\n            .repeat(1, total_num_anchors)\n        )\n        gt_bboxes_per_image_b = (\n            (gt_bboxes_per_image[:, 1] + 0.5 * gt_bboxes_per_image[:, 3])\n            .unsqueeze(1)\n            .repeat(1, total_num_anchors)\n        )\n\n        b_l = x_centers_per_image - gt_bboxes_per_image_l\n        b_r = gt_bboxes_per_image_r - x_centers_per_image\n        b_t = y_centers_per_image - gt_bboxes_per_image_t\n        b_b = gt_bboxes_per_image_b - y_centers_per_image\n        bbox_deltas = torch.stack([b_l, b_t, b_r, b_b], 2)\n\n        is_in_boxes = bbox_deltas.min(dim=-1).values > 0.0\n        is_in_boxes_all = is_in_boxes.sum(dim=0) > 0\n        # in fixed center\n\n        center_radius = 2.5\n\n        gt_bboxes_per_image_l = (gt_bboxes_per_image[:, 0]).unsqueeze(1).repeat(\n            1, total_num_anchors\n        ) - center_radius * 
expanded_strides_per_image.unsqueeze(0)\n        gt_bboxes_per_image_r = (gt_bboxes_per_image[:, 0]).unsqueeze(1).repeat(\n            1, total_num_anchors\n        ) + center_radius * expanded_strides_per_image.unsqueeze(0)\n        gt_bboxes_per_image_t = (gt_bboxes_per_image[:, 1]).unsqueeze(1).repeat(\n            1, total_num_anchors\n        ) - center_radius * expanded_strides_per_image.unsqueeze(0)\n        gt_bboxes_per_image_b = (gt_bboxes_per_image[:, 1]).unsqueeze(1).repeat(\n            1, total_num_anchors\n        ) + center_radius * expanded_strides_per_image.unsqueeze(0)\n\n        c_l = x_centers_per_image - gt_bboxes_per_image_l\n        c_r = gt_bboxes_per_image_r - x_centers_per_image\n        c_t = y_centers_per_image - gt_bboxes_per_image_t\n        c_b = gt_bboxes_per_image_b - y_centers_per_image\n        center_deltas = torch.stack([c_l, c_t, c_r, c_b], 2)\n        is_in_centers = center_deltas.min(dim=-1).values > 0.0\n        is_in_centers_all = is_in_centers.sum(dim=0) > 0\n\n        # in boxes and in centers\n        is_in_boxes_anchor = is_in_boxes_all | is_in_centers_all\n\n        is_in_boxes_and_center = (\n            is_in_boxes[:, is_in_boxes_anchor] & is_in_centers[:, is_in_boxes_anchor]\n        )\n        return is_in_boxes_anchor, is_in_boxes_and_center\n\n    def dynamic_k_matching(self, cost, pair_wise_ious, gt_classes, num_gt, fg_mask):\n        # Dynamic K\n        # ---------------------------------------------------------------\n        matching_matrix = torch.zeros_like(cost)\n\n        ious_in_boxes_matrix = pair_wise_ious\n        n_candidate_k = 10\n        topk_ious, _ = torch.topk(ious_in_boxes_matrix, n_candidate_k, dim=1)\n        dynamic_ks = torch.clamp(topk_ious.sum(1).int(), min=1)\n        for gt_idx in range(num_gt):\n            _, pos_idx = torch.topk(\n                cost[gt_idx], k=dynamic_ks[gt_idx].item(), largest=False\n            )\n            matching_matrix[gt_idx][pos_idx] = 1.0\n\n        
del topk_ious, dynamic_ks, pos_idx\n\n        anchor_matching_gt = matching_matrix.sum(0)\n        if (anchor_matching_gt > 1).sum() > 0:\n            cost_min, cost_argmin = torch.min(cost[:, anchor_matching_gt > 1], dim=0)\n            matching_matrix[:, anchor_matching_gt > 1] *= 0.0\n            matching_matrix[cost_argmin, anchor_matching_gt > 1] = 1.0\n        fg_mask_inboxes = matching_matrix.sum(0) > 0.0\n        num_fg = fg_mask_inboxes.sum().item()\n\n        fg_mask[fg_mask.clone()] = fg_mask_inboxes\n\n        matched_gt_inds = matching_matrix[:, fg_mask_inboxes].argmax(0)\n        gt_matched_classes = gt_classes[matched_gt_inds]\n\n        pred_ious_this_matching = (matching_matrix * pair_wise_ious).sum(0)[fg_mask_inboxes]\n        return num_fg, gt_matched_classes, pred_ious_this_matching, matched_gt_inds\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/yolo_pafpn.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport torch\nimport torch.nn as nn\n\nfrom .darknet import CSPDarknet\nfrom .network_blocks import BaseConv, CSPLayer, DWConv\n\n\nclass YOLOPAFPN(nn.Module):\n    \"\"\"\n    YOLOv3 model. Darknet 53 is the default backbone of this model.\n    \"\"\"\n\n    def __init__(\n        self, depth=1.0, width=1.0, in_features=(\"dark3\", \"dark4\", \"dark5\"),\n        in_channels=[256, 512, 1024], depthwise=False, act=\"silu\",\n    ):\n        super().__init__()\n        self.backbone = CSPDarknet(depth, width, depthwise=depthwise, act=act)\n        self.in_features = in_features\n        self.in_channels = in_channels\n        Conv = DWConv if depthwise else BaseConv\n\n        self.upsample = nn.Upsample(scale_factor=2, mode=\"nearest\")\n        self.lateral_conv0 = BaseConv(\n            int(in_channels[2] * width), int(in_channels[1] * width), 1, 1, act=act\n        )\n        self.C3_p4 = CSPLayer(\n            int(2 * in_channels[1] * width),\n            int(in_channels[1] * width),\n            round(3 * depth),\n            False,\n            depthwise=depthwise,\n            act=act,\n        )  # cat\n\n        self.reduce_conv1 = BaseConv(\n            int(in_channels[1] * width), int(in_channels[0] * width), 1, 1, act=act\n        )\n        self.C3_p3 = CSPLayer(\n            int(2 * in_channels[0] * width),\n            int(in_channels[0] * width),\n            round(3 * depth),\n            False,\n            depthwise=depthwise,\n            act=act,\n        )\n\n        # bottom-up conv\n        self.bu_conv2 = Conv(\n            int(in_channels[0] * width), int(in_channels[0] * width), 3, 2, act=act\n        )\n        self.C3_n3 = CSPLayer(\n            int(2 * in_channels[0] * width),\n            int(in_channels[1] * width),\n            round(3 * depth),\n            False,\n            depthwise=depthwise,\n        
    act=act,\n        )\n\n        # bottom-up conv\n        self.bu_conv1 = Conv(\n            int(in_channels[1] * width), int(in_channels[1] * width), 3, 2, act=act\n        )\n        self.C3_n4 = CSPLayer(\n            int(2 * in_channels[1] * width),\n            int(in_channels[2] * width),\n            round(3 * depth),\n            False,\n            depthwise=depthwise,\n            act=act,\n        )\n\n    def forward(self, input):\n        \"\"\"\n        Args:\n            inputs: input images.\n\n        Returns:\n            Tuple[Tensor]: FPN feature.\n        \"\"\"\n\n        #  backbone\n        out_features = self.backbone(input)\n        features = [out_features[f] for f in self.in_features]\n        [x2, x1, x0] = features\n\n        fpn_out0 = self.lateral_conv0(x0)  # 1024->512/32\n        f_out0 = self.upsample(fpn_out0)  # 512/16\n        f_out0 = torch.cat([f_out0, x1], 1)  # 512->1024/16\n        f_out0 = self.C3_p4(f_out0)  # 1024->512/16\n\n        fpn_out1 = self.reduce_conv1(f_out0)  # 512->256/16\n        f_out1 = self.upsample(fpn_out1)  # 256/8\n        f_out1 = torch.cat([f_out1, x2], 1)  # 256->512/8\n        pan_out2 = self.C3_p3(f_out1)  # 512->256/8\n\n        p_out1 = self.bu_conv2(pan_out2)  # 256->256/16\n        p_out1 = torch.cat([p_out1, fpn_out1], 1)  # 256->512/16\n        pan_out1 = self.C3_n3(p_out1)  # 512->512/16\n\n        p_out0 = self.bu_conv1(pan_out1)  # 512->512/32\n        p_out0 = torch.cat([p_out0, fpn_out0], 1)  # 512->1024/32\n        pan_out0 = self.C3_n4(p_out0)  # 1024->1024/32\n\n        outputs = (pan_out2, pan_out1, pan_out0)\n        return outputs\n"
  },
  {
    "path": "detector/YOLOX/yolox/models/yolox.py",
    "content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport torch.nn as nn\n\nfrom .yolo_head import YOLOXHead\nfrom .yolo_pafpn import YOLOPAFPN\n\n\nclass YOLOX(nn.Module):\n    \"\"\"\n    YOLOX model module. The module list is defined by create_yolov3_modules function.\n    The network returns loss values from three YOLO layers during training\n    and detection results during test.\n    \"\"\"\n\n    def __init__(self, backbone=None, head=None):\n        super().__init__()\n        if backbone is None:\n            backbone = YOLOPAFPN()\n        if head is None:\n            head = YOLOXHead(80)\n\n        self.backbone = backbone\n        self.head = head\n\n    def forward(self, x, targets=None):\n        # fpn output content features of [dark3, dark4, dark5]\n        fpn_outs = self.backbone(x)\n\n        if self.training:\n            assert targets is not None\n            loss, iou_loss, conf_loss, cls_loss, l1_loss, num_fg = self.head(\n                fpn_outs, targets, x\n            )\n            outputs = {\n                \"total_loss\": loss,\n                \"iou_loss\": iou_loss,\n                \"l1_loss\": l1_loss,\n                \"conf_loss\": conf_loss,\n                \"cls_loss\": cls_loss,\n                \"num_fg\": num_fg,\n            }\n        else:\n            outputs = self.head(fpn_outs)\n\n        return outputs\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/__init__.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom .allreduce_norm import *\nfrom .boxes import *\nfrom .checkpoint import load_ckpt, save_checkpoint\nfrom .demo_utils import *\nfrom .dist import *\nfrom .ema import ModelEMA\nfrom .logger import setup_logger\nfrom .lr_scheduler import LRScheduler\nfrom .metric import *\nfrom .model_utils import *\nfrom .setup_env import *\nfrom .visualize import *\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/allreduce_norm.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport pickle\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import distributed as dist\nfrom torch import nn\n\nfrom .dist import _get_global_gloo_group, get_world_size\n\nASYNC_NORM = (\n    nn.BatchNorm1d,\n    nn.BatchNorm2d,\n    nn.BatchNorm3d,\n    nn.InstanceNorm1d,\n    nn.InstanceNorm2d,\n    nn.InstanceNorm3d,\n)\n\n__all__ = [\n    \"get_async_norm_states\", \"pyobj2tensor\", \"tensor2pyobj\", \"all_reduce\", \"all_reduce_norm\"\n]\n\n\ndef get_async_norm_states(module):\n    async_norm_states = OrderedDict()\n    for name, child in module.named_modules():\n        if isinstance(child, ASYNC_NORM):\n            for k, v in child.state_dict().items():\n                async_norm_states[\".\".join([name, k])] = v\n    return async_norm_states\n\n\ndef pyobj2tensor(pyobj, device=\"cuda\"):\n    \"\"\"serialize picklable python object to tensor\"\"\"\n    storage = torch.ByteStorage.from_buffer(pickle.dumps(pyobj))\n    return torch.ByteTensor(storage).to(device=device)\n\n\ndef tensor2pyobj(tensor):\n    \"\"\"deserialize tensor to picklable python object\"\"\"\n    return pickle.loads(tensor.cpu().numpy().tobytes())\n\n\ndef _get_reduce_op(op_name):\n    return {\n        \"sum\": dist.ReduceOp.SUM,\n        \"mean\": dist.ReduceOp.SUM,\n    }[op_name.lower()]\n\n\ndef all_reduce(py_dict, op=\"sum\", group=None):\n    \"\"\"\n    Apply all reduce function for python dict object.\n    NOTE: make sure that every py_dict has the same keys and values are in the same shape.\n\n    Args:\n        py_dict (dict): dict to apply all reduce op.\n        op (str): operator, could be \"sum\" or \"mean\".\n    \"\"\"\n    world_size = get_world_size()\n    if world_size == 1:\n        return py_dict\n    if group is None:\n        group = _get_global_gloo_group()\n    if dist.get_world_size(group) == 1:\n        return 
py_dict\n\n    # all reduce logic across different devices.\n    py_key = list(py_dict.keys())\n    py_key_tensor = pyobj2tensor(py_key)\n    dist.broadcast(py_key_tensor, src=0)\n    py_key = tensor2pyobj(py_key_tensor)\n\n    tensor_shapes = [py_dict[k].shape for k in py_key]\n    tensor_numels = [py_dict[k].numel() for k in py_key]\n\n    flatten_tensor = torch.cat([py_dict[k].flatten() for k in py_key])\n    dist.all_reduce(flatten_tensor, op=_get_reduce_op(op))\n    if op == \"mean\":\n        flatten_tensor /= world_size\n\n    split_tensors = [\n        x.reshape(shape) for x, shape in zip(\n            torch.split(flatten_tensor, tensor_numels), tensor_shapes\n        )\n    ]\n    return OrderedDict({k: v for k, v in zip(py_key, split_tensors)})\n\n\ndef all_reduce_norm(module):\n    \"\"\"\n    All reduce norm statistics in different devices.\n    \"\"\"\n    states = get_async_norm_states(module)\n    states = all_reduce(states, op=\"mean\")\n    module.load_state_dict(states, strict=False)\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/boxes.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport numpy as np\n\nimport torch\nimport torchvision\n\n__all__ = [\n    \"filter_box\", \"postprocess\", \"bboxes_iou\", \"matrix_iou\",\n    \"adjust_box_anns\", \"xyxy2xywh\",\n]\n\n\ndef filter_box(output, scale_range):\n    \"\"\"\n    output: (N, 5+class) shape\n    \"\"\"\n    min_scale, max_scale = scale_range\n    w = output[:, 2] - output[:, 0]\n    h = output[:, 3] - output[:, 1]\n    keep = (w * h > min_scale * min_scale) & (w * h < max_scale * max_scale)\n    return output[keep]\n\n\ndef postprocess(prediction, num_classes, conf_thre=0.7, nms_thre=0.45):\n    box_corner = prediction.new(prediction.shape)\n    box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2\n    box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2\n    box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2\n    box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2\n    prediction[:, :, :4] = box_corner[:, :, :4]\n\n    output = [None for _ in range(len(prediction))]\n    for i, image_pred in enumerate(prediction):\n\n        # If none are remaining => process next image\n        if not image_pred.size(0):\n            continue\n        # Get score and class with highest confidence\n        class_conf, class_pred = torch.max(image_pred[:, 5: 5 + num_classes], 1, keepdim=True)\n\n        conf_mask = (image_pred[:, 4] * class_conf.squeeze() >= conf_thre).squeeze()\n        # _, conf_mask = torch.topk((image_pred[:, 4] * class_conf.squeeze()), 1000)\n        # Detections ordered as (x1, y1, x2, y2, obj_conf, class_conf, class_pred)\n        detections = torch.cat((image_pred[:, :5], class_conf, class_pred.float()), 1)\n        detections = detections[conf_mask]\n        if not detections.size(0):\n            continue\n\n        nms_out_index = torchvision.ops.batched_nms(\n            detections[:, 
:4],\n            detections[:, 4] * detections[:, 5],\n            detections[:, 6],\n            nms_thre,\n        )\n        detections = detections[nms_out_index]\n        if output[i] is None:\n            output[i] = detections\n        else:\n            output[i] = torch.cat((output[i], detections))\n\n    return output\n\n\ndef bboxes_iou(bboxes_a, bboxes_b, xyxy=True):\n    if bboxes_a.shape[1] != 4 or bboxes_b.shape[1] != 4:\n        raise IndexError\n\n    if xyxy:\n        tl = torch.max(bboxes_a[:, None, :2], bboxes_b[:, :2])\n        br = torch.min(bboxes_a[:, None, 2:], bboxes_b[:, 2:])\n        area_a = torch.prod(bboxes_a[:, 2:] - bboxes_a[:, :2], 1)\n        area_b = torch.prod(bboxes_b[:, 2:] - bboxes_b[:, :2], 1)\n    else:\n        tl = torch.max(\n            (bboxes_a[:, None, :2] - bboxes_a[:, None, 2:] / 2),\n            (bboxes_b[:, :2] - bboxes_b[:, 2:] / 2),\n        )\n        br = torch.min(\n            (bboxes_a[:, None, :2] + bboxes_a[:, None, 2:] / 2),\n            (bboxes_b[:, :2] + bboxes_b[:, 2:] / 2),\n        )\n\n        area_a = torch.prod(bboxes_a[:, 2:], 1)\n        area_b = torch.prod(bboxes_b[:, 2:], 1)\n    en = (tl < br).type(tl.type()).prod(dim=2)\n    area_i = torch.prod(br - tl, 2) * en  # * ((tl < br).all())\n    return area_i / (area_a[:, None] + area_b - area_i)\n\n\ndef matrix_iou(a, b):\n    \"\"\"\n    return IoU of a and b, numpy version for data augmentation\n    \"\"\"\n    lt = np.maximum(a[:, np.newaxis, :2], b[:, :2])\n    rb = np.minimum(a[:, np.newaxis, 2:], b[:, 2:])\n\n    area_i = np.prod(rb - lt, axis=2) * (lt < rb).all(axis=2)\n    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)\n    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)\n    return area_i / (area_a[:, np.newaxis] + area_b - area_i + 1e-12)\n\n\ndef adjust_box_anns(bbox, scale_ratio, padw, padh, w_max, h_max):\n    bbox[:, 0::2] = np.clip(bbox[:, 0::2] * scale_ratio + padw, 0, w_max)\n    bbox[:, 1::2] = np.clip(bbox[:, 1::2] * 
scale_ratio + padh, 0, h_max)\n    return bbox\n\n\ndef xyxy2xywh(bboxes):\n    bboxes[:, 2] = bboxes[:, 2] - bboxes[:, 0]\n    bboxes[:, 3] = bboxes[:, 3] - bboxes[:, 1]\n    return bboxes\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/checkpoint.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\nimport os\nimport shutil\nfrom loguru import logger\n\nimport torch\n\n\ndef load_ckpt(model, ckpt):\n    model_state_dict = model.state_dict()\n    load_dict = {}\n    for key_model, v in model_state_dict.items():\n        if key_model not in ckpt:\n            logger.warning(\n                \"{} is not in the ckpt. Please double check and see if this is desired.\".format(\n                    key_model\n                )\n            )\n            continue\n        v_ckpt = ckpt[key_model]\n        if v.shape != v_ckpt.shape:\n            logger.warning(\n                \"Shape of {} in checkpoint is {}, while shape of {} in model is {}.\".format(\n                    key_model, v_ckpt.shape, key_model, v.shape\n                )\n            )\n            continue\n        load_dict[key_model] = v_ckpt\n\n    model.load_state_dict(load_dict, strict=False)\n    return model\n\n\ndef save_checkpoint(state, is_best, save_dir, model_name=\"\"):\n    if not os.path.exists(save_dir):\n        os.makedirs(save_dir)\n    filename = os.path.join(save_dir, model_name + \"_ckpt.pth.tar\")\n    torch.save(state, filename)\n    if is_best:\n        best_filename = os.path.join(save_dir, \"best_ckpt.pth.tar\")\n        shutil.copyfile(filename, best_filename)\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/demo_utils.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport os\n\nimport numpy as np\n\n__all__ = [\"mkdir\", \"nms\", \"multiclass_nms\", \"demo_postprocess\"]\n\n\ndef mkdir(path):\n    if not os.path.exists(path):\n        os.makedirs(path)\n\n\ndef nms(boxes, scores, nms_thr):\n    \"\"\"Single class NMS implemented in Numpy.\"\"\"\n    x1 = boxes[:, 0]\n    y1 = boxes[:, 1]\n    x2 = boxes[:, 2]\n    y2 = boxes[:, 3]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order[0]\n        keep.append(i)\n        xx1 = np.maximum(x1[i], x1[order[1:]])\n        yy1 = np.maximum(y1[i], y1[order[1:]])\n        xx2 = np.minimum(x2[i], x2[order[1:]])\n        yy2 = np.minimum(y2[i], y2[order[1:]])\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n        ovr = inter / (areas[i] + areas[order[1:]] - inter)\n\n        inds = np.where(ovr <= nms_thr)[0]\n        order = order[inds + 1]\n\n    return keep\n\n\ndef multiclass_nms(boxes, scores, nms_thr, score_thr):\n    \"\"\"Multiclass NMS implemented in Numpy\"\"\"\n    final_dets = []\n    num_classes = scores.shape[1]\n    for cls_ind in range(num_classes):\n        cls_scores = scores[:, cls_ind]\n        valid_score_mask = cls_scores > score_thr\n        if valid_score_mask.sum() == 0:\n            continue\n        else:\n            valid_scores = cls_scores[valid_score_mask]\n            valid_boxes = boxes[valid_score_mask]\n            keep = nms(valid_boxes, valid_scores, nms_thr)\n            if len(keep) > 0:\n                cls_inds = np.ones((len(keep), 1)) * cls_ind\n                dets = np.concatenate([valid_boxes[keep], valid_scores[keep, None], cls_inds], 1)\n                final_dets.append(dets)\n    if len(final_dets) == 0:\n        return None\n    return 
np.concatenate(final_dets, 0)\n\n\ndef demo_postprocess(outputs, img_size, p6=False):\n\n    grids = []\n    expanded_strides = []\n\n    if not p6:\n        strides = [8, 16, 32]\n    else:\n        strides = [8, 16, 32, 64]\n\n    hsizes = [img_size[0]//stride for stride in strides]\n    wsizes = [img_size[1]//stride for stride in strides]\n\n    for hsize, wsize, stride in zip(hsizes, wsizes, strides):\n        # x varies along the width axis, so meshgrid takes (wsize, hsize);\n        # otherwise the grid is wrong for non-square input sizes\n        xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))\n        grid = np.stack((xv, yv), 2).reshape(1, -1, 2)\n        grids.append(grid)\n        shape = grid.shape[:2]\n        expanded_strides.append(np.full((*shape, 1), stride))\n\n    grids = np.concatenate(grids, 1)\n    expanded_strides = np.concatenate(expanded_strides, 1)\n    outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides\n    outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides\n\n    return outputs\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/dist.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# This file mainly comes from\n# https://github.com/facebookresearch/detectron2/blob/master/detectron2/utils/comm.py\n# Copyright (c) Facebook, Inc. and its affiliates.\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\"\"\"\nThis file contains primitives for multi-gpu communication.\nThis is useful when doing distributed training.\n\"\"\"\n\nimport functools\nimport logging\nimport pickle\nimport time\n\nimport numpy as np\n\nimport torch\nfrom torch import distributed as dist\n\n__all__ = [\n    \"is_main_process\",\n    \"synchronize\",\n    \"get_world_size\",\n    \"get_rank\",\n    \"get_local_rank\",\n    \"get_local_size\",\n    \"time_synchronized\",\n    \"gather\",\n    \"all_gather\",\n]\n\n_LOCAL_PROCESS_GROUP = None\n\n\ndef synchronize():\n    \"\"\"\n    Helper function to synchronize (barrier) among all processes when using distributed training\n    \"\"\"\n    if not dist.is_available():\n        return\n    if not dist.is_initialized():\n        return\n    world_size = dist.get_world_size()\n    if world_size == 1:\n        return\n    dist.barrier()\n\n\ndef get_world_size() -> int:\n    if not dist.is_available():\n        return 1\n    if not dist.is_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank() -> int:\n    if not dist.is_available():\n        return 0\n    if not dist.is_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef get_local_rank() -> int:\n    \"\"\"\n    Returns:\n        The rank of the current process within the local (per-machine) process group.\n    \"\"\"\n    if not dist.is_available():\n        return 0\n    if not dist.is_initialized():\n        return 0\n    assert _LOCAL_PROCESS_GROUP is not None\n    return dist.get_rank(group=_LOCAL_PROCESS_GROUP)\n\n\ndef get_local_size() -> int:\n    \"\"\"\n    Returns:\n        The size of the per-machine process group, i.e. 
the number of processes per machine.\n    \"\"\"\n    if not dist.is_available():\n        return 1\n    if not dist.is_initialized():\n        return 1\n    return dist.get_world_size(group=_LOCAL_PROCESS_GROUP)\n\n\ndef is_main_process() -> bool:\n    return get_rank() == 0\n\n\n@functools.lru_cache()\ndef _get_global_gloo_group():\n    \"\"\"\n    Return a process group based on gloo backend, containing all the ranks\n    The result is cached.\n    \"\"\"\n    if dist.get_backend() == \"nccl\":\n        return dist.new_group(backend=\"gloo\")\n    else:\n        return dist.group.WORLD\n\n\ndef _serialize_to_tensor(data, group):\n    backend = dist.get_backend(group)\n    assert backend in [\"gloo\", \"nccl\"]\n    device = torch.device(\"cpu\" if backend == \"gloo\" else \"cuda\")\n\n    buffer = pickle.dumps(data)\n    if len(buffer) > 1024 ** 3:\n        logger = logging.getLogger(__name__)\n        logger.warning(\n            \"Rank {} trying to all-gather {:.2f} GB of data on device {}\".format(\n                get_rank(), len(buffer) / (1024 ** 3), device\n            )\n        )\n    storage = torch.ByteStorage.from_buffer(buffer)\n    tensor = torch.ByteTensor(storage).to(device=device)\n    return tensor\n\n\ndef _pad_to_largest_tensor(tensor, group):\n    \"\"\"\n    Returns:\n        list[int]: size of the tensor, on each rank\n        Tensor: padded tensor that has the max size\n    \"\"\"\n    world_size = dist.get_world_size(group=group)\n    assert (\n        world_size >= 1\n    ), \"comm.gather/all_gather must be called from ranks within the given group!\"\n    local_size = torch.tensor([tensor.numel()], dtype=torch.int64, device=tensor.device)\n    size_list = [\n        torch.zeros([1], dtype=torch.int64, device=tensor.device)\n        for _ in range(world_size)\n    ]\n    dist.all_gather(size_list, local_size, group=group)\n    size_list = [int(size.item()) for size in size_list]\n\n    max_size = max(size_list)\n\n    # we pad the tensor 
because torch all_gather does not support\n    # gathering tensors of different shapes\n    if local_size != max_size:\n        padding = torch.zeros(\n            (max_size - local_size,), dtype=torch.uint8, device=tensor.device\n        )\n        tensor = torch.cat((tensor, padding), dim=0)\n    return size_list, tensor\n\n\ndef all_gather(data, group=None):\n    \"\"\"\n    Run all_gather on arbitrary picklable data (not necessarily tensors).\n\n    Args:\n        data: any picklable object\n        group: a torch process group. By default, will use a group which\n            contains all ranks on gloo backend.\n    Returns:\n        list[data]: list of data gathered from each rank\n    \"\"\"\n    if get_world_size() == 1:\n        return [data]\n    if group is None:\n        group = _get_global_gloo_group()\n    if dist.get_world_size(group) == 1:\n        return [data]\n\n    tensor = _serialize_to_tensor(data, group)\n\n    size_list, tensor = _pad_to_largest_tensor(tensor, group)\n    max_size = max(size_list)\n\n    # receiving Tensor from all ranks\n    tensor_list = [\n        torch.empty((max_size,), dtype=torch.uint8, device=tensor.device)\n        for _ in size_list\n    ]\n    dist.all_gather(tensor_list, tensor, group=group)\n\n    data_list = []\n    for size, tensor in zip(size_list, tensor_list):\n        buffer = tensor.cpu().numpy().tobytes()[:size]\n        data_list.append(pickle.loads(buffer))\n\n    return data_list\n\n\ndef gather(data, dst=0, group=None):\n    \"\"\"\n    Run gather on arbitrary picklable data (not necessarily tensors).\n\n    Args:\n        data: any picklable object\n        dst (int): destination rank\n        group: a torch process group. By default, will use a group which\n            contains all ranks on gloo backend.\n\n    Returns:\n        list[data]: on dst, a list of data gathered from each rank. 
Otherwise,\n            an empty list.\n    \"\"\"\n    if get_world_size() == 1:\n        return [data]\n    if group is None:\n        group = _get_global_gloo_group()\n    if dist.get_world_size(group=group) == 1:\n        return [data]\n    rank = dist.get_rank(group=group)\n\n    tensor = _serialize_to_tensor(data, group)\n    size_list, tensor = _pad_to_largest_tensor(tensor, group)\n\n    # receiving Tensor from all ranks\n    if rank == dst:\n        max_size = max(size_list)\n        tensor_list = [\n            torch.empty((max_size,), dtype=torch.uint8, device=tensor.device)\n            for _ in size_list\n        ]\n        dist.gather(tensor, tensor_list, dst=dst, group=group)\n\n        data_list = []\n        for size, tensor in zip(size_list, tensor_list):\n            buffer = tensor.cpu().numpy().tobytes()[:size]\n            data_list.append(pickle.loads(buffer))\n        return data_list\n    else:\n        dist.gather(tensor, [], dst=dst, group=group)\n        return []\n\n\ndef shared_random_seed():\n    \"\"\"\n    Returns:\n        int: a random number that is the same across all workers.\n            If workers need a shared RNG, they can use this shared seed to\n            create one.\n    All workers must call this function, otherwise it will deadlock.\n    \"\"\"\n    ints = np.random.randint(2 ** 31)\n    all_ints = all_gather(ints)\n    return all_ints[0]\n\n\ndef time_synchronized():\n    \"\"\"pytorch-accurate time\"\"\"\n    if torch.cuda.is_available():\n        torch.cuda.synchronize()\n    return time.time()\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/ema.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\nimport math\nfrom copy import deepcopy\n\nimport torch\nimport torch.nn as nn\n\n\ndef is_parallel(model):\n    \"\"\"check if model is in parallel mode.\"\"\"\n    import apex\n\n    parallel_type = (\n        nn.parallel.DataParallel,\n        nn.parallel.DistributedDataParallel,\n        apex.parallel.distributed.DistributedDataParallel,\n    )\n    return isinstance(model, parallel_type)\n\n\ndef copy_attr(a, b, include=(), exclude=()):\n    # Copy attributes from b to a, options to only include [...] and to exclude [...]\n    for k, v in b.__dict__.items():\n        if (len(include) and k not in include) or k.startswith(\"_\") or k in exclude:\n            continue\n        else:\n            setattr(a, k, v)\n\n\nclass ModelEMA:\n    \"\"\"\n    Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models\n    Keep a moving average of everything in the model state_dict (parameters and buffers).\n    This is intended to allow functionality like\n    https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage\n    A smoothed version of the weights is necessary for some training schemes to perform well.\n    This class is sensitive where it is initialized in the sequence of model init,\n    GPU assignment and distributed training wrappers.\n    \"\"\"\n    def __init__(self, model, decay=0.9999, updates=0):\n        \"\"\"\n        Args:\n            model (nn.Module): model to apply EMA.\n            decay (float): ema decay reate.\n            updates (int): counter of EMA updates.\n        \"\"\"\n        # Create EMA(FP32)\n        self.ema = deepcopy(model.module if is_parallel(model) else model).eval()\n        self.updates = updates\n        # decay exponential ramp (to help early epochs)\n        self.decay = lambda x: decay * (1 - math.exp(-x / 2000))\n        for p in 
self.ema.parameters():\n            p.requires_grad_(False)\n\n    def update(self, model):\n        # Update EMA parameters\n        with torch.no_grad():\n            self.updates += 1\n            d = self.decay(self.updates)\n\n            msd = (\n                model.module.state_dict() if is_parallel(model) else model.state_dict()\n            )  # model state_dict\n            for k, v in self.ema.state_dict().items():\n                if v.dtype.is_floating_point:\n                    v *= d\n                    v += (1.0 - d) * msd[k].detach()\n\n    def update_attr(self, model, include=(), exclude=(\"process_group\", \"reducer\")):\n        # Update EMA attributes\n        copy_attr(self.ema, model, include, exclude)\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/logger.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport inspect\nimport os\nimport sys\nfrom loguru import logger\n\n\ndef get_caller_name(depth=0):\n    \"\"\"\n    Args:\n        depth (int): Depth of caller conext, use 0 for caller depth. Default value: 0.\n\n    Returns:\n        str: module name of the caller\n    \"\"\"\n    # the following logic is a little bit faster than inspect.stack() logic\n    frame = inspect.currentframe().f_back\n    for _ in range(depth):\n        frame = frame.f_back\n\n    return frame.f_globals[\"__name__\"]\n\n\nclass StreamToLoguru:\n    \"\"\"\n    stream object that redirects writes to a logger instance.\n    \"\"\"\n    def __init__(self, level=\"INFO\", caller_names=(\"apex\", \"pycocotools\")):\n        \"\"\"\n        Args:\n            level(str): log level string of loguru. Default value: \"INFO\".\n            caller_names(tuple): caller names of redirected module.\n                Default value: (apex, pycocotools).\n        \"\"\"\n        self.level = level\n        self.linebuf = \"\"\n        self.caller_names = caller_names\n\n    def write(self, buf):\n        full_name = get_caller_name(depth=1)\n        module_name = full_name.rsplit(\".\", maxsplit=-1)[0]\n        if module_name in self.caller_names:\n            for line in buf.rstrip().splitlines():\n                # use caller level log\n                logger.opt(depth=2).log(self.level, line.rstrip())\n        else:\n            sys.__stdout__.write(buf)\n\n    def flush(self):\n        pass\n\n\ndef redirect_sys_output(log_level=\"INFO\"):\n    redirect_logger = StreamToLoguru(log_level)\n    sys.stderr = redirect_logger\n    sys.stdout = redirect_logger\n\n\ndef setup_logger(save_dir, distributed_rank=0, filename=\"log.txt\", mode=\"a\"):\n    \"\"\"setup logger for training and testing.\n    Args:\n        save_dir(str): location to save log file\n        
distributed_rank(int): device rank when multi-gpu environment\n        filename (string): log save name.\n        mode(str): log file write mode, `append` or `override`. default is `a`.\n\n    Return:\n        logger instance.\n    \"\"\"\n    loguru_format = (\n        \"<green>{time:YYYY-MM-DD HH:mm:ss}</green> | \"\n        \"<level>{level: <8}</level> | \"\n        \"<cyan>{name}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>\"\n    )\n\n    logger.remove()\n    save_file = os.path.join(save_dir, filename)\n    if mode == \"o\" and os.path.exists(save_file):\n        os.remove(save_file)\n    # only keep logger in rank0 process\n    if distributed_rank == 0:\n        logger.add(\n            sys.stderr,\n            format=loguru_format,\n            level=\"INFO\",\n            enqueue=True,\n        )\n        logger.add(save_file)\n\n    # redirect stdout/stderr to loguru\n    redirect_sys_output(\"INFO\")\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/lr_scheduler.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport math\nfrom functools import partial\n\n\nclass LRScheduler:\n    def __init__(self, name, lr, iters_per_epoch, total_epochs, **kwargs):\n        \"\"\"\n        Supported lr schedulers: [cos, warmcos, multistep]\n\n        Args:\n            lr (float): learning rate.\n            iters_per_peoch (int): number of iterations in one epoch.\n            total_epochs (int): number of epochs in training.\n            kwargs (dict):\n                - cos: None\n                - warmcos: [warmup_epochs, warmup_lr_start (default 1e-6)]\n                - multistep: [milestones (epochs), gamma (default 0.1)]\n        \"\"\"\n\n        self.lr = lr\n        self.iters_per_epoch = iters_per_epoch\n        self.total_epochs = total_epochs\n        self.total_iters = iters_per_epoch * total_epochs\n\n        self.__dict__.update(kwargs)\n\n        self.lr_func = self._get_lr_func(name)\n\n    def update_lr(self, iters):\n        return self.lr_func(iters)\n\n    def _get_lr_func(self, name):\n        if name == \"cos\":  # cosine lr schedule\n            lr_func = partial(cos_lr, self.lr, self.total_iters)\n        elif name == \"warmcos\":\n            warmup_total_iters = self.iters_per_epoch * self.warmup_epochs\n            warmup_lr_start = getattr(self, \"warmup_lr_start\", 1e-6)\n            lr_func = partial(\n                warm_cos_lr,\n                self.lr,\n                self.total_iters,\n                warmup_total_iters,\n                warmup_lr_start,\n            )\n        elif name == \"yoloxwarmcos\":\n            warmup_total_iters = self.iters_per_epoch * self.warmup_epochs\n            no_aug_iters = self.iters_per_epoch * self.no_aug_epochs\n            warmup_lr_start = getattr(self, \"warmup_lr_start\", 0)\n            min_lr_ratio = getattr(self, \"min_lr_ratio\", 0.2)\n            lr_func = partial(\n         
       yolox_warm_cos_lr,\n                self.lr,\n                min_lr_ratio,\n                self.total_iters,\n                warmup_total_iters,\n                warmup_lr_start,\n                no_aug_iters,\n            )\n        elif name == \"yoloxsemiwarmcos\":\n            warmup_lr_start = getattr(self, \"warmup_lr_start\", 0)\n            min_lr_ratio = getattr(self, \"min_lr_ratio\", 0.2)\n            warmup_total_iters = self.iters_per_epoch * self.warmup_epochs\n            no_aug_iters = self.iters_per_epoch * self.no_aug_epochs\n            normal_iters = self.iters_per_epoch * self.semi_epoch\n            semi_iters = self.iters_per_epoch_semi * (\n                self.total_epochs - self.semi_epoch - self.no_aug_epochs\n            )\n            lr_func = partial(\n                yolox_semi_warm_cos_lr,\n                self.lr,\n                min_lr_ratio,\n                warmup_lr_start,\n                self.total_iters,\n                normal_iters,\n                no_aug_iters,\n                warmup_total_iters,\n                semi_iters,\n                self.iters_per_epoch,\n                self.iters_per_epoch_semi,\n            )\n        elif name == \"multistep\":  # stepwise lr schedule\n            milestones = [\n                int(self.total_iters * milestone / self.total_epochs)\n                for milestone in self.milestones\n            ]\n            gamma = getattr(self, \"gamma\", 0.1)\n            lr_func = partial(multistep_lr, self.lr, milestones, gamma)\n        else:\n            raise ValueError(\"Scheduler version {} not supported.\".format(name))\n        return lr_func\n\n\ndef cos_lr(lr, total_iters, iters):\n    \"\"\"Cosine learning rate\"\"\"\n    lr *= 0.5 * (1.0 + math.cos(math.pi * iters / total_iters))\n    return lr\n\n\ndef warm_cos_lr(lr, total_iters, warmup_total_iters, warmup_lr_start, iters):\n    \"\"\"Cosine learning rate with warm up.\"\"\"\n    if iters <= 
warmup_total_iters:\n        lr = (lr - warmup_lr_start) * iters / float(\n            warmup_total_iters\n        ) + warmup_lr_start\n    else:\n        lr *= 0.5 * (\n            1.0\n            + math.cos(\n                math.pi\n                * (iters - warmup_total_iters)\n                / (total_iters - warmup_total_iters)\n            )\n        )\n    return lr\n\n\ndef yolox_warm_cos_lr(\n    lr,\n    min_lr_ratio,\n    total_iters,\n    warmup_total_iters,\n    warmup_lr_start,\n    no_aug_iter,\n    iters,\n):\n    \"\"\"Cosine learning rate with warm up.\"\"\"\n    min_lr = lr * min_lr_ratio\n    if iters <= warmup_total_iters:\n        # lr = (lr - warmup_lr_start) * iters / float(warmup_total_iters) + warmup_lr_start\n        lr = (lr - warmup_lr_start) * pow(\n            iters / float(warmup_total_iters), 2\n        ) + warmup_lr_start\n    elif iters >= total_iters - no_aug_iter:\n        lr = min_lr\n    else:\n        lr = min_lr + 0.5 * (lr - min_lr) * (\n            1.0\n            + math.cos(\n                math.pi\n                * (iters - warmup_total_iters)\n                / (total_iters - warmup_total_iters - no_aug_iter)\n            )\n        )\n    return lr\n\n\ndef yolox_semi_warm_cos_lr(\n    lr,\n    min_lr_ratio,\n    warmup_lr_start,\n    total_iters,\n    normal_iters,\n    no_aug_iters,\n    warmup_total_iters,\n    semi_iters,\n    iters_per_epoch,\n    iters_per_epoch_semi,\n    iters,\n):\n    \"\"\"Cosine learning rate with warm up.\"\"\"\n    min_lr = lr * min_lr_ratio\n    if iters <= warmup_total_iters:\n        # lr = (lr - warmup_lr_start) * iters / float(warmup_total_iters) + warmup_lr_start\n        lr = (lr - warmup_lr_start) * pow(\n            iters / float(warmup_total_iters), 2\n        ) + warmup_lr_start\n    elif iters >= normal_iters + semi_iters:\n        lr = min_lr\n    elif iters <= normal_iters:\n        lr = min_lr + 0.5 * (lr - min_lr) * (\n            1.0\n            + math.cos(\n       
         math.pi\n                * (iters - warmup_total_iters)\n                / (total_iters - warmup_total_iters - no_aug_iters)\n            )\n        )\n    else:\n        lr = min_lr + 0.5 * (lr - min_lr) * (\n            1.0\n            + math.cos(\n                math.pi\n                * (\n                    normal_iters\n                    - warmup_total_iters\n                    + (iters - normal_iters)\n                    * iters_per_epoch\n                    * 1.0\n                    / iters_per_epoch_semi\n                )\n                / (total_iters - warmup_total_iters - no_aug_iters)\n            )\n        )\n    return lr\n\n\ndef multistep_lr(lr, milestones, gamma, iters):\n    \"\"\"MultiStep learning rate\"\"\"\n    for milestone in milestones:\n        lr *= gamma if iters >= milestone else 1.0\n    return lr\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/metric.py",
    "content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\nimport functools\nimport os\nimport time\nfrom collections import defaultdict, deque\n\nimport numpy as np\n\nimport torch\n\n__all__ = [\n    \"AverageMeter\",\n    \"MeterBuffer\",\n    \"get_total_and_free_memory_in_Mb\",\n    \"occumpy_mem\",\n    \"gpu_mem_usage\",\n]\n\n\ndef get_total_and_free_memory_in_Mb(cuda_device):\n    devices_info_str = os.popen(\n        \"nvidia-smi --query-gpu=memory.total,memory.used --format=csv,nounits,noheader\"\n    )\n    devices_info = devices_info_str.read().strip().split(\"\\n\")\n    total, used = devices_info[int(cuda_device)].split(\",\")\n    return int(total), int(used)\n\n\ndef occumpy_mem(cuda_device, mem_ratio=0.9):\n    \"\"\"\n    pre-allocate gpu memory for training to avoid memory Fragmentation.\n    \"\"\"\n    total, used = get_total_and_free_memory_in_Mb(cuda_device)\n    max_mem = int(total * mem_ratio)\n    block_mem = max_mem - used\n    x = torch.cuda.FloatTensor(256, 1024, block_mem)\n    del x\n    time.sleep(5)\n\n\ndef gpu_mem_usage():\n    \"\"\"\n    Compute the GPU memory usage for the current device (MB).\n    \"\"\"\n    mem_usage_bytes = torch.cuda.max_memory_allocated()\n    return mem_usage_bytes / (1024 * 1024)\n\n\nclass AverageMeter:\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=50):\n        self._deque = deque(maxlen=window_size)\n        self._total = 0.0\n        self._count = 0\n\n    def update(self, value):\n        self._deque.append(value)\n        self._count += 1\n        self._total += value\n\n    @property\n    def median(self):\n        d = np.array(list(self._deque))\n        return np.median(d)\n\n    @property\n    def avg(self):\n        # if deque is empty, nan will be returned.\n        d = np.array(list(self._deque))\n   
     return d.mean()\n\n    @property\n    def global_avg(self):\n        return self._total / max(self._count, 1e-5)\n\n    @property\n    def latest(self):\n        return self._deque[-1] if len(self._deque) > 0 else None\n\n    @property\n    def total(self):\n        return self._total\n\n    def reset(self):\n        self._deque.clear()\n        self._total = 0.0\n        self._count = 0\n\n    def clear(self):\n        self._deque.clear()\n\n\nclass MeterBuffer(defaultdict):\n    \"\"\"Computes and stores the average and current value\"\"\"\n\n    def __init__(self, window_size=20):\n        factory = functools.partial(AverageMeter, window_size=window_size)\n        super().__init__(factory)\n\n    def reset(self):\n        for v in self.values():\n            v.reset()\n\n    def get_filtered_meter(self, filter_key=\"time\"):\n        return {k: v for k, v in self.items() if filter_key in k}\n\n    def update(self, values=None, **kwargs):\n        if values is None:\n            values = {}\n        values.update(kwargs)\n        for k, v in values.items():\n            self[k].update(v)\n\n    def clear_meters(self):\n        for v in self.values():\n            v.clear()\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/model_utils.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nfrom copy import deepcopy\n\nimport torch\nimport torch.nn as nn\nfrom thop import profile\n\n__all__ = [\n    \"fuse_conv_and_bn\", \"fuse_model\", \"get_model_info\", \"replace_module\",\n]\n\n\ndef get_model_info(model, tsize):\n\n    stride = 64\n    img = torch.zeros((1, 3, stride, stride), device=next(model.parameters()).device)\n    flops, params = profile(deepcopy(model), inputs=(img,), verbose=False)\n    params /= 1e6\n    flops /= 1e9\n    flops *= tsize[0] * tsize[1] / stride / stride * 2  # Gflops\n    info = \"Params: {:.2f}M, Gflops: {:.2f}\".format(params, flops)\n    return info\n\n\ndef fuse_conv_and_bn(conv, bn):\n    # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/\n    fusedconv = (\n        nn.Conv2d(\n            conv.in_channels,\n            conv.out_channels,\n            kernel_size=conv.kernel_size,\n            stride=conv.stride,\n            padding=conv.padding,\n            groups=conv.groups,\n            bias=True,\n        )\n        .requires_grad_(False)\n        .to(conv.weight.device)\n    )\n\n    # prepare filters\n    w_conv = conv.weight.clone().view(conv.out_channels, -1)\n    w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))\n    fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape))\n\n    # prepare spatial bias\n    b_conv = (\n        torch.zeros(conv.weight.size(0), device=conv.weight.device)\n        if conv.bias is None\n        else conv.bias\n    )\n    b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(\n        torch.sqrt(bn.running_var + bn.eps)\n    )\n    fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)\n\n    return fusedconv\n\n\ndef fuse_model(model):\n    from yolox.models.network_blocks import BaseConv\n\n    for m in model.modules():\n        if type(m) is 
BaseConv and hasattr(m, \"bn\"):\n            m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv\n            delattr(m, \"bn\")  # remove batchnorm\n            m.forward = m.fuseforward  # update forward\n    return model\n\n\ndef replace_module(module, replaced_module_type, new_module_type, replace_func=None):\n    \"\"\"\n    Replace modules of a given type in `module` with a new type. Mostly used in deployment.\n\n    Args:\n        module (nn.Module): model to apply the replace operation on.\n        replaced_module_type (Type): module type to be replaced.\n        new_module_type (Type): module type to replace with.\n        replace_func (function): python function that describes the replace logic. Default value: None.\n\n    Returns:\n        model (nn.Module): module with the replacement applied.\n    \"\"\"\n    def default_replace_func(replaced_module_type, new_module_type):\n        return new_module_type()\n\n    if replace_func is None:\n        replace_func = default_replace_func\n\n    model = module\n    if isinstance(module, replaced_module_type):\n        model = replace_func(replaced_module_type, new_module_type)\n    else:  # recursively replace\n        for name, child in module.named_children():\n            # propagate replace_func so a custom replace logic also applies to children\n            new_child = replace_module(\n                child, replaced_module_type, new_module_type, replace_func\n            )\n            if new_child is not child:  # child is already replaced\n                model.add_module(name, new_child)\n\n    return model\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/setup_env.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport os\nimport subprocess\n\nimport cv2\n\n__all__ = [\"configure_nccl\", \"configure_module\"]\n\n\ndef configure_nccl():\n    \"\"\"Configure multi-machine environment variables of NCCL.\"\"\"\n    os.environ[\"NCCL_LAUNCH_MODE\"] = \"PARALLEL\"\n    os.environ[\"NCCL_IB_HCA\"] = subprocess.getoutput(\n        \"pushd /sys/class/infiniband/ > /dev/null; for i in mlx5_*; \"\n        \"do cat $i/ports/1/gid_attrs/types/* 2>/dev/null \"\n        \"| grep v >/dev/null && echo $i ; done; popd > /dev/null\"\n    )\n    os.environ[\"NCCL_IB_GID_INDEX\"] = \"3\"\n    os.environ[\"NCCL_IB_TC\"] = \"106\"\n\n\ndef configure_module(ulimit_value=8192):\n    \"\"\"\n    Configure pytorch module environment. setting of ulimit and cv2 will be set.\n\n    Args:\n        ulimit_value(int): default open file number on linux. Default value: 8192.\n    \"\"\"\n    # system setting\n    try:\n        import resource\n        rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n        resource.setrlimit(resource.RLIMIT_NOFILE, (ulimit_value, rlimit[1]))\n    except Exception:\n        # Exception might be raised in Windows OS or rlimit reaches max limit number.\n        # However, set rlimit value might not be necessary.\n        pass\n\n    # cv2\n    # multiprocess might be harmful on performance of torch dataloader\n    os.environ[\"OPENCV_OPENCL_RUNTIME\"] = \"disabled\"\n    try:\n        cv2.setNumThreads(0)\n        cv2.ocl.setUseOpenCL(False)\n    except Exception:\n        # cv2 version mismatch might rasie exceptions.\n        pass\n"
  },
  {
    "path": "detector/YOLOX/yolox/utils/visualize.py",
    "content": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n# Copyright (c) 2014-2021 Megvii Inc. All rights reserved.\n\nimport cv2\nimport numpy as np\n\n__all__ = [\"vis\"]\n\n\ndef vis(img, boxes, scores, cls_ids, conf=0.5, class_names=None):\n\n    for i in range(len(boxes)):\n        box = boxes[i]\n        cls_id = int(cls_ids[i])\n        score = scores[i]\n        if score < conf:\n            continue\n        x0 = int(box[0])\n        y0 = int(box[1])\n        x1 = int(box[2])\n        y1 = int(box[3])\n\n        color = (_COLORS[cls_id] * 255).astype(np.uint8).tolist()\n        text = '{}:{:.1f}%'.format(class_names[cls_id], score * 100)\n        txt_color = (0, 0, 0) if np.mean(_COLORS[cls_id]) > 0.5 else (255, 255, 255)\n        font = cv2.FONT_HERSHEY_SIMPLEX\n\n        txt_size = cv2.getTextSize(text, font, 0.4, 1)[0]\n        cv2.rectangle(img, (x0, y0), (x1, y1), color, 2)\n\n        txt_bk_color = (_COLORS[cls_id] * 255 * 0.7).astype(np.uint8).tolist()\n        cv2.rectangle(\n            img,\n            (x0, y0 + 1),\n            (x0 + txt_size[0] + 1, y0 + int(1.5*txt_size[1])),\n            txt_bk_color,\n            -1\n        )\n        cv2.putText(img, text, (x0, y0 + txt_size[1]), font, 0.4, txt_color, thickness=1)\n\n    return img\n\n\n_COLORS = np.array(\n    [\n        0.000, 0.447, 0.741,\n        0.850, 0.325, 0.098,\n        0.929, 0.694, 0.125,\n        0.494, 0.184, 0.556,\n        0.466, 0.674, 0.188,\n        0.301, 0.745, 0.933,\n        0.635, 0.078, 0.184,\n        0.300, 0.300, 0.300,\n        0.600, 0.600, 0.600,\n        1.000, 0.000, 0.000,\n        1.000, 0.500, 0.000,\n        0.749, 0.749, 0.000,\n        0.000, 1.000, 0.000,\n        0.000, 0.000, 1.000,\n        0.667, 0.000, 1.000,\n        0.333, 0.333, 0.000,\n        0.333, 0.667, 0.000,\n        0.333, 1.000, 0.000,\n        0.667, 0.333, 0.000,\n        0.667, 0.667, 0.000,\n        0.667, 1.000, 0.000,\n        1.000, 0.333, 0.000,\n        1.000, 
0.667, 0.000,\n        1.000, 1.000, 0.000,\n        0.000, 0.333, 0.500,\n        0.000, 0.667, 0.500,\n        0.000, 1.000, 0.500,\n        0.333, 0.000, 0.500,\n        0.333, 0.333, 0.500,\n        0.333, 0.667, 0.500,\n        0.333, 1.000, 0.500,\n        0.667, 0.000, 0.500,\n        0.667, 0.333, 0.500,\n        0.667, 0.667, 0.500,\n        0.667, 1.000, 0.500,\n        1.000, 0.000, 0.500,\n        1.000, 0.333, 0.500,\n        1.000, 0.667, 0.500,\n        1.000, 1.000, 0.500,\n        0.000, 0.333, 1.000,\n        0.000, 0.667, 1.000,\n        0.000, 1.000, 1.000,\n        0.333, 0.000, 1.000,\n        0.333, 0.333, 1.000,\n        0.333, 0.667, 1.000,\n        0.333, 1.000, 1.000,\n        0.667, 0.000, 1.000,\n        0.667, 0.333, 1.000,\n        0.667, 0.667, 1.000,\n        0.667, 1.000, 1.000,\n        1.000, 0.000, 1.000,\n        1.000, 0.333, 1.000,\n        1.000, 0.667, 1.000,\n        0.333, 0.000, 0.000,\n        0.500, 0.000, 0.000,\n        0.667, 0.000, 0.000,\n        0.833, 0.000, 0.000,\n        1.000, 0.000, 0.000,\n        0.000, 0.167, 0.000,\n        0.000, 0.333, 0.000,\n        0.000, 0.500, 0.000,\n        0.000, 0.667, 0.000,\n        0.000, 0.833, 0.000,\n        0.000, 1.000, 0.000,\n        0.000, 0.000, 0.167,\n        0.000, 0.000, 0.333,\n        0.000, 0.000, 0.500,\n        0.000, 0.000, 0.667,\n        0.000, 0.000, 0.833,\n        0.000, 0.000, 1.000,\n        0.000, 0.000, 0.000,\n        0.143, 0.143, 0.143,\n        0.286, 0.286, 0.286,\n        0.429, 0.429, 0.429,\n        0.571, 0.571, 0.571,\n        0.714, 0.714, 0.714,\n        0.857, 0.857, 0.857,\n        0.000, 0.447, 0.741,\n        0.314, 0.717, 0.741,\n        0.50, 0.5, 0\n    ]\n).astype(np.float32).reshape(-1, 3)\n"
  },
  {
    "path": "docs/DATA.md",
    "content": "# Dataset preparation\n\n### Introduction\n\nIn this documentation we introduce how to prepara standard datasets to benchmark UniTrack on different tasks. We consider five tasks: Single Object Tracking (SOT) on OTB 2015 dataset, Video Object Segmentation (VOS) on DAVIS 2017 dataset, Multiple Object Tracking (MOT) on MOT 16 dataset, Multiple Object Tracking and Segmentation (MOTS) on MOTS dataset, Pose Tracking on PoseTrack 2018 dataset. Among them, SOT and VOS are propagation-type tasks, in which only one observation (usually in the very first frame) is given to indicate the object to be tracked, while others are association-type tasks that support to make use of observations in every timestamp given by an automatic detector . \n\n- **Table of contents**\n  - [Prepare OTB 2015 datset for SOT](#OTB-2015-dataset-for-SOT)\n  - [Prepare DAVIS 2017 datset for VOS](#DAVIS-2017-dataset-for-VOS)\n  - [Prepare MOT 16 datset for MOT](#MOT-16-dataset-for-MOT)\n  - [Prepare MOTS dataset for MOTS](#MOTS-dataset-for-MOTS)\n  - [Prepare PoseTrack 2018 for Pose Tracking](#PoseTrack-2018-dataset-for-Pose-Tracking)\n\n\n### OTB 2015 dataset for SOT\n\nThe [original source](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html) of OTB benchmark does not provide a convenient way to download the entire dataset. Lukily [Gluon CV](https://cv.gluon.ai/contents.html) provides a [script](https://cv.gluon.ai/_downloads/719c5c0d73fb22deacc84b4557b6fd5f/otb2015.py) for easy downloading all OTB video sequences. This script  includes both dataset downloading and data processing，simply run this script:\n\n`python otb2015.py`\n\nand you will get all the 100 sequences of OTB. After this, you need to copy Jogging to Jogging-1 and Jogging-2, and copy Skating2 to Skating2-1 and Skating2-2 or using softlink, following [STVIR](https://github.com/STVIR/pysot/tree/master/testing_dataset). 
Finally, please download OTB2015.json \\[[Google Drive](https://drive.google.com/file/d/1jHYta8wsSid9DwcWl5hcNJNPzgQMcI_r/view?usp=sharing)\\]\\[[Baidu NetDisk](https://pan.baidu.com/s/1d9oR7ZEHq4V5i6bLpEllng)\\] (code:k93s) and place it under the OTB-2015 root. The structure should look like this:\n\n```\n${OTB_ROOT}\n   |—— OTB2015.json\n   |—— Basketball/\n   |—— Biker/\n   ...\n```\n\n### DAVIS 2017 dataset for VOS\n\nDownload DAVIS 2017 trainval via [this link](https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-trainval-480p.zip) and unzip it. No other processing is needed.\n\n### MOT 16 dataset for MOT\n\n1. Download the MOT-16 dataset from [this page](https://motchallenge.net/data/MOT16/).\n2. Get detections for the MOT-16 sequences. Here we offer three options:\n\n   - Using ground-truth detections. This is feasible only in the *train* split (we do not have labels for the *test* split). Run `python tools/gen_mot16_gt.py` to prepare the detections.\n   - Using the three kinds of official detections (DPM/FRCNN/SDP) provided by the MOT Challenge. The detections are from the MOT-17 dataset (the video sequences in MOT-17 are the same as in MOT-16), so you may need to download MOT-17 and unzip it under the same root as MOT-16 first. Then run `python tools/gen_mot16_label17.py` to prepare the detections. This option can generate detections for both the *train* and *test* splits.\n   - [Recommended] Using custom detectors to generate detection results. You need to first run the detector on the MOT-16 dataset and output a series of `MOT16-XX.txt` files that store the detection results, where XX ranges from 01 to 14. Each line in a `.txt` file represents a bounding box in the format `frame_index, x, y, w, h, confidence`, where `frame_index` starts from 1. We provide an example generated by the FairMOT detector \\[[Google Drive](https://drive.google.com/file/d/113xks7UIZ6LeBY_CTlOh5Z_OQ0hiP551/view?usp=sharing)\\]/\\[[Baidu NetDisk](https://pan.baidu.com/s/1-E9SN4rWWpZRT1ermcX0JA)\\] (code:k93s).\n
Finally, run `tools/gen_mot16_fairmot.py` to prepare the detections.\n   \n       Conveniently, you can also download `.txt` results of other trackers from the [MOT-16 leaderboard](https://motchallenge.net/results/MOT16/) or of other detectors from the [MOT-17 DET leaderboard](https://motchallenge.net/results/MOT17Det/), and use their detection results with very few modifications to `tools/gen_mot16_fairmot.py`.\n\n### MOTS dataset for MOTS\n\n1. Download the MOTS dataset from [this page](https://motchallenge.net/data/MOTS/).\n2. Get segmentation masks for the MOTS sequences. Here we offer two options:\n   - Using ground-truth annotations. This is feasible only in the *train* split (we do not have labels for the *test* split). Run `python tools/gen_mots_gt.py` to prepare the masks.\n   - [Recommended] Using custom models to generate segmentation masks. You need to first run the model on the MOTS dataset and output a series of `MOTS-XX.txt` files that store the mask results. See [here](https://motchallenge.net/instructions/) for the output format. Note that we do not use the track \"id\" field, so you can output any number as a placeholder. You can also download results of off-the-shelf trackers and use their masks; for example, simply download the raw data of the COSTA tracker's results at the bottom of [this page](https://motchallenge.net/method/MOTS=87&chl=17). Finally, run `tools/gen_mot16_fairmot.py` to prepare the masks (the script keeps the track \"id\" field, but again, note that the track \"id\" field is ignored when running tracking with UniTrack).\n\n### PoseTrack 2018 dataset for Pose Tracking\n\n1. Register and download the [PoseTrack 2018 dataset](https://posetrack.net/).\n2. Get single-frame pose estimation results. Run a single-frame pose estimator and save the results in a `$OBS_NAME.json` file. Results should be formatted as instructed [here](https://github.com/leonid-pishchulin/poseval). The \"track_id\" field is ignored, so you can output any number as a placeholder.\n3. Put the `.json` file under the `$POSETRACK_ROOT/obs/$SPLIT/` folder, where `SPLIT` is either \"train\" or \"val\"."
  },
  {
    "path": "docs/INSTALL.md",
    "content": "# Installation\n\n### Requirements\n* NVIDIA device with CUDA \n* Python 3.7+\n* PyTorch 1.7.0+\n* torchvision 0.8.0+\n* Other Python packages in requirements.txt\n\n### Code installation\n\n#### (Recommended) Install with conda\n\nInstall Miniconda from [here](https://repo.anaconda.com/miniconda/) (choose the Miniconda3-latest-(OS)-(platform) installer).\n```shell\n# 1. Create a conda virtual environment.\nconda create -n unitrack python=3.7 -y\nconda activate unitrack\n\n# 2. Install PyTorch\nconda install pytorch==1.7.0 torchvision cudatoolkit\n\n# 3. Get UniTrack\ngit clone https://github.com/Zhongdao/UniTrack.git\ncd UniTrack\n\n# 4. Install other dependencies\nconda install --file requirements.txt \npip install cython_bbox==0.1.3\npython setup.py\n\n```\n\n"
  },
  {
    "path": "docs/MODELZOO.md",
    "content": "# MODEL ZOO\n\n\n### Prepare appearance models\nOne beneficial usage of UniTrack is that it allows easy evaluation of pre-trained models (as appearance models) on diverse tracking tasks. So far we have tested the following models, most of them pre-trained with self-supervision:\n\n| Pre-training Method | Architecture |Link | \n| :---: | :---: | :---: |\n| ImageNet classification | ResNet-50 | torchvision |\n| InsDis| ResNet-50 | [Google Drive](https://www.dropbox.com/sh/87d24jqsl6ra7t2/AACcsSIt1_Njv7GsmsuzZ6Sta/InsDis.pth)|\n| MoCo-V1| ResNet-50 |[Google Drive](https://dl.fbaipublicfiles.com/moco/moco_checkpoints/moco_v1_200ep/moco_v1_200ep_pretrain.pth.tar)|\n| PCL-V1| ResNet-50 |[Google Drive](https://storage.googleapis.com/sfr-pcl-data-research/PCL_checkpoint/PCL_v1_epoch200.pth.tar)|\n| PIRL| ResNet-50 | [Google Drive](https://www.dropbox.com/sh/87d24jqsl6ra7t2/AADN4jKnvTI0U5oT6hTmQZz8a/PIRL.pth)|\n| PCL-V2| ResNet-50 | [Google Drive](https://storage.googleapis.com/sfr-pcl-data-research/PCL_checkpoint/PCL_v2_epoch200.pth.tar)|\n| SimCLR-V1| ResNet-50 |[Google Drive](https://drive.google.com/file/d/1RdB2KaaXOtU2_t-Uk_HQbxMZgSGUcy6c/view?usp=sharing)|\n| MoCo-V2| ResNet-50 |[Google Drive](https://dl.fbaipublicfiles.com/moco/moco_checkpoints/moco_v2_800ep/moco_v2_800ep_pretrain.pth.tar)|\n| SimCLR-V2| ResNet-50 |[Google Drive](https://drive.google.com/file/d/1NSCrZ7MaejJaOS7yA3URtbubxLR-fz5X/view?usp=sharing)|\n| SeLa-V2| ResNet-50 |[Google Drive](https://dl.fbaipublicfiles.com/deepcluster/selav2_400ep_pretrain.pth.tar)|\n| InfoMin| ResNet-50 | [Google Drive](https://www.dropbox.com/sh/87d24jqsl6ra7t2/AAAzMTynP3Qc8mIE4XWkgILUa/InfoMin_800.pth)|\n| BarlowTwins| ResNet-50 | [Google Drive](https://drive.google.com/file/d/1iXfAiAZP3Lrc-Hk4QHUzO-mk4M4fElQw/view?usp=sharing)|\n| BYOL| ResNet-50 | [Google Drive](https://storage.googleapis.com/deepmind-byol/checkpoints/pretrain_res50x1.pkl)|\n| DeepCluster-V2| ResNet-50 |[Google Drive](https://dl.fbaipublicfiles.com/deepcluster/deepclusterv2_800ep_pretrain.pth.tar)|\n| SwAV| ResNet-50 |[Google Drive](https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar)|\n| PixPro| ResNet-50 |[Google Drive](https://drive.google.com/file/d/1u172sUx-kldPvrZzZxijciBHLMiSJp46/view?usp=sharing)|\n| DetCo| ResNet-50 | [Google Drive](https://drive.google.com/file/d/1ahyX8HEbLUZXS-9Jr2GIMWDEZdqWe1GV/view?usp=sharing)|\n| TimeCycle| ResNet-50 |[Google Drive](https://drive.google.com/file/d/1WUYLkfowJ853RG_9OhbrKpb3r-cc-cOA/view?usp=sharing)|\n| ImageNet classification | ResNet-18 |torchvision|\n| Colorization + memory| ResNet-18 | [Google Drive](https://drive.google.com/file/d/1gWPRgYH70t-9uwj0EId826ZxFdosbzQv/view?usp=sharing)|\n| UVC| ResNet-18 |[Google Drive](https://drive.google.com/file/d/1nl0ehS8mvE5PUBOPLQSCWtrmFmS0-dPX/view?usp=sharing)|\n| CRW| ResNet-18 |[Google Drive](https://drive.google.com/file/d/1C1ujnpFRijJqVD3PV7qzyYwGSWoS9fLb/view?usp=sharing)|\n\nAfter downloading an appearance model, please place it under `$UNITRACK_ROOT/weights`. A large part of the model checkpoints are adapted from [ssl-transfer](https://github.com/linusericsson/ssl-transfer), many thanks to [linusericsson](https://github.com/linusericsson)!\n\n### Test your own pre-trained models as appearance models\nIf your model uses the standard ResNet architecture, you can directly test it with UniTrack without additional modifications. If you use ResNet but the parameter names are not consistent with the standard naming, you can simply rename the parameter groups and load your weights into the standard ResNet. If you are using another architecture, it is still possible to test it with UniTrack; you just need a small hack: make sure the model outputs 8x down-sampled feature maps. You can check out `models/hrnet.py` for an example. \n"
  },
  {
    "path": "docs/RESULTS.md",
    "content": "### Quantitative results\n\n**Single Object Tracking (SOT) on OTB-2015**\n\n| Method | SiamFC | SiamRPN | SiamRPN++ | UDT* | UDT+* | LUDT* | LUDT+* | UniTrack_XCorr* | UniTrack_DCF* |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| AUC | 58.2 | 63.7 | 69.6 | 59.4 | 63.2 | 60.2 | 63.9 | 55.5 | 61.8|\n\n \\* indicates non-supervised methods\n\n**Video Object Segmentation (VOS) on DAVIS-2017 *val* split**\n\n| Method | SiamMask | FeelVOS | STM | Colorization* | TimeCycle* | UVC* | CRW* | VFS* | UniTrack* |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| J-mean | 54.3 | 63.7 | 79.2 | 34.6 | 40.1 | 56.7 | 64.8 | 66.5 | 58.4|\n\n \\* indicates non-supervised methods \n\n**Multiple Object Tracking (MOT) on MOT-16 [*test* set *private detector* track](https://motchallenge.net/method/MOT=3856&chl=5)**\n\n| Method | POI | DeepSORT-2 | JDE | CTrack | TubeTK | TraDes | CSTrack | FairMOT* | UniTrack* |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| IDF-1 | 65.1 | 62.2 | 55.8 | 57.2 | 62.2 | 64.7 | 71.8 | 72.8 | 71.8|\n| IDs | 805 | 781 | 1544 | 1897 | 1236 | 1144 | 1071 | 1074 | 683 |\n| MOTA | 66.1 | 61.4 | 64.4 | 67.6 | 66.9 | 70.1 | 70.7 | 74.9 | 74.7|\n\n \\* indicates methods using the same detections\n\n**Multiple Object Tracking and Segmentation (MOTS) on MOTS challenge [*test* set](https://motchallenge.net/method/MOTS=109&chl=17)**\n\n| Method | TrackRCNN | SORTS | PointTrack | GMPHD | COSTA_st* | UniTrack* |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | \n| IDF-1 | 42.7 | 57.3 | 42.9 | 65.6 | 70.3 | 67.2 |\n| IDs | 567 | 577 | 868 | 566 | 421 | 622 | \n| sMOTA | 40.6 | 55.0 | 62.3 | 69.0 | 70.2 | 68.9 | \n\n \\* indicates methods using the same detections\n\n**Pose Tracking on PoseTrack-2018 *val* split**\n\n| Method | MDPN | OpenSVAI | Miracle | KeyTrack | LightTrack* | UniTrack* |\n| :---: | :---: | :---: | :---: | :---: | 
:---: | :---: | \n| IDF-1 | - | - | - | - | 52.2 | 73.2 |\n| IDs | - | - | - | - | 3024 | 6760 | \n| sMOTA | 50.6 | 62.4 | 64.0 | 66.6 | 64.8 | 63.5 | \n\n \\* indicates methods using the same detections\n"
  },
  {
    "path": "docs/RUN.md",
    "content": "# Run evaluation on multiple tasks\n\n### Prepare config file\n\nTo evaluate an appearance model on multiple tasks, you first need to prepare a config file `${EXP_NAME}.yaml` and place it under the `config/` folder. We provide several example config files: \n   1. `crw_resnet18_s3.yaml` : Self-supervised model trained with Contrastive Random Walk [1], ResNet-18 stage-3 features.\n   2. `imagenet_resnet18_s3.yaml`: ImageNet pre-trained model, ResNet-18 stage-3 features.\n   3. `crw_resnet18_s3_womotion.yaml` : Same model as 1, but motion cues are discarded in association-type tasks. This way, distinctions between different representations are better highlighted and potential confounding factors are avoided.\n   4. `imagenet_resnet18_s3_womotion.yaml`: Same model as 2, but motion cues are discarded in association-type tasks.\n   \n\n### Note for the config file\n\nWhen you are testing a new model, please take care to make sure the following fields in the config file are correct:\n\n```yaml\ncommon:\n    # Experiment name, an identifier.\n    exp_name: crw_resnet18_s3   \n    \n    # Model type, currently supported:\n    # ['imagenet18', 'imagenet50', 'imagenet101', 'random18', 'random50',\n    # 'imagenet_resnext50', 'imagenet_resnext101',\n    # 'byol', 'deepcluster-v2', 'infomin', 'insdis', 'moco-v1', 'moco-v2',\n    # 'pcl-v1', 'pcl-v2','pirl', 'sela-v2', 'swav', 'simclr-v1', 'simclr-v2',\n    # 'pixpro', 'detco', 'barlowtwins', 'crw', 'uvc', 'timecycle']\n    model_type: crw                    \n\n    # For the ResNet architecture, removing layer4 means outputting layer3 features\n    remove_layers: ['layer4']             \n    \n    # Be careful about this\n    im_mean: [0.4914, 0.4822, 0.4465]        \n    im_std: [0.2023, 0.1994, 0.2010]\n    \n    # Path to the model weights.\n    resume: 'weights/crw.pth'\n    \nmot:\n    # The single-frame observations should correspond to a folder ${mot_root}/obs/${obid}\n    obid: 'FairMOT'\n    # Dataset root\n    mot_root: '/home/wangzd/datasets/MOT/MOT16'\n    # There is no validation set, so by default we test on the train split. \n    \nmots:\n    # The single-frame observations should correspond to a folder ${mots_root}/obs/${obid}\n    obid: 'COSTA'\n    # Dataset root\n    mots_root: '/home/wangzd/datasets/GOT/MOTS'\n    # There is no validation set, so by default we test on the train split.  \n    \nposetrack:\n    # The single-frame observations should correspond to a folder ${data_root}/obs/val/${obid}\n    obid: 'lighttrack_MSRA152'\n    # Dataset root\n    data_root: '/home/wangzd/datasets/GOT/Posetrack2018'\n    # There is a validation set; by default we test on the val split.\n    split: 'val'\n                                 \n```\n\nFor other arguments, just refer to `crw_resnet18_s3.yaml` or `crw_resnet18_s3_womotion.yaml`.\n\n\n### Run\n\nSupposing the current path is `$UNITRACK_ROOT`, you can run multiple tasks with a single command:\n\n```shell\n./eval.sh $EXP_NAME $GPU_ID\n```\n\nYou will obtain a set of summaries of quantitative results under `results/summary`, as well as visualizations of all results under `results`.\n\n\n[1]. Jabri, Allan, Andrew Owens, and Alexei A. Efros. \"Space-time correspondence as a contrastive random walk.\" In NeurIPS, 2020."
  },
  {
    "path": "eval/convert_davis.py",
    "content": "import os\nimport numpy as np\nimport cv2\nimport os.path as osp\nimport pdb\n\nfrom PIL import Image\n\njpglist = []\n\nimport palette\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument('-o', '--out_folder', default='/scratch/ajabri/davis_results/', type=str)\nparser.add_argument('-i', '--in_folder', default='/scratch/ajabri/davis_results_masks/', type=str)\nparser.add_argument('-d', '--dataset', default='/scratch/ajabri/data/davis/', type=str)\n\nargs = parser.parse_args()\n\nannotations_folder = args.dataset + '/Annotations/480p/'\nf1 = open(args.dataset + '/ImageSets/2017/val.txt', 'r')\nfor line in f1:\n    line = line[:-1]\n    jpglist.append(line)\nf1.close()\n\nout_folder = args.out_folder\ncurrent_folder = args.in_folder\n\nif not os.path.exists(out_folder):\n    os.makedirs(out_folder)\n\npalette = palette.tensor.astype(np.uint8)\ndef color2id(c):\n    return np.arange(0, palette.shape[0])[np.all(palette == c, axis=-1)]\n\ndef convert_dir(i):\n    fname = jpglist[i]\n    gtfolder = osp.join(annotations_folder,fname)\n    outfolder = osp.join(out_folder,fname)\n\n    if not os.path.exists(outfolder):\n        os.mkdir(outfolder)\n\n    files = [_ for _ in os.listdir(gtfolder) if _[-4:] == '.png']\n    lblimg  = cv2.imread(osp.join(gtfolder,\"{:05d}.png\".format(0)))\n    height = lblimg.shape[0]\n    width  = lblimg.shape[1]\n\n    for j in range(len(files)):\n        outname = osp.join(outfolder, \"{:05d}.png\".format(j))\n        inname  = osp.join(current_folder,  str(i) + '_' + str(j) + '_mask.png')\n\n        lblimg  = cv2.imread(inname)[:,:,::-1]\n        flat_lblimg = lblimg.reshape(-1, 3)\n        lblidx  = np.zeros((lblimg.shape[0], lblimg.shape[1]))\n        lblidx2  = np.zeros((lblimg.shape[0], lblimg.shape[1]))\n\n        colors = np.unique(flat_lblimg, axis=0)\n\n        for c in colors:\n            cid = color2id(c)\n            if len(cid) > 0:\n                lblidx2[np.all(lblimg == c, axis=-1)] = 
cid\n\n        lblidx = lblidx2\n\n        lblidx = lblidx.astype(np.uint8)\n        lblidx = cv2.resize(lblidx, (width, height), interpolation=cv2.INTER_NEAREST)\n        lblidx = lblidx.astype(np.uint8)\n\n        im = Image.fromarray(lblidx)\n        im.putpalette(palette.ravel())\n        im.save(outname, format='PNG')\n\nimport multiprocessing as mp\npool = mp.Pool(10)\nresults = pool.map(convert_dir, range(len(jpglist)))\n"
  },
  {
    "path": "eval/davis_dummy.txt",
    "content": "/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/dance-twirl /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/dance-twirl\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/dog /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/dog\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/dogs-jump /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/dogs-jump\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/drift-chicane /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/drift-chicane\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/drift-straight /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/drift-straight\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/goat /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/goat\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/gold-fish /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/gold-fish\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/horsejump-high /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/horsejump-high\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/india /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/india\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/judo /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/judo\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/kite-surf /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/kite-surf\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/lab-coat /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/lab-coat\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/libby /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/libby\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/loading /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/loading\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/mbike-trick /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/mbike-trick\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/motocross-jump /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/motocross-jump\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/paragliding-launch 
/home/wangzd/datasets/uvc/DAVIS/Annotations/480p/paragliding-launch\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/parkour /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/parkour\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/pigs /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/pigs\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/scooter-black /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/scooter-black\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/shooting /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/shooting\n/home/wangzd/datasets/uvc/DAVIS/JPEGImages/480p/soapbox /home/wangzd/datasets/uvc/DAVIS/Annotations/480p/soapbox\n"
  },
  {
    "path": "eval/eval_mot.py",
    "content": "import os\r\nimport numpy as np\r\nimport copy\r\nimport motmetrics as mm\r\nmm.lap.default_solver = 'lap'\r\nfrom utils.io import read_mot_results, unzip_objs\r\n\r\n\r\nclass Evaluator(object):\r\n\r\n    def __init__(self, data_root, seq_name, data_type='mot'):\r\n        self.data_root = data_root\r\n        self.seq_name = seq_name\r\n        self.data_type = data_type\r\n\r\n        self.load_annotations()\r\n        self.reset_accumulator()\r\n\r\n    def load_annotations(self):\r\n        assert self.data_type == 'mot'\r\n\r\n        gt_filename = os.path.join(self.data_root, self.seq_name, 'gt', 'gt.txt')\r\n        self.gt_frame_dict = read_mot_results(gt_filename, self.data_type, is_gt=True)\r\n        self.gt_ignore_frame_dict = read_mot_results(gt_filename, self.data_type, is_ignore=True)\r\n\r\n    def reset_accumulator(self):\r\n        self.acc = mm.MOTAccumulator(auto_id=True)\r\n\r\n    def eval_frame(self, frame_id, trk_tlwhs, trk_ids, rtn_events=False):\r\n        # results\r\n        trk_tlwhs = np.copy(trk_tlwhs)\r\n        trk_ids = np.copy(trk_ids)\r\n\r\n        # gts\r\n        gt_objs = self.gt_frame_dict.get(frame_id, [])\r\n        gt_tlwhs, gt_ids = unzip_objs(gt_objs)[:2]\r\n\r\n        # ignore boxes\r\n        ignore_objs = self.gt_ignore_frame_dict.get(frame_id, [])\r\n        ignore_tlwhs = unzip_objs(ignore_objs)[0]\r\n\r\n\r\n        # remove ignored results\r\n        keep = np.ones(len(trk_tlwhs), dtype=bool)\r\n        iou_distance = mm.distances.iou_matrix(ignore_tlwhs, trk_tlwhs, max_iou=0.5)\r\n        if len(iou_distance) > 0:\r\n            match_is, match_js = mm.lap.linear_sum_assignment(iou_distance)\r\n            match_is, match_js = map(lambda a: np.asarray(a, dtype=int), [match_is, match_js])\r\n            match_ious = iou_distance[match_is, match_js]\r\n\r\n            match_js = np.asarray(match_js, dtype=int)\r\n            match_js = match_js[np.logical_not(np.isnan(match_ious))]\r\n           
 keep[match_js] = False\r\n            trk_tlwhs = trk_tlwhs[keep]\r\n            trk_ids = trk_ids[keep]\r\n\r\n        # get distance matrix\r\n        iou_distance = mm.distances.iou_matrix(gt_tlwhs, trk_tlwhs, max_iou=0.5)\r\n\r\n        # acc\r\n        self.acc.update(gt_ids, trk_ids, iou_distance)\r\n\r\n        if rtn_events and iou_distance.size > 0 and hasattr(self.acc, 'last_mot_events'):\r\n            events = self.acc.last_mot_events  # only supported by https://github.com/longcw/py-motmetrics\r\n        else:\r\n            events = None\r\n        return events\r\n\r\n    def eval_file(self, filename):\r\n        self.reset_accumulator()\r\n\r\n        result_frame_dict = read_mot_results(filename, self.data_type, is_gt=False)\r\n        frames = sorted(list(set(self.gt_frame_dict.keys()) | set(result_frame_dict.keys())))\r\n        for frame_id in frames:\r\n            trk_objs = result_frame_dict.get(frame_id, [])\r\n            trk_tlwhs, trk_ids = unzip_objs(trk_objs)[:2]\r\n            self.eval_frame(frame_id, trk_tlwhs, trk_ids, rtn_events=False)\r\n\r\n        return self.acc\r\n\r\n    @staticmethod\r\n    def get_summary(accs, names, metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1', 'precision', 'recall')):\r\n        names = copy.deepcopy(names)\r\n        if metrics is None:\r\n            metrics = mm.metrics.motchallenge_metrics\r\n        metrics = copy.deepcopy(metrics)\r\n\r\n        mh = mm.metrics.create()\r\n        summary = mh.compute_many(\r\n            accs,\r\n            metrics=metrics,\r\n            names=names,\r\n            generate_overall=True\r\n        )\r\n\r\n        return summary\r\n\r\n    @staticmethod\r\n    def save_summary(summary, filename):\r\n        import pandas as pd\r\n        writer = pd.ExcelWriter(filename)\r\n        summary.to_excel(writer)\r\n        writer.save()\r\n"
  },
  {
    "path": "eval/eval_pck.py",
    "content": "from scipy.io import loadmat\nfrom numpy import transpose\nimport skimage.io as sio\nimport numpy as np\nimport os\nimport cv2\n\nimport scipy.io as sio\n\nfilelist = '/home/wangzd/datasets/GOT/JHMDB/split.txt'\nsrc_folder = 'results/poseprop/womotion_resnet18_s3/'\n\nf = open(filelist, 'r')\ngts = []\nheights = []\nwidths  = []\npreds = []\njnt_visible_set = []\nhuman_boxes = []\n\nfeat_res = 40\n\nfor cnt, line in enumerate(f):\n    rows = line.strip().split()\n    lblpath = rows[0] #+ '/joint_positions.mat'\n    lbls_mat = sio.loadmat(lblpath)\n    lbls_coord = lbls_mat['pos_img']\n    lbls_coord = lbls_coord - 1\n\n    gts.append(lbls_coord)\n\n    imgpath = rows[1] + '/00001.png'\n    img = cv2.imread(imgpath)\n    heights.append(img.shape[0])\n    widths.append(img.shape[1])\n\nf.close()\n\n# gts = gts[0: 200]\nprint('read gt')\n\n# read prediction results\nfor i in range(len(gts)):\n\n    # import pdb; pdb.set_trace()\n    predfile = src_folder + str(i) + '.dat'\n    predres  = np.load(predfile, allow_pickle=True)\n\n    # import pdb; pdb.set_trace()\n\n    jnt_visible = np.ones((predres.shape[1], predres.shape[2]))\n\n    for j in range(predres.shape[1]):\n        for k in range(predres.shape[2]):\n            if predres[0, j, k] < 0:\n                jnt_visible[j, k] = 0\n\n    jnt_visible_set.append(jnt_visible)\n\n    now_height = heights[i]\n    now_width  = widths[i]\n    predres[0, :, :] = predres[0, :, :] / float(feat_res) * now_width\n    predres[1, :, :] = predres[1, :, :] / float(feat_res) * now_height\n\n    preds.append(predres)\n\nprint('read prediction')\n\n# compute the human box for normalization\nfor i in range(len(gts)):\n\n    nowgt = gts[i]\n    jnt_visible = jnt_visible_set[i]\n    now_boxes = np.zeros(nowgt.shape[2])\n\n    for k in range(nowgt.shape[2]):\n        minx = 1e6\n        maxx = -1\n        miny = 1e6\n        maxy = -1\n        for j in range(nowgt.shape[1]):\n\n            if jnt_visible[j, k] == 0:\n     
           continue\n\n            minx = np.min([minx, nowgt[0, j, k]])\n            miny = np.min([miny, nowgt[1, j, k]])\n            maxx = np.max([maxx, nowgt[0, j, k]])\n            maxy = np.max([maxy, nowgt[1, j, k]])\n\n        now_boxes[k] = 0.6 * np.linalg.norm(np.subtract([maxx,maxy],[minx,miny]))\n        # now_boxes[k] = np.max([maxy - miny, maxx - minx])\n\n\n    human_boxes.append(now_boxes)\n\nprint('done box')\n\n# compute distances\ndistAll = {}\nfor pidx in range(15):\n    distAll[pidx] = np.zeros([0,0])\n\nfor i in range(len(gts)):\n\n    predres = preds[i]\n    nowgt = gts[i]\n    now_boxes = human_boxes[i]\n    jnt_visible = jnt_visible_set[i]\n    for j in range(nowgt.shape[1]):\n        for k in range(nowgt.shape[2]):\n\n            if jnt_visible[j, k] == 0:\n                continue\n\n            if k == 0:\n                continue\n\n            predx = predres[0, j, k]\n            predy = predres[1, j, k]\n            gtx   = nowgt[0, j, k]\n            gty   = nowgt[1, j, k]\n            d = np.linalg.norm(np.subtract([predx, predy],[gtx, gty]))\n            dNorm = d / now_boxes[k]\n\n            distAll[j] = np.append(distAll[j],[[dNorm]])\n\nprint('done distances')\n\ndef computePCK(distAll,distThresh):\n\n    pckAll = np.zeros([len(distAll)+1,1])\n    nCorrect = 0\n    nTotal = 0\n    for pidx in range(len(distAll)):\n        idxs = np.argwhere(distAll[pidx] <= distThresh)\n        pck = 100.0*len(idxs)/len(distAll[pidx])\n        pckAll[pidx,0] = pck\n        nCorrect += len(idxs)\n        nTotal   += len(distAll[pidx])\n\n    pckAll[len(distAll),0] = np.mean(pckAll[0 :len(distAll),0]) # 100.0*nCorrect/nTotal\n\n    return pckAll\n\nrng = [0.1, 0.2, 0.3, 0.4, 0.5]\n\nfor i in range(len(rng)):\n    pckall = computePCK(distAll, rng[i])\n    print(str(rng[i]) + ': ' + str(pckall[-1]) )\n    # print(pckall[-1])\n"
  },
  {
    "path": "eval/mots/Evaluator.py",
    "content": "import pdb\nimport sys, os\nsys.path.append(os.getcwd())\nimport argparse\n\nimport traceback\nimport time\nimport pickle\nimport pandas as pd\nimport glob\nfrom os import path\nimport numpy as np\n\n\n\nclass Evaluator(object):\n    \"\"\" The `Evaluator` class runs evaluation per sequence and computes the overall performance on the benchmark\"\"\"\n    def __init__(self):\n        pass\n\n    def run(self, benchmark_name = None ,  gt_dir = None, res_dir = None, save_pkl = None, eval_mode = \"train\", seqmaps_dir = \"seqmaps\"):\n        \"\"\"\n        Params\n        -----\n        benchmark_name: Name of benchmark, e.g. MOT17\n        gt_dir: directory of folders with gt data, including the c-files with sequences\n        res_dir: directory with result files\n            <seq1>.txt\n            <seq2>.txt\n            ...\n            <seq3>.txt\n        eval_mode:\n        seqmaps_dir:\n        seq_file: File name of file containing sequences, e.g. 'c10-train.txt'\n        save_pkl: path to output directory for final results\n        \"\"\"\n\n        start_time = time.time()\n\n        self.benchmark_gt_dir = gt_dir\n        self.seq_file =  \"{}-{}.txt\".format(benchmark_name, eval_mode)\n\n        res_dir = res_dir\n        self.benchmark_name = benchmark_name\n        self.seqmaps_dir = seqmaps_dir\n\n        self.mode = eval_mode\n\n        self.datadir = os.path.join(gt_dir, self.mode)\n\n        # getting names of sequences to evaluate\n        error_traceback = \"\"\n        assert self.mode in [\"train\", \"test\", \"all\"], \"mode: %s not valid\" % self.mode\n\n        print(\"Evaluating Benchmark: %s\" % self.benchmark_name)\n\n        # ======================================================\n        # Handle evaluation\n        # ======================================================\n\n\n\n        # load list of all sequences\n        self.sequences = os.listdir(self.datadir) \n\n        self.gtfiles = []\n        self.tsfiles = []\n        for seq in self.sequences:\n            gtf = os.path.join(self.benchmark_gt_dir, self.mode ,seq, 'gt/gt.txt')\n            if path.exists(gtf): self.gtfiles.append(gtf)\n            else: raise Exception(\"Ground Truth %s missing\" % gtf)\n            tsf = os.path.join( res_dir, \"%s.txt\" % seq)\n            if path.exists(tsf): self.tsfiles.append(tsf)\n            else: raise Exception(\"Result file %s missing\" % tsf)\n\n\n        print('Found {} ground truth files and {} test files.'.format(len(self.gtfiles), len(self.tsfiles)))\n        print( self.tsfiles)\n\n        self.MULTIPROCESSING = False \n        MAX_NR_CORES = 10\n        # set number of cores for multiprocessing\n        if self.MULTIPROCESSING: self.NR_CORES = np.minimum( MAX_NR_CORES, len(self.tsfiles))\n        try:\n\n            \"\"\" run evaluation \"\"\"\n            results = self.eval()\n\n            # calculate overall results\n            results_attributes = self.Overall_Results.metrics.keys()\n\n            for attr in results_attributes:\n                \"\"\" accumulate evaluation values over all sequences \"\"\"\n                try:\n                    self.Overall_Results.__dict__[attr] = sum(obj.__dict__[attr] for obj in self.results)\n                except:\n                    pass\n            cache_attributes = self.Overall_Results.cache_dict.keys()\n            for attr in cache_attributes:\n                \"\"\" accumulate cache values over all sequences \"\"\"\n                try:\n                    self.Overall_Results.__dict__[attr] = self.Overall_Results.cache_dict[attr]['func']([obj.__dict__[attr] for obj in self.results])\n                except:\n                    pass\n            print(\"evaluation successful\")\n\n\n            # Compute clearmot metrics for overall and all sequences\n            for res in self.results:\n                res.compute_clearmot()\n            self.Overall_Results.compute_clearmot()\n\n\n            
self.accumulate_df(type = \"mail\")\n            self.failed = False\n            error = None\n\n\n        except Exception as e:\n            print(str(traceback.format_exc()))\n            print (\"<br> Evaluation failed! <br>\")\n\n            error_traceback+= str(traceback.format_exc())\n            self.failed = True\n            self.summary = None\n\n        end_time=time.time()\n\n        self.duration = (end_time - start_time)/60.\n\n\n\n        # ======================================================\n        # Collect evaluation errors\n        # ======================================================\n        if self.failed:\n\n            startExc = error_traceback.split(\"<exc>\")\n            error_traceback = [m.split(\"<!exc>\")[0] for m in startExc[1:]]\n\n            error = \"\"\n\n            for err in error_traceback:\n                error+=\"Error: %s\" % err\n\n\n            print( \"Error Message\", error)\n            self.error = error\n            print(\"ERROR %s\" % error)\n\n\n        print (\"Evaluation Finished\")\n        print(\"Your Results\")\n        print(self.render_summary())\n        # save results if path set\n        if save_pkl:\n\n\n            self.Overall_Results.save_dict(os.path.join( save_pkl, \"%s-%s-overall.pkl\" % (self.benchmark_name, self.mode)))\n            for res in self.results:\n                res.save_dict(os.path.join( save_pkl, \"%s-%s-%s.pkl\" % (self.benchmark_name, self.mode, res.seqName)))\n            print(\"Successfully save results\")\n\n        return self.Overall_Results, self.results\n    def eval(self):\n        raise NotImplementedError\n\n\n    def accumulate_df(self, type = None):\n        \"\"\" create accumulated dataframe with all sequences \"\"\"\n        for k, res in enumerate(self.results):\n            res.to_dataframe(display_name = True, type = type )\n            if k == 0: summary = res.df\n            else: summary = summary.append(res.df)\n        summary = 
summary.sort_index()\n\n\n        self.Overall_Results.to_dataframe(display_name = True, type = type )\n\n        self.summary = summary.append(self.Overall_Results.df)\n\n\n\n    def render_summary( self, buf = None):\n        \"\"\"Render metrics summary to console friendly tabular output.\n\n        Params\n        ------\n        summary : pd.DataFrame\n            Dataframe containing summaries in rows.\n\n        Kwargs\n        ------\n        buf : StringIO-like, optional\n            Buffer to write to\n        formatters : dict, optional\n            Dicionary defining custom formatters for individual metrics.\n            I.e `{'mota': '{:.2%}'.format}`. You can get preset formatters\n            from MetricsHost.formatters\n        namemap : dict, optional\n            Dictionary defining new metric names for display. I.e\n            `{'num_false_positives': 'FP'}`.\n\n        Returns\n        -------\n        string\n            Formatted string\n        \"\"\"\n\n        output = self.summary.to_string(\n            buf=buf,\n            formatters=self.Overall_Results.formatters,\n            justify = \"left\"\n        )\n\n        return output\ndef run_metrics( metricObject, args ):\n    \"\"\" Runs metric for individual sequences\n    Params:\n    -----\n    metricObject: metricObject that has computer_compute_metrics_per_sequence function\n    args: dictionary with args for evaluation function\n    \"\"\"\n    metricObject.compute_metrics_per_sequence(**args)\n    return metricObject\n\n\nif __name__ == \"__main__\":\n    Evaluator()\n"
  },
  {
    "path": "eval/mots/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Visual Computing Institute\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "eval/mots/MOTSVisualization.py",
    "content": "import sys\nfrom Visualize import Visualizer\nfrom mots_common.io import load_sequences, load_seqmap, load_txt\nimport pycocotools.mask as rletools\nimport glob\nimport os\nimport cv2\nimport colorsys\nimport numpy as np\n\ndef apply_mask(image, mask, color, alpha=0.5):\n\t\"\"\"\n\t Apply the given mask to the image.\n\t\"\"\"\n\tfor c in range(3):\n\t\timage[:, :, c] = np.where(mask == 1, image[:, :, c] * (1 - alpha) + alpha * color[c],image[:, :, c])\n\treturn image\n\n\nclass MOTSVisualizer(Visualizer):\n\n\tdef load(self, FilePath):\n\t\treturn load_txt(FilePath)\n\n\n\n\tdef drawResults(self, im = None, t = 0):\n\t\tself.draw_boxes = False\n\n\n\t\tfor obj in self.resFile[t]:\n\n\t\t\tcolor = self.colors[obj.track_id % len(self.colors)]\n\n\t\t\tcolor = tuple([int(c*255) for c in color])\n\n\n\t\t\tif obj.class_id == 1:\n\t\t\t\tcategory_name = \"Car\"\n\t\t\telif obj.class_id == 2:\n\t\t\t\tcategory_name = \"Ped\"\n\t\t\telse:\n\t\t\t\tcategory_name = \"Ignore\"\n\t\t\t\tcolor = (0.7*255, 0.7*255, 0.7*255)\n\n\t\t\tif obj.class_id == 1 or obj.class_id == 2:  # Don't show boxes or ids for ignore regions\n\t\t\t\tx, y, w, h = rletools.toBbox(obj.mask)\n\n\t\t\t\tpt1=(int(x),int(y))\n\t\t\t\tpt2=(int(x+w),int(y+h))\n\n\t\t\t\tcategory_name += \":\" + str(obj.track_id)\n\t\t\t\tcv2.putText(im, category_name, (int(x + 0.5 * w), int( y + 0.5 * h)), cv2.FONT_HERSHEY_TRIPLEX,self.imScale,color,thickness =2)\n\t\t\t\tif self.draw_boxes:\n\t\t\t\t\tcv2.rectangle(im,pt1,pt2,color,2)\n\n\n\t\t\tbinary_mask = rletools.decode(obj.mask)\n\n\t\t\tim = apply_mask(im, binary_mask, color)\n\t\treturn im\n\nif __name__ == \"__main__\":\n\tvisualizer = MOTSVisualizer(\n\tseqName = \"MOTS20-11\",\n\tFilePath =\"data/MOTS/train/MOTS20-11/gt/gt.txt\",\n\timage_dir = \"data/MOTS/train/MOTS20-11/img1\",\n\tmode = \"gt\",\n\toutput_dir  = \"vid\")\n\n\tvisualizer.generateVideo(\n\t        displayTime = True,\n\t        displayName = \"seg\",\n\t        showOccluder = 
True,\n\t        fps = 25 )\n"
  },
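The `apply_mask` helper in `MOTSVisualization.py` overlays a segmentation mask by per-channel alpha blending wherever the binary mask equals 1, i.e. `out = (1 - alpha) * pixel + alpha * color` on masked pixels only. A self-contained NumPy sketch of that rule (the toy image and values are illustrative, not from the repo):

```python
import numpy as np

def apply_mask(image, mask, color, alpha=0.5):
    # blend `color` into `image` wherever mask == 1, channel by channel;
    # unmasked pixels are passed through unchanged
    image = image.astype(float)
    for c in range(3):
        image[:, :, c] = np.where(mask == 1,
                                  image[:, :, c] * (1 - alpha) + alpha * color[c],
                                  image[:, :, c])
    return image

# toy 2x2 black image with one masked pixel and a red overlay
img = np.zeros((2, 2, 3))
mask = np.array([[1, 0], [0, 0]])
out = apply_mask(img, mask, color=(255, 0, 0), alpha=0.5)
# out[0, 0] -> [127.5, 0, 0]; all other pixels stay black
```

Because the blend is applied in place per channel, the same call works on OpenCV BGR frames as long as `color` is given in the same channel order.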
  {
    "path": "eval/mots/MOTS_metrics.py",
    "content": "import math\nfrom collections import defaultdict\nimport pycocotools.mask as rletools\nfrom mots_common.io import SegmentedObject\nfrom mots_common.io import load_seqmap, load_sequences, load_txt\nfrom Metrics import Metrics\nimport os, sys\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment as linear_assignment\n\n# we only consider pedestrians\nIGNORE_CLASS = 10\nCLASS_ID = 2\n\n\n\ndef mask_iou(a, b, criterion=\"union\"):\n  is_crowd = criterion != \"union\"\n  return rletools.iou([a.mask], [b.mask], [is_crowd])[0][0]\n\n\n\n\nclass MOTSMetrics(Metrics):\n\tdef __init__(self, seqName = None):\n\t\tsuper().__init__()\n\t\tif seqName:\n\t\t\tself.seqName = seqName\n\t\telse: self.seqName = 0\n\t\t# Evaluation metrics\n\n\t\tself.register(name = \"sMOTSA\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"MOTSA\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"MOTSP\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"MOTSAL\", formatter='{:.2f}'.format,  write_mail = False)\n\t\tself.register(name = \"MODSA\", formatter='{:.2f}'.format,  write_mail = False)\n\t\tself.register(name = \"MODSP\", formatter='{:.2f}'.format,  write_mail = False)\n\n\n\t\tself.register(name = \"IDF1\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"IDTP\", formatter='{:.2f}'.format, write_mail = False)\n\n\n\t\tself.register(name = \"MT\", formatter='{:.0f}'.format)\n\t\tself.register(name = \"PT\", formatter='{:.0f}'.format, write_mail = False )\n\t\tself.register(name = \"ML\", formatter='{:.0f}'.format)\n\n\t\tself.register(name = \"MTR\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"PTR\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"MLR\", formatter='{:.2f}'.format)\n\n\n\t\tself.register(name = \"n_gt_trajectories\", display_name = \"GT\",formatter='{:.0f}'.format, write_mail = True)\n\n\t\tself.register(name = \"tp\", display_name=\"TP\", formatter='{:.0f}'.format)  # number of true 
positives\n\t\tself.register(name = \"fp\", display_name=\"FP\", formatter='{:.0f}'.format) # number of false positives\n\t\tself.register(name = \"fn\", display_name=\"FN\", formatter='{:.0f}'.format)  # number of false negatives\n\n\t\tself.register(name = \"recall\", display_name=\"Rcll\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"precision\", display_name=\"Prcn\", formatter='{:.2f}'.format)\n\n\t\tself.register(name = \"F1\", display_name=\"F1\", formatter='{:.2f}'.format, write_mail = False)\n\t\tself.register(name = \"FAR\", formatter='{:.2f}'.format, write_mail = False)\n\t\tself.register(name = \"total_cost\", display_name=\"COST\", formatter='{:.0f}'.format, write_mail = False)\n\t\tself.register(name = \"fragments\", display_name=\"FM\", formatter='{:.0f}'.format)\n\t\tself.register(name = \"fragments_rel\", display_name=\"FMR\", formatter='{:.2f}'.format)\n\t\tself.register(name = \"id_switches\", display_name=\"IDSW\", formatter='{:.0f}'.format)\n\n\t\tself.register(name = \"id_switches_rel\", display_name=\"IDSWR\", formatter='{:.1f}'.format)\n\n\t\tself.register(name = \"n_tr_trajectories\", display_name = \"TR\", formatter='{:.0f}'.format, write_mail = False)\n\t\tself.register(name = \"total_num_frames\", display_name=\"TOTAL_NUM\", formatter='{:.0f}'.format, write_mail = False)\n\n\n\t\tself.register(name = \"n_gt\", display_name = \"GT_OBJ\", formatter='{:.0f}'.format, write_mail = False) # number of ground truth detections\n\t\tself.register(name = \"n_tr\", display_name = \"TR_OBJ\", formatter='{:.0f}'.format, write_mail = False) # number of tracker detections minus ignored tracker detections\n\t\tself.register(name = \"n_itr\",display_name=\"IGNORED\",  formatter='{:.0f}'.format, write_mail = False)  # number of ignored tracker detections\n\t\tself.register(name = \"id_n_tr\", display_name = \"ID_TR_OBJ\", formatter='{:.0f}'.format, write_mail = False)\n\t\tself.register(name = \"nbox_gt\", display_name = \"NBOX_GT\", 
formatter='{:.0f}'.format, write_mail = False)\n\n\n\n\n\t\n\tdef compute_clearmot(self):\n\t    # precision/recall etc.\n\t    if (self.fp + self.tp) == 0 or (self.tp + self.fn) == 0:\n\t        self.recall = 0.\n\t        self.precision = 0.\n\t    else:\n\t        self.recall = self.tp / float(self.tp + self.fn) * 100.\n\t        self.precision = self.tp / float(self.fp + self.tp) * 100.\n\t    if (self.recall + self.precision) == 0:\n\t        self.F1 = 0.\n\t    else:\n\t        self.F1 = (2. * (self.precision * self.recall) / (self.precision + self.recall) ) * 100.\n\t    if self.total_num_frames == 0:\n\t        self.FAR = \"n/a\"\n\t    else:\n\t        self.FAR = (self.fp / float(self.total_num_frames) ) \n\t    # compute CLEARMOT\n\t    if self.n_gt == 0:\n\t        self.MOTSA = -float(\"inf\")\n\t        self.MODSA = -float(\"inf\")\n\t        self.sMOTSA = -float(\"inf\")\n\t    else:\n\t        self.MOTSA = (1 - (self.fn + self.fp + self.id_switches) / float(self.n_gt) ) * 100.\n\t        self.MODSA = (1 - (self.fn + self.fp) / float(self.n_gt)) * 100.\n\t        self.sMOTSA = ((self.total_cost - self.fp - self.id_switches) / float(self.n_gt)) * 100.\n\t    if self.tp == 0:\n\t        self.MOTSP = float(\"inf\")\n\t    else:\n\t        self.MOTSP = self.total_cost / float(self.tp) * 100.\n\t    if self.n_gt != 0:\n\t        if self.id_switches == 0:\n\t            self.MOTSAL = (1 - (self.fn + self.fp + self.id_switches) / float(self.n_gt)) * 100.\n\t        else:\n\t            self.MOTSAL = (1 - (self.fn + self.fp + math.log10(self.id_switches)) / float(\n\t            self.n_gt))*100.\n\t    else:\n\t        self.MOTSAL = -float(\"inf\")\n\t\n\t    if self.total_num_frames == 0:\n\t        self.MODSP = \"n/a\"\n\t    else:\n\t        self.MODSP = self.MODSP / float(self.total_num_frames) * 100.\n\n\n\n\n\t    if self.n_gt_trajectories == 0:\n\t        self.MTR = 0.\n\t        self.PTR = 0.\n\t        self.MLR = 0.\n\t    else:\n\t        self.MTR = 
self.MT * 100. / float(self.n_gt_trajectories)\n\t        self.PTR = self.PT * 100. / float(self.n_gt_trajectories)\n\t        self.MLR = self.ML * 100. / float(self.n_gt_trajectories)\n\n\n\t    # calculate relative IDSW and FM\n\n\t    if self.recall != 0:\n\t        self.id_switches_rel = self.id_switches/self.recall*100\n\t        self.fragments_rel = self.fragments/self.recall* 100 \n\n\t    else:\n\t        self.id_switches_rel = float(\"inf\")\n\t        self.fragments_rel = float(\"inf\")\n\n\n\n\n\t    # IDF1\n\t    if self.n_gt_trajectories == 0:\n\t         self.IDF1 = 0.\n\t    else:\n\t         self.IDF1 = (2 * self.IDTP) / (self.nbox_gt + self.id_n_tr) * 100.\n\t    return self\n\n\n\n\n\t# go through all frames and associate ground truth and tracker results\n\n\tdef compute_metrics_per_sequence(self, sequence, pred_file, gt_file, gtDataDir, benchmark_name,\n\t\t\t\t\t\tignore_class = IGNORE_CLASS, class_id = CLASS_ID, overlap_function = mask_iou):\n\n\t\tgt_seq = load_txt(gt_file)\n\t\tresults_seq = load_txt(pred_file)\n\n\n\t\t# load information about sequence\n\t\timport configparser\n\t\tconfig = configparser.ConfigParser()\n\n\n\t\tconfig.read(os.path.join(gtDataDir,  \"seqinfo.ini\"))\n\n\t\tmax_frames = int(config['Sequence'][\"seqlength\"])\n\n\t\tself.total_num_frames = max_frames + 1\n\n\t\tseq_trajectories = defaultdict(list)\n\n\t\t# To count number of track ids\n\t\tgt_track_ids = set()\n\t\ttr_track_ids = set()\n\n\t\t# Statistics over the current sequence\n\t\tseqtp = 0\n\t\tseqfn = 0\n\t\tseqfp = 0\n\t\tseqitr = 0\n\n\t\tn_gts = 0\n\t\tn_trs = 0\n\n\t\tframe_to_ignore_region = {}\n\n\t\t# Iterate over frames in this sequence\n\t\tfor f in range(max_frames + 1):\n\t\t\tg = []\n\t\t\tdc = []\n\t\t\tt = []\n\n\t\t\tif f in gt_seq:\n\t\t\t\tfor obj in gt_seq[f]:\n\t\t\t\t\tif obj.class_id == ignore_class:\n\t\t\t\t\t\tdc.append(obj)\n\t\t\t\t\telif obj.class_id == 
class_id:\n\t\t\t\t\t\tg.append(obj)\n\t\t\t\t\t\tgt_track_ids.add(obj.track_id)\n\t\t\tif f in results_seq:\n\t\t\t\tfor obj in results_seq[f]:\n\t\t\t\t\tif obj.class_id == class_id:\n\t\t\t\t\t\tt.append(obj)\n\t\t\t\t\t\ttr_track_ids.add(obj.track_id)\n\n\t\t\t# Handle ignore regions as one large ignore region\n\t\t\tdc = SegmentedObject(mask=rletools.merge([d.mask for d in dc], intersect=False),\n\t\t\t                                         class_id=ignore_class, track_id=ignore_class)\n\t\t\tframe_to_ignore_region[f] = dc\n\n\t\t\ttracks_valid = [False for _ in range(len(t))]\n\n\t\t\t# counting total number of ground truth and tracker objects\n\t\t\tself.n_gt += len(g)\n\t\t\tself.n_tr += len(t)\n\n\t\t\tn_gts += len(g)\n\t\t\tn_trs += len(t)\n\n\t\t\t# tmp variables for sanity checks and MODSP computation\n\t\t\ttmptp = 0\n\t\t\ttmpfp = 0\n\t\t\ttmpfn = 0\n\t\t\ttmpc = 0    # this will sum up the overlaps for all true positives\n\t\t\ttmpcs = [0] * len(g)    # this will save the overlaps for all true positives\n\t\t\t# the reason is that some true positives might be ignored\n\t\t\t# later such that the corrsponding overlaps can\n\t\t\t# be subtracted from tmpc for MODSP computation\n\n\t\t\t# To associate, simply take for each ground truth the (unique!) 
detection with IoU>0.5 if it exists\n\n\t\t\t# all ground truth trajectories are initially not associated\n\t\t\t# extend groundtruth trajectories lists (merge lists)\n\t\t\tfor gg in g:\n\t\t\t\tseq_trajectories[gg.track_id].append(-1)\n\t\t\tnum_associations = 0\n\t\t\tfor row, gg in enumerate(g):\n\t\t\t\tfor col, tt in enumerate(t):\n\t\t\t\t\tc = overlap_function(gg, tt)\n\t\t\t\t\tif c > 0.5:\n\t\t\t\t\t\ttracks_valid[col] = True\n\t\t\t\t\t\tself.total_cost += c\n\t\t\t\t\t\ttmpc += c\n\t\t\t\t\t\ttmpcs[row] = c\n\t\t\t\t\t\tseq_trajectories[g[row].track_id][-1] = t[col].track_id\n\n\t\t\t\t\t\t# true positives are only valid associations\n\t\t\t\t\t\tself.tp += 1\n\t\t\t\t\t\ttmptp += 1\n\n\t\t\t\t\t\tnum_associations += 1\n\n\t\t\t# associate tracker and DontCare areas\n\t\t\t# ignore tracker in neighboring classes\n\t\t\tnignoredtracker = 0    # number of ignored tracker detections\n\n\t\t\tfor i, tt in enumerate(t):\n\t\t\t\toverlap = overlap_function(tt, dc, \"a\")\n\t\t\t\tif overlap > 0.5 and not tracks_valid[i]:\n\t\t\t\t\tnignoredtracker += 1\n\n\t\t\t# count the number of ignored tracker objects\n\t\t\tself.n_itr += nignoredtracker\n\n\t\t\t# false negatives = non-associated gt instances\n\t\t\t#\n\t\t\ttmpfn += len(g) - num_associations\n\t\t\tself.fn += len(g) - num_associations\n\n\t\t\t# false positives = tracker instances - associated tracker instances\n\t\t\t# mismatches (mme_t)\n\t\t\ttmpfp += len(t) - tmptp - nignoredtracker\n\t\t\tself.fp += len(t) - tmptp - nignoredtracker\n\t\t\t# tmpfp     = len(t) - tmptp - nignoredtp # == len(t) - (tp - ignoredtp) - ignoredtp\n\t\t\t# self.fp += len(t) - tmptp - nignoredtp\n\n\t\t\t# update sequence data\n\t\t\tseqtp += tmptp\n\t\t\tseqfp += tmpfp\n\t\t\tseqfn += tmpfn\n\t\t\tseqitr += nignoredtracker\n\n\t\t\t# sanity checks\n\t\t\t# - the number of true positives minues ignored true positives\n\t\t\t#     should be greater or equal to 0\n\t\t\t# - the number of false negatives should be greater or 
equal to 0\n\t\t\t# - the number of false positives needs to be greater or equal to 0\n\t\t\t#     otherwise ignored detections might be counted double\n\t\t\t# - the number of counted true positives (plus ignored ones)\n\t\t\t#     and the number of counted false negatives (plus ignored ones)\n\t\t\t#     should match the total number of ground truth objects\n\t\t\t# - the number of counted true positives (plus ignored ones)\n\t\t\t#     and the number of counted false positives\n\t\t\t#     plus the number of ignored tracker detections should\n\t\t\t#     match the total number of tracker detections\n\t\t\tif tmptp < 0:\n\t\t\t\tprint(tmptp)\n\t\t\t\traise NameError(\"Something went wrong! TP is negative\")\n\t\t\tif tmpfn < 0:\n\t\t\t\tprint(tmpfn, len(g), num_associations)\n\t\t\t\traise NameError(\"Something went wrong! FN is negative\")\n\t\t\tif tmpfp < 0:\n\t\t\t\tprint(tmpfp, len(t), tmptp, nignoredtracker)\n\t\t\t\traise NameError(\"Something went wrong! FP is negative\")\n\t\t\tif tmptp + tmpfn != len(g):\n\t\t\t\tprint(\"seqname\", sequence)\n\t\t\t\tprint(\"frame \", f)\n\t\t\t\tprint(\"TP        \", tmptp)\n\t\t\t\tprint(\"FN        \", tmpfn)\n\t\t\t\tprint(\"FP        \", tmpfp)\n\t\t\t\tprint(\"nGT     \", len(g))\n\t\t\t\tprint(\"nAss    \", num_associations)\n\t\t\t\traise NameError(\"Something went wrong! nGroundtruth is not TP+FN\")\n\t\t\tif tmptp + tmpfp + nignoredtracker != len(t):\n\t\t\t\tprint(sequence, f, len(t), tmptp, tmpfp)\n\t\t\t\tprint(num_associations)\n\t\t\t\traise NameError(\"Something went wrong! 
nTracker is not TP+FP\")\n\n\t\t\t# compute MODSP\n\t\t\tMODSP_f = 1\n\t\t\tif tmptp != 0:\n\t\t\t\tMODSP_f = tmpc / float(tmptp)\n\t\t\tself.MODSP += MODSP_f\n\n\t\tassert len(seq_trajectories) == len(gt_track_ids)\n\t\tself.n_gt_trajectories = len(gt_track_ids)\n\t\tself.n_tr_trajectories = len(tr_track_ids)\n\n\t\t# compute MT/PT/ML, fragments, idswitches for all groundtruth trajectories\n\t\tif len(seq_trajectories) != 0:\n\t\t\tfor g in seq_trajectories.values():\n\t\t\t\t# all frames of this gt trajectory are not assigned to any detections\n\t\t\t\tif all([this == -1 for this in g]):\n\t\t\t\t\tself.ML += 1\n\t\t\t\t\tcontinue\n\t\t\t\t# compute tracked frames in trajectory\n\t\t\t\tlast_id = g[0]\n\t\t\t\t# first detection (necessary to be in gt_trajectories) is always tracked\n\t\t\t\ttracked = 1 if g[0] >= 0 else 0\n\t\t\t\tfor f in range(1, len(g)):\n\t\t\t\t\tif last_id != g[f] and last_id != -1 and g[f] != -1:\n\t\t\t\t\t\tself.id_switches += 1\n\t\t\t\t\tif f < len(g) - 1 and g[f - 1] != g[f] and last_id != -1 and g[f] != -1 and g[f + 1] != -1:\n\t\t\t\t\t\tself.fragments += 1\n\t\t\t\t\tif g[f] != -1:\n\t\t\t\t\t\ttracked += 1\n\t\t\t\t\t\tlast_id = g[f]\n\t\t\t\t# handle last frame; tracked state is handled in for loop (g[f]!=-1)\n\t\t\t\tif len(g) > 1 and g[f - 1] != g[f] and last_id != -1 and g[f] != -1:\n\t\t\t\t\tself.fragments += 1\n\n\t\t\t\t# compute MT/PT/ML\n\t\t\t\ttracking_ratio = tracked / float(len(g))\n\t\t\t\tif tracking_ratio > 0.8:\n\t\t\t\t\tself.MT += 1\n\t\t\t\telif tracking_ratio < 0.2:\n\t\t\t\t\tself.ML += 1\n\t\t\t\telse:    # 0.2 <= tracking_ratio <= 0.8\n\t\t\t\t\tself.PT += 1\n\n\t\t\t# compute IDF1\n\t\t\tidf1, idtp, nbox_gt, id_n_tr  = compute_idf1_and_idtp_for_sequence(gt_seq, results_seq, gt_track_ids, tr_track_ids, frame_to_ignore_region)\n\t\t\tself.IDTP = idtp\n\t\t\t#self.id_ign = id_ign\n\t\t\tself.id_n_tr = id_n_tr\n\t\t\tself.nbox_gt = nbox_gt\n\t\t\t\n\t\treturn self\n\n\n\n### IDF1 stuff\n### code below adapted 
from https://github.com/shenh10/mot_evaluation/blob/5dd51e5cb7b45992774ea150e4386aa0b02b586f/utils/measurements.py\ndef compute_idf1_and_idtp_for_sequence(frame_to_gt, frame_to_pred, gt_ids, st_ids, frame_to_ignore_region):\n\tframe_to_can_be_ignored = {}\n\tfor t in frame_to_pred.keys():\n\t\tpreds_t = frame_to_pred[t]\n\t\tpred_masks_t = [p.mask for p in preds_t]\n\t\tignore_region_t = frame_to_ignore_region[t].mask\n\t\toverlap = np.squeeze(rletools.iou(pred_masks_t, [ignore_region_t], [1]), axis=1)\n\t\tframe_to_can_be_ignored[t] = overlap > 0.5\n\n\tgt_ids = sorted(gt_ids)\n\tst_ids = sorted(st_ids)\n\tgroundtruth = [[] for _ in gt_ids]\n\tprediction = [[] for _ in st_ids]\n\tfor t, gts_t in frame_to_gt.items():\n\t\tfor gt_t in gts_t:\n\t\t\tif gt_t.track_id in gt_ids:\n\t\t\t\tgroundtruth[gt_ids.index(gt_t.track_id)].append((t, gt_t))\n\tfor t in frame_to_pred.keys():\n\t\tpreds_t = frame_to_pred[t]\n\t\tcan_be_ignored_t = frame_to_can_be_ignored[t]\n\t\tassert len(preds_t) == len(can_be_ignored_t)\n\t\tfor pred_t, ign_t in zip(preds_t, can_be_ignored_t):\n\t\t\tif pred_t.track_id in st_ids:\n\t\t\t\tprediction[st_ids.index(pred_t.track_id)].append((t, pred_t, ign_t))\n\tfor gt in groundtruth:\n\t\tgt.sort(key=lambda x: x[0])\n\tfor pred in prediction:\n\t\tpred.sort(key=lambda x: x[0])\n\n\tn_gt = len(gt_ids)\n\tn_st = len(st_ids)\n\tcost = np.zeros((n_gt + n_st, n_st + n_gt), dtype=float)\n\tcost[n_gt:, :n_st] = sys.maxsize    # float('inf')\n\tcost[:n_gt, n_st:] = sys.maxsize    # float('inf')\n\n\tfp = np.zeros(cost.shape)\n\tfn = np.zeros(cost.shape)\n\tign = np.zeros(cost.shape)\n\t# cost matrix of all trajectory pairs\n\tcost_block, fp_block, fn_block, ign_block = cost_between_gt_pred(groundtruth, prediction)\n\tcost[:n_gt, :n_st] = cost_block\n\tfp[:n_gt, :n_st] = fp_block\n\tfn[:n_gt, :n_st] = fn_block\n\tign[:n_gt, :n_st] = ign_block\n\n\t# computed trajectory match no groundtruth trajectory, FP\n\tfor i in range(n_st):\n\t\t#cost[i + n_gt, i] = 
prediction[i].shape[0]\n\t\t#fp[i + n_gt, i] = prediction[i].shape[0]\n\t\t# don't count fp in case of ignore region\n\t\tfps = sum([~x[2] for x in prediction[i]])\n\t\tig = sum([x[2] for x in prediction[i]])\n\t\tcost[i + n_gt, i] = fps\n\t\tfp[i + n_gt, i] = fps\n\t\tign[i + n_gt, i] = ig\n\n\t# groundtruth trajectory match no computed trajectory, FN\n\tfor i in range(n_gt):\n\t\t#cost[i, i + n_st] = groundtruth[i].shape[0]\n\t\t#fn[i, i + n_st] = groundtruth[i].shape[0]\n\t\tcost[i, i + n_st] = len(groundtruth[i])\n\t\tfn[i, i + n_st] = len(groundtruth[i])\n\t# TODO: add error handling here?\n\tmatched_indices = linear_assignment(cost)\n\t#nbox_gt = sum([groundtruth[i].shape[0] for i in range(n_gt)])\n\t#nbox_st = sum([prediction[i].shape[0] for i in range(n_st)])\n\tnbox_gt = sum([len(groundtruth[i]) for i in range(n_gt)])\n\n\tnbox_st = sum([len(prediction[i]) for i in range(n_st)])\n\n\t#IDFP = 0\n\tIDFN = 0\n\tid_ign = 0\n\tfor matched in zip(*matched_indices):\n\t\t#IDFP += fp[matched[0], matched[1]]\n\t\tIDFN += fn[matched[0], matched[1]]\n\t\t# exclude detections which are not matched and ignored from total count\n\t\tid_ign += ign[matched[0], matched[1]]\n\tid_n_tr = nbox_st - id_ign\n\n\tIDTP = nbox_gt - IDFN\n\n\tIDF1 = 2 * IDTP / (nbox_gt + id_n_tr)\n\n\treturn IDF1, IDTP, nbox_gt, id_n_tr\n\n\n\n\ndef cost_between_gt_pred(groundtruth, prediction):\n\tn_gt = len(groundtruth)\n\tn_st = len(prediction)\n\tcost = np.zeros((n_gt, n_st), dtype=float)\n\tfp = np.zeros((n_gt, n_st), dtype=float)\n\tfn = np.zeros((n_gt, n_st), dtype=float)\n\tign = np.zeros((n_gt, n_st), dtype=float)\n\tfor i in range(n_gt):\n\t\tfor j in range(n_st):\n\t\t\tfp[i, j], fn[i, j], ign[i, j] = cost_between_trajectories(groundtruth[i], prediction[j])\n\t\t\tcost[i, j] = fp[i, j] + fn[i, j]\n\treturn cost, fp, fn, ign\n\n\ndef cost_between_trajectories(traj1, traj2):\n\t#[npoints1, dim1] = traj1.shape\n\t#[npoints2, dim2] = traj2.shape\n\tnpoints1 = len(traj1)\n\tnpoints2 = 
len(traj2)\n\t# find start and end frame of each trajectories\n\t#start1 = traj1[0, 0]\n\t#end1 = traj1[-1, 0]\n\t#start2 = traj2[0, 0]\n\t#end2 = traj2[-1, 0]\n\ttimes1 = [x[0] for x in traj1]\n\ttimes2 = [x[0] for x in traj2]\n\tstart1 = min(times1)\n\tstart2 = min(times2)\n\tend1 = max(times1)\n\tend2 = max(times2)\n\n\tign = [traj2[i][2] for i in range(npoints2)]\n\n\t# check frame overlap\n\t#has_overlap = max(start1, start2) < min(end1, end2)\n\t# careful, changed this to <=, but I think now it's right\n\thas_overlap = max(start1, start2) <= min(end1, end2)\n\tif not has_overlap:\n\t\tfn = npoints1\n\t\t#fp = npoints2\n\t\t# disregard detections which can be ignored\n\t\tfp = sum([~x for x in ign])\n\t\tig = sum(ign)\n\t\treturn fp, fn, ig\n\n\t# gt trajectory mapping to st, check gt missed\n\tmatched_pos1 = corresponding_frame(times1, npoints1, times2, npoints2)\n\t# st trajectory mapping to gt, check computed one false alarms\n\tmatched_pos2 = corresponding_frame(times2, npoints2, times1, npoints1)\n\toverlap1 = compute_overlap(traj1, traj2, matched_pos1)\n\toverlap2 = compute_overlap(traj2, traj1, matched_pos2)\n\t# FN\n\tfn = sum([1 for i in range(npoints1) if overlap1[i] < 0.5])\n\t# FP\n\t# don't count false positive in case of ignore region\n\tunmatched = [overlap2[i] < 0.5 for i in range(npoints2)]\n\t#fp = sum([1 for i in range(npoints2) if overlap2[i] < 0.5 and not traj2[i][2]])\n\tfp = sum([1 for i in range(npoints2) if unmatched[i] and not ign[i]])\n\tig = sum([1 for i in range(npoints2) if unmatched[i] and ign[i]])\n\treturn fp, fn, ig\n\n\ndef corresponding_frame(traj1, len1, traj2, len2):\n\t\"\"\"\n\tFind the matching position in traj2 regarding to traj1\n\tAssume both trajectories in ascending frame ID\n\t\"\"\"\n\tp1, p2 = 0, 0\n\tloc = -1 * np.ones((len1,), dtype=int)\n\twhile p1 < len1 and p2 < len2:\n\t\tif traj1[p1] < traj2[p2]:\n\t\t\tloc[p1] = -1\n\t\t\tp1 += 1\n\t\telif traj1[p1] == traj2[p2]:\n\t\t\tloc[p1] = p2\n\t\t\tp1 += 
1\n\t\t\tp2 += 1\n\t\telse:\n\t\t\tp2 += 1\n\treturn loc\n\n\ndef compute_overlap(traj1, traj2, matched_pos):\n\t\"\"\"\n\tCompute the loss hit in traj2 regarding to traj1\n\t\"\"\"\n\toverlap = np.zeros((len(matched_pos),), dtype=float)\n\tfor i in range(len(matched_pos)):\n\t\tif matched_pos[i] == -1:\n\t\t\tcontinue\n\t\telse:\n\t\t\tmask1 = traj1[i][1].mask\n\t\t\tmask2 = traj2[matched_pos[i]][1].mask\n\t\t\tiou = rletools.iou([mask1], [mask2], [False])[0][0]\n\t\t\toverlap[i] = iou\n\treturn overlap\n\n"
  },
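Once the per-sequence counts (`tp`, `fp`, `fn`, `id_switches`, `total_cost`, `n_gt`) are accumulated, the CLEAR-MOTS formulas in `compute_clearmot()` reduce to a few lines. A minimal sketch (the function name `clearmots_summary` and the toy counts are illustrative, not part of the codebase):

```python
def clearmots_summary(tp, fp, fn, id_switches, total_cost, n_gt):
    """Headline CLEAR-MOTS numbers as computed in compute_clearmot().
    total_cost is the summed mask IoU over all true-positive matches;
    n_gt is the total number of ground-truth masks in the sequence."""
    motsa = (1.0 - (fn + fp + id_switches) / n_gt) * 100.0
    modsa = (1.0 - (fn + fp) / n_gt) * 100.0
    # soft MOTSA: the TP count is replaced by summed IoU, so it also
    # rewards mask quality, and FPs/ID switches are subtracted from it
    smotsa = (total_cost - fp - id_switches) / n_gt * 100.0
    motsp = total_cost / tp * 100.0 if tp else float("inf")
    return motsa, modsa, smotsa, motsp

# toy sequence: 100 GT masks, 90 matched with summed IoU 72.0
motsa, modsa, smotsa, motsp = clearmots_summary(
    tp=90, fp=5, fn=10, id_switches=2, total_cost=72.0, n_gt=100)
# -> MOTSA 83.0, MODSA 85.0, sMOTSA 65.0, MOTSP 80.0 (up to float rounding)
```

Note that sMOTSA is always at most MOTSA for the same counts, since `total_cost <= tp` (each matched IoU is at most 1).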
  {
    "path": "eval/mots/Metrics.py",
    "content": "from __future__ import division\nfrom collections import OrderedDict\nimport pandas as pd\nimport numpy as np\nimport pickle\n\nclass Metrics(object):\n\tdef __init__(self):\n\t\tself.metrics = OrderedDict()\n\t\tself.cache_dict = OrderedDict()\n\n\n\tdef register(self, name=None, value=None, formatter=None,\n\t\t\t\tdisplay_name=None, write_db = True, write_mail = True):\n\t\t\"\"\"Register a new metric.\n\t\tParams\n\t\t------\n\t\tname: str\n\t\t\tName of the metric. Name is used for computation and set as attribute.\n\t\tdisplay_name: str or None\n\t\t\tDisplay name of variable written in db and mail\n\t\tvalue:\n\t\tformatter:\n\t\t\tFormatter to present value of metric. E.g. `'{:.2f}'.format`\n\t\twrite_db: boolean, default = True\n\t\t\tWrite value into db\n\t\twrite_mail: boolean, default = True\n\t\t\tWrite metric in result mail to user\n\t\t\"\"\"\n\t\tassert name is not None, 'No name specified'\n\n\t\tif not value:\n\t\t\tvalue = 0\n\n\t\tself.__setattr__( name, value)\n\n\t\tif not display_name: display_name = name\n\t\tself.metrics[name] = {\n\t\t    'name' : name,\n\t\t    'write_db' : write_db,\n\t\t    'formatter' : formatter,\n\t\t    'write_mail' : write_mail,\n\t\t    'display_name' : display_name\n\t\t}\n\n\tdef cache(self, name=None, value=None, func=None):\n\t\tassert name is not None, 'No name specified'\n\n\n\t\tself.__setattr__( name, value)\n\n\n\t\tself.cache_dict[name] = {\n\t\t    'name' : name,\n\t\t    'func' : func\n\t\t}\n\n\n\n\tdef __call__(self, name):\n\t\treturn self.metrics[name]\n\n\t@property\n\tdef names(self):\n\t\t\"\"\"Returns the name identifiers of all registered metrics.\"\"\"\n\t\treturn [v['name'] for v in self.metrics.values()]\n\n\t@property\n\tdef display_names(self):\n\t\t\"\"\"Returns the display name identifiers of all registered metrics.\"\"\"\n\t\treturn [v['display_name'] for v in self.metrics.values()]\n\n\n\t@property\n\tdef 
formatters(self):\n\t\t\"\"\"Returns the formatters for all metrics that have associated formatters.\"\"\"\n\t\treturn dict([(v['display_name'], v['formatter']) for k, v in self.metrics.items() if not v['formatter'] is None])\n\n\t#@property\n\tdef val_dict(self, display_name = False, object = \"metrics\"):\n\t\t\"\"\"Returns dictionary of all registered values of object name or display_name as key.\n\t\tParams\n        ------\n\n       display_name: boolean, default = False\n            If True, display_name of keys in dict. (default names)\n        object: \"cache\" or \"metrics\", default = \"metrics\"\n\t\t\"\"\"\n\t\tif display_name: key_string = \"display_name\"\n\t\telse: key_string = \"name\"\n\t\tprint(\"object dict: \", object)\n\t\tval_dict = dict([(self.__getattribute__(object)[key][key_string], self.__getattribute__(key)) for key in self.__getattribute__(object).keys() ])\n\t\treturn val_dict\n\n\tdef val_db(self, display_name = True):\n\t\t\"\"\"Returns dictionary of all registered values metrics to write in db.\"\"\"\n\t\tif display_name: key_string = \"display_name\"\n\t\telse: key_string = \"name\"\n\t\tval_dict = dict([(self.metrics[key][key_string], self.__getattribute__(key)) for key in self.metrics.keys() if self.metrics[key][\"write_db\"] ])\n\t\treturn val_dict\n\n\n\tdef val_mail(self, display_name = True):\n\t\t\"\"\"Returns dictionary of all registered values metrics to write in mail.\"\"\"\n\t\tif display_name: key_string = \"display_name\"\n\t\telse: key_string = \"name\"\n\t\tval_dict = dict([(self.metrics[key][key_string], self.__getattribute__(key)) for key in self.metrics.keys() if self.metrics[key][\"write_mail\"] ])\n\t\treturn val_dict\n\n\n\tdef to_dataframe(self, display_name = False, type = None):\n\t\t\"\"\"Returns pandas dataframe of all registered values metrics. 
\"\"\"\n\t\tif type==\"mail\":\n\t\t\tself.df = pd.DataFrame(self.val_mail(display_name = display_name), index=[self.seqName])\n\t\telse:\n\t\t\tself.df = pd.DataFrame(self.val_dict(display_name = display_name), index=[self.seqName])\n\tdef update_values(self, value_dict = None):\n\t\t\"\"\"Updates registered metrics with new values in value_dict. \"\"\"\n\t\tif value_dict:\n\t\t\tfor key, value in value_dict.items() :\n\t\t\t\tif hasattr(self, key):\n\t\t\t\t\tself.__setattr__(key, value)\n\t\t\t\t\n\n\tdef print_type(self, object = \"metrics\"):\n\t\t\"\"\"Prints  variable type of registered metrics or caches. \"\"\"\n\t\tprint( \"OBJECT \" , object)\n\t\tval_dict = self.val_dict(object = object)\n\t\tfor key, item in val_dict.items() :\n\t\t\tprint(\"%s: %s; Shape: %s\" %(key, type(item), np.shape(item)))\n\n\tdef print_results(self):\n\t\t\"\"\"Prints metrics. \"\"\"\n\t\tresult_dict = self.val_dict()\n\t\tfor key, item in result_dict.items():\n\t\t\tprint(key)\n\t\t\tprint(\"%s: %s\" %(key, self.metrics[key][\"formatter\"](item)))\n\n\n\tdef save_dict(self, path):\n\t\t\"\"\"Save value dict to path as pickle file.\"\"\"\n\t\twith open(path, 'wb') as handle:\n\t\t\tpickle.dump(self.__dict__, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n\n\tdef compute_metrics_per_sequence(self):\n\t\traise NotImplementedError\n\n\n"
  },
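`Metrics.register` follows a small registry pattern: each metric lives twice, as a plain instance attribute (so evaluation code can do arithmetic on it directly) and as a metadata record that drives display names and formatters for reporting. A stripped-down sketch of that pattern (`MiniMetrics` is a hypothetical name, not part of the codebase):

```python
from collections import OrderedDict

class MiniMetrics:
    # each metric is stored as a plain attribute for computation
    # and as a metadata record for display, mirroring Metrics.register
    def __init__(self):
        self.metrics = OrderedDict()

    def register(self, name, value=0, formatter=None, display_name=None):
        setattr(self, name, value)
        self.metrics[name] = {"display_name": display_name or name,
                              "formatter": formatter}

    @property
    def formatters(self):
        # map display names to format callables, as Metrics.formatters does
        return {m["display_name"]: m["formatter"]
                for m in self.metrics.values() if m["formatter"] is not None}

m = MiniMetrics()
m.register("MOTSA", formatter="{:.2f}".format)
m.register("tp", display_name="TP", formatter="{:.0f}".format)
m.MOTSA = 83.333          # updated in place during evaluation
# m.formatters["MOTSA"](m.MOTSA) -> "83.33"
```

The `formatters` dict is what `render_summary` hands to `DataFrame.to_string`, which is why it is keyed by display name rather than internal name.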
  {
    "path": "eval/mots/README.md",
    "content": "# MOTS\r\n![MOTS_PIC](https://motchallenge.net/sequenceVideos/MOTS20-11-gt.jpg)\r\n\r\n## Requirements\r\n* Python 3.6.9\r\n* install [requirements.txt](requirements.txt)\r\n\r\n## Usage\r\n1) Run \r\n```\r\npython MOTS/evalMOTS.py\r\n```\r\n\r\n\r\n\r\n## Evaluation\r\nTo run the evaluation for your method please adjust the file ```MOTS/evalMOTS.py``` using the following arguments:\r\n\r\n```benchmark_name```: Name of the benchmark, e.g. MOTS  \r\n```gt_dir```: Directory containing ground truth files in ```<gt_dir>/<sequence>/gt/gt.txt```    \r\n```res_dir```: The folder containing the tracking results. Each one should be saved in a separate .txt file with the name of the respective sequence (see ./res/data)    \r\n```save_pkl```: path to output directory for final results (pickle)  (default: False)  \r\n```eval_mode```: Mode of evaluation out of ```[\"train\", \"test\", \"all\"]``` (default : \"train\")\r\n\r\n```\r\neval.run(\r\n    benchmark_name = benchmark_name,\r\n    seq_file = seq_file,\r\n    gt_dir = gt_dir,\r\n    res_dir = res_dir\r\n        )\r\n```\r\n## Visualization\r\nTo visualize your results or the annotations run\r\n<code>\r\npython MOTS/MOTSVisualization.py\r\n</code>\r\n\r\nInside the script adjust the following values for the ```MOTSVisualizer``` class:\r\n\r\n```seqName```: Name of the sequence  \r\n```FilePath```: Data file  \r\n```image_dir```: Directory containing images  \r\n```mode```: Video mode. 
Options: ```None``` for method results, ```raw``` for data video only, and ```gt``` for annotations  \r\n```output_dir```: Directory for created video and thumbnail images  \r\n\r\nAdditionally, adjust the following values for the ```generateVideo``` function:\r\n\r\n```displayTime```: If true, display frame number (default false)  \r\n```displayName```: Name of the method  \r\n```showOccluder```: If true, show occluder of gt data  \r\n```fps```: Frame rate  \r\n\r\n```\r\nvisualizer = MOTSVisualizer(seqName, FilePath, image_dir, mode, output_dir )\r\nvisualizer.generateVideo(displayTime, displayName, showOccluder, fps  )\r\n```\r\n\r\n## Data Format\r\n\r\nEach line of an annotation txt file is structured like this (where rle means run-length encoding from COCO):\r\n```\r\ntime_frame id class_id img_height img_width rle\r\n```\r\nAn example line from a txt file:\r\n```\r\n52 1005 1 375 1242 WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O102N5K00O1O1N2O110OO2O001O1NTga3\r\n```\r\nMeaning:\r\n<br>time frame 52\r\n<br>object id 1005 (meaning class id is 1, i.e. 
car and instance id is 5)\r\n<br>class id 1\r\n<br>image height 375\r\n<br>image width 1242\r\n<br>rle WSV:2d;1O10000O10000O1O100O100O1O100O1000000000000000O100O...1O1N </p>\r\n\r\nimage height, image width, and rle can be used together to decode a mask using [cocotools](https://github.com/cocodataset/cocoapi).\r\n\r\n## Citation\r\nIf you work with the code and the benchmark, please cite:\r\n\r\n```\r\n@inproceedings{Voigtlaender19CVPR_MOTS,\r\n author = {Paul Voigtlaender and Michael Krause and Aljo\\u{s}a O\\u{s}ep and Jonathon Luiten and Berin Balachandar Gnana Sekar and Andreas Geiger and Bastian Leibe},\r\n title = {{MOTS}: Multi-Object Tracking and Segmentation},\r\n booktitle = {CVPR},\r\n year = {2019},\r\n}\r\n```\r\n\r\n## License\r\nMIT License\r\n\r\n## Contact\r\nIf you find a problem in the code, please open an issue.\r\n\r\nFor general questions, please contact Paul Voigtlaender (voigtlaender@vision.rwth-aachen.de) or Michael Krause (michael.krause@rwth-aachen.de)\r\n"
  },
  {
    "path": "eval/mots/Visualize.py",
    "content": "\nimport cv2\nimport numpy as np\nimport os, sys\nimport glob\nimport colorsys\nimport traceback\n\n\nclass Visualizer(object):\n    def __init__(self,\n                 seqName=None,\n                 mode=None,\n                 FilePath=None,\n                 output_dir=None,\n                 image_dir=None,\n                 metaInfoDir=None):\n\n        assert mode in [None, \"raw\", \"gt\"], \"Not a valid mode. Value has to be None, 'raw', or 'gt'\"\n\n        self.seqName = seqName\n        self.mode = mode\n        self.image_dir = image_dir\n        self.output_dir = output_dir\n        self.FilePath = FilePath\n        self.metaInfoDir = metaInfoDir\n\n    # adapted from https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/visualize.py\n    @property\n    def generate_colors(self):\n        \"\"\"\n        Generate a fixed set of visually distinct colors in HSV space and\n        convert them to RGB.\n        \"\"\"\n        N = 30\n        brightness = 0.7\n        hsv = [(i / N, 1, brightness) for i in range(N)]\n        colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))\n        perm = [15, 13, 25, 12, 19, 8, 22, 24, 29, 17, 28, 20, 2, 27, 11, 26, 21, 4, 3, 18, 9, 5, 14, 1, 16, 0, 23, 7, 6, 10]\n        colors = [colors[idx] for idx in perm]\n        return colors\n\n    def generateVideo(self,\n        outputName=None,\n        extensions=None,\n        displayTime=False,\n        displayName=False,\n        showOccluder=False,\n        fps=25):\n\n        # avoid a mutable default argument\n        if extensions is None:\n            extensions = []\n\n        self.showOccluder = showOccluder\n\n        if outputName is None:\n            outputName = self.seqName\n\n        if not self.mode == \"raw\":\n            # load result/annotation file\n            self.resFile = self.load(self.FilePath)\n\n        # check if the image folder exists\n        if not os.path.isdir(self.image_dir):\n            print(\"imgFolder does not exist\")\n            sys.exit()\n\n        imgFile = \"000001.jpg\"\n        img = os.path.join(self.image_dir, imgFile)\n        print(\"image file\", img)\n        im = cv2.imread(img, 1)\n        height, width, c = im.shape\n\n        self.imScale = 1\n        if width > 800:\n            self.imScale = .5\n            width = int(width * self.imScale)\n            height = int(height * self.imScale)\n\n        # video extension\n        extension = \".mp4\"\n        if self.mode:\n            self.outputNameNoExt = os.path.join(self.output_dir, \"%s-%s\" % (outputName, self.mode))\n        else:\n            self.outputNameNoExt = os.path.join(self.output_dir, outputName)\n        self.outputName = \"%s%s\" % (self.outputNameNoExt, extension)\n\n        self.out = cv2.VideoWriter(self.outputName, cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))\n\n        print(\"Output name: %s\" % self.outputName)\n        self.colors = self.generate_colors\n        t = 0\n\n        for img in sorted(glob.glob(os.path.join(self.image_dir, \"*.jpg\"))):\n            t += 1\n\n            im = cv2.imread(img, 1)\n\n            if not self.mode == \"raw\":\n                try:\n                    im = self.drawResults(im, t)\n                except Exception:\n                    print(traceback.format_exc())\n            im = cv2.resize(im, (0, 0), fx=self.imScale, fy=self.imScale)\n\n            if displayTime:\n                cv2.putText(im, \"%d\" % t, (25, 50), cv2.FONT_HERSHEY_PLAIN, self.imScale * 6, [255, 255, 255], thickness=3)\n            if displayName:\n                text = \"%s: %s\" % (self.seqName, displayName)\n                cv2.putText(im, text, (25, height - 25), cv2.FONT_HERSHEY_DUPLEX, self.imScale * 2, [255, 255, 255], thickness=2)\n\n            if t == 1:\n                cv2.imwrite(\"{}.jpg\".format(self.outputNameNoExt), im)\n                im_mini = cv2.resize(im, (0, 0), fx=0.25, fy=0.25)\n                cv2.imwrite(\"{}-mini.jpg\".format(self.outputNameNoExt), im_mini)\n            self.out.write(im)\n        self.out.release()\n\n        print(\"Finished: %s\" % self.outputName)\n        if extensions:\n            print(\"Convert video to:\", extensions)\n            self.convertVideo(extensions)\n\n    def drawResults(self, image=None, t=None):\n        # implemented by subclasses; draw results for frame t onto the image\n        raise NotImplementedError\n\n    def load(self, FilePath):\n        # implemented by subclasses; load the result or annotation file\n        raise NotImplementedError\n\n    def convertVideo(self, extensions):\n        for ext in extensions:\n            print(self.outputName)\n            outputNameNewExt = \"%s%s\" % (self.outputNameNoExt, ext)\n            print(\"Convert video to: %s\" % outputNameNewExt)\n            command = \"ffmpeg -loglevel warning -y -i %s -c:v libvpx-vp9 -crf 30 -b:v 0 -b:a 128k -c:a libvorbis -cpu-used 8 %s\" % (self.outputName, outputNameNewExt)\n            os.system(command)\n"
  },
  {
    "path": "eval/mots/__init__.py",
    "content": "import os, sys\nsys.path.append(os.path.dirname(os.path.abspath(__file__)))\n"
  },
  {
    "path": "eval/mots/evalMOTS.py",
    "content": "import sys, os\nsys.path.append(os.path.abspath(os.getcwd()))\nfrom MOTS_metrics import MOTSMetrics\nfrom Evaluator import Evaluator, run_metrics\n\nimport multiprocessing as mp\n\n\nclass MOTSEvaluator(Evaluator):\n    def __init__(self):\n        self.type = \"MOTS\"\n\n    def eval(self):\n        arguments = []\n        for seq, res, gt in zip(self.sequences, self.tsfiles, self.gtfiles):\n            arguments.append({\"metricObject\": MOTSMetrics(seq),\n            \"args\": {\"gtDataDir\": os.path.join(self.datadir, seq),\n            \"sequence\": str(seq),\n            \"pred_file\": res,\n            \"gt_file\": gt,\n            \"benchmark_name\": self.benchmark_name}})\n\n        if self.MULTIPROCESSING:\n            # don't reuse the loop variable for the pool: the original code\n            # shadowed the pool with the last AsyncResult before close()/join()\n            pool = mp.Pool(self.NR_CORES)\n            processes = [pool.apply_async(run_metrics, kwds=inp) for inp in arguments]\n            self.results = [proc.get() for proc in processes]\n            pool.close()\n            pool.join()\n        else:\n            self.results = [run_metrics(**inp) for inp in arguments]\n\n        # Sum up results for all sequences\n        self.Overall_Results = MOTSMetrics(\"OVERALL\")\n        return self.results\n\n\nif __name__ == \"__main__\":\n    eval = MOTSEvaluator()\n    benchmark_name = \"MOTS\"\n    gt_dir = \"data/MOTS\"\n    res_dir = \"res/MOTSres\"\n    eval_mode = \"train\"\n    eval.run(\n             benchmark_name = benchmark_name,\n             gt_dir = gt_dir,\n             res_dir = res_dir,\n             eval_mode = eval_mode)\n"
  },
  {
    "path": "eval/mots/mots_common/images_to_txt.py",
    "content": "import sys\nfrom mots_common.io import load_sequences, load_seqmap, write_sequences\n\n\nif __name__ == \"__main__\":\n  if len(sys.argv) != 4:\n    print(\"Usage: python images_to_txt.py gt_img_folder gt_txt_output_folder seqmap\")\n    sys.exit(1)\n\n  gt_img_folder = sys.argv[1]\n  gt_txt_output_folder = sys.argv[2]\n  seqmap_filename = sys.argv[3]\n\n  seqmap, _ = load_seqmap(seqmap_filename)\n  print(\"Loading ground truth images...\")\n  gt = load_sequences(gt_img_folder, seqmap)\n  print(\"Writing ground truth txts...\")\n  write_sequences(gt, gt_txt_output_folder)\n"
  },
  {
    "path": "eval/mots/mots_common/io.py",
    "content": "import PIL.Image as Image\nimport numpy as np\nimport pycocotools.mask as rletools\nimport glob\nimport os\n\n\nclass SegmentedObject:\n  def __init__(self, mask, class_id, track_id):\n    self.mask = mask\n    self.class_id = class_id\n    self.track_id = track_id\n\n\ndef load_sequences(path, seqmap):\n  objects_per_frame_per_sequence = {}\n  for seq in seqmap:\n    print(\"Loading sequence\", seq)\n    seq_path_folder = os.path.join(path, seq)\n    seq_path_txt = os.path.join(path, seq + \".txt\")\n    if os.path.isdir(seq_path_folder):\n      objects_per_frame_per_sequence[seq] = load_images_for_folder(seq_path_folder)\n    elif os.path.exists(seq_path_txt):\n      objects_per_frame_per_sequence[seq] = load_txt(seq_path_txt)\n    else:\n      raise Exception( \"<exc>Can't find data in directory \" + path + \"<!exc>\")\n\n  return objects_per_frame_per_sequence\n\n\ndef load_txt(path):\n  objects_per_frame = {}\n  track_ids_per_frame = {}  # To check that no frame contains two objects with same id\n  combined_mask_per_frame = {}  # To check that no frame contains overlapping masks\n  with open(path, \"r\") as f:\n    for line in f:\n      line = line.strip()\n      fields = line.split(\" \")\n      try:\n        frame = int(fields[0])\n      except:\n        raise Exception(\"<exc>Error in {} in line: {}<!exc>\".format(path.split(\"/\")[-1], line))\n      if frame not in objects_per_frame:\n        objects_per_frame[frame] = []\n      if frame not in track_ids_per_frame:\n        track_ids_per_frame[frame] = set()\n      if int(fields[1]) in track_ids_per_frame[frame]:\n        raise Exception(\"<exc>Multiple objects with track id \" + fields[1] + \" in frame \" + fields[0] + \"<!exc>\")\n      else:\n        track_ids_per_frame[frame].add(int(fields[1]))\n\n      class_id = int(fields[2])\n      if not(class_id == 1 or class_id == 2 or class_id == 10):\n        raise Exception( \"<exc>Unknown object class \" + fields[2] + \"<!exc>\")\n\n      
mask = {'size': [int(fields[3]), int(fields[4])], 'counts': fields[5].encode(encoding='UTF-8')}\n      if frame not in combined_mask_per_frame:\n        combined_mask_per_frame[frame] = mask\n      elif rletools.area(rletools.merge([combined_mask_per_frame[frame], mask], intersect=True)) > 0.0:\n        raise Exception( \"<exc>Objects with overlapping masks in frame \" + fields[0] + \"<!exc>\")\n      else:\n        combined_mask_per_frame[frame] = rletools.merge([combined_mask_per_frame[frame], mask], intersect=False)\n      objects_per_frame[frame].append(SegmentedObject(\n        mask,\n        class_id,\n        int(fields[1])\n      ))\n\n  return objects_per_frame\n\n\ndef load_images_for_folder(path):\n  files = sorted(glob.glob(os.path.join(path, \"*.png\")))\n\n  objects_per_frame = {}\n  for file in files:\n    objects = load_image(file)\n    frame = filename_to_frame_nr(os.path.basename(file))\n    objects_per_frame[frame] = objects\n\n  return objects_per_frame\n\n\ndef filename_to_frame_nr(filename):\n  assert len(filename) == 10, \"Expect filenames to have format 000000.png, 000001.png, ...\"\n  return int(filename.split('.')[0])\n\n\ndef load_image(filename, id_divisor=1000):\n  img = np.array(Image.open(filename))\n  obj_ids = np.unique(img)\n\n  objects = []\n  mask = np.zeros(img.shape, dtype=np.uint8, order=\"F\")  # Fortran order needed for pycocos RLE tools\n  for idx, obj_id in enumerate(obj_ids):\n    if obj_id == 0:  # background\n      continue\n    mask.fill(0)\n    pixels_of_elem = np.where(img == obj_id)\n    mask[pixels_of_elem] = 1\n    objects.append(SegmentedObject(\n      rletools.encode(mask),\n      obj_id // id_divisor,\n      obj_id\n    ))\n\n  return objects\n\n\ndef load_seqmap(seqmap_filename):\n  print(\"Loading seqmap...\")\n  seqmap = []\n  max_frames = {}\n  with open(seqmap_filename, \"r\") as fh:\n    for i, l in enumerate(fh):\n      fields = l.split(\" \")\n      seq = \"%04d\" % int(fields[0])\n      
seqmap.append(seq)\n      max_frames[seq] = int(fields[3])\n  return seqmap, max_frames\n\n\ndef write_sequences(gt, output_folder):\n  os.makedirs(output_folder, exist_ok=True)\n  for seq, seq_frames in gt.items():\n    write_sequence(seq_frames, os.path.join(output_folder, seq + \".txt\"))\n  return\n\n\ndef write_sequence(frames, path):\n  with open(path, \"w\") as f:\n    for t, objects in frames.items():\n      for obj in objects:\n        print(t, obj.track_id, obj.class_id, obj.mask[\"size\"][0], obj.mask[\"size\"][1],\n              obj.mask[\"counts\"].decode(encoding='UTF-8'), file=f)\n"
  },
  {
    "path": "eval/mots/requirements.txt",
    "content": "cycler==0.10.0\nCython==0.29.20\nkiwisolver==1.2.0\nmatplotlib==3.2.2\nnumpy==1.19.0\nopencv-python==4.2.0.34\npandas==1.0.5\nPillow==7.1.2\npycocotools==2.0.1\npyparsing==2.4.7\npython-dateutil==2.8.1\npytz==2020.1\nscipy==1.5.0\nsix==1.15.0\n"
  },
  {
    "path": "eval/palette.py",
    "content": "palette_str = '''0 0 0\n128 0 0\n0 128 0\n128 128 0\n0 0 128\n128 0 128\n0 128 128\n128 128 128\n64 0 0\n191 0 0\n64 128 0\n191 128 0\n64 0 128\n191 0 128\n64 128 128\n191 128 128\n0 64 0\n128 64 0\n0 191 0\n128 191 0\n0 64 128\n128 64 128\n22 22 22\n23 23 23\n24 24 24\n25 25 25\n26 26 26\n27 27 27\n28 28 28\n29 29 29\n30 30 30\n31 31 31\n32 32 32\n33 33 33\n34 34 34\n35 35 35\n36 36 36\n37 37 37\n38 38 38\n39 39 39\n40 40 40\n41 41 41\n42 42 42\n43 43 43\n44 44 44\n45 45 45\n46 46 46\n47 47 47\n48 48 48\n49 49 49\n50 50 50\n51 51 51\n52 52 52\n53 53 53\n54 54 54\n55 55 55\n56 56 56\n57 57 57\n58 58 58\n59 59 59\n60 60 60\n61 61 61\n62 62 62\n63 63 63\n64 64 64\n65 65 65\n66 66 66\n67 67 67\n68 68 68\n69 69 69\n70 70 70\n71 71 71\n72 72 72\n73 73 73\n74 74 74\n75 75 75\n76 76 76\n77 77 77\n78 78 78\n79 79 79\n80 80 80\n81 81 81\n82 82 82\n83 83 83\n84 84 84\n85 85 85\n86 86 86\n87 87 87\n88 88 88\n89 89 89\n90 90 90\n91 91 91\n92 92 92\n93 93 93\n94 94 94\n95 95 95\n96 96 96\n97 97 97\n98 98 98\n99 99 99\n100 100 100\n101 101 101\n102 102 102\n103 103 103\n104 104 104\n105 105 105\n106 106 106\n107 107 107\n108 108 108\n109 109 109\n110 110 110\n111 111 111\n112 112 112\n113 113 113\n114 114 114\n115 115 115\n116 116 116\n117 117 117\n118 118 118\n119 119 119\n120 120 120\n121 121 121\n122 122 122\n123 123 123\n124 124 124\n125 125 125\n126 126 126\n127 127 127\n128 128 128\n129 129 129\n130 130 130\n131 131 131\n132 132 132\n133 133 133\n134 134 134\n135 135 135\n136 136 136\n137 137 137\n138 138 138\n139 139 139\n140 140 140\n141 141 141\n142 142 142\n143 143 143\n144 144 144\n145 145 145\n146 146 146\n147 147 147\n148 148 148\n149 149 149\n150 150 150\n151 151 151\n152 152 152\n153 153 153\n154 154 154\n155 155 155\n156 156 156\n157 157 157\n158 158 158\n159 159 159\n160 160 160\n161 161 161\n162 162 162\n163 163 163\n164 164 164\n165 165 165\n166 166 166\n167 167 167\n168 168 168\n169 169 169\n170 170 170\n171 171 171\n172 172 172\n173 173 
173\n174 174 174\n175 175 175\n176 176 176\n177 177 177\n178 178 178\n179 179 179\n180 180 180\n181 181 181\n182 182 182\n183 183 183\n184 184 184\n185 185 185\n186 186 186\n187 187 187\n188 188 188\n189 189 189\n190 190 190\n191 191 191\n192 192 192\n193 193 193\n194 194 194\n195 195 195\n196 196 196\n197 197 197\n198 198 198\n199 199 199\n200 200 200\n201 201 201\n202 202 202\n203 203 203\n204 204 204\n205 205 205\n206 206 206\n207 207 207\n208 208 208\n209 209 209\n210 210 210\n211 211 211\n212 212 212\n213 213 213\n214 214 214\n215 215 215\n216 216 216\n217 217 217\n218 218 218\n219 219 219\n220 220 220\n221 221 221\n222 222 222\n223 223 223\n224 224 224\n225 225 225\n226 226 226\n227 227 227\n228 228 228\n229 229 229\n230 230 230\n231 231 231\n232 232 232\n233 233 233\n234 234 234\n235 235 235\n236 236 236\n237 237 237\n238 238 238\n239 239 239\n240 240 240\n241 241 241\n242 242 242\n243 243 243\n244 244 244\n245 245 245\n246 246 246\n247 247 247\n248 248 248\n249 249 249\n250 250 250\n251 251 251\n252 252 252\n253 253 253\n254 254 254\n255 255 255'''\nimport numpy as np\ntensor = np.array([[int(x) for x in line.split()] for line in palette_str.split('\\n')])\n"
  },
  {
    "path": "eval/poseval/.gitignore",
    "content": "# python\nvenv*/\n__pycache__\n*.egg-info/\n\n# editor\n.vscode/\n\n# data\nout/\n"
  },
  {
    "path": "eval/poseval/.gitmodules",
    "content": ""
  },
  {
    "path": "eval/poseval/.pylintrc",
    "content": "[BASIC]\n\nvariable-rgx=[a-z0-9_]{1,30}$\ngood-names=ap,ar,ax,b,d,f,g,gt,h,i,im,lr,n,p,r,s,t,t1,t2,th,v,vs,w,wh,x,x1,x2,xs,y,ys,xy\n\n\n[IMPORTS]\n\nallow-any-import-level=pycocotools,pycocotools.coco\n\n\n[SIMILARITIES]\n\n# Minimum lines number of a similarity.\nmin-similarity-lines=15\n\n# Ignore comments when computing similarities.\nignore-comments=yes\n\n# Ignore docstrings when computing similarities.\nignore-docstrings=yes\n\n# Ignore imports when computing similarities.\nignore-imports=yes\n\n\n[TYPECHECK]\n\n# List of members which are set dynamically and missed by pylint inference\n# system, and so shouldn't trigger E1101 when accessed. Python regular\n# expressions are accepted.\ngenerated-members=numpy.*,torch.*,cv2.*,openpifpaf.functional.*\n\nignored-modules=openpifpaf.functional,pycocotools\n\n\n# for pytorch: not-callable\n# for pytorch 1.6.0: abstract-method\ndisable=missing-docstring,too-many-arguments,too-many-instance-attributes,too-many-locals,too-few-public-methods,not-callable,abstract-method,invalid-name,unused-variable,unused-import,superfluous-parens,line-too-long,wrong-import-order,too-many-nested-blocks,too-many-branches,too-many-statements,multiple-statements,bad-indentation,unnecessary-semicolon,consider-using-enumerate,unused-argument,len-as-condition,import-outside-toplevel,too-many-lines\n"
  },
  {
    "path": "eval/poseval/README.md",
    "content": "# Poseval\n\nCreated by Leonid Pishchulin.\nAdapted by Sven Kreiss.\n\nInstall directly from GitHub:\n\n```\npip install git+https://github.com/svenkreiss/poseval.git\n```\n\nInstall from a local clone:\n\n```\ngit clone https://github.com/svenkreiss/poseval.git\ncd poseval\npip install -e .  # install the local package ('.') in editable mode ('-e')\n```\n\nChanges:\n\n* Python 3\n* uses latest `motmetrics` from PyPI (much(!!!) faster); removed git submodule py-motmetrics\n\nTest command with small test data:\n\n```sh\npython -m poseval.evaluate \\\n    --groundTruth test_data/gt/ \\\n    --predictions test_data/pred/ \\\n    --evalPoseTracking \\\n    --evalPoseEstimation \\\n    --saveEvalPerSequence\n```\n\nLint: `pylint poseval`.\n\n---\n\n# Evaluation of Multi-Person Pose Estimation and Tracking\n\nCreated by Leonid Pishchulin\n\n## Introduction\n\nThis README provides instructions on how to evaluate your method's predictions on the [PoseTrack Dataset](https://posetrack.net), either locally or using the evaluation server.\n\n## Prerequisites\n\n- numpy>=1.12.1\n- pandas>=0.19.2\n- scipy>=0.19.0\n- tqdm>=4.24.0\n- click>=6.7\n\n## Install\n\n```\n$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive\n$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH\n```\n\n## Data preparation\n\nEvaluation requires ground truth (GT) annotations, available at [PoseTrack](https://posetrack.net), and your method's predictions. Both GT annotations and your predictions must be saved in json format. Following the GT annotations, predictions must be stored per sequence, for each frame of the sequence, using the same structure and the same filenames as the GT annotations. For evaluation on Posetrack 2017, predictions have to follow the Posetrack 2017 annotation format, while for evaluation on Posetrack 2018 the corresponding 2018 format should be used. 
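As a concrete sketch of this per-sequence layout, the snippet below writes a minimal Posetrack-2017-style prediction file for a single sequence. The helper name `write_minimal_prediction` and all coordinate/score values are illustrative placeholders, not part of poseval:

```python
import json
import os


def write_minimal_prediction(out_dir, seq_name):
    """Write a minimal PoseTrack-2017-style prediction file for one sequence.

    The layout mirrors the GT annotations: one json file per sequence, whose
    "annolist" holds one entry per frame, each with "annorect" person entries.
    """
    prediction = {
        "annolist": [
            {
                "image": [{"name": "images/bonn_5sec/%s/00000001.jpg" % seq_name}],
                "annorect": [
                    {
                        "x1": [625], "y1": [94], "x2": [681], "y2": [178],
                        "score": [0.9],
                        "track_id": [0],  # must be an integer in [0, 999]
                        "annopoints": [{"point": [
                            {"id": [0], "x": [394], "y": [173], "score": [0.7]}
                        ]}],
                    }
                ],
            }
        ]
    }
    os.makedirs(out_dir, exist_ok=True)
    # the prediction file must carry the same filename as the GT annotation file
    path = os.path.join(out_dir, seq_name + ".json")
    with open(path, "w") as f:
        json.dump(prediction, f)
    return path
```

Writing one such file per sequence into a single directory yields the structure that `evaluate.py --predictions` expects.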
An example of the json prediction structure for the Posetrack 2017 format:\n```\n{\n   \"annolist\": [\n       {\n\t   \"image\": [\n\t       {\n\t\t  \"name\": \"images\\/bonn_5sec\\/000342_mpii\\/00000001.jpg\"\n\t       }\n           ],\n           \"annorect\": [\n\t       {\n\t           \"x1\": [625],\n\t\t   \"y1\": [94],\n\t\t   \"x2\": [681],\n\t\t   \"y2\": [178],\n\t\t   \"score\": [0.9],\n\t\t   \"track_id\": [0],\n\t\t   \"annopoints\": [\n\t\t       {\n\t\t\t   \"point\": [\n\t\t\t       {\n\t\t\t           \"id\": [0],\n\t\t\t\t   \"x\": [394],\n\t\t\t\t   \"y\": [173],\n\t\t\t\t   \"score\": [0.7],\n\t\t\t       },\n\t\t\t       { ... }\n\t\t\t   ]\n\t\t       }\n\t\t   ]\n\t\t},\n\t\t{ ... }\n\t   ],\n       },\n       { ... }\n   ]\n}\n```\nNote: values of `track_id` must be integers from the interval [0, 999].\nFor an example of the Posetrack 2018 annotation format, please refer to the corresponding GT annotations.\n\nWe provide a way to convert a Matlab structure into json format:\n```\n$ cd poseval/matlab\n$ matlab -nodisplay -nodesktop -r \"mat2json('/path/to/dir/with/mat/files/'); quit\"\n```\n\n## Metrics\n\nThis code performs evaluation of per-frame multi-person pose estimation and evaluation of video-based multi-person pose tracking.\n\n### Per-frame multi-person pose estimation\n\nThe Average Precision (AP) metric is used for the evaluation of per-frame multi-person pose estimation. Our implementation follows the measure proposed in [1] and requires predicted body poses with body joint detection scores as input. First, multiple body pose predictions are greedily assigned to the ground truth (GT) based on the highest PCKh [3]. Only a single pose can be assigned to each GT pose. Unassigned predictions are counted as false positives. Finally, the part detection score is used to compute AP for each body part. 
Mean AP over all body parts is reported as well.\n\n### Video-based pose tracking\n\nMultiple Object Tracking (MOT) metrics [2] are used for the evaluation of video-based pose tracking. Our implementation builds on the MOT evaluation code [4] and requires predicted body poses with tracklet IDs as input. First, for each frame and each body joint class, distances between predicted locations and GT locations are computed. Then, predicted tracklet IDs and GT tracklet IDs are taken into account, and all (prediction, GT) pairs with distances not exceeding the PCKh [3] threshold are considered during the global matching of predicted tracklets to GT tracklets for each particular body joint. Global matching minimizes the total assignment distance. Finally, the Multiple Object Tracker Accuracy (MOTA), Multiple Object Tracker Precision (MOTP), Precision, and Recall metrics are computed. We report the MOTA metric for each body joint class and its average over all body joints, while for MOTP, Precision, and Recall we report only the averages.\n\n## Evaluation (local)\n\nThe evaluation code has been tested on Linux (Ubuntu). Evaluation takes as input the path to a directory with GT annotations and the path to a directory with predictions. See \"Data preparation\" for details on the prediction format.\n\n```\n$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive\n$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH\n$ python evaluate.py \\\n  --groundTruth=/path/to/annotations/val/ \\\n  --predictions=/path/to/predictions \\\n  --evalPoseTracking \\\n  --evalPoseEstimation\n```\n\nEvaluation of multi-person pose estimation requires joint detection scores, while evaluation of pose tracking requires predicted tracklet IDs per pose.\n\n## Evaluation (server)\n\nIn order to evaluate using the evaluation server, zip your directory containing the json prediction files and submit it at https://posetrack.net. You will shortly receive an email containing the evaluation results. 
**Prior to submitting your results to evaluation server, make sure you are able to evaluate locally on val set to avoid issues due to incorrect formatting of predictions.**\n\n## References\n\n[1] DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele. In CVPR'16\n\n[2] Evaluating multiple object tracking performance: the CLEAR MOT metrics. K. Bernardin and R. Stiefelhagen. EURASIP J. Image Vide.'08\n\n[3] 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. In CVPR'14\n\n[4] https://github.com/cheind/py-motmetrics\n\nFor further questions and details, contact PoseTrack Team <mailto:admin@posetrack.net>\n"
  },
  {
    "path": "eval/poseval/evaluate.py",
    "content": "import json\r\nimport os\r\nimport sys\r\nimport numpy as np\r\nimport argparse\r\n\r\nfrom poseval.evaluateAP import evaluateAP\r\nfrom poseval.evaluateTracking import evaluateTracking\r\n\r\nimport poseval.eval_helpers as eval_helpers\r\nfrom poseval.eval_helpers import Joint\r\n\r\n\r\ndef parseArgs():\r\n    parser = argparse.ArgumentParser(description=\"Evaluation of Pose Estimation and Tracking (PoseTrack)\")\r\n    parser.add_argument(\"-g\", \"--groundTruth\", required=False, type=str, help=\"Directory containing ground truth annotations per sequence in json format\")\r\n    parser.add_argument(\"-p\", \"--predictions\", required=False, type=str, help=\"Directory containing predictions per sequence in json format\")\r\n    parser.add_argument(\"-e\", \"--evalPoseEstimation\", required=False, action=\"store_true\", help=\"Evaluation of per-frame multi-person pose estimation using the AP metric\")\r\n    parser.add_argument(\"-t\", \"--evalPoseTracking\", required=False, action=\"store_true\", help=\"Evaluation of video-based multi-person pose tracking using MOT metrics\")\r\n    parser.add_argument(\"-s\", \"--saveEvalPerSequence\", required=False, action=\"store_true\", help=\"Save evaluation results per sequence\", default=False)\r\n    parser.add_argument(\"-o\", \"--outputDir\", required=False, type=str, help=\"Output directory to save the results\", default=\"./out\")\r\n    return parser.parse_args()\r\n\r\n\r\ndef main():\r\n    args = parseArgs()\r\n    print(args)\r\n    argv = ['', args.groundTruth, args.predictions]\r\n\r\n    print(\"Loading data\")\r\n    gtFramesAll, prFramesAll = eval_helpers.load_data_dir(argv)\r\n\r\n    print(\"# gt frames  :\", len(gtFramesAll))\r\n    print(\"# pred frames:\", len(prFramesAll))\r\n\r\n    if not os.path.exists(args.outputDir):\r\n        os.makedirs(args.outputDir)\r\n\r\n    if args.evalPoseEstimation:\r\n        #####################################################\r\n        # evaluate per-frame multi-person pose estimation (AP)\r\n\r\n        # compute AP\r\n        print(\"Evaluation of per-frame multi-person pose estimation\")\r\n        apAll, preAll, recAll = evaluateAP(gtFramesAll, prFramesAll, args.outputDir, True, args.saveEvalPerSequence)\r\n\r\n        # print AP\r\n        print(\"Average Precision (AP) metric:\")\r\n        eval_helpers.printTable(apAll)\r\n\r\n    if args.evalPoseTracking:\r\n        #####################################################\r\n        # evaluate multi-person pose tracking in video (MOTA)\r\n\r\n        # compute MOTA\r\n        print(\"Evaluation of video-based multi-person pose tracking\")\r\n        metricsAll = evaluateTracking(gtFramesAll, prFramesAll, args.outputDir, True, args.saveEvalPerSequence)\r\n\r\n        metrics = np.zeros([Joint().count + 6, 1])\r\n        for i in range(Joint().count + 1):\r\n            metrics[i, 0] = metricsAll['mota'][0, i]\r\n        metrics[Joint().count + 1, 0] = metricsAll['motp'][0, Joint().count]\r\n        metrics[Joint().count + 2, 0] = metricsAll['pre'][0, Joint().count]\r\n        metrics[Joint().count + 3, 0] = metricsAll['rec'][0, Joint().count]\r\n        metrics[Joint().count + 4, 0] = metricsAll['idf1'][0, Joint().count]\r\n        metrics[Joint().count + 5, 0] = metricsAll['num_switches'][0, Joint().count]\r\n\r\n        # print MOT metrics\r\n        print(\"Multiple Object Tracking (MOT) metrics:\")\r\n        eval_helpers.printTable(metrics, motHeader=True)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "eval/poseval/license.txt",
    "content": "Copyright (c) 2018, Leonid Pishchulin\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met: \n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer. \n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution. \n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nThe views and conclusions contained in the software and documentation are those\nof the authors and should not be interpreted as representing official policies, \neither expressed or implied, of the FreeBSD Project.\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/AUTHORS.txt",
    "content": "The author of \"jsonlab\" toolbox is Qianqian Fang. Qianqian\nis currently an Assistant Professor in the Department of Bioengineering,\nNortheastern University.\n\nAddress: Qianqian Fang\n         Department of Bioengineering\n         Northeastern University\n         212A Lake Hall\n         360 Huntington Ave, Boston, MA 02115, USA\n         Office:   503 Holmes Hall\n         Phone[O]: 617-373-3829\nURL: http://fanglab.org\nEmail: <q.fang at neu.edu> and <fangqq at gmail.com>\n\n\nThe script loadjson.m was built upon previous works by\n\n- Nedialko Krouchev: http://www.mathworks.com/matlabcentral/fileexchange/25713\n       date: 2009/11/02\n- François Glineur: http://www.mathworks.com/matlabcentral/fileexchange/23393\n       date: 2009/03/22\n- Joel Feenstra: http://www.mathworks.com/matlabcentral/fileexchange/20565\n       date: 2008/07/03\n\n\nThis toolbox contains patches submitted by the following contributors:\n\n- Blake Johnson <bjohnso at bbn.com>\n  part of revision 341\n\n- Niclas Borlin <Niclas.Borlin at cs.umu.se>\n  various fixes in revision 394, including\n  - loadjson crashes for all-zero sparse matrix.\n  - loadjson crashes for empty sparse matrix.\n  - Non-zero size of 0-by-N and N-by-0 empty matrices is lost after savejson/loadjson.\n  - loadjson crashes for sparse real column vector.\n  - loadjson crashes for sparse complex column vector.\n  - Data is corrupted by savejson for sparse real row vector.\n  - savejson crashes for sparse complex row vector. 
\n\n- Yul Kang <yul.kang.on at gmail.com>\n  patches for svn revision 415.\n  - savejson saves an empty cell array as [] instead of null\n  - loadjson differentiates an empty struct from an empty array\n\n- Mykhailo Bratukha <bratukha.m at gmail.com>\n  (Pull#14) Bug fix: File path is wrongly interpreted as JSON string\n\n- Insik Kim <insik92 at gmail.com>\n  (Pull#12) Bug fix: Resolving bug that cell type is converted to json with transposed data\n\n- Sertan Senturk \n  (Pull#10,#11)  Feature: Added matlab object saving to savejson and saveubjson\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/ChangeLog.txt",
    "content": "============================================================================\n\n   JSONlab - a toolbox to encode/decode JSON/UBJSON files in MATLAB/Octave\n\n----------------------------------------------------------------------------\n\nJSONlab ChangeLog (key features marked by *):\n\n== JSONlab 1.2 (codename: Optimus - Update 2), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2015/12/16  replacing string concatenation by str cells to gain 2x speed in savejson (Issue#17)\n 2015/12/11  fix FileName option case bug (SVN rev#495)\n 2015/12/11  add SingletCell option, add SingletArray to replace NoRowBracket (Issue#15,#8)\n 2015/11/10  fix bug for interpreting file names as JSON string - by Mykhailo Bratukha (Pull#14)\n 2015/10/16  fix bug for cell with transposed data - by Insik Kim (Pull#12)\n 2015/09/25  support exporting matlab object to JSON - by Sertan Senturk (Pull#10, #11)\n\n== JSONlab 1.1 (codename: Optimus - Update 1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2015/05/05 *massively accelerating loadjson for parsing large collection of unstructured small objects\n 2015/05/05  force array bracket in 1x1 struct to maintain depth (Issue#1)\n 2015/05/05  parse logicals in loadjson\n 2015/05/05  make options case insensitive\n 2015/05/01  reading unicode encoded json files (thanks to Sertan Senturk,Issue#3)\n 2015/04/30  allow \\uXXXX to represent a unicode in a string (Issue#2)\n 2015/03/30  save a 0x0 solid real empty array as null and handle empty struct array\n 2015/03/30  properly handle escape characters in a string\n 2015/01/24 *implement the UBJSON Draft12 new name format\n 2015/01/13  correct cell array indentation inconsistency\n\n== JSONlab 1.0 (codename: Optimus - Final), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2015/01/02  polish help info for all major functions, update examples, finalize 1.0\n 2014/12/19  fix a bug to strictly respect NoRowBracket in savejson\n\n== JSONlab 1.0.0-RC2 (codename: Optimus - RC2), FangQ <fangq 
(at) nmr.mgh.harvard.edu> ==\n\n 2014/11/22  show progress bar in loadjson ('ShowProgress') \n 2014/11/17 *add Compact option in savejson to output compact JSON format ('Compact')\n 2014/11/17  add FastArrayParser in loadjson to specify fast parser applicable levels\n 2014/09/18 *start official github mirror: https://github.com/fangq/jsonlab\n\n== JSONlab 1.0.0-RC1 (codename: Optimus - RC1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2014/09/17  fix several compatibility issues when running on octave versions 3.2-3.8\n 2014/09/17 *support 2D cell and struct arrays in both savejson and saveubjson\n 2014/08/04  escape special characters in a JSON string\n 2014/02/16  fix a bug when saving ubjson files\n\n== JSONlab 0.9.9 (codename: Optimus - beta), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2014/01/22  use binary read and write in saveubjson and loadubjson\n\n== JSONlab 0.9.8-1 (codename: Optimus - alpha update 1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2013/10/07  better round-trip conservation for empty arrays and structs (patch submitted by Yul Kang)\n\n== JSONlab 0.9.8 (codename: Optimus - alpha), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n 2013/08/23 *universal Binary JSON (UBJSON) support, including both saveubjson and loadubjson\n\n== JSONlab 0.9.1 (codename: Rodimus, update 1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n 2012/12/18 *handling of various empty and sparse matrices (fixes submitted by Niclas Borlin)\n\n== JSONlab 0.9.0 (codename: Rodimus), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2012/06/17 *new format for an invalid leading char, unpacking hex code in savejson\n 2012/06/01  support JSONP in savejson\n 2012/05/25  fix the empty cell bug (reported by Cyril Davin)\n 2012/04/05  savejson can save to a file (suggested by Patrick Rapin)\n\n== JSONlab 0.8.1 (codename: Sentiel, Update 1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2012/02/28  loadjson quotation mark escape bug, see http://bit.ly/yyk1nS\n 2012/01/25  patch to handle 
root-less objects, contributed by Blake Johnson\n\n== JSONlab 0.8.0 (codename: Sentiel), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2012/01/13 *speed up loadjson by 20 fold when parsing large data arrays in matlab\n 2012/01/11  remove row bracket if an array has 1 element, suggested by Mykel Kochenderfer\n 2011/12/22 *accept sequence of 'param',value input in savejson and loadjson\n 2011/11/18  fix struct array bug reported by Mykel Kochenderfer\n\n== JSONlab 0.5.1 (codename: Nexus Update 1), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2011/10/21  fix a bug in loadjson, previous code does not use any of the acceleration\n 2011/10/20  loadjson supports JSON collections - concatenated JSON objects\n\n== JSONlab 0.5.0 (codename: Nexus), FangQ <fangq (at) nmr.mgh.harvard.edu> ==\n\n 2011/10/16  package and release jsonlab 0.5.0\n 2011/10/15 *add json demo and regression test, support cpx numbers, fix double quote bug\n 2011/10/11 *speed up readjson dramatically, interpret _Array* tags, show data in root level\n 2011/10/10  create jsonlab project, start jsonlab website, add online documentation\n 2011/10/07 *speed up savejson by 25x using sprintf instead of mat2str, add options support\n 2011/10/06 *savejson works for structs, cells and arrays\n 2011/09/09  derive loadjson from JSON parser from MATLAB Central, draft savejson.m\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/LICENSE_BSD.txt",
    "content": "Copyright 2011-2015 Qianqian Fang <fangq at nmr.mgh.harvard.edu>. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification, are\npermitted provided that the following conditions are met:\n\n   1. Redistributions of source code must retain the above copyright notice, this list of\n      conditions and the following disclaimer.\n\n   2. Redistributions in binary form must reproduce the above copyright notice, this list\n      of conditions and the following disclaimer in the documentation and/or other materials\n      provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ''AS IS'' AND ANY EXPRESS OR IMPLIED\nWARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS \nOR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\nCONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\nADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nThe views and conclusions contained in the software and documentation are those of the\nauthors and should not be interpreted as representing official policies, either expressed\nor implied, of the copyright holders.\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/README.txt",
    "content": "===============================================================================\n=                                 JSONLab                                     =\n=           An open-source MATLAB/Octave JSON encoder and decoder             =\n===============================================================================\n\n*Copyright (C) 2011-2015  Qianqian Fang <fangq at nmr.mgh.harvard.edu>\n*License: BSD License, see License_BSD.txt for details\n*Version: 1.2 (Optimus - Update 2)\n\n-------------------------------------------------------------------------------\n\nTable of Contents:\n\nI.  Introduction\nII. Installation\nIII. Using JSONLab\nIV. Known Issues and TODOs\nV.  Contribution and feedback\n\n-------------------------------------------------------------------------------\n\nI.  Introduction\n\nJSON ([http://www.json.org/ JavaScript Object Notation]) is a highly portable, \nhuman-readable and \"[http://en.wikipedia.org/wiki/JSON fat-free]\" text format \nto represent complex and hierarchical data. It is as powerful as \n[http://en.wikipedia.org/wiki/XML XML], but less verbose. The JSON format is widely \nused for data exchange in applications, and is essential for the wild success \nof [http://en.wikipedia.org/wiki/Ajax_(programming) Ajax] and \n[http://en.wikipedia.org/wiki/Web_2.0 Web2.0]. \n\nUBJSON (Universal Binary JSON) is a binary JSON format, specifically \noptimized for compact file size and better performance while keeping\nthe semantics as simple as the text-based JSON format. Using the UBJSON\nformat allows one to wrap complex binary data in a flexible and extensible\nstructure, making it possible to process complex and large datasets \nwithout accuracy loss due to text conversions.\n\nWe envision that both JSON and its binary version will serve as part of \nthe mainstream data-exchange formats for scientific research in the future. 
\nIt will provide the flexibility and generality achieved by other popular \ngeneral-purpose file specifications, such as\n[http://www.hdfgroup.org/HDF5/whatishdf5.html HDF5], with significantly \nreduced complexity and enhanced performance.\n\nJSONLab is a free and open-source implementation of a JSON/UBJSON encoder \nand decoder in the native MATLAB language. It can be used to convert a MATLAB \ndata structure (array, struct, cell, struct array and cell array) into \nJSON/UBJSON formatted strings, or to decode a JSON/UBJSON file into a MATLAB \ndata structure. JSONLab supports both MATLAB and  \n[http://www.gnu.org/software/octave/ GNU Octave] (a free MATLAB clone).\n\n-------------------------------------------------------------------------------\n\nII. Installation\n\nThe installation of JSONLab is no different from that of any other simple\nMATLAB toolbox. You only need to download/unzip the JSONLab package\nto a folder, and add the folder's path to MATLAB/Octave's path list\nby using the following command:\n\n    addpath('/path/to/jsonlab');\n\nIf you want to add this path permanently, you need to type \"pathtool\", \nbrowse to the jsonlab root folder and add it to the list, then click \"Save\".\nThen, run \"rehash\" in MATLAB, and type \"which loadjson\"; if you see \noutput, that means JSONLab is installed for MATLAB/Octave.\n\n-------------------------------------------------------------------------------\n\nIII. Using JSONLab\n\nJSONLab provides two functions, loadjson.m -- a JSON->MATLAB decoder, \nand savejson.m -- a MATLAB->JSON encoder, for the text-based JSON, and \ntwo equivalent functions -- loadubjson and saveubjson for the binary \nJSON. 
The detailed help info for the four functions can be found below:\n\n=== loadjson.m ===\n<pre>\n  data=loadjson(fname,opt)\n     or\n  data=loadjson(fname,'param1',value1,'param2',value2,...)\n \n  parse a JSON (JavaScript Object Notation) file or string\n \n  authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n  created on 2011/09/09, including previous works from \n \n          Nedialko Krouchev: http://www.mathworks.com/matlabcentral/fileexchange/25713\n             created on 2009/11/02\n          François Glineur: http://www.mathworks.com/matlabcentral/fileexchange/23393\n             created on  2009/03/22\n          Joel Feenstra:\n          http://www.mathworks.com/matlabcentral/fileexchange/20565\n             created on 2008/07/03\n \n  $Id: loadjson.m 487 2015-05-06 18:19:07Z fangq $\n \n  input:\n       fname: input file name, if fname contains \"{}\" or \"[]\", fname\n              will be interpreted as a JSON string\n       opt: a struct to store parsing options, opt can be replaced by \n            a list of ('param',value) pairs - the param string is equivalent\n            to a field in opt. opt can have the following \n            fields (first in [.|.] is the default)\n \n            opt.SimplifyCell [0|1]: if set to 1, loadjson will call cell2mat\n                          for each element of the JSON data, and group \n                          arrays based on the cell2mat rules.\n            opt.FastArrayParser [1|0 or integer]: if set to 1, use a\n                          speed-optimized array parser when loading an \n                          array object. The fast array parser may \n                          collapse block arrays into a single large\n                          array similar to rules defined in cell2mat; 0 to \n                          use a legacy parser; if set to a larger-than-1\n                          value, this option will specify the minimum\n                          dimension to enable the fast array parser. 
For\n                          example, if the input is a 3D array, setting\n                          FastArrayParser to 1 will return a 3D array;\n                          setting to 2 will return a cell array of 2D\n                          arrays; setting to 3 will return to a 2D cell\n                          array of 1D vectors; setting to 4 will return a\n                          3D cell array.\n            opt.ShowProgress [0|1]: if set to 1, loadjson displays a progress bar.\n \n  output:\n       dat: a cell array, where {...} blocks are converted into cell arrays,\n            and [...] are converted to arrays\n \n  examples:\n       dat=loadjson('{\"obj\":{\"string\":\"value\",\"array\":[1,2,3]}}')\n       dat=loadjson(['examples' filesep 'example1.json'])\n       dat=loadjson(['examples' filesep 'example1.json'],'SimplifyCell',1)\n</pre>\n\n=== savejson.m ===\n\n<pre>\n  json=savejson(rootname,obj,filename)\n     or\n  json=savejson(rootname,obj,opt)\n  json=savejson(rootname,obj,'param1',value1,'param2',value2,...)\n \n  convert a MATLAB object (cell, struct or array) into a JSON (JavaScript\n  Object Notation) string\n \n  author: Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n  created on 2011/09/09\n \n  $Id: savejson.m 486 2015-05-05 20:37:11Z fangq $\n \n  input:\n       rootname: the name of the root-object, when set to '', the root name\n         is ignored, however, when opt.ForceRootName is set to 1 (see below),\n         the MATLAB variable name will be used as the root name.\n       obj: a MATLAB object (array, cell, cell array, struct, struct array).\n       filename: a string for the file name to save the output JSON data.\n       opt: a struct for additional options, ignore to use default values.\n         opt can have the following fields (first in [.|.] 
is the default)\n \n         opt.FileName [''|string]: a file name to save the output JSON data\n         opt.FloatFormat ['%.10g'|string]: format to show each numeric element\n                          of a 1D/2D array;\n         opt.ArrayIndent [1|0]: if 1, output explicit data array with\n                          precedent indentation; if 0, no indentation\n         opt.ArrayToStruct[0|1]: when set to 0, savejson outputs 1D/2D\n                          array in JSON array format; if sets to 1, an\n                          array will be shown as a struct with fields\n                          \"_ArrayType_\", \"_ArraySize_\" and \"_ArrayData_\"; for\n                          sparse arrays, the non-zero elements will be\n                          saved to _ArrayData_ field in triplet-format i.e.\n                          (ix,iy,val) and \"_ArrayIsSparse_\" will be added\n                          with a value of 1; for a complex array, the \n                          _ArrayData_ array will include two columns \n                          (4 for sparse) to record the real and imaginary \n                          parts, and also \"_ArrayIsComplex_\":1 is added. 
\n         opt.ParseLogical [0|1]: if this is set to 1, logical array elem\n                          will use true/false rather than 1/0.\n         opt.NoRowBracket [1|0]: if this is set to 1, arrays with a single\n                          numerical element will be shown without a square\n                          bracket, unless it is the root object; if 0, square\n                          brackets are forced for any numerical arrays.\n         opt.ForceRootName [0|1]: when set to 1 and rootname is empty, savejson\n                          will use the name of the passed obj variable as the \n                          root object name; if obj is an expression and \n                          does not have a name, 'root' will be used; if this \n                          is set to 0 and rootname is empty, the root level \n                          will be merged down to the lower level.\n         opt.Inf ['\"$1_Inf_\"'|string]: a customized regular expression pattern\n                          to represent +/-Inf. The matched pattern is '([-+]*)Inf'\n                          and $1 represents the sign. For those who want to use\n                          1e999 to represent Inf, they can set opt.Inf to '$11e999'\n         opt.NaN ['\"_NaN_\"'|string]: a customized regular expression pattern\n                          to represent NaN\n         opt.JSONP [''|string]: to generate a JSONP output (JSON with padding),\n                          for example, if opt.JSONP='foo', the JSON data is\n                          wrapped inside a function call as 'foo(...);'\n         opt.UnpackHex [1|0]: conver the 0x[hex code] output by loadjson \n                          back to the string form\n         opt.SaveBinary [0|1]: 1 - save the JSON file in binary mode; 0 - text mode.\n         opt.Compact [0|1]: 1- out compact JSON format (remove all newlines and tabs)\n \n         opt can be replaced by a list of ('param',value) pairs. 
The param \n         string is equivallent to a field in opt and is case sensitive.\n  output:\n       json: a string in the JSON format (see http://json.org)\n \n  examples:\n       jsonmesh=struct('MeshNode',[0 0 0;1 0 0;0 1 0;1 1 0;0 0 1;1 0 1;0 1 1;1 1 1],... \n                'MeshTetra',[1 2 4 8;1 3 4 8;1 2 6 8;1 5 6 8;1 5 7 8;1 3 7 8],...\n                'MeshTri',[1 2 4;1 2 6;1 3 4;1 3 7;1 5 6;1 5 7;...\n                           2 8 4;2 8 6;3 8 4;3 8 7;5 8 6;5 8 7],...\n                'MeshCreator','FangQ','MeshTitle','T6 Cube',...\n                'SpecialData',[nan, inf, -inf]);\n       savejson('jmesh',jsonmesh)\n       savejson('',jsonmesh,'ArrayIndent',0,'FloatFormat','\\t%.5g')\n </pre>\n\n=== loadubjson.m ===\n\n<pre>\n  data=loadubjson(fname,opt)\n     or\n  data=loadubjson(fname,'param1',value1,'param2',value2,...)\n \n  parse a JSON (JavaScript Object Notation) file or string\n \n  authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n  created on 2013/08/01\n \n  $Id: loadubjson.m 487 2015-05-06 18:19:07Z fangq $\n \n  input:\n       fname: input file name, if fname contains \"{}\" or \"[]\", fname\n              will be interpreted as a UBJSON string\n       opt: a struct to store parsing options, opt can be replaced by \n            a list of ('param',value) pairs - the param string is equivallent\n            to a field in opt. opt can have the following \n            fields (first in [.|.] is the default)\n \n            opt.SimplifyCell [0|1]: if set to 1, loadubjson will call cell2mat\n                          for each element of the JSON data, and group \n                          arrays based on the cell2mat rules.\n            opt.IntEndian [B|L]: specify the endianness of the integer fields\n                          in the UBJSON input data. 
B - Big-Endian format for \n                          integers (as required in the UBJSON specification); \n                          L - input integer fields are in Little-Endian order.\n            opt.NameIsString [0|1]: for UBJSON Specification Draft 8 or \n                          earlier versions (JSONLab 1.0 final or earlier), \n                          the \"name\" tag is treated as a string. To load \n                          these UBJSON data, you need to manually set this \n                          flag to 1.\n \n  output:\n       dat: a cell array, where {...} blocks are converted into cell arrays,\n            and [...] are converted to arrays\n \n  examples:\n       obj=struct('string','value','array',[1 2 3]);\n       ubjdata=saveubjson('obj',obj);\n       dat=loadubjson(ubjdata)\n       dat=loadubjson(['examples' filesep 'example1.ubj'])\n       dat=loadubjson(['examples' filesep 'example1.ubj'],'SimplifyCell',1)\n</pre>\n\n=== saveubjson.m ===\n\n<pre>\n  json=saveubjson(rootname,obj,filename)\n     or\n  json=saveubjson(rootname,obj,opt)\n  json=saveubjson(rootname,obj,'param1',value1,'param2',value2,...)\n \n  convert a MATLAB object (cell, struct or array) into a Universal \n  Binary JSON (UBJSON) binary string\n \n  author: Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n  created on 2013/08/17\n \n  $Id: saveubjson.m 465 2015-01-25 00:46:07Z fangq $\n \n  input:\n       rootname: the name of the root-object, when set to '', the root name\n         is ignored, however, when opt.ForceRootName is set to 1 (see below),\n         the MATLAB variable name will be used as the root name.\n       obj: a MATLAB object (array, cell, cell array, struct, struct array)\n       filename: a string for the file name to save the output UBJSON data\n       opt: a struct for additional options, ignore to use default values.\n         opt can have the following fields (first in [.|.] 
is the default)\n \n         opt.FileName [''|string]: a file name to save the output JSON data\n         opt.ArrayToStruct[0|1]: when set to 0, saveubjson outputs 1D/2D\n                          array in JSON array format; if sets to 1, an\n                          array will be shown as a struct with fields\n                          \"_ArrayType_\", \"_ArraySize_\" and \"_ArrayData_\"; for\n                          sparse arrays, the non-zero elements will be\n                          saved to _ArrayData_ field in triplet-format i.e.\n                          (ix,iy,val) and \"_ArrayIsSparse_\" will be added\n                          with a value of 1; for a complex array, the \n                          _ArrayData_ array will include two columns \n                          (4 for sparse) to record the real and imaginary \n                          parts, and also \"_ArrayIsComplex_\":1 is added. \n         opt.ParseLogical [1|0]: if this is set to 1, logical array elem\n                          will use true/false rather than 1/0.\n         opt.NoRowBracket [1|0]: if this is set to 1, arrays with a single\n                          numerical element will be shown without a square\n                          bracket, unless it is the root object; if 0, square\n                          brackets are forced for any numerical arrays.\n         opt.ForceRootName [0|1]: when set to 1 and rootname is empty, saveubjson\n                          will use the name of the passed obj variable as the \n                          root object name; if obj is an expression and \n                          does not have a name, 'root' will be used; if this \n                          is set to 0 and rootname is empty, the root level \n                          will be merged down to the lower level.\n         opt.JSONP [''|string]: to generate a JSONP output (JSON with padding),\n                          for example, if opt.JSON='foo', the JSON data is\n                   
       wrapped inside a function call as 'foo(...);'\n         opt.UnpackHex [1|0]: conver the 0x[hex code] output by loadjson \n                          back to the string form\n \n         opt can be replaced by a list of ('param',value) pairs. The param \n         string is equivallent to a field in opt and is case sensitive.\n  output:\n       json: a binary string in the UBJSON format (see http://ubjson.org)\n \n  examples:\n       jsonmesh=struct('MeshNode',[0 0 0;1 0 0;0 1 0;1 1 0;0 0 1;1 0 1;0 1 1;1 1 1],... \n                'MeshTetra',[1 2 4 8;1 3 4 8;1 2 6 8;1 5 6 8;1 5 7 8;1 3 7 8],...\n                'MeshTri',[1 2 4;1 2 6;1 3 4;1 3 7;1 5 6;1 5 7;...\n                           2 8 4;2 8 6;3 8 4;3 8 7;5 8 6;5 8 7],...\n                'MeshCreator','FangQ','MeshTitle','T6 Cube',...\n                'SpecialData',[nan, inf, -inf]);\n       saveubjson('jsonmesh',jsonmesh)\n       saveubjson('jsonmesh',jsonmesh,'meshdata.ubj')\n</pre>\n\n\n=== examples ===\n\nUnder the \"examples\" folder, you can find several scripts to demonstrate the\nbasic utilities of JSONLab. Running the \"demo_jsonlab_basic.m\" script, you \nwill see the conversions from MATLAB data structure to JSON text and backward.\nIn \"jsonlab_selftest.m\", we load complex JSON files downloaded from the Internet\nand validate the loadjson/savejson functions for regression testing purposes.\nSimilarly, a \"demo_ubjson_basic.m\" script is provided to test the saveubjson\nand loadubjson functions for various matlab data structures.\n\nPlease run these examples and understand how JSONLab works before you use\nit to process your data.\n\n-------------------------------------------------------------------------------\n\nIV. Known Issues and TODOs\n\nJSONLab has several known limitations. We are striving to make it more general\nand robust. 
Hopefully these limitations will be reduced in future releases.\n\nHere are the known issues:\n\n# 3D or higher dimensional cell/struct-arrays will be converted to 2D arrays;\n# When processing names containing multi-byte characters, Octave and MATLAB \\\ncan give different field-names; you can use feature('DefaultCharacterSet','latin1') \\\nin MATLAB to get consistent results\n# savejson cannot handle class and dataset.\n# saveubjson converts a logical array into a uint8 ([U]) array\n# an unofficial N-D array count syntax is implemented in saveubjson. We are \\\nactively communicating with the UBJSON spec maintainer to investigate the \\\npossibility of making it upstream\n# loadubjson cannot parse all UBJSON Specification (Draft 9) compliant \\\nfiles; however, it can parse all UBJSON files produced by saveubjson.\n\n-------------------------------------------------------------------------------\n\nV. Contribution and feedback\n\nJSONLab is an open-source project. This means you can not only use and modify\nit as you wish, but also contribute your changes back to JSONLab so\nthat everyone else can enjoy the improvements. If you want to contribute,\nplease download the JSONLab source code from its repositories by using the\nfollowing command:\n\n git clone https://github.com/fangq/jsonlab.git jsonlab\n\nor browse the GitHub site at\n\n https://github.com/fangq/jsonlab\n\nAlternatively, if you prefer SVN, you can check out the latest code by using\n\n svn checkout svn://svn.code.sf.net/p/iso2mesh/code/trunk/jsonlab jsonlab\n\nYou can make changes to the files as needed. Once you are satisfied with your\nchanges and ready to share them with others, please cd to the root directory of \nJSONLab, and type\n\n git diff --no-prefix > yourname_featurename.patch\n\nor\n\n svn diff > yourname_featurename.patch\n\nYou can then email the .patch file to JSONLab's maintainer, Qianqian Fang, at\nthe email address shown at the beginning of this file. 
Qianqian will review \nthe changes and commit them to the repository if they are satisfactory.\n\nWe appreciate any suggestions and feedback from you. Please use the following\nmailing list to post any questions you may have regarding JSONLab:\n\nhttps://groups.google.com/forum/?hl=en#!forum/jsonlab-users\n\n(Subscription to the mailing list is needed in order to post messages).\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/demo_jsonlab_basic.m",
    "content": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%         Demonstration of Basic Utilities of JSONlab\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nrngstate = rand ('state');\nrandseed=hex2dec('623F9A9E');\nclear data2json json2data\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a simple scalar value \\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=pi\nsavejson('',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex number\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\nclear i;\ndata2json=1+2*i\nsavejson('',data2json)\njson2data=loadjson(ans) \n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=magic(6);\ndata2json=data2json(:,1:3)+data2json(:,4:6)*i\nsavejson('',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  MATLAB special constants\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=[NaN Inf -Inf]\nsavejson('specials',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a real sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sprand(10,10,0.1)\nsavejson('sparse',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex sparse 
matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-data2json*i\nsavejson('complex_sparse',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an all-zero sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse(2,3);\nsavejson('all_zero_sparse',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([]);\nsavejson('empty_sparse',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty 0-by-0 real matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=[];\nsavejson('empty_0by0_real',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty 0-by-3 real matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=zeros(0,3);\nsavejson('empty_0by3_real',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse real column vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([0,3,0,1,4]');\nsavejson('sparse_column_vector',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse complex column 
vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-1i*data2json;\nsavejson('complex_sparse_column_vector',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse real row vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([0,3,0,1,4]);\nsavejson('sparse_row_vector',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse complex row vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-1i*data2json;\nsavejson('complex_sparse_row_vector',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a structure\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=struct('name','Think Different','year',1997,'magic',magic(3),...\n                 'misfits',[Inf,NaN],'embedded',struct('left',true,'right',false))\nsavejson('astruct',data2json,struct('ParseLogical',1))\njson2data=loadjson(ans)\nclass(json2data.astruct.embedded.left)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a structure array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=struct('name','Nexus Prime','rank',9);\ndata2json(2)=struct('name','Sentinel Prime','rank',9);\ndata2json(3)=struct('name','Optimus Prime','rank',9);\nsavejson('Supreme Commander',data2json)\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a cell array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=cell(3,1);\ndata2json{1}=struct('buzz',1.1,'rex',1.2,'bo',1.3,'hamm',2.0,'slink',2.1,'potato',2.2,...\n              
'woody',3.0,'sarge',3.1,'etch',4.0,'lenny',5.0,'squeeze',6.0,'wheezy',7.0);\ndata2json{2}=struct('Ubuntu',['Kubuntu';'Xubuntu';'Lubuntu']);\ndata2json{3}=[10.04,10.10,11.04,11.10]\nsavejson('debian',data2json,struct('FloatFormat','%.2f'))\njson2data=loadjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  invalid field-name handling\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\njson2data=loadjson('{\"ValidName\":1, \"_InvalidName\":2, \":Field:\":3, \"项目\":\"绝密\"}')\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a 2D cell array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json={{1,{2,3}},{4,5},{6};{7},{8,9},{10}};\nsavejson('data2json',data2json)\njson2data=loadjson(ans)  % only savejson works for cell arrays, loadjson has issues\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a 2D struct array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=repmat(struct('idx',0,'data','structs'),[2,3])\nfor i=1:6\n    data2json(i).idx=i;\nend\nsavejson('data2json',data2json)\njson2data=loadjson(ans)\n\nrand ('state',rngstate);\n\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/demo_ubjson_basic.m",
    "content": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%         Demonstration of Basic Utilities of JSONlab\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nrngstate = rand ('state');\nrandseed=hex2dec('623F9A9E');\nclear data2json json2data\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a simple scalar value \\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=pi\nsaveubjson('',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex number\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\nclear i;\ndata2json=1+2*i\nsaveubjson('',data2json)\njson2data=loadubjson(ans) \n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=magic(6);\ndata2json=data2json(:,1:3)+data2json(:,4:6)*i\nsaveubjson('',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  MATLAB special constants\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=[NaN Inf -Inf]\nsaveubjson('specials',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a real sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sprand(10,10,0.1)\nsaveubjson('sparse',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a complex sparse 
matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-data2json*i\nsaveubjson('complex_sparse',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an all-zero sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse(2,3);\nsaveubjson('all_zero_sparse',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty sparse matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([]);\nsaveubjson('empty_sparse',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty 0-by-0 real matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=[];\nsaveubjson('empty_0by0_real',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  an empty 0-by-3 real matrix\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=zeros(0,3);\nsaveubjson('empty_0by3_real',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse real column vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([0,3,0,1,4]');\nsaveubjson('sparse_column_vector',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse complex column 
vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-1i*data2json;\nsaveubjson('complex_sparse_column_vector',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse real row vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=sparse([0,3,0,1,4]);\nsaveubjson('sparse_row_vector',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a sparse complex row vector\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=data2json-1i*data2json;\nsaveubjson('complex_sparse_row_vector',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a structure\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=struct('name','Think Different','year',1997,'magic',magic(3),...\n                 'misfits',[Inf,NaN],'embedded',struct('left',true,'right',false))\nsaveubjson('astruct',data2json,struct('ParseLogical',1))\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a structure array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=struct('name','Nexus Prime','rank',9);\ndata2json(2)=struct('name','Sentinel Prime','rank',9);\ndata2json(3)=struct('name','Optimus Prime','rank',9);\nsaveubjson('Supreme Commander',data2json)\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a cell array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=cell(3,1);\ndata2json{1}=struct('buzz',1.1,'rex',1.2,'bo',1.3,'hamm',2.0,'slink',2.1,'potato',2.2,...\n              
'woody',3.0,'sarge',3.1,'etch',4.0,'lenny',5.0,'squeeze',6.0,'wheezy',7.0);\ndata2json{2}=struct('Ubuntu',['Kubuntu';'Xubuntu';'Lubuntu']);\ndata2json{3}=[10.04,10.10,11.04,11.10]\nsaveubjson('debian',data2json,struct('FloatFormat','%.2f'))\njson2data=loadubjson(ans)\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  invalid field-name handling\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\njson2data=loadubjson(saveubjson('',loadjson('{\"ValidName\":1, \"_InvalidName\":2, \":Field:\":3, \"项目\":\"绝密\"}')))\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a 2D cell array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json={{1,{2,3}},{4,5},{6};{7},{8,9},{10}};\nsaveubjson('data2json',data2json)\njson2data=loadubjson(ans)  % only savejson works for cell arrays, loadjson has issues\n\nfprintf(1,'\\n%%=================================================\\n')\nfprintf(1,'%%  a 2D struct array\\n')\nfprintf(1,'%%=================================================\\n\\n')\n\ndata2json=repmat(struct('idx',0,'data','structs'),[2,3])\nfor i=1:6\n    data2json(i).idx=i;\nend\nsaveubjson('data2json',data2json)\njson2data=loadubjson(ans)\n\nrand ('state',rngstate);\n\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/example1.json",
    "content": " {\n     \"firstName\": \"John\",\n     \"lastName\": \"Smith\",\n     \"age\": 25,\n     \"address\":\n     {\n         \"streetAddress\": \"21 2nd Street\",\n         \"city\": \"New York\",\n         \"state\": \"NY\",\n         \"postalCode\": \"10021\"\n     },\n     \"phoneNumber\":\n     [\n         {\n           \"type\": \"home\",\n           \"number\": \"212 555-1234\"\n         },\n         {\n           \"type\": \"fax\",\n           \"number\": \"646 555-4567\"\n         }\n     ]\n }\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/example2.json",
    "content": "{\n    \"glossary\": {\n        \"title\": \"example glossary\",\n\t\t\"GlossDiv\": {\n            \"title\": \"S\",\n\t\t\t\"GlossList\": {\n                \"GlossEntry\": {\n                    \"ID\": \"SGML\",\n\t\t\t\t\t\"SortAs\": \"SGML\",\n\t\t\t\t\t\"GlossTerm\": \"Standard Generalized Markup Language\",\n\t\t\t\t\t\"Acronym\": \"SGML\",\n\t\t\t\t\t\"Abbrev\": \"ISO 8879:1986\",\n\t\t\t\t\t\"GlossDef\": {\n                        \"para\": \"A meta-markup language, used to create markup languages such as DocBook.\",\n\t\t\t\t\t\t\"GlossSeeAlso\": [\"GML\", \"XML\"]\n                    },\n\t\t\t\t\t\"GlossSee\": \"markup\"\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/example3.json",
    "content": "{\"menu\": {\n  \"id\": \"file\",\n  \"value\": \"_&File\",\n  \"popup\": {\n    \"menuitem\": [\n      {\"value\": \"_&New\", \"onclick\": \"CreateNewDoc(\\\"'\\\\\\\"Untitled\\\\\\\"'\\\")\"},\n      {\"value\": \"_&Open\", \"onclick\": \"OpenDoc()\"},\n      {\"value\": \"_&Close\", \"onclick\": \"CloseDoc()\"}\n    ]\n  }\n}}\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/example4.json",
    "content": "[\n  {\n      \"sample\" : {\n          \"rho\" : 1\n      }\n  },\n  {\n      \"sample\" : {\n          \"rho\" : 2\n      }\n  },\n  [\n    {\n        \"_ArrayType_\" : \"double\",\n        \"_ArraySize_\" : [1,2],\n        \"_ArrayData_\" : [1,0]\n    },\n    {\n        \"_ArrayType_\" : \"double\",\n        \"_ArraySize_\" : [1,2],\n        \"_ArrayData_\" : [1,1]\n    },\n    {\n        \"_ArrayType_\" : \"double\",\n        \"_ArraySize_\" : [1,2],\n        \"_ArrayData_\" : [1,2]\n    }\n  ],\n  [\n     \"Paper\",\n     \"Scissors\",\n     \"Stone\"\n  ],\n  [\"a\", \"b\\\\\", \"c\\\"\",\"d\\\\\\\"\",\"e\\\"[\",\"f\\\\\\\"[\",\"g[\\\\\",\"h[\\\\\\\"\"]\n]\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/jsonlab_basictest.matlab",
    "content": "\n                            < M A T L A B (R) >\n                  Copyright 1984-2010 The MathWorks, Inc.\n                Version 7.11.0.584 (R2010b) 64-bit (glnxa64)\n                              August 16, 2010\n\n \n  To get started, type one of these: helpwin, helpdesk, or demo.\n  For product information, visit www.mathworks.com.\n \n>> >> >> >> >> >> >> >> >> \n%=================================================\n>> %  a simple scalar value \n>> %=================================================\n\n>> >> \ndata2json =\n\n    3.1416\n\n>> \nans =\n\n[3.141592654]\n\n\n>> \njson2data =\n\n    3.1416\n\n>> >> \n%=================================================\n>> %  a complex number\n>> %=================================================\n\n>> >> >> \ndata2json =\n\n   1.0000 + 2.0000i\n\n>> \nans =\n\n{\n\t\"_ArrayType_\": \"double\",\n\t\"_ArraySize_\": [1,1],\n\t\"_ArrayIsComplex_\": 1,\n\t\"_ArrayData_\": [1,2]\n}\n\n\n>> \njson2data =\n\n   1.0000 + 2.0000i\n\n>> >> \n%=================================================\n>> %  a complex matrix\n>> %=================================================\n\n>> >> >> \ndata2json =\n\n  35.0000 +26.0000i   1.0000 +19.0000i   6.0000 +24.0000i\n   3.0000 +21.0000i  32.0000 +23.0000i   7.0000 +25.0000i\n  31.0000 +22.0000i   9.0000 +27.0000i   2.0000 +20.0000i\n   8.0000 +17.0000i  28.0000 +10.0000i  33.0000 +15.0000i\n  30.0000 +12.0000i   5.0000 +14.0000i  34.0000 +16.0000i\n   4.0000 +13.0000i  36.0000 +18.0000i  29.0000 +11.0000i\n\n>> \nans =\n\n{\n\t\"_ArrayType_\": \"double\",\n\t\"_ArraySize_\": [6,3],\n\t\"_ArrayIsComplex_\": 1,\n\t\"_ArrayData_\": [\n\t\t[35,26],\n\t\t[3,21],\n\t\t[31,22],\n\t\t[8,17],\n\t\t[30,12],\n\t\t[4,13],\n\t\t[1,19],\n\t\t[32,23],\n\t\t[9,27],\n\t\t[28,10],\n\t\t[5,14],\n\t\t[36,18],\n\t\t[6,24],\n\t\t[7,25],\n\t\t[2,20],\n\t\t[33,15],\n\t\t[34,16],\n\t\t[29,11]\n\t]\n}\n\n\n>> \njson2data =\n\n  35.0000 +26.0000i   1.0000 +19.0000i   6.0000 +24.0000i\n   3.0000 
+21.0000i  32.0000 +23.0000i   7.0000 +25.0000i\n  31.0000 +22.0000i   9.0000 +27.0000i   2.0000 +20.0000i\n   8.0000 +17.0000i  28.0000 +10.0000i  33.0000 +15.0000i\n  30.0000 +12.0000i   5.0000 +14.0000i  34.0000 +16.0000i\n   4.0000 +13.0000i  36.0000 +18.0000i  29.0000 +11.0000i\n\n>> >> \n%=================================================\n>> %  MATLAB special constants\n>> %=================================================\n\n>> >> \ndata2json =\n\n   NaN   Inf  -Inf\n\n>> \nans =\n\n{\n\t\"specials\": [\"_NaN_\",\"_Inf_\",\"-_Inf_\"]\n}\n\n\n>> \njson2data = \n\n    specials: [NaN Inf -Inf]\n\n>> >> \n%=================================================\n>> %  a real sparse matrix\n>> %=================================================\n\n>> >> \ndata2json =\n\n   (1,2)       0.6557\n   (9,2)       0.7577\n   (3,5)       0.8491\n  (10,5)       0.7431\n  (10,8)       0.3922\n   (7,9)       0.6787\n   (2,10)      0.0357\n   (6,10)      0.9340\n  (10,10)      0.6555\n\n>> \nans =\n\n{\n\t\"sparse\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [10,10],\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[1,2,0.6557406992],\n\t\t\t[9,2,0.7577401306],\n\t\t\t[3,5,0.8491293059],\n\t\t\t[10,5,0.7431324681],\n\t\t\t[10,8,0.3922270195],\n\t\t\t[7,9,0.6787351549],\n\t\t\t[2,10,0.03571167857],\n\t\t\t[6,10,0.9339932478],\n\t\t\t[10,10,0.6554778902]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    sparse: [10x10 double]\n\n>> >> \n%=================================================\n>> %  a complex sparse matrix\n>> %=================================================\n\n>> >> \ndata2json =\n\n   (1,2)      0.6557 - 0.6557i\n   (9,2)      0.7577 - 0.7577i\n   (3,5)      0.8491 - 0.8491i\n  (10,5)      0.7431 - 0.7431i\n  (10,8)      0.3922 - 0.3922i\n   (7,9)      0.6787 - 0.6787i\n   (2,10)     0.0357 - 0.0357i\n   (6,10)     0.9340 - 0.9340i\n  (10,10)     0.6555 - 0.6555i\n\n>> \nans =\n\n{\n\t\"complex_sparse\": {\n\t\t\"_ArrayType_\": 
\"double\",\n\t\t\"_ArraySize_\": [10,10],\n\t\t\"_ArrayIsComplex_\": 1,\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[1,2,0.6557406992,-0.6557406992],\n\t\t\t[9,2,0.7577401306,-0.7577401306],\n\t\t\t[3,5,0.8491293059,-0.8491293059],\n\t\t\t[10,5,0.7431324681,-0.7431324681],\n\t\t\t[10,8,0.3922270195,-0.3922270195],\n\t\t\t[7,9,0.6787351549,-0.6787351549],\n\t\t\t[2,10,0.03571167857,-0.03571167857],\n\t\t\t[6,10,0.9339932478,-0.9339932478],\n\t\t\t[10,10,0.6554778902,-0.6554778902]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    complex_sparse: [10x10 double]\n\n>> >> \n%=================================================\n>> %  an all-zero sparse matrix\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"all_zero_sparse\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [2,3],\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": null\n\t}\n}\n\n\n>> \njson2data = \n\n    all_zero_sparse: [2x3 double]\n\n>> >> \n%=================================================\n>> %  an empty sparse matrix\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"empty_sparse\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [0,0],\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": null\n\t}\n}\n\n\n>> \njson2data = \n\n    empty_sparse: []\n\n>> >> \n%=================================================\n>> %  an empty 0-by-0 real matrix\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"empty_0by0_real\": null\n}\n\n\n>> \njson2data = \n\n    empty_0by0_real: []\n\n>> >> \n%=================================================\n>> %  an empty 0-by-3 real matrix\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"empty_0by3_real\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [0,3],\n\t\t\"_ArrayData_\": null\n\t}\n}\n\n\n>> \njson2data = \n\n    empty_0by3_real: [0x3 double]\n\n>> >> 
\n%=================================================\n>> %  a sparse real column vector\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"sparse_column_vector\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [5,1],\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[2,3],\n\t\t\t[4,1],\n\t\t\t[5,4]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    sparse_column_vector: [5x1 double]\n\n>> >> \n%=================================================\n>> %  a sparse complex column vector\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"complex_sparse_column_vector\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [5,1],\n\t\t\"_ArrayIsComplex_\": 1,\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[2,3,-3],\n\t\t\t[4,1,-1],\n\t\t\t[5,4,-4]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    complex_sparse_column_vector: [5x1 double]\n\n>> >> \n%=================================================\n>> %  a sparse real row vector\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"sparse_row_vector\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [1,5],\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[2,3],\n\t\t\t[4,1],\n\t\t\t[5,4]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    sparse_row_vector: [0 3 0 1 4]\n\n>> >> \n%=================================================\n>> %  a sparse complex row vector\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"complex_sparse_row_vector\": {\n\t\t\"_ArrayType_\": \"double\",\n\t\t\"_ArraySize_\": [1,5],\n\t\t\"_ArrayIsComplex_\": 1,\n\t\t\"_ArrayIsSparse_\": 1,\n\t\t\"_ArrayData_\": [\n\t\t\t[2,3,-3],\n\t\t\t[4,1,-1],\n\t\t\t[5,4,-4]\n\t\t]\n\t}\n}\n\n\n>> \njson2data = \n\n    complex_sparse_row_vector: [1x5 double]\n\n>> >> \n%=================================================\n>> %  a structure\n>> 
%=================================================\n\n>> >> \ndata2json = \n\n        name: 'Think Different'\n        year: 1997\n       magic: [3x3 double]\n     misfits: [Inf NaN]\n    embedded: [1x1 struct]\n\n>> \nans =\n\n{\n\t\"astruct\": {\n\t\t\"name\": \"Think Different\",\n\t\t\"year\": 1997,\n\t\t\"magic\": [\n\t\t\t[8,1,6],\n\t\t\t[3,5,7],\n\t\t\t[4,9,2]\n\t\t],\n\t\t\"misfits\": [\"_Inf_\",\"_NaN_\"],\n\t\t\"embedded\": {\n\t\t\t\"left\": true,\n\t\t\t\"right\": false\n\t\t}\n\t}\n}\n\n\n>> \njson2data = \n\n    astruct: [1x1 struct]\n\n>> \nans =\n\nlogical\n\n>> >> \n%=================================================\n>> %  a structure array\n>> %=================================================\n\n>> >> >> >> >> \nans =\n\n{\n\t\"Supreme Commander\": [\n\t\t{\n\t\t\t\"name\": \"Nexus Prime\",\n\t\t\t\"rank\": 9\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Sentinel Prime\",\n\t\t\t\"rank\": 9\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Optimus Prime\",\n\t\t\t\"rank\": 9\n\t\t}\n\t]\n}\n\n\n>> \njson2data = \n\n    Supreme_0x20_Commander: {[1x1 struct]  [1x1 struct]  [1x1 struct]}\n\n>> >> \n%=================================================\n>> %  a cell array\n>> %=================================================\n\n>> >> >> >> >> \ndata2json = \n\n    [1x1 struct]\n    [1x1 struct]\n    [1x4 double]\n\n>> \nans =\n\n{\n\t\"debian\": [\n\t\t[\n\t\t\t{\n\t\t\t\t\"buzz\": 1.10,\n\t\t\t\t\"rex\": 1.20,\n\t\t\t\t\"bo\": 1.30,\n\t\t\t\t\"hamm\": 2.00,\n\t\t\t\t\"slink\": 2.10,\n\t\t\t\t\"potato\": 2.20,\n\t\t\t\t\"woody\": 3.00,\n\t\t\t\t\"sarge\": 3.10,\n\t\t\t\t\"etch\": 4.00,\n\t\t\t\t\"lenny\": 5.00,\n\t\t\t\t\"squeeze\": 6.00,\n\t\t\t\t\"wheezy\": 7.00\n\t\t\t}\n\t\t],\n\t\t[\n\t\t\t{\n\t\t\t\t\"Ubuntu\": [\n\t\t\t\t\t\"Kubuntu\",\n\t\t\t\t\t\"Xubuntu\",\n\t\t\t\t\t\"Lubuntu\"\n\t\t\t\t]\n\t\t\t}\n\t\t],\n\t\t[\n\t\t\t[10.04,10.10,11.04,11.10]\n\t\t]\n\t]\n}\n\n\n>> \njson2data = \n\n    debian: {{1x1 cell}  {1x1 cell}  [10.0400 10.1000 11.0400 11.1000]}\n\n>> >> 
\n%=================================================\n>> %  invalid field-name handling\n>> %=================================================\n\n>> >> \njson2data = \n\n               ValidName: 1\n       x0x5F_InvalidName: 2\n       x0x3A_Field_0x3A_: 3\n    x0xE9A1B9__0xE79BAE_: '绝密'\n\n>> >> \n%=================================================\n>> %  a 2D cell array\n>> %=================================================\n\n>> >> >> \nans =\n\n{\n\t\"data2json\": [\n\t\t[\n\t\t\t[\n\t\t\t\t1,\n\t\t\t\t[\n\t\t\t\t\t2,\n\t\t\t\t\t3\n\t\t\t\t]\n\t\t\t],\n\t\t\t[\n\t\t\t\t4,\n\t\t\t\t5\n\t\t\t],\n\t\t\t[\n\t\t\t\t6\n\t\t\t]\n\t\t],\n\t\t[\n\t\t\t[\n\t\t\t\t7\n\t\t\t],\n\t\t\t[\n\t\t\t\t8,\n\t\t\t\t9\n\t\t\t],\n\t\t\t[\n\t\t\t\t10\n\t\t\t]\n\t\t]\n\t]\n}\n\n\n>> \njson2data = \n\n    data2json: {{1x3 cell}  {1x3 cell}}\n\n>> >> \n%=================================================\n>> %  a 2D struct array\n>> %=================================================\n\n>> >> \ndata2json = \n\n2x3 struct array with fields:\n    idx\n    data\n\n>> >> \nans =\n\n{\n\t\"data2json\": [\n\t\t[\n\t\t\t{\n\t\t\t\t\"idx\": 1,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"idx\": 2,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t}\n\t\t],\n\t\t[\n\t\t\t{\n\t\t\t\t\"idx\": 3,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"idx\": 4,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t}\n\t\t],\n\t\t[\n\t\t\t{\n\t\t\t\t\"idx\": 5,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"idx\": 6,\n\t\t\t\t\"data\": \"structs\"\n\t\t\t}\n\t\t]\n\t]\n}\n\n\n>> \njson2data = \n\n    data2json: {{1x2 cell}  {1x2 cell}  {1x2 cell}}\n\n>> >> >> >> "
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/jsonlab_selftest.m",
    "content": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%         Regression Test Unit of loadjson and savejson\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\nfor i=1:4\n    fname=sprintf('example%d.json',i);\n    if(exist(fname,'file')==0) break; end\n    fprintf(1,'===============================================\\n>> %s\\n',fname);\n    json=savejson('data',loadjson(fname));\n    fprintf(1,'%s\\n',json);\n    fprintf(1,'%s\\n',savejson('data',loadjson(fname),'Compact',1));\n    data=loadjson(json);\n    savejson('data',data,'selftest.json');\n    data=loadjson('selftest.json');\nend\n\nfor i=1:4\n    fname=sprintf('example%d.json',i);\n    if(exist(fname,'file')==0) break; end\n    fprintf(1,'===============================================\\n>> %s\\n',fname);\n    json=saveubjson('data',loadjson(fname));\n    fprintf(1,'%s\\n',json);\n    data=loadubjson(json);\n    savejson('',data);\n    saveubjson('data',data,'selftest.ubj');\n    data=loadubjson('selftest.ubj');\nend\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/jsonlab_selftest.matlab",
    "content": "\n                            < M A T L A B (R) >\n                  Copyright 1984-2010 The MathWorks, Inc.\n                Version 7.11.0.584 (R2010b) 64-bit (glnxa64)\n                              August 16, 2010\n\n \n  To get started, type one of these: helpwin, helpdesk, or demo.\n  For product information, visit www.mathworks.com.\n \n>> >> >> >> >> ===============================================\n>> example1.json\n{\n\t\"data\": {\n\t\t\"firstName\": \"John\",\n\t\t\"lastName\": \"Smith\",\n\t\t\"age\": 25,\n\t\t\"address\": {\n\t\t\t\"streetAddress\": \"21 2nd Street\",\n\t\t\t\"city\": \"New York\",\n\t\t\t\"state\": \"NY\",\n\t\t\t\"postalCode\": \"10021\"\n\t\t},\n\t\t\"phoneNumber\": [\n\t\t\t{\n\t\t\t\t\"type\": \"home\",\n\t\t\t\t\"number\": \"212 555-1234\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"type\": \"fax\",\n\t\t\t\t\"number\": \"646 555-4567\"\n\t\t\t}\n\t\t]\n\t}\n}\n\n{\"data\": {\"firstName\": \"John\",\"lastName\": \"Smith\",\"age\": 25,\"address\": {\"streetAddress\": \"21 2nd Street\",\"city\": \"New York\",\"state\": \"NY\",\"postalCode\": \"10021\"},\"phoneNumber\": [{\"type\": \"home\",\"number\": \"212 555-1234\"},{\"type\": \"fax\",\"number\": \"646 555-4567\"}]}}\n\n===============================================\n>> example2.json\n{\n\t\"data\": {\n\t\t\"glossary\": {\n\t\t\t\"title\": \"example glossary\",\n\t\t\t\"GlossDiv\": {\n\t\t\t\t\"title\": \"S\",\n\t\t\t\t\"GlossList\": {\n\t\t\t\t\t\"GlossEntry\": {\n\t\t\t\t\t\t\"ID\": \"SGML\",\n\t\t\t\t\t\t\"SortAs\": \"SGML\",\n\t\t\t\t\t\t\"GlossTerm\": \"Standard Generalized Markup Language\",\n\t\t\t\t\t\t\"Acronym\": \"SGML\",\n\t\t\t\t\t\t\"Abbrev\": \"ISO 8879:1986\",\n\t\t\t\t\t\t\"GlossDef\": {\n\t\t\t\t\t\t\t\"para\": \"A meta-markup language, used to create markup languages such as DocBook.\",\n\t\t\t\t\t\t\t\"GlossSeeAlso\": [\n\t\t\t\t\t\t\t\t\"GML\",\n\t\t\t\t\t\t\t\t\"XML\"\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"GlossSee\": 
\"markup\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n{\"data\": {\"glossary\": {\"title\": \"example glossary\",\"GlossDiv\": {\"title\": \"S\",\"GlossList\": {\"GlossEntry\": {\"ID\": \"SGML\",\"SortAs\": \"SGML\",\"GlossTerm\": \"Standard Generalized Markup Language\",\"Acronym\": \"SGML\",\"Abbrev\": \"ISO 8879:1986\",\"GlossDef\": {\"para\": \"A meta-markup language, used to create markup languages such as DocBook.\",\"GlossSeeAlso\": [\"GML\",\"XML\"]},\"GlossSee\": \"markup\"}}}}}}\n\n===============================================\n>> example3.json\n{\n\t\"data\": {\n\t\t\"menu\": {\n\t\t\t\"id\": \"file\",\n\t\t\t\"value\": \"_&File\",\n\t\t\t\"popup\": {\n\t\t\t\t\"menuitem\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"value\": \"_&New\",\n\t\t\t\t\t\t\"onclick\": \"CreateNewDoc(\\\"'\\\\\\\"Untitled\\\\\\\"'\\\")\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"value\": \"_&Open\",\n\t\t\t\t\t\t\"onclick\": \"OpenDoc()\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"value\": \"_&Close\",\n\t\t\t\t\t\t\"onclick\": \"CloseDoc()\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n}\n\n{\"data\": {\"menu\": {\"id\": \"file\",\"value\": \"_&File\",\"popup\": {\"menuitem\": [{\"value\": \"_&New\",\"onclick\": \"CreateNewDoc(\\\"'\\\\\\\"Untitled\\\\\\\"'\\\")\"},{\"value\": \"_&Open\",\"onclick\": \"OpenDoc()\"},{\"value\": \"_&Close\",\"onclick\": \"CloseDoc()\"}]}}}}\n\n===============================================\n>> example4.json\n{\n\t\"data\": [\n\t\t{\n\t\t\t\"sample\": {\n\t\t\t\t\"rho\": 1\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"sample\": {\n\t\t\t\t\"rho\": 2\n\t\t\t}\n\t\t},\n\t\t[\n\t\t\t[1,0],\n\t\t\t[1,1],\n\t\t\t[1,2]\n\t\t],\n\t\t[\n\t\t\t\"Paper\",\n\t\t\t\"Scissors\",\n\t\t\t\"Stone\"\n\t\t],\n\t\t[\n\t\t\t\"a\",\n\t\t\t\"b\\\\\",\n\t\t\t\"c\\\"\",\n\t\t\t\"d\\\\\\\"\",\n\t\t\t\"e\\\"[\",\n\t\t\t\"f\\\\\\\"[\",\n\t\t\t\"g[\\\\\",\n\t\t\t\"h[\\\\\\\"\"\n\t\t]\n\t]\n}\n\n{\"data\": [{\"sample\": {\"rho\": 1}},{\"sample\": {\"rho\": 
2}},[[1,0],[1,1],[1,2]],[\"Paper\",\"Scissors\",\"Stone\"],[\"a\",\"b\\\\\",\"c\\\"\",\"d\\\\\\\"\",\"e\\\"[\",\"f\\\\\\\"[\",\"g[\\\\\",\"h[\\\\\\\"\"]]}\n\n>> >> ===============================================\n>> example1.json\n{U\u0004data{U\tfirstNameSU\u0004JohnU\blastNameSU\u0005SmithU\u0003agei\u0019U\u0007address{U\rstreetAddressSU\r21 2nd StreetU\u0004citySU\bNew YorkU\u0005stateSU\u0002NYU\npostalCodeSU\u000510021}U\u000bphoneNumber[{U\u0004typeSU\u0004homeU\u0006numberSU\f212 555-1234}{U\u0004typeSU\u0003faxU\u0006numberSU\f646 555-4567}]}}\n===============================================\n>> example2.json\n{U\u0004data{U\bglossary{U\u0005titleSU\u0010example glossaryU\bGlossDiv{U\u0005titleCSU\tGlossList{U\nGlossEntry{U\u0002IDSU\u0004SGMLU\u0006SortAsSU\u0004SGMLU\tGlossTermSU$Standard Generalized Markup LanguageU\u0007AcronymSU\u0004SGMLU\u0006AbbrevSU\rISO 8879:1986U\bGlossDef{U\u0004paraSUHA meta-markup language, used to create markup languages such as DocBook.U\fGlossSeeAlso[SU\u0003GMLSU\u0003XML]}U\bGlossSeeSU\u0006markup}}}}}}\n===============================================\n>> example3.json\n{U\u0004data{U\u0004menu{U\u0002idSU\u0004fileU\u0005valueSU\u0006_&FileU\u0005popup{U\bmenuitem[{U\u0005valueSU\u0005_&NewU\u0007onclickSU\u001eCreateNewDoc(\"'\\\"Untitled\\\"'\")}{U\u0005valueSU\u0006_&OpenU\u0007onclickSU\tOpenDoc()}{U\u0005valueSU\u0007_&CloseU\u0007onclickSU\nCloseDoc()}]}}}}\n===============================================\n>> example4.json\n{U\u0004data[{U\u0006sample{U\u0003rhoi\u0001}}{U\u0006sample{U\u0003rhoi\u0002}}[[$i#U\u0002\u0001[$i#U\u0002\u0001\u0001[$i#U\u0002\u0001\u0002][SU\u0005PaperSU\bScissorsSU\u0005Stone][CaSU\u0002b\\SU\u0002c\"SU\u0003d\\\"SU\u0003e\"[SU\u0004f\\\"[SU\u0003g[\\SU\u0004h[\\\"]]}\n>> "
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/examples/jsonlab_speedtest.m",
    "content": "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%         Benchmarking processing speed of savejson and loadjson\n%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\ndatalen=[1e3 1e4 1e5 1e6];\nlen=length(datalen);\ntsave=zeros(len,1);\ntload=zeros(len,1);\nfor i=1:len\n    tic;\n    json=savejson('data',struct('d1',rand(datalen(i),3),'d2',rand(datalen(i),3)>0.5));\n    tsave(i)=toc;\n    data=loadjson(json);\n    tload(i)=toc-tsave(i);\n    fprintf(1,'matrix size: %d\\n',datalen(i));\nend\n\nloglog(datalen,tsave,'o-',datalen,tload,'r*-');\nlegend('savejson runtime (s)','loadjson runtime (s)');\nxlabel('array size');\nylabel('running time (s)');\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/jsonopt.m",
    "content": "function val=jsonopt(key,default,varargin)\n%\n% val=jsonopt(key,default,optstruct)\n%\n% setting options based on a struct. The struct can be produced\n% by varargin2struct from a list of 'param','value' pairs\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n%\n% $Id: loadjson.m 371 2012-06-20 12:43:06Z fangq $\n%\n% input:\n%      key: a string with which one look up a value from a struct\n%      default: if the key does not exist, return default\n%      optstruct: a struct where each sub-field is a key \n%\n% output:\n%      val: if key exists, val=optstruct.key; otherwise val=default\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details\n%\n% -- this function is part of jsonlab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n% \n\nval=default;\nif(nargin<=2) return; end\nopt=varargin{1};\nif(isstruct(opt))\n    if(isfield(opt,key))\n       val=getfield(opt,key);\n    elseif(isfield(opt,lower(key)))\n       val=getfield(opt,lower(key));\n    end\nend\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/loadjson.m",
    "content": "function data = loadjson(fname,varargin)\n%\n% data=loadjson(fname,opt)\n%    or\n% data=loadjson(fname,'param1',value1,'param2',value2,...)\n%\n% parse a JSON (JavaScript Object Notation) file or string\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% created on 2011/09/09, including previous works from \n%\n%         Nedialko Krouchev: http://www.mathworks.com/matlabcentral/fileexchange/25713\n%            created on 2009/11/02\n%         François Glineur: http://www.mathworks.com/matlabcentral/fileexchange/23393\n%            created on  2009/03/22\n%         Joel Feenstra:\n%         http://www.mathworks.com/matlabcentral/fileexchange/20565\n%            created on 2008/07/03\n%\n% $Id$\n%\n% input:\n%      fname: input file name, if fname contains \"{}\" or \"[]\", fname\n%             will be interpreted as a JSON string\n%      opt: a struct to store parsing options, opt can be replaced by \n%           a list of ('param',value) pairs - the param string is equivallent\n%           to a field in opt. opt can have the following \n%           fields (first in [.|.] is the default)\n%\n%           opt.SimplifyCell [0|1]: if set to 1, loadjson will call cell2mat\n%                         for each element of the JSON data, and group \n%                         arrays based on the cell2mat rules.\n%           opt.FastArrayParser [1|0 or integer]: if set to 1, use a\n%                         speed-optimized array parser when loading an \n%                         array object. The fast array parser may \n%                         collapse block arrays into a single large\n%                         array similar to rules defined in cell2mat; 0 to \n%                         use a legacy parser; if set to a larger-than-1\n%                         value, this option will specify the minimum\n%                         dimension to enable the fast array parser. 
For\n%                         example, if the input is a 3D array, setting\n%                         FastArrayParser to 1 will return a 3D array;\n%                         setting to 2 will return a cell array of 2D\n%                         arrays; setting to 3 will return a 2D cell\n%                         array of 1D vectors; setting to 4 will return a\n%                         3D cell array.\n%           opt.ShowProgress [0|1]: if set to 1, loadjson displays a progress bar.\n%\n% output:\n%      data: a cell array, where {...} blocks are converted into cell arrays,\n%           and [...] are converted to arrays\n%\n% examples:\n%      dat=loadjson('{\"obj\":{\"string\":\"value\",\"array\":[1,2,3]}}')\n%      dat=loadjson(['examples' filesep 'example1.json'])\n%      dat=loadjson(['examples' filesep 'example1.json'],'SimplifyCell',1)\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details \n%\n% -- this function is part of JSONLab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nglobal pos inStr len  esc index_esc len_esc isoct arraytoken\n\nif(regexp(fname,'^\\s*(?:\\[.+\\])|(?:\\{.+\\})\\s*$','once'))\n   string=fname;\nelseif(exist(fname,'file'))\n   try\n       string = fileread(fname);\n   catch\n       try\n           string = urlread(['file://',fname]);\n       catch\n           string = urlread(['file://',fullfile(pwd,fname)]);\n       end\n   end\nelse\n   error('input file does not exist');\nend\n\npos = 1; len = length(string); inStr = string;\nisoct=exist('OCTAVE_VERSION','builtin');\narraytoken=find(inStr=='[' | inStr==']' | inStr=='\"');\njstr=regexprep(inStr,'\\\\\\\\','  ');\nescquote=regexp(jstr,'\\\\\"');\narraytoken=sort([arraytoken escquote]);\n\n% String delimiters and escape chars identified to improve speed:\nesc = find(inStr=='\"' | inStr=='\\' ); % comparable to: regexp(inStr, '[\"\\\\]');\nindex_esc = 1; len_esc = 
length(esc);\n\nopt=varargin2struct(varargin{:});\n\nif(jsonopt('ShowProgress',0,opt)==1)\n    opt.progressbar_=waitbar(0,'loading ...');\nend\njsoncount=1;\nwhile pos <= len\n    switch(next_char)\n        case '{'\n            data{jsoncount} = parse_object(opt);\n        case '['\n            data{jsoncount} = parse_array(opt);\n        otherwise\n            error_pos('Outer level structure must be an object or an array');\n    end\n    jsoncount=jsoncount+1;\nend % while\n\njsoncount=length(data);\nif(jsoncount==1 && iscell(data))\n    data=data{1};\nend\n\nif(isfield(opt,'progressbar_'))\n    close(opt.progressbar_);\nend\n\n%%-------------------------------------------------------------------------\nfunction object = parse_object(varargin)\n    parse_char('{');\n    object = [];\n    if next_char ~= '}'\n        while 1\n            str = parseStr(varargin{:});\n            if isempty(str)\n                error_pos('Name of value at position %d cannot be empty');\n            end\n            parse_char(':');\n            val = parse_value(varargin{:});\n            object.(valid_field(str))=val;\n            if next_char == '}'\n                break;\n            end\n            parse_char(',');\n        end\n    end\n    parse_char('}');\n    if(isstruct(object))\n        object=struct2jdata(object);\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction object = parse_array(varargin) % JSON array is written in row-major order\nglobal pos inStr isoct\n    parse_char('[');\n    object = cell(0, 1);\n    dim2=[];\n    arraydepth=jsonopt('JSONLAB_ArrayDepth_',1,varargin{:});\n    pbar=-1;\n    if(isfield(varargin{1},'progressbar_'))\n        pbar=varargin{1}.progressbar_;\n    end\n\n    if next_char ~= ']'\n\tif(jsonopt('FastArrayParser',1,varargin{:})>=1 && arraydepth>=jsonopt('FastArrayParser',1,varargin{:}))\n            [endpos, e1l, e1r]=matching_bracket(inStr,pos);\n            arraystr=['[' 
inStr(pos:endpos)];\n            arraystr=regexprep(arraystr,'\"_NaN_\"','NaN');\n            arraystr=regexprep(arraystr,'\"([-+]*)_Inf_\"','$1Inf');\n            arraystr(arraystr==sprintf('\\n'))=[];\n            arraystr(arraystr==sprintf('\\r'))=[];\n            %arraystr=regexprep(arraystr,'\\s*,',','); % this is slow,sometimes needed\n            if(~isempty(e1l) && ~isempty(e1r)) % the array is in 2D or higher D\n        \tastr=inStr((e1l+1):(e1r-1));\n        \tastr=regexprep(astr,'\"_NaN_\"','NaN');\n        \tastr=regexprep(astr,'\"([-+]*)_Inf_\"','$1Inf');\n        \tastr(astr==sprintf('\\n'))=[];\n        \tastr(astr==sprintf('\\r'))=[];\n        \tastr(astr==' ')='';\n        \tif(isempty(find(astr=='[', 1))) % array is 2D\n                    dim2=length(sscanf(astr,'%f,',[1 inf]));\n        \tend\n            else % array is 1D\n        \tastr=arraystr(2:end-1);\n        \tastr(astr==' ')='';\n        \t[obj, count, errmsg, nextidx]=sscanf(astr,'%f,',[1,inf]);\n        \tif(nextidx>=length(astr)-1)\n                    object=obj;\n                    pos=endpos;\n                    parse_char(']');\n                    return;\n        \tend\n            end\n            if(~isempty(dim2))\n        \tastr=arraystr;\n        \tastr(astr=='[')='';\n        \tastr(astr==']')='';\n        \tastr(astr==' ')='';\n        \t[obj, count, errmsg, nextidx]=sscanf(astr,'%f,',inf);\n        \tif(nextidx>=length(astr)-1)\n                    object=reshape(obj,dim2,numel(obj)/dim2)';\n                    pos=endpos;\n                    parse_char(']');\n                    if(pbar>0)\n                        waitbar(pos/length(inStr),pbar,'loading ...');\n                    end\n                    return;\n        \tend\n            end\n            arraystr=regexprep(arraystr,'\\]\\s*,','];');\n\telse\n            arraystr='[';\n\tend\n        try\n           if(isoct && regexp(arraystr,'\"','once'))\n                error('Octave eval can produce empty 
cells for JSON-like input');\n           end\n           object=eval(arraystr);\n           pos=endpos;\n        catch\n         while 1\n            newopt=varargin2struct(varargin{:},'JSONLAB_ArrayDepth_',arraydepth+1);\n            val = parse_value(newopt);\n            object{end+1} = val;\n            if next_char == ']'\n                break;\n            end\n            parse_char(',');\n         end\n        end\n    end\n    if(jsonopt('SimplifyCell',0,varargin{:})==1)\n      try\n        oldobj=object;\n        object=cell2mat(object')';\n        if(iscell(oldobj) && isstruct(object) && numel(object)>1 && jsonopt('SimplifyCellArray',1,varargin{:})==0)\n            object=oldobj;\n        elseif(size(object,1)>1 && ismatrix(object))\n            object=object';\n        end\n      catch\n      end\n    end\n    parse_char(']');\n    \n    if(pbar>0)\n        waitbar(pos/length(inStr),pbar,'loading ...');\n    end\n%%-------------------------------------------------------------------------\n\nfunction parse_char(c)\n    global pos inStr len\n    pos=skip_whitespace(pos,inStr,len);\n    if pos > len || inStr(pos) ~= c\n        error_pos(sprintf('Expected %c at position %%d', c));\n    else\n        pos = pos + 1;\n        pos=skip_whitespace(pos,inStr,len);\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction c = next_char\n    global pos inStr len\n    pos=skip_whitespace(pos,inStr,len);\n    if pos > len\n        c = [];\n    else\n        c = inStr(pos);\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction newpos=skip_whitespace(pos,inStr,len)\n    newpos=pos;\n    while newpos <= len && isspace(inStr(newpos))\n        newpos = newpos + 1;\n    end\n\n%%-------------------------------------------------------------------------\nfunction str = parseStr(varargin)\n    global pos inStr len  esc index_esc len_esc\n % len, ns = length(inStr), keyboard\n    if 
inStr(pos) ~= '\"'\n        error_pos('String starting with \" expected at position %d');\n    else\n        pos = pos + 1;\n    end\n    str = '';\n    while pos <= len\n        while index_esc <= len_esc && esc(index_esc) < pos\n            index_esc = index_esc + 1;\n        end\n        if index_esc > len_esc\n            str = [str inStr(pos:len)];\n            pos = len + 1;\n            break;\n        else\n            str = [str inStr(pos:esc(index_esc)-1)];\n            pos = esc(index_esc);\n        end\n        nstr = length(str);\n        switch inStr(pos)\n            case '\"'\n                pos = pos + 1;\n                if(~isempty(str))\n                    if(strcmp(str,'_Inf_'))\n                        str=Inf;\n                    elseif(strcmp(str,'-_Inf_'))\n                        str=-Inf;\n                    elseif(strcmp(str,'_NaN_'))\n                        str=NaN;\n                    end\n                end\n                return;\n            case '\\'\n                if pos+1 > len\n                    error_pos('End of file reached right after escape character');\n                end\n                pos = pos + 1;\n                switch inStr(pos)\n                    case {'\"' '\\' '/'}\n                        str(nstr+1) = inStr(pos);\n                        pos = pos + 1;\n                    case {'b' 'f' 'n' 'r' 't'}\n                        str(nstr+1) = sprintf(['\\' inStr(pos)]);\n                        pos = pos + 1;\n                    case 'u'\n                        if pos+4 > len\n                            error_pos('End of file reached in escaped unicode character');\n                        end\n                        str(nstr+(1:6)) = inStr(pos-1:pos+4);\n                        pos = pos + 5;\n                end\n            otherwise % should never happen\n                str(nstr+1) = inStr(pos);\n                keyboard;\n                pos = pos + 1;\n        end\n    end\n    
error_pos('End of file while expecting end of inStr');\n\n%%-------------------------------------------------------------------------\n\nfunction num = parse_number(varargin)\n    global pos inStr isoct\n    currstr=inStr(pos:min(pos+30,end));\n    if(isoct~=0)\n        numstr=regexp(currstr,'^\\s*-?(?:0|[1-9]\\d*)(?:\\.\\d+)?(?:[eE][+\\-]?\\d+)?','end');\n        [num] = sscanf(currstr, '%f', 1);\n        delta=numstr+1;\n    else\n        [num, one, err, delta] = sscanf(currstr, '%f', 1);\n        if ~isempty(err)\n            error_pos('Error reading number at position %d');\n        end\n    end\n    pos = pos + delta-1;\n\n%%-------------------------------------------------------------------------\n\nfunction val = parse_value(varargin)\n    global pos inStr len\n    \n    if(isfield(varargin{1},'progressbar_'))\n        waitbar(pos/len,varargin{1}.progressbar_,'loading ...');\n    end\n    \n    switch(inStr(pos))\n        case '\"'\n            val = parseStr(varargin{:});\n            return;\n        case '['\n            val = parse_array(varargin{:});\n            return;\n        case '{'\n            val = parse_object(varargin{:});\n            return;\n        case {'-','0','1','2','3','4','5','6','7','8','9'}\n            val = parse_number(varargin{:});\n            return;\n        case 't'\n            if pos+3 <= len && strcmpi(inStr(pos:pos+3), 'true')\n                val = true;\n                pos = pos + 4;\n                return;\n            end\n        case 'f'\n            if pos+4 <= len && strcmpi(inStr(pos:pos+4), 'false')\n                val = false;\n                pos = pos + 5;\n                return;\n            end\n        case 'n'\n            if pos+3 <= len && strcmpi(inStr(pos:pos+3), 'null')\n                val = [];\n                pos = pos + 4;\n                return;\n            end\n    end\n    error_pos('Value expected at position 
%d');\n%%-------------------------------------------------------------------------\n\nfunction error_pos(msg)\n    global pos inStr len\n    poShow = max(min([pos-15 pos-1 pos pos+20],len),1);\n    if poShow(3) == poShow(2)\n        poShow(3:4) = poShow(2)+[0 -1];  % display nothing after\n    end\n    msg = [sprintf(msg, pos) ': ' ...\n    inStr(poShow(1):poShow(2)) '<error>' inStr(poShow(3):poShow(4)) ];\n    error( ['JSONparser:invalidFormat: ' msg] );\n\n%%-------------------------------------------------------------------------\n\nfunction str = valid_field(str)\nglobal isoct\n% From MATLAB doc: field names must begin with a letter, which may be\n% followed by any combination of letters, digits, and underscores.\n% Invalid characters will be converted to underscores, and the prefix\n% \"x0x[Hex code]_\" will be added if the first character is not a letter.\n    pos=regexp(str,'^[^A-Za-z]','once');\n    if(~isempty(pos))\n        if(~isoct)\n            str=regexprep(str,'^([^A-Za-z])','x0x${sprintf(''%X'',unicode2native($1))}_','once');\n        else\n            str=sprintf('x0x%X_%s',char(str(1)),str(2:end));\n        end\n    end\n    if(isempty(regexp(str,'[^0-9A-Za-z_]', 'once' )))\n        return;\n    end\n    if(~isoct)\n        str=regexprep(str,'([^0-9A-Za-z_])','_0x${sprintf(''%X'',unicode2native($1))}_');\n    else\n        pos=regexp(str,'[^0-9A-Za-z_]');\n        if(isempty(pos))\n            return;\n        end\n        str0=str;\n        pos0=[0 pos(:)' length(str)];\n        str='';\n        for i=1:length(pos)\n            str=[str str0(pos0(i)+1:pos(i)-1) sprintf('_0x%X_',str0(pos(i)))];\n        end\n        if(pos(end)~=length(str))\n            str=[str str0(pos0(end-1)+1:pos0(end))];\n        end\n    end\n    %str(~isletter(str) & ~('0' <= str & str <= '9')) = '_';\n\n%%-------------------------------------------------------------------------\nfunction endpos = matching_quote(str,pos)\nlen=length(str);\nwhile(pos<len)\n    
if(str(pos)=='\"')\n        if(~(pos>1 && str(pos-1)=='\\'))\n            endpos=pos;\n            return;\n        end        \n    end\n    pos=pos+1;\nend\nerror('unmatched quotation mark');\n%%-------------------------------------------------------------------------\nfunction [endpos, e1l, e1r, maxlevel] = matching_bracket(str,pos)\nglobal arraytoken\nlevel=1;\nmaxlevel=level;\nendpos=0;\nbpos=arraytoken(arraytoken>=pos);\ntokens=str(bpos);\nlen=length(tokens);\npos=1;\ne1l=[];\ne1r=[];\nwhile(pos<=len)\n    c=tokens(pos);\n    if(c==']')\n        level=level-1;\n        if(isempty(e1r))\n            e1r=bpos(pos);\n        end\n        if(level==0)\n            endpos=bpos(pos);\n            return\n        end\n    end\n    if(c=='[')\n        if(isempty(e1l))\n            e1l=bpos(pos);\n        end\n        level=level+1;\n        maxlevel=max(maxlevel,level);\n    end\n    if(c=='\"')\n        pos=matching_quote(tokens,pos+1);\n    end\n    pos=pos+1;\nend\nif(endpos==0) \n    error('unmatched \"]\"');\nend\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/loadubjson.m",
    "content": "function data = loadubjson(fname,varargin)\n%\n% data=loadubjson(fname,opt)\n%    or\n% data=loadubjson(fname,'param1',value1,'param2',value2,...)\n%\n% parse a UBJSON (Universal Binary JSON) file or string\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% created on 2013/08/01\n%\n% $Id$\n%\n% input:\n%      fname: input file name, if fname contains \"{}\" or \"[]\", fname\n%             will be interpreted as a UBJSON string\n%      opt: a struct to store parsing options, opt can be replaced by \n%           a list of ('param',value) pairs - the param string is equivalent\n%           to a field in opt. opt can have the following \n%           fields (first in [.|.] is the default)\n%\n%           opt.SimplifyCell [0|1]: if set to 1, loadubjson will call cell2mat\n%                         for each element of the JSON data, and group \n%                         arrays based on the cell2mat rules.\n%           opt.IntEndian [B|L]: specify the endianness of the integer fields\n%                         in the UBJSON input data. B - Big-Endian format for \n%                         integers (as required in the UBJSON specification); \n%                         L - input integer fields are in Little-Endian order.\n%           opt.NameIsString [0|1]: for UBJSON Specification Draft 8 or \n%                         earlier versions (JSONLab 1.0 final or earlier), \n%                         the \"name\" tag is treated as a string. To load \n%                         these UBJSON data, you need to manually set this \n%                         flag to 1.\n%\n% output:\n%      data: a cell array, where {...} blocks are converted into cell arrays,\n%           and [...] 
are converted to arrays\n%\n% examples:\n%      obj=struct('string','value','array',[1 2 3]);\n%      ubjdata=saveubjson('obj',obj);\n%      dat=loadubjson(ubjdata)\n%      dat=loadubjson(['examples' filesep 'example1.ubj'])\n%      dat=loadubjson(['examples' filesep 'example1.ubj'],'SimplifyCell',1)\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details \n%\n% -- this function is part of JSONLab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nglobal pos inStr len  esc index_esc len_esc isoct arraytoken fileendian systemendian\n\nif(regexp(fname,'[\\{\\}\\]\\[]','once'))\n   string=fname;\nelseif(exist(fname,'file'))\n   fid = fopen(fname,'rb');\n   string = fread(fid,inf,'uint8=>char')';\n   fclose(fid);\nelse\n   error('input file does not exist');\nend\n\npos = 1; len = length(string); inStr = string;\nisoct=exist('OCTAVE_VERSION','builtin');\narraytoken=find(inStr=='[' | inStr==']' | inStr=='\"');\njstr=regexprep(inStr,'\\\\\\\\','  ');\nescquote=regexp(jstr,'\\\\\"');\narraytoken=sort([arraytoken escquote]);\n\n% String delimiters and escape chars identified to improve speed:\nesc = find(inStr=='\"' | inStr=='\\' ); % comparable to: regexp(inStr, '[\"\\\\]');\nindex_esc = 1; len_esc = length(esc);\n\nopt=varargin2struct(varargin{:});\nfileendian=upper(jsonopt('IntEndian','B',opt));\n[os,maxelem,systemendian]=computer;\n\njsoncount=1;\nwhile pos <= len\n    switch(next_char)\n        case '{'\n            data{jsoncount} = parse_object(opt);\n        case '['\n            data{jsoncount} = parse_array(opt);\n        otherwise\n            error_pos('Outer level structure must be an object or an array');\n    end\n    jsoncount=jsoncount+1;\nend % while\n\njsoncount=length(data);\nif(jsoncount==1 && iscell(data))\n    data=data{1};\nend\n\n%%-------------------------------------------------------------------------\nfunction object = parse_object(varargin)\n    parse_char('{');\n    object = [];\n    type='';\n    count=-1;\n    
if(next_char == '$')\n        type=inStr(pos+1); % TODO\n        pos=pos+2;\n    end\n    if(next_char == '#')\n        pos=pos+1;\n        count=double(parse_number());\n    end\n    if next_char ~= '}'\n        num=0;\n        while 1\n            if(jsonopt('NameIsString',0,varargin{:}))\n                str = parseStr(varargin{:});\n            else\n                str = parse_name(varargin{:});\n            end\n            if isempty(str)\n                error_pos('Name of value at position %d cannot be empty');\n            end\n            %parse_char(':');\n            val = parse_value(varargin{:});\n            num=num+1;\n            object.(valid_field(str))=val;\n            if next_char == '}' || (count>=0 && num>=count)\n                break;\n            end\n            %parse_char(',');\n        end\n    end\n    if(count==-1)\n        parse_char('}');\n    end\n    if(isstruct(object))\n        object=struct2jdata(object);\n    end\n\n%%-------------------------------------------------------------------------\nfunction [cid,len]=elem_info(type)\nid=strfind('iUIlLdD',type);\ndataclass={'int8','uint8','int16','int32','int64','single','double'};\nbytelen=[1,1,2,4,8,4,8];\nif(id>0)\n    cid=dataclass{id};\n    len=bytelen(id);\nelse\n    error_pos('unsupported type at position %d');\nend\n%%-------------------------------------------------------------------------\n\n\nfunction [data, adv]=parse_block(type,count,varargin)\nglobal pos inStr isoct fileendian systemendian\n[cid,len]=elem_info(type);\ndatastr=inStr(pos:pos+len*count-1);\nif(isoct)\n    newdata=int8(datastr);\nelse\n    newdata=uint8(datastr);\nend\nid=strfind('iUIlLdD',type);\nif(id<=5 && fileendian~=systemendian)\n    newdata=swapbytes(typecast(newdata,cid));\nend\ndata=typecast(newdata,cid);\nadv=double(len*count);\n\n%%-------------------------------------------------------------------------\n\n\nfunction object = parse_array(varargin) % JSON array is written in row-major 
order\nglobal pos inStr\n    parse_char('[');\n    object = cell(0, 1);\n    dim=[];\n    type='';\n    count=-1;\n    if(next_char == '$')\n        type=inStr(pos+1);\n        pos=pos+2;\n    end\n    if(next_char == '#')\n        pos=pos+1;\n        if(next_char=='[')\n            dim=parse_array(varargin{:});\n            count=prod(double(dim));\n        else\n            count=double(parse_number());\n        end\n    end\n    if(~isempty(type))\n        if(count>=0)\n            [object, adv]=parse_block(type,count,varargin{:});\n            if(~isempty(dim))\n                object=reshape(object,dim);\n            end\n            pos=pos+adv;\n            return;\n        else\n            endpos=matching_bracket(inStr,pos);\n            [cid,len]=elem_info(type);\n            count=(endpos-pos)/len;\n            [object, adv]=parse_block(type,count,varargin{:});\n            pos=pos+adv;\n            parse_char(']');\n            return;\n        end\n    end\n    if next_char ~= ']'\n         while 1\n            val = parse_value(varargin{:});\n            object{end+1} = val;\n            if next_char == ']'\n                break;\n            end\n            %parse_char(',');\n         end\n    end\n    if(jsonopt('SimplifyCell',0,varargin{:})==1)\n      try\n        oldobj=object;\n        object=cell2mat(object')';\n        if(iscell(oldobj) && isstruct(object) && numel(object)>1 && jsonopt('SimplifyCellArray',1,varargin{:})==0)\n            object=oldobj;\n        elseif(size(object,1)>1 && ismatrix(object))\n            object=object';\n        end\n      catch\n      end\n    end\n    if(count==-1)\n        parse_char(']');\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction parse_char(c)\n    global pos inStr len\n    skip_whitespace;\n    if pos > len || inStr(pos) ~= c\n        error_pos(sprintf('Expected %c at position %%d', c));\n    else\n        pos = pos + 1;\n        skip_whitespace;\n    
end\n\n%%-------------------------------------------------------------------------\n\nfunction c = next_char\n    global pos inStr len\n    skip_whitespace;\n    if pos > len\n        c = [];\n    else\n        c = inStr(pos);\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction skip_whitespace\n    global pos inStr len\n    while pos <= len && isspace(inStr(pos))\n        pos = pos + 1;\n    end\n\n%%-------------------------------------------------------------------------\nfunction str = parse_name(varargin)\n    global pos inStr\n    bytelen=double(parse_number());\n    if(length(inStr)>=pos+bytelen-1)\n        str=inStr(pos:pos+bytelen-1);\n        pos=pos+bytelen;\n    else\n        error_pos('End of file while expecting end of name');\n    end\n%%-------------------------------------------------------------------------\n\nfunction str = parseStr(varargin)\n    global pos inStr\n % len, ns = length(inStr), keyboard\n    type=inStr(pos);\n    if type ~= 'S' && type ~= 'C' && type ~= 'H'\n        error_pos('String starting with S expected at position %d');\n    else\n        pos = pos + 1;\n    end\n    if(type == 'C')\n        str=inStr(pos);\n        pos=pos+1;\n        return;\n    end\n    bytelen=double(parse_number());\n    if(length(inStr)>=pos+bytelen-1)\n        str=inStr(pos:pos+bytelen-1);\n        pos=pos+bytelen;\n    else\n        error_pos('End of file while expecting end of inStr');\n    end\n\n%%-------------------------------------------------------------------------\n\nfunction num = parse_number(varargin)\n    global pos inStr isoct fileendian systemendian\n    id=strfind('iUIlLdD',inStr(pos));\n    if(isempty(id))\n        error_pos('expecting a number at position %d');\n    end\n    type={'int8','uint8','int16','int32','int64','single','double'};\n    bytelen=[1,1,2,4,8,4,8];\n    datastr=inStr(pos+1:pos+bytelen(id));\n    if(isoct)\n        newdata=int8(datastr);\n    else\n        
newdata=uint8(datastr);\n    end\n    if(id<=5 && fileendian~=systemendian)\n        newdata=swapbytes(typecast(newdata,type{id}));\n    end\n    num=typecast(newdata,type{id});\n    pos = pos + bytelen(id)+1;\n\n%%-------------------------------------------------------------------------\n\nfunction val = parse_value(varargin)\n    global pos inStr\n\n    switch(inStr(pos))\n        case {'S','C','H'}\n            val = parseStr(varargin{:});\n            return;\n        case '['\n            val = parse_array(varargin{:});\n            return;\n        case '{'\n            val = parse_object(varargin{:});\n            return;\n        case {'i','U','I','l','L','d','D'}\n            val = parse_number(varargin{:});\n            return;\n        case 'T'\n            val = true;\n            pos = pos + 1;\n            return;\n        case 'F'\n            val = false;\n            pos = pos + 1;\n            return;\n        case {'Z','N'}\n            val = [];\n            pos = pos + 1;\n            return;\n    end\n    error_pos('Value expected at position %d');\n%%-------------------------------------------------------------------------\n\nfunction error_pos(msg)\n    global pos inStr len\n    poShow = max(min([pos-15 pos-1 pos pos+20],len),1);\n    if poShow(3) == poShow(2)\n        poShow(3:4) = poShow(2)+[0 -1];  % display nothing after\n    end\n    msg = [sprintf(msg, pos) ': ' ...\n    inStr(poShow(1):poShow(2)) '<error>' inStr(poShow(3):poShow(4)) ];\n    error( ['JSONparser:invalidFormat: ' msg] );\n\n%%-------------------------------------------------------------------------\n\nfunction str = valid_field(str)\nglobal isoct\n% From MATLAB doc: field names must begin with a letter, which may be\n% followed by any combination of letters, digits, and underscores.\n% Invalid characters will be converted to underscores, and the prefix\n% \"x0x[Hex code]_\" will be added if the first character is not a letter.\n    pos=regexp(str,'^[^A-Za-z]','once');\n  
  if(~isempty(pos))\n        if(~isoct)\n            str=regexprep(str,'^([^A-Za-z])','x0x${sprintf(''%X'',unicode2native($1))}_','once');\n        else\n            str=sprintf('x0x%X_%s',char(str(1)),str(2:end));\n        end\n    end\n    if(isempty(regexp(str,'[^0-9A-Za-z_]', 'once' )))\n        return;\n    end\n    if(~isoct)\n        str=regexprep(str,'([^0-9A-Za-z_])','_0x${sprintf(''%X'',unicode2native($1))}_');\n    else\n        pos=regexp(str,'[^0-9A-Za-z_]');\n        if(isempty(pos))\n            return;\n        end\n        str0=str;\n        pos0=[0 pos(:)' length(str)];\n        str='';\n        for i=1:length(pos)\n            str=[str str0(pos0(i)+1:pos(i)-1) sprintf('_0x%X_',str0(pos(i)))];\n        end\n        if(pos(end)~=length(str))\n            str=[str str0(pos0(end-1)+1:pos0(end))];\n        end\n    end\n    %str(~isletter(str) & ~('0' <= str & str <= '9')) = '_';\n\n%%-------------------------------------------------------------------------\nfunction endpos = matching_quote(str,pos)\nlen=length(str);\nwhile(pos<len)\n    if(str(pos)=='\"')\n        if(~(pos>1 && str(pos-1)=='\\'))\n            endpos=pos;\n            return;\n        end        \n    end\n    pos=pos+1;\nend\nerror('unmatched quotation mark');\n%%-------------------------------------------------------------------------\nfunction [endpos, e1l, e1r, maxlevel] = matching_bracket(str,pos)\nglobal arraytoken\nlevel=1;\nmaxlevel=level;\nendpos=0;\nbpos=arraytoken(arraytoken>=pos);\ntokens=str(bpos);\nlen=length(tokens);\npos=1;\ne1l=[];\ne1r=[];\nwhile(pos<=len)\n    c=tokens(pos);\n    if(c==']')\n        level=level-1;\n        if(isempty(e1r))\n            e1r=bpos(pos);\n        end\n        if(level==0)\n            endpos=bpos(pos);\n            return\n        end\n    end\n    if(c=='[')\n        if(isempty(e1l))\n            e1l=bpos(pos);\n        end\n        level=level+1;\n        maxlevel=max(maxlevel,level);\n    end\n    if(c=='\"')\n        
pos=matching_quote(tokens,pos+1);\n    end\n    pos=pos+1;\nend\nif(endpos==0) \n    error('unmatched \"]\"');\nend\n\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/mergestruct.m",
    "content": "function s=mergestruct(s1,s2)\n%\n% s=mergestruct(s1,s2)\n%\n% merge two struct objects into one\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% date: 2012/12/22\n%\n% input:\n%      s1,s2: a struct object, s1 and s2 can not be arrays\n%\n% output:\n%      s: the merged struct object. fields in s1 and s2 will be combined in s.\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details \n%\n% -- this function is part of jsonlab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nif(~isstruct(s1) || ~isstruct(s2))\n    error('input parameters contain non-struct');\nend\nif(length(s1)>1 || length(s2)>1)\n    error('can not merge struct arrays');\nend\nfn=fieldnames(s2);\ns=s1;\nfor i=1:length(fn)              \n    s=setfield(s,fn{i},getfield(s2,fn{i}));\nend\n\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/savejson.m",
    "content": "function json=savejson(rootname,obj,varargin)\n%\n% json=savejson(rootname,obj,filename)\n%    or\n% json=savejson(rootname,obj,opt)\n% json=savejson(rootname,obj,'param1',value1,'param2',value2,...)\n%\n% convert a MATLAB object (cell, struct or array) into a JSON (JavaScript\n% Object Notation) string\n%\n% author: Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% created on 2011/09/09\n%\n% $Id$\n%\n% input:\n%      rootname: the name of the root-object, when set to '', the root name\n%        is ignored, however, when opt.ForceRootName is set to 1 (see below),\n%        the MATLAB variable name will be used as the root name.\n%      obj: a MATLAB object (array, cell, cell array, struct, struct array,\n%      class instance).\n%      filename: a string for the file name to save the output JSON data.\n%      opt: a struct for additional options, ignore to use default values.\n%        opt can have the following fields (first in [.|.] is the default)\n%\n%        opt.FileName [''|string]: a file name to save the output JSON data\n%        opt.FloatFormat ['%.10g'|string]: format to show each numeric element\n%                         of a 1D/2D array;\n%        opt.ArrayIndent [1|0]: if 1, output explicit data array with\n%                         precedent indentation; if 0, no indentation\n%        opt.ArrayToStruct[0|1]: when set to 0, savejson outputs 1D/2D\n%                         array in JSON array format; if set to 1, an\n%                         array will be shown as a struct with fields\n%                         \"_ArrayType_\", \"_ArraySize_\" and \"_ArrayData_\"; for\n%                         sparse arrays, the non-zero elements will be\n%                         saved to _ArrayData_ field in triplet-format i.e.\n%                         (ix,iy,val) and \"_ArrayIsSparse_\" will be added\n%                         with a value of 1; for a complex array, the \n%                         _ArrayData_ array will include two columns \n% 
                        (4 for sparse) to record the real and imaginary \n%                         parts, and also \"_ArrayIsComplex_\":1 is added. \n%        opt.ParseLogical [0|1]: if this is set to 1, logical array elements\n%                         will use true/false rather than 1/0.\n%        opt.SingletArray [0|1]: if this is set to 1, arrays with a single\n%                         numerical element will be shown without a square\n%                         bracket, unless it is the root object; if 0, square\n%                         brackets are forced for any numerical arrays.\n%        opt.SingletCell  [1|0]: if 1, always enclose a cell with \"[]\" \n%                         even if it has only one element; if 0, brackets\n%                         are ignored when a cell has only 1 element.\n%        opt.ForceRootName [0|1]: when set to 1 and rootname is empty, savejson\n%                         will use the name of the passed obj variable as the \n%                         root object name; if obj is an expression and \n%                         does not have a name, 'root' will be used; if this \n%                         is set to 0 and rootname is empty, the root level \n%                         will be merged down to the lower level.\n%        opt.Inf ['\"$1_Inf_\"'|string]: a customized regular expression pattern\n%                         to represent +/-Inf. The matched pattern is '([-+]*)Inf'\n%                         and $1 represents the sign. 
For those who want to use\n%                         1e999 to represent Inf, they can set opt.Inf to '$11e999'\n%        opt.NaN ['\"_NaN_\"'|string]: a customized regular expression pattern\n%                         to represent NaN\n%        opt.JSONP [''|string]: to generate a JSONP output (JSON with padding),\n%                         for example, if opt.JSONP='foo', the JSON data is\n%                         wrapped inside a function call as 'foo(...);'\n%        opt.UnpackHex [1|0]: conver the 0x[hex code] output by loadjson \n%                         back to the string form\n%        opt.SaveBinary [0|1]: 1 - save the JSON file in binary mode; 0 - text mode.\n%        opt.Compact [0|1]: 1- out compact JSON format (remove all newlines and tabs)\n%\n%        opt can be replaced by a list of ('param',value) pairs. The param \n%        string is equivallent to a field in opt and is case sensitive.\n% output:\n%      json: a string in the JSON format (see http://json.org)\n%\n% examples:\n%      jsonmesh=struct('MeshNode',[0 0 0;1 0 0;0 1 0;1 1 0;0 0 1;1 0 1;0 1 1;1 1 1],... 
\n%               'MeshTetra',[1 2 4 8;1 3 4 8;1 2 6 8;1 5 6 8;1 5 7 8;1 3 7 8],...\n%               'MeshTri',[1 2 4;1 2 6;1 3 4;1 3 7;1 5 6;1 5 7;...\n%                          2 8 4;2 8 6;3 8 4;3 8 7;5 8 6;5 8 7],...\n%               'MeshCreator','FangQ','MeshTitle','T6 Cube',...\n%               'SpecialData',[nan, inf, -inf]);\n%      savejson('jmesh',jsonmesh)\n%      savejson('',jsonmesh,'ArrayIndent',0,'FloatFormat','\\t%.5g')\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details\n%\n% -- this function is part of JSONLab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nif(nargin==1)\n   varname=inputname(1);\n   obj=rootname;\n   if(isempty(varname)) \n      varname='root';\n   end\n   rootname=varname;\nelse\n   varname=inputname(2);\nend\nif(length(varargin)==1 && ischar(varargin{1}))\n   opt=struct('filename',varargin{1});\nelse\n   opt=varargin2struct(varargin{:});\nend\nopt.IsOctave=exist('OCTAVE_VERSION','builtin');\nif(isfield(opt,'norowbracket'))\n    warning('Option ''NoRowBracket'' is depreciated, please use ''SingletArray'' and set its value to not(NoRowBracket)');\n    if(~isfield(opt,'singletarray'))\n        opt.singletarray=not(opt.norowbracket);\n    end\nend\nrootisarray=0;\nrootlevel=1;\nforceroot=jsonopt('ForceRootName',0,opt);\nif((isnumeric(obj) || islogical(obj) || ischar(obj) || isstruct(obj) || ...\n        iscell(obj) || isobject(obj)) && isempty(rootname) && forceroot==0)\n    rootisarray=1;\n    rootlevel=0;\nelse\n    if(isempty(rootname))\n        rootname=varname;\n    end\nend\nif((isstruct(obj) || iscell(obj))&& isempty(rootname) && forceroot)\n    rootname='root';\nend\n\nwhitespaces=struct('tab',sprintf('\\t'),'newline',sprintf('\\n'),'sep',sprintf(',\\n'));\nif(jsonopt('Compact',0,opt)==1)\n    whitespaces=struct('tab','','newline','','sep',',');\nend\nif(~isfield(opt,'whitespaces_'))\n    
opt.whitespaces_=whitespaces;\nend\n\nnl=whitespaces.newline;\n\njson=obj2json(rootname,obj,rootlevel,opt);\nif(rootisarray)\n    json=sprintf('%s%s',json,nl);\nelse\n    json=sprintf('{%s%s%s}\\n',nl,json,nl);\nend\n\njsonp=jsonopt('JSONP','',opt);\nif(~isempty(jsonp))\n    json=sprintf('%s(%s);%s',jsonp,json,nl);\nend\n\n% save to a file if FileName is set, suggested by Patrick Rapin\nfilename=jsonopt('FileName','',opt);\nif(~isempty(filename))\n    if(jsonopt('SaveBinary',0,opt)==1)\n\t    fid = fopen(filename, 'wb');\n\t    fwrite(fid,json);\n    else\n\t    fid = fopen(filename, 'wt');\n\t    fwrite(fid,json,'char');\n    end\n    fclose(fid);\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=obj2json(name,item,level,varargin)\n\nif(iscell(item))\n    txt=cell2json(name,item,level,varargin{:});\nelseif(isstruct(item))\n    txt=struct2json(name,item,level,varargin{:});\nelseif(ischar(item))\n    txt=str2json(name,item,level,varargin{:});\nelseif(isobject(item)) \n    txt=matlabobject2json(name,item,level,varargin{:});\nelse\n    txt=mat2json(name,item,level,varargin{:});\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=cell2json(name,item,level,varargin)\ntxt={};\nif(~iscell(item))\n        error('input is not a cell');\nend\n\ndim=size(item);\nif(ndims(squeeze(item))>2) % for 3D or higher dimensions, flatten to 2D for now\n    item=reshape(item,dim(1),numel(item)/dim(1));\n    dim=size(item);\nend\nlen=numel(item);\nws=jsonopt('whitespaces_',struct('tab',sprintf('\\t'),'newline',sprintf('\\n'),'sep',sprintf(',\\n')),varargin{:});\npadding0=repmat(ws.tab,1,level);\npadding2=repmat(ws.tab,1,level+1);\nnl=ws.newline;\nbracketlevel=~jsonopt('singletcell',1,varargin{:});\nif(len>bracketlevel)\n    if(~isempty(name))\n        txt={padding0, '\"', checkname(name,varargin{:}),'\": [', nl}; name=''; \n    else\n        txt={padding0, '[', nl};\n    end\nelseif(len==0)\n    
if(~isempty(name))\n        txt={padding0, '\"' checkname(name,varargin{:}) '\": []'}; name=''; \n    else\n        txt={padding0, '[]'};\n    end\nend\nfor i=1:dim(1)\n    if(dim(1)>1)\n        txt(end+1:end+3)={padding2,'[',nl};\n    end\n    for j=1:dim(2)\n       txt{end+1}=obj2json(name,item{i,j},level+(dim(1)>1)+(len>bracketlevel),varargin{:});\n       if(j<dim(2))\n           txt(end+1:end+2)={',' nl};\n       end\n    end\n    if(dim(1)>1)\n        txt(end+1:end+3)={nl,padding2,']'};\n    end\n    if(i<dim(1))\n        txt(end+1:end+2)={',' nl};\n    end\n    %if(j==dim(2)) txt=sprintf('%s%s',txt,sprintf(',%s',nl)); end\nend\nif(len>bracketlevel)\n    txt(end+1:end+3)={nl,padding0,']'};\nend\ntxt = sprintf('%s',txt{:});\n\n%%-------------------------------------------------------------------------\nfunction txt=struct2json(name,item,level,varargin)\ntxt={};\nif(~isstruct(item))\n\terror('input is not a struct');\nend\ndim=size(item);\nif(ndims(squeeze(item))>2) % for 3D or higher dimensions, flatten to 2D for now\n    item=reshape(item,dim(1),numel(item)/dim(1));\n    dim=size(item);\nend\nlen=numel(item);\nforcearray= (len>1 || (jsonopt('SingletArray',0,varargin{:})==1 && level>0));\nws=struct('tab',sprintf('\\t'),'newline',sprintf('\\n'));\nws=jsonopt('whitespaces_',ws,varargin{:});\npadding0=repmat(ws.tab,1,level);\npadding2=repmat(ws.tab,1,level+1);\npadding1=repmat(ws.tab,1,level+(dim(1)>1)+forcearray);\nnl=ws.newline;\n\nif(isempty(item)) \n    if(~isempty(name)) \n        txt={padding0, '\"', checkname(name,varargin{:}),'\": []'};\n    else\n        txt={padding0, '[]'};\n    end\n    return;\nend\nif(~isempty(name)) \n    if(forcearray)\n        txt={padding0, '\"', checkname(name,varargin{:}),'\": [', nl};\n    end\nelse\n    if(forcearray)\n        txt={padding0, '[', nl};\n    end\nend\nfor j=1:dim(2)\n  if(dim(1)>1)\n      txt(end+1:end+3)={padding2,'[',nl};\n  end\n  for i=1:dim(1)\n    names = fieldnames(item(i,j));\n    if(~isempty(name) && 
len==1 && ~forcearray)\n        txt(end+1:end+5)={padding1, '\"', checkname(name,varargin{:}),'\": {', nl};\n    else\n        txt(end+1:end+3)={padding1, '{', nl};\n    end\n    if(~isempty(names))\n      for e=1:length(names)\n\t    txt{end+1}=obj2json(names{e},item(i,j).(names{e}),...\n             level+(dim(1)>1)+1+forcearray,varargin{:});\n        if(e<length(names))\n            txt{end+1}=',';\n        end\n        txt{end+1}=nl;\n      end\n    end\n    txt(end+1:end+2)={padding1,'}'};\n    if(i<dim(1))\n        txt(end+1:end+2)={',' nl};\n    end\n  end\n  if(dim(1)>1)\n      txt(end+1:end+3)={nl,padding2,']'};\n  end\n  if(j<dim(2))\n      txt(end+1:end+2)={',' nl};\n  end\nend\nif(forcearray)\n    txt(end+1:end+3)={nl,padding0,']'};\nend\ntxt = sprintf('%s',txt{:});\n\n%%-------------------------------------------------------------------------\nfunction txt=str2json(name,item,level,varargin)\ntxt={};\nif(~ischar(item))\n        error('input is not a string');\nend\nitem=reshape(item, max(size(item),[1 0]));\nlen=size(item,1);\nws=struct('tab',sprintf('\\t'),'newline',sprintf('\\n'),'sep',sprintf(',\\n'));\nws=jsonopt('whitespaces_',ws,varargin{:});\npadding1=repmat(ws.tab,1,level);\npadding0=repmat(ws.tab,1,level+1);\nnl=ws.newline;\nsep=ws.sep;\n\nif(~isempty(name)) \n    if(len>1)\n        txt={padding1, '\"', checkname(name,varargin{:}),'\": [', nl};\n    end\nelse\n    if(len>1)\n        txt={padding1, '[', nl};\n    end\nend\nfor e=1:len\n    val=escapejsonstring(item(e,:));\n    if(len==1)\n        obj=['\"' checkname(name,varargin{:}) '\": ' '\"',val,'\"'];\n        if(isempty(name))\n            obj=['\"',val,'\"'];\n        end\n        txt(end+1:end+2)={padding1, obj};\n    else\n        txt(end+1:end+4)={padding0,'\"',val,'\"'};\n    end\n    if(e==len)\n        sep='';\n    end\n    txt{end+1}=sep;\nend\nif(len>1)\n    txt(end+1:end+3)={nl,padding1,']'};\nend\ntxt = 
sprintf('%s',txt{:});\n\n%%-------------------------------------------------------------------------\nfunction txt=mat2json(name,item,level,varargin)\nif(~isnumeric(item) && ~islogical(item))\n        error('input is not an array');\nend\nws=struct('tab',sprintf('\\t'),'newline',sprintf('\\n'),'sep',sprintf(',\\n'));\nws=jsonopt('whitespaces_',ws,varargin{:});\npadding1=repmat(ws.tab,1,level);\npadding0=repmat(ws.tab,1,level+1);\nnl=ws.newline;\nsep=ws.sep;\n\nif(length(size(item))>2 || issparse(item) || ~isreal(item) || ...\n   (isempty(item) && any(size(item))) ||jsonopt('ArrayToStruct',0,varargin{:}))\n    if(isempty(name))\n    \ttxt=sprintf('%s{%s%s\"_ArrayType_\": \"%s\",%s%s\"_ArraySize_\": %s,%s',...\n              padding1,nl,padding0,class(item),nl,padding0,regexprep(mat2str(size(item)),'\\s+',','),nl);\n    else\n    \ttxt=sprintf('%s\"%s\": {%s%s\"_ArrayType_\": \"%s\",%s%s\"_ArraySize_\": %s,%s',...\n              padding1,checkname(name,varargin{:}),nl,padding0,class(item),nl,padding0,regexprep(mat2str(size(item)),'\\s+',','),nl);\n    end\nelse\n    if(numel(item)==1 && jsonopt('SingletArray',0,varargin{:})==0 && level>0)\n        numtxt=regexprep(regexprep(matdata2json(item,level+1,varargin{:}),'^\\[',''),']','');\n    else\n        numtxt=matdata2json(item,level+1,varargin{:});\n    end\n    if(isempty(name))\n    \ttxt=sprintf('%s%s',padding1,numtxt);\n    else\n        if(numel(item)==1 && jsonopt('SingletArray',0,varargin{:})==0)\n           \ttxt=sprintf('%s\"%s\": %s',padding1,checkname(name,varargin{:}),numtxt);\n        else\n    \t    txt=sprintf('%s\"%s\": %s',padding1,checkname(name,varargin{:}),numtxt);\n        end\n    end\n    return;\nend\ndataformat='%s%s%s%s%s';\n\nif(issparse(item))\n    [ix,iy]=find(item);\n    data=full(item(find(item)));\n    if(~isreal(item))\n       data=[real(data(:)),imag(data(:))];\n       if(size(item,1)==1)\n           % Kludge to have data's 'transposedness' match item's.\n           % (Necessary for 
complex row vector handling below.)\n           data=data';\n       end\n       txt=sprintf(dataformat,txt,padding0,'\"_ArrayIsComplex_\": ','1', sep);\n    end\n    txt=sprintf(dataformat,txt,padding0,'\"_ArrayIsSparse_\": ','1', sep);\n    if(size(item,1)==1)\n        % Row vector, store only column indices.\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayData_\": ',...\n           matdata2json([iy(:),data'],level+2,varargin{:}), nl);\n    elseif(size(item,2)==1)\n        % Column vector, store only row indices.\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayData_\": ',...\n           matdata2json([ix,data],level+2,varargin{:}), nl);\n    else\n        % General case, store row and column indices.\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayData_\": ',...\n           matdata2json([ix,iy,data],level+2,varargin{:}), nl);\n    end\nelse\n    if(isreal(item))\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayData_\": ',...\n            matdata2json(item(:)',level+2,varargin{:}), nl);\n    else\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayIsComplex_\": ','1', sep);\n        txt=sprintf(dataformat,txt,padding0,'\"_ArrayData_\": ',...\n            matdata2json([real(item(:)) imag(item(:))],level+2,varargin{:}), nl);\n    end\nend\ntxt=sprintf('%s%s%s',txt,padding1,'}');\n\n%%-------------------------------------------------------------------------\nfunction txt=matlabobject2json(name,item,level,varargin)\nif numel(item) == 0 %empty object\n    st = struct();\nelse\n    % \"st = struct(item);\" would produce an inmutable warning, because it\n    % make the protected and private properties visible. 
Instead we get the\n    % visible properties\n    propertynames = properties(item);\n    for p = 1:numel(propertynames)\n        for o = numel(item):-1:1 % aray of objects\n            st(o).(propertynames{p}) = item(o).(propertynames{p});\n        end\n    end\nend\ntxt=struct2json(name,st,level,varargin{:});\n\n%%-------------------------------------------------------------------------\nfunction txt=matdata2json(mat,level,varargin)\n\nws=struct('tab',sprintf('\\t'),'newline',sprintf('\\n'),'sep',sprintf(',\\n'));\nws=jsonopt('whitespaces_',ws,varargin{:});\ntab=ws.tab;\nnl=ws.newline;\n\nif(size(mat,1)==1)\n    pre='';\n    post='';\n    level=level-1;\nelse\n    pre=sprintf('[%s',nl);\n    post=sprintf('%s%s]',nl,repmat(tab,1,level-1));\nend\n\nif(isempty(mat))\n    txt='[]';\n    return;\nend\nfloatformat=jsonopt('FloatFormat','%.10g',varargin{:});\n%if(numel(mat)>1)\n    formatstr=['[' repmat([floatformat ','],1,size(mat,2)-1) [floatformat sprintf('],%s',nl)]];\n%else\n%    formatstr=[repmat([floatformat ','],1,size(mat,2)-1) [floatformat sprintf(',\\n')]];\n%end\n\nif(nargin>=2 && size(mat,1)>1 && jsonopt('ArrayIndent',1,varargin{:})==1)\n    formatstr=[repmat(tab,1,level) formatstr];\nend\n\ntxt=sprintf(formatstr,mat');\ntxt(end-length(nl):end)=[];\nif(islogical(mat) && jsonopt('ParseLogical',0,varargin{:})==1)\n   txt=regexprep(txt,'1','true');\n   txt=regexprep(txt,'0','false');\nend\n%txt=regexprep(mat2str(mat),'\\s+',',');\n%txt=regexprep(txt,';',sprintf('],\\n['));\n% if(nargin>=2 && size(mat,1)>1)\n%     txt=regexprep(txt,'\\[',[repmat(sprintf('\\t'),1,level) '[']);\n% end\ntxt=[pre txt post];\nif(any(isinf(mat(:))))\n    txt=regexprep(txt,'([-+]*)Inf',jsonopt('Inf','\"$1_Inf_\"',varargin{:}));\nend\nif(any(isnan(mat(:))))\n    txt=regexprep(txt,'NaN',jsonopt('NaN','\"_NaN_\"',varargin{:}));\nend\n\n%%-------------------------------------------------------------------------\nfunction 
newname=checkname(name,varargin)\nisunpack=jsonopt('UnpackHex',1,varargin{:});\nnewname=name;\nif(isempty(regexp(name,'0x([0-9a-fA-F]+)_','once')))\n    return\nend\nif(isunpack)\n    isoct=jsonopt('IsOctave',0,varargin{:});\n    if(~isoct)\n        newname=regexprep(name,'(^x|_){1}0x([0-9a-fA-F]+)_','${native2unicode(hex2dec($2))}');\n    else\n        pos=regexp(name,'(^x|_){1}0x([0-9a-fA-F]+)_','start');\n        pend=regexp(name,'(^x|_){1}0x([0-9a-fA-F]+)_','end');\n        if(isempty(pos))\n            return;\n        end\n        str0=name;\n        pos0=[0 pend(:)' length(name)];\n        newname='';\n        for i=1:length(pos)\n            newname=[newname str0(pos0(i)+1:pos(i)-1) char(hex2dec(str0(pos(i)+3:pend(i)-1)))];\n        end\n        if(pos(end)~=length(name))\n            newname=[newname str0(pos0(end-1)+1:pos0(end))];\n        end\n    end\nend\n\n%%-------------------------------------------------------------------------\nfunction newstr=escapejsonstring(str)\nnewstr=str;\nisoct=exist('OCTAVE_VERSION','builtin');\nif(isoct)\n   vv=sscanf(OCTAVE_VERSION,'%f');\n   if(vv(1)>=3.8)\n       isoct=0;\n   end\nend\nif(isoct)\n  escapechars={'\\\\','\\\"','\\/','\\a','\\f','\\n','\\r','\\t','\\v'};\n  for i=1:length(escapechars);\n    newstr=regexprep(newstr,escapechars{i},escapechars{i});\n  end\n  newstr=regexprep(newstr,'\\\\\\\\(u[0-9a-fA-F]{4}[^0-9a-fA-F]*)','\\$1');\nelse\n  escapechars={'\\\\','\\\"','\\/','\\a','\\b','\\f','\\n','\\r','\\t','\\v'};\n  for i=1:length(escapechars);\n    newstr=regexprep(newstr,escapechars{i},regexprep(escapechars{i},'\\\\','\\\\\\\\'));\n  end\n  newstr=regexprep(newstr,'\\\\\\\\(u[0-9a-fA-F]{4}[^0-9a-fA-F]*)','\\\\$1');\nend\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/saveubjson.m",
    "content": "function json=saveubjson(rootname,obj,varargin)\n%\n% json=saveubjson(rootname,obj,filename)\n%    or\n% json=saveubjson(rootname,obj,opt)\n% json=saveubjson(rootname,obj,'param1',value1,'param2',value2,...)\n%\n% convert a MATLAB object (cell, struct or array) into a Universal \n% Binary JSON (UBJSON) binary string\n%\n% author: Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% created on 2013/08/17\n%\n% $Id$\n%\n% input:\n%      rootname: the name of the root-object, when set to '', the root name\n%        is ignored, however, when opt.ForceRootName is set to 1 (see below),\n%        the MATLAB variable name will be used as the root name.\n%      obj: a MATLAB object (array, cell, cell array, struct, struct array,\n%      class instance)\n%      filename: a string for the file name to save the output UBJSON data\n%      opt: a struct for additional options, ignore to use default values.\n%        opt can have the following fields (first in [.|.] is the default)\n%\n%        opt.FileName [''|string]: a file name to save the output JSON data\n%        opt.ArrayToStruct[0|1]: when set to 0, saveubjson outputs 1D/2D\n%                         array in JSON array format; if sets to 1, an\n%                         array will be shown as a struct with fields\n%                         \"_ArrayType_\", \"_ArraySize_\" and \"_ArrayData_\"; for\n%                         sparse arrays, the non-zero elements will be\n%                         saved to _ArrayData_ field in triplet-format i.e.\n%                         (ix,iy,val) and \"_ArrayIsSparse_\" will be added\n%                         with a value of 1; for a complex array, the \n%                         _ArrayData_ array will include two columns \n%                         (4 for sparse) to record the real and imaginary \n%                         parts, and also \"_ArrayIsComplex_\":1 is added. 
\n%        opt.ParseLogical [1|0]: if this is set to 1, logical array elem\n%                         will use true/false rather than 1/0.\n%        opt.SingletArray [0|1]: if this is set to 1, arrays with a single\n%                         numerical element will be shown without a square\n%                         bracket, unless it is the root object; if 0, square\n%                         brackets are forced for any numerical arrays.\n%        opt.SingletCell  [1|0]: if 1, always enclose a cell with \"[]\" \n%                         even it has only one element; if 0, brackets\n%                         are ignored when a cell has only 1 element.\n%        opt.ForceRootName [0|1]: when set to 1 and rootname is empty, saveubjson\n%                         will use the name of the passed obj variable as the \n%                         root object name; if obj is an expression and \n%                         does not have a name, 'root' will be used; if this \n%                         is set to 0 and rootname is empty, the root level \n%                         will be merged down to the lower level.\n%        opt.JSONP [''|string]: to generate a JSONP output (JSON with padding),\n%                         for example, if opt.JSON='foo', the JSON data is\n%                         wrapped inside a function call as 'foo(...);'\n%        opt.UnpackHex [1|0]: conver the 0x[hex code] output by loadjson \n%                         back to the string form\n%\n%        opt can be replaced by a list of ('param',value) pairs. The param \n%        string is equivallent to a field in opt and is case sensitive.\n% output:\n%      json: a binary string in the UBJSON format (see http://ubjson.org)\n%\n% examples:\n%      jsonmesh=struct('MeshNode',[0 0 0;1 0 0;0 1 0;1 1 0;0 0 1;1 0 1;0 1 1;1 1 1],... 
\n%               'MeshTetra',[1 2 4 8;1 3 4 8;1 2 6 8;1 5 6 8;1 5 7 8;1 3 7 8],...\n%               'MeshTri',[1 2 4;1 2 6;1 3 4;1 3 7;1 5 6;1 5 7;...\n%                          2 8 4;2 8 6;3 8 4;3 8 7;5 8 6;5 8 7],...\n%               'MeshCreator','FangQ','MeshTitle','T6 Cube',...\n%               'SpecialData',[nan, inf, -inf]);\n%      saveubjson('jsonmesh',jsonmesh)\n%      saveubjson('jsonmesh',jsonmesh,'meshdata.ubj')\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details\n%\n% -- this function is part of JSONLab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nif(nargin==1)\n   varname=inputname(1);\n   obj=rootname;\n   if(isempty(varname)) \n      varname='root';\n   end\n   rootname=varname;\nelse\n   varname=inputname(2);\nend\nif(length(varargin)==1 && ischar(varargin{1}))\n   opt=struct('filename',varargin{1});\nelse\n   opt=varargin2struct(varargin{:});\nend\nopt.IsOctave=exist('OCTAVE_VERSION','builtin');\nif(isfield(opt,'norowbracket'))\n    warning('Option ''NoRowBracket'' is depreciated, please use ''SingletArray'' and set its value to not(NoRowBracket)');\n    if(~isfield(opt,'singletarray'))\n        opt.singletarray=not(opt.norowbracket);\n    end\nend\nrootisarray=0;\nrootlevel=1;\nforceroot=jsonopt('ForceRootName',0,opt);\nif((isnumeric(obj) || islogical(obj) || ischar(obj) || isstruct(obj) || ...\n        iscell(obj) || isobject(obj)) && isempty(rootname) && forceroot==0)\n    rootisarray=1;\n    rootlevel=0;\nelse\n    if(isempty(rootname))\n        rootname=varname;\n    end\nend\nif((isstruct(obj) || iscell(obj))&& isempty(rootname) && forceroot)\n    rootname='root';\nend\njson=obj2ubjson(rootname,obj,rootlevel,opt);\nif(~rootisarray)\n    json=['{' json '}'];\nend\n\njsonp=jsonopt('JSONP','',opt);\nif(~isempty(jsonp))\n    json=[jsonp '(' json ')'];\nend\n\n% save to a file if FileName is set, suggested by Patrick Rapin\nfilename=jsonopt('FileName','',opt);\nif(~isempty(filename))\n    fid = 
fopen(filename, 'wb');\n    fwrite(fid,json);\n    fclose(fid);\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=obj2ubjson(name,item,level,varargin)\n\nif(iscell(item))\n    txt=cell2ubjson(name,item,level,varargin{:});\nelseif(isstruct(item))\n    txt=struct2ubjson(name,item,level,varargin{:});\nelseif(ischar(item))\n    txt=str2ubjson(name,item,level,varargin{:});\nelseif(isobject(item)) \n    txt=matlabobject2ubjson(name,item,level,varargin{:});\nelse\n    txt=mat2ubjson(name,item,level,varargin{:});\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=cell2ubjson(name,item,level,varargin)\ntxt='';\nif(~iscell(item))\n        error('input is not a cell');\nend\n\ndim=size(item);\nif(ndims(squeeze(item))>2) % for 3D or higher dimensions, flatten to 2D for now\n    item=reshape(item,dim(1),numel(item)/dim(1));\n    dim=size(item);\nend\nbracketlevel=~jsonopt('singletcell',1,varargin{:});\nlen=numel(item); % let's handle 1D cell first\nif(len>bracketlevel) \n    if(~isempty(name))\n        txt=[N_(checkname(name,varargin{:})) '[']; name=''; \n    else\n        txt='['; \n    end\nelseif(len==0)\n    if(~isempty(name))\n        txt=[N_(checkname(name,varargin{:})) 'Z']; name=''; \n    else\n        txt='Z'; \n    end\nend\nfor j=1:dim(2)\n    if(dim(1)>1)\n        txt=[txt '['];\n    end\n    for i=1:dim(1)\n       txt=[txt obj2ubjson(name,item{i,j},level+(len>bracketlevel),varargin{:})];\n    end\n    if(dim(1)>1)\n        txt=[txt ']'];\n    end\nend\nif(len>bracketlevel)\n    txt=[txt ']'];\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=struct2ubjson(name,item,level,varargin)\ntxt='';\nif(~isstruct(item))\n\terror('input is not a struct');\nend\ndim=size(item);\nif(ndims(squeeze(item))>2) % for 3D or higher dimensions, flatten to 2D for now\n    item=reshape(item,dim(1),numel(item)/dim(1));\n    
dim=size(item);\nend\nlen=numel(item);\nforcearray= (len>1 || (jsonopt('SingletArray',0,varargin{:})==1 && level>0));\n\nif(~isempty(name)) \n    if(forcearray)\n        txt=[N_(checkname(name,varargin{:})) '['];\n    end\nelse\n    if(forcearray)\n        txt='[';\n    end\nend\nfor j=1:dim(2)\n  if(dim(1)>1)\n      txt=[txt '['];\n  end\n  for i=1:dim(1)\n     names = fieldnames(item(i,j));\n     if(~isempty(name) && len==1 && ~forcearray)\n        txt=[txt N_(checkname(name,varargin{:})) '{']; \n     else\n        txt=[txt '{']; \n     end\n     if(~isempty(names))\n       for e=1:length(names)\n\t     txt=[txt obj2ubjson(names{e},item(i,j).(names{e}),...\n             level+(dim(1)>1)+1+forcearray,varargin{:})];\n       end\n     end\n     txt=[txt '}'];\n  end\n  if(dim(1)>1)\n      txt=[txt ']'];\n  end\nend\nif(forcearray)\n    txt=[txt ']'];\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=str2ubjson(name,item,level,varargin)\ntxt='';\nif(~ischar(item))\n        error('input is not a string');\nend\nitem=reshape(item, max(size(item),[1 0]));\nlen=size(item,1);\n\nif(~isempty(name)) \n    if(len>1)\n        txt=[N_(checkname(name,varargin{:})) '['];\n    end\nelse\n    if(len>1)\n        txt='[';\n    end\nend\nfor e=1:len\n    val=item(e,:);\n    if(len==1)\n        obj=[N_(checkname(name,varargin{:})) '' '',S_(val),''];\n        if(isempty(name))\n            obj=['',S_(val),''];\n        end\n        txt=[txt,'',obj];\n    else\n        txt=[txt,'',['',S_(val),'']];\n    end\nend\nif(len>1)\n    txt=[txt ']'];\nend\n\n%%-------------------------------------------------------------------------\nfunction txt=mat2ubjson(name,item,level,varargin)\nif(~isnumeric(item) && ~islogical(item))\n        error('input is not an array');\nend\n\nif(length(size(item))>2 || issparse(item) || ~isreal(item) || ...\n   (isempty(item) && any(size(item))) ||jsonopt('ArrayToStruct',0,varargin{:}))\n      
cid=I_(uint32(max(size(item))));\n      if(isempty(name))\n    \ttxt=['{' N_('_ArrayType_'),S_(class(item)),N_('_ArraySize_'),I_a(size(item),cid(1)) ];\n      else\n          if(isempty(item))\n              txt=[N_(checkname(name,varargin{:})),'Z'];\n              return;\n          else\n    \t      txt=[N_(checkname(name,varargin{:})),'{',N_('_ArrayType_'),S_(class(item)),N_('_ArraySize_'),I_a(size(item),cid(1))];\n          end\n      end\nelse\n    if(isempty(name))\n    \ttxt=matdata2ubjson(item,level+1,varargin{:});\n    else\n        if(numel(item)==1 && jsonopt('SingletArray',0,varargin{:})==0)\n            numtxt=regexprep(regexprep(matdata2ubjson(item,level+1,varargin{:}),'^\\[',''),']','');\n           \ttxt=[N_(checkname(name,varargin{:})) numtxt];\n        else\n    \t    txt=[N_(checkname(name,varargin{:})),matdata2ubjson(item,level+1,varargin{:})];\n        end\n    end\n    return;\nend\nif(issparse(item))\n    [ix,iy]=find(item);\n    data=full(item(find(item)));\n    if(~isreal(item))\n       data=[real(data(:)),imag(data(:))];\n       if(size(item,1)==1)\n           % Kludge to have data's 'transposedness' match item's.\n           % (Necessary for complex row vector handling below.)\n           data=data';\n       end\n       txt=[txt,N_('_ArrayIsComplex_'),'T'];\n    end\n    txt=[txt,N_('_ArrayIsSparse_'),'T'];\n    if(size(item,1)==1)\n        % Row vector, store only column indices.\n        txt=[txt,N_('_ArrayData_'),...\n           matdata2ubjson([iy(:),data'],level+2,varargin{:})];\n    elseif(size(item,2)==1)\n        % Column vector, store only row indices.\n        txt=[txt,N_('_ArrayData_'),...\n           matdata2ubjson([ix,data],level+2,varargin{:})];\n    else\n        % General case, store row and column indices.\n        txt=[txt,N_('_ArrayData_'),...\n           matdata2ubjson([ix,iy,data],level+2,varargin{:})];\n    end\nelse\n    if(isreal(item))\n        txt=[txt,N_('_ArrayData_'),...\n            
matdata2ubjson(item(:)',level+2,varargin{:})];\n    else\n        txt=[txt,N_('_ArrayIsComplex_'),'T'];\n        txt=[txt,N_('_ArrayData_'),...\n            matdata2ubjson([real(item(:)) imag(item(:))],level+2,varargin{:})];\n    end\nend\ntxt=[txt,'}'];\n\n%%-------------------------------------------------------------------------\nfunction txt=matlabobject2ubjson(name,item,level,varargin)\nif numel(item) == 0 %empty object\n    st = struct();\nelse\n    % \"st = struct(item);\" would produce an inmutable warning, because it\n    % make the protected and private properties visible. Instead we get the\n    % visible properties\n    propertynames = properties(item);\n    for p = 1:numel(propertynames)\n        for o = numel(item):-1:1 % aray of objects\n            st(o).(propertynames{p}) = item(o).(propertynames{p});\n        end\n    end\nend\ntxt=struct2ubjson(name,st,level,varargin{:});\n\n%%-------------------------------------------------------------------------\nfunction txt=matdata2ubjson(mat,level,varargin)\nif(isempty(mat))\n    txt='Z';\n    return;\nend\ntype='';\nhasnegtive=(mat<0);\nif(isa(mat,'integer') || isinteger(mat) || (isfloat(mat) && all(mod(mat(:),1) == 0)))\n    if(isempty(hasnegtive))\n       if(max(mat(:))<=2^8)\n           type='U';\n       end\n    end\n    if(isempty(type))\n        % todo - need to consider negative ones separately\n        id= histc(abs(max(mat(:))),[0 2^7 2^15 2^31 2^63]);\n        if(isempty(id~=0))\n            error('high-precision data is not yet supported');\n        end\n        key='iIlL';\n\ttype=key(id~=0);\n    end\n    txt=[I_a(mat(:),type,size(mat))];\nelseif(islogical(mat))\n    logicalval='FT';\n    if(numel(mat)==1)\n        txt=logicalval(mat+1);\n    else\n        txt=['[$U#' I_a(size(mat),'l') typecast(swapbytes(uint8(mat(:)')),'uint8')];\n    end\nelse\n    if(numel(mat)==1)\n        txt=['[' D_(mat) ']'];\n    else\n        txt=D_a(mat(:),'D',size(mat));\n    
end\nend\n\n%txt=regexprep(mat2str(mat),'\\s+',',');\n%txt=regexprep(txt,';',sprintf('],['));\n% if(nargin>=2 && size(mat,1)>1)\n%     txt=regexprep(txt,'\\[',[repmat(sprintf('\\t'),1,level) '[']);\n% end\nif(any(isinf(mat(:))))\n    txt=regexprep(txt,'([-+]*)Inf',jsonopt('Inf','\"$1_Inf_\"',varargin{:}));\nend\nif(any(isnan(mat(:))))\n    txt=regexprep(txt,'NaN',jsonopt('NaN','\"_NaN_\"',varargin{:}));\nend\n\n%%-------------------------------------------------------------------------\nfunction newname=checkname(name,varargin)\nisunpack=jsonopt('UnpackHex',1,varargin{:});\nnewname=name;\nif(isempty(regexp(name,'0x([0-9a-fA-F]+)_','once')))\n    return\nend\nif(isunpack)\n    isoct=jsonopt('IsOctave',0,varargin{:});\n    if(~isoct)\n        newname=regexprep(name,'(^x|_){1}0x([0-9a-fA-F]+)_','${native2unicode(hex2dec($2))}');\n    else\n        pos=regexp(name,'(^x|_){1}0x([0-9a-fA-F]+)_','start');\n        pend=regexp(name,'(^x|_){1}0x([0-9a-fA-F]+)_','end');\n        if(isempty(pos))\n            return;\n        end\n        str0=name;\n        pos0=[0 pend(:)' length(name)];\n        newname='';\n        for i=1:length(pos)\n            newname=[newname str0(pos0(i)+1:pos(i)-1) char(hex2dec(str0(pos(i)+3:pend(i)-1)))];\n        end\n        if(pos(end)~=length(name))\n            newname=[newname str0(pos0(end-1)+1:pos0(end))];\n        end\n    end\nend\n%%-------------------------------------------------------------------------\nfunction val=N_(str)\nval=[I_(int32(length(str))) str];\n%%-------------------------------------------------------------------------\nfunction val=S_(str)\nif(length(str)==1)\n  val=['C' str];\nelse\n  val=['S' I_(int32(length(str))) str];\nend\n%%-------------------------------------------------------------------------\nfunction val=I_(num)\nif(~isinteger(num))\n    error('input is not an integer');\nend\nif(num>=0 && num<255)\n   val=['U' data2byte(swapbytes(cast(num,'uint8')),'uint8')];\n   
return;\nend\nkey='iIlL';\ncid={'int8','int16','int32','int64'};\nfor i=1:4\n  if((num>0 && num<2^(i*8-1)) || (num<0 && num>=-2^(i*8-1)))\n    val=[key(i) data2byte(swapbytes(cast(num,cid{i})),'uint8')];\n    return;\n  end\nend\nerror('unsupported integer');\n\n%%-------------------------------------------------------------------------\nfunction val=D_(num)\nif(~isfloat(num))\n    error('input is not a float');\nend\n\nif(isa(num,'single'))\n  val=['d' data2byte(num,'uint8')];\nelse\n  val=['D' data2byte(num,'uint8')];\nend\n%%-------------------------------------------------------------------------\nfunction data=I_a(num,type,dim,format)\nid=find(ismember('iUIlL',type));\n\nif(id==0)\n  error('unsupported integer array');\nend\n\n% based on UBJSON specs, all integer types are stored in big endian format\n\nif(id==1)\n  data=data2byte(swapbytes(int8(num)),'uint8');\n  blen=1;\nelseif(id==2)\n  data=data2byte(swapbytes(uint8(num)),'uint8');\n  blen=1;\nelseif(id==3)\n  data=data2byte(swapbytes(int16(num)),'uint8');\n  blen=2;\nelseif(id==4)\n  data=data2byte(swapbytes(int32(num)),'uint8');\n  blen=4;\nelseif(id==5)\n  data=data2byte(swapbytes(int64(num)),'uint8');\n  blen=8;\nend\n\nif(nargin>=3 && length(dim)>=2 && prod(dim)~=dim(2))\n  format='opt';\nend\nif((nargin<4 || strcmp(format,'opt')) && numel(num)>1)\n  if(nargin>=3 && (length(dim)==1 || (length(dim)>=2 && prod(dim)~=dim(2))))\n      cid=I_(uint32(max(dim)));\n      data=['$' type '#' I_a(dim,cid(1)) data(:)'];\n  else\n      data=['$' type '#' I_(int32(numel(data)/blen)) data(:)'];\n  end\n  data=['[' data(:)'];\nelse\n  data=reshape(data,blen,numel(data)/blen);\n  data(2:blen+1,:)=data;\n  data(1,:)=type;\n  data=data(:)';\n  data=['[' data(:)' ']'];\nend\n%%-------------------------------------------------------------------------\nfunction data=D_a(num,type,dim,format)\nid=find(ismember('dD',type));\n\nif(id==0)\n  error('unsupported float array');\nend\n\nif(id==1)\n  
data=data2byte(single(num),'uint8');\nelseif(id==2)\n  data=data2byte(double(num),'uint8');\nend\n\nif(nargin>=3 && length(dim)>=2 && prod(dim)~=dim(2))\n  format='opt';\nend\nif((nargin<4 || strcmp(format,'opt')) && numel(num)>1)\n  if(nargin>=3 && (length(dim)==1 || (length(dim)>=2 && prod(dim)~=dim(2))))\n      cid=I_(uint32(max(dim)));\n      data=['$' type '#' I_a(dim,cid(1)) data(:)'];\n  else\n      data=['$' type '#' I_(int32(numel(data)/(id*4))) data(:)'];\n  end\n  data=['[' data];\nelse\n  data=reshape(data,(id*4),length(data)/(id*4));\n  data(2:(id*4+1),:)=data;\n  data(1,:)=type;\n  data=data(:)';\n  data=['[' data(:)' ']'];\nend\n%%-------------------------------------------------------------------------\nfunction bytes=data2byte(varargin)\nbytes=typecast(varargin{:});\nbytes=bytes(:)';\n"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/struct2jdata.m",
    "content": "function newdata=struct2jdata(data,varargin)\n%\n% newdata=struct2jdata(data,opt,...)\n%\n% convert a JData object (in the form of a struct array) into an array\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n%\n% input:\n%      data: a struct array. If data contains JData keywords in the first\n%            level children, these fields are parsed and regrouped into a\n%            data object (arrays, trees, graphs etc) based on JData \n%            specification. The JData keywords are\n%               \"_ArrayType_\", \"_ArraySize_\", \"_ArrayData_\"\n%               \"_ArrayIsSparse_\", \"_ArrayIsComplex_\"\n%      opt: (optional) a list of 'Param',value pairs for additional options \n%           The supported options include\n%               'Recursive', if set to 1, will apply the conversion to \n%                            every child; 0 to disable\n%\n% output:\n%      newdata: the converted data if the input data does contain a JData \n%               structure; otherwise, the same as the input.\n%\n% examples:\n%      obj=struct('_ArrayType_','double','_ArraySize_',[2 3],\n%                 '_ArrayIsSparse_',1 ,'_ArrayData_',null);\n%      ubjdata=struct2jdata(obj);\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details \n%\n% -- this function is part of JSONLab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nfn=fieldnames(data);\nnewdata=data;\nlen=length(data);\nif(jsonopt('Recursive',0,varargin{:})==1)\n  for i=1:length(fn) % depth-first\n    for j=1:len\n        if(isstruct(getfield(data(j),fn{i})))\n            newdata(j)=setfield(newdata(j),fn{i},jstruct2array(getfield(data(j),fn{i})));\n        end\n    end\n  end\nend\nif(~isempty(strmatch('x0x5F_ArrayType_',fn)) && ~isempty(strmatch('x0x5F_ArrayData_',fn)))\n  newdata=cell(len,1);\n  for j=1:len\n    ndata=cast(data(j).x0x5F_ArrayData_,data(j).x0x5F_ArrayType_);\n    iscpx=0;\n    if(~isempty(strmatch('x0x5F_ArrayIsComplex_',fn)))\n 
       if(data(j).x0x5F_ArrayIsComplex_)\n           iscpx=1;\n        end\n    end\n    if(~isempty(strmatch('x0x5F_ArrayIsSparse_',fn)))\n        if(data(j).x0x5F_ArrayIsSparse_)\n            if(~isempty(strmatch('x0x5F_ArraySize_',fn)))\n                dim=double(data(j).x0x5F_ArraySize_);\n                if(iscpx && size(ndata,2)==4-any(dim==1))\n                    ndata(:,end-1)=complex(ndata(:,end-1),ndata(:,end));\n                end\n                if isempty(ndata)\n                    % All-zeros sparse\n                    ndata=sparse(dim(1),prod(dim(2:end)));\n                elseif dim(1)==1\n                    % Sparse row vector\n                    ndata=sparse(1,ndata(:,1),ndata(:,2),dim(1),prod(dim(2:end)));\n                elseif dim(2)==1\n                    % Sparse column vector\n                    ndata=sparse(ndata(:,1),1,ndata(:,2),dim(1),prod(dim(2:end)));\n                else\n                    % Generic sparse array.\n                    ndata=sparse(ndata(:,1),ndata(:,2),ndata(:,3),dim(1),prod(dim(2:end)));\n                end\n            else\n                if(iscpx && size(ndata,2)==4)\n                    ndata(:,3)=complex(ndata(:,3),ndata(:,4));\n                end\n                ndata=sparse(ndata(:,1),ndata(:,2),ndata(:,3));\n            end\n        end\n    elseif(~isempty(strmatch('x0x5F_ArraySize_',fn)))\n        if(iscpx && size(ndata,2)==2)\n             ndata=complex(ndata(:,1),ndata(:,2));\n        end\n        ndata=reshape(ndata(:),data(j).x0x5F_ArraySize_);\n    end\n    newdata{j}=ndata;\n  end\n  if(len==1)\n      newdata=newdata{1};\n  end\nend"
  },
  {
    "path": "eval/poseval/matlab/external/jsonlab/varargin2struct.m",
    "content": "function opt=varargin2struct(varargin)\n%\n% opt=varargin2struct('param1',value1,'param2',value2,...)\n%   or\n% opt=varargin2struct(...,optstruct,...)\n%\n% convert a series of input parameters into a structure\n%\n% authors:Qianqian Fang (fangq<at> nmr.mgh.harvard.edu)\n% date: 2012/12/22\n%\n% input:\n%      'param', value: the input parameters should be pairs of a string and a value\n%       optstruct: if a parameter is a struct, the fields will be merged to the output struct\n%\n% output:\n%      opt: a struct where opt.param1=value1, opt.param2=value2 ...\n%\n% license:\n%     BSD License, see LICENSE_BSD.txt files for details \n%\n% -- this function is part of jsonlab toolbox (http://iso2mesh.sf.net/cgi-bin/index.cgi?jsonlab)\n%\n\nlen=length(varargin);\nopt=struct;\nif(len==0) return; end\ni=1;\nwhile(i<=len)\n    if(isstruct(varargin{i}))\n        opt=mergestruct(opt,varargin{i});\n    elseif(ischar(varargin{i}) && i<len)\n        opt=setfield(opt,lower(varargin{i}),varargin{i+1});\n        i=i+1;\n    else\n        error('input must be in the form of ...,''name'',value,... pairs or structs');\n    end\n    i=i+1;\nend\n\n"
  },
  {
    "path": "eval/poseval/matlab/mat2json.m",
    "content": "function mat2json(dataDir)\n% The function is part of the PoseTrack dataset.\n%\n% It converts labels from a MATLAB structure loaded from a *.mat file to a Python dictionary and saves it in a *.json file\n%\n% dataDir (string): directory containing labels in *mat files\n% Usage:\n%       mat2json(dataDir)\n%\n% Example: \n%       mat2json('../../../posetrack_data/annotations/val/')\n%\n\nfiles = dir([dataDir '/*mat']);\nfprintf('convert mat to json\\n');\nfor i = 1:length(files)\n    fprintf('%d/%d %s\\n',i,length(files),files(i).name);\n    filename = [dataDir '/' files(i).name];\n    [p,n,~] = fileparts(filename);\n    labels = load(filename);\n    savejson('',labels,'FileName',[p '/' n '.json'],'SingletArray',1,'NaN','nan','Inf','inf');\nend"
  },
  {
    "path": "eval/poseval/matlab/startup.m",
    "content": "addpath('./external/jsonlab')"
  },
  {
    "path": "eval/poseval/poseval/__init__.py",
    "content": ""
  },
  {
    "path": "eval/poseval/poseval/convert.py",
    "content": "#!/usr/bin/env python\n\"\"\"Convert between COCO and PoseTrack2017 format.\"\"\"\nfrom __future__ import print_function\n\nimport json\nimport logging\nimport os\nimport os.path as path\n\nimport click\nimport numpy as np\nimport tqdm\n\nfrom .posetrack18_id2fname import posetrack18_fname2id, posetrack18_id2fname\n\nLOGGER = logging.getLogger(__name__)\nPOSETRACK18_LM_NAMES_COCO_ORDER = [\n    \"nose\",\n    \"head_bottom\",  # \"left_eye\",\n    \"head_top\",  # \"right_eye\",\n    \"left_ear\",  # will be left zeroed out\n    \"right_ear\",  # will be left zeroed out\n    \"left_shoulder\",\n    \"right_shoulder\",\n    \"left_elbow\",\n    \"right_elbow\",\n    \"left_wrist\",\n    \"right_wrist\",\n    \"left_hip\",\n    \"right_hip\",\n    \"left_knee\",\n    \"right_knee\",\n    \"left_ankle\",\n    \"right_ankle\",\n]\nPOSETRACK18_LM_NAMES = [  # This is used to identify the IDs.\n    \"right_ankle\",\n    \"right_knee\",\n    \"right_hip\",\n    \"left_hip\",\n    \"left_knee\",\n    \"left_ankle\",\n    \"right_wrist\",\n    \"right_elbow\",\n    \"right_shoulder\",\n    \"left_shoulder\",\n    \"left_elbow\",\n    \"left_wrist\",\n    \"head_bottom\",\n    \"nose\",\n    \"head_top\",\n]\n\nSCORE_WARNING_EMITTED = False\n\n\ndef json_default(val):\n    \"\"\"Serialization workaround\n    https://stackoverflow.com/questions/11942364/typeerror-integer-is-not-json-serializable-when-serializing-json-in-python.\"\"\"\n    if isinstance(val, np.int64):\n        return int(val)\n    raise TypeError\n\n\nclass Video:\n\n    \"\"\"\n    A PoseTrack sequence.\n\n    Parameters\n    ==========\n\n    video_id: str.\n      A five or six digit number, potentially with leading zeros, identifying the\n      PoseTrack video.\n    \"\"\"\n\n    def __init__(self, video_id):\n        self.posetrack_video_id = video_id  # str.\n        self.frames = []  # list of Image objects.\n\n    def to_new(self):\n        \"\"\"Return a dictionary representation for 
the PoseTrack18 format.\"\"\"\n        result = {\"images\": [], \"annotations\": []}\n        for image in self.frames:\n            image_json = image.to_new()\n            image_json[\"vid_id\"] = self.posetrack_video_id\n            image_json[\"nframes\"] = len(self.frames)\n            image_json[\"id\"] = int(image.frame_id)\n            result[\"images\"].append(image_json)\n            for person_idx, person in enumerate(image.people):\n                person_json = person.to_new()\n                person_json[\"image_id\"] = int(image.frame_id)\n                person_json[\"id\"] = int(image.frame_id) * 100 + person_idx\n                result[\"annotations\"].append(person_json)\n        # Write the 'categories' field.\n        result[\"categories\"] = [\n            {\n                \"supercategory\": \"person\",\n                \"name\": \"person\",\n                \"skeleton\": [\n                    [16, 14],\n                    [14, 12],\n                    [17, 15],\n                    [15, 13],\n                    [12, 13],\n                    [6, 12],\n                    [7, 13],\n                    [6, 7],\n                    [6, 8],\n                    [7, 9],\n                    [8, 10],\n                    [9, 11],\n                    [2, 3],\n                    [1, 2],\n                    [1, 3],\n                    [2, 4],\n                    [3, 5],\n                    [4, 6],\n                    [5, 7],\n                ],\n                \"keypoints\": POSETRACK18_LM_NAMES_COCO_ORDER,\n                \"id\": 1,\n            }\n        ]\n        return result\n\n    def to_old(self):\n        \"\"\"Return a dictionary representation for the PoseTrack17 format.\"\"\"\n        res = {\"annolist\": []}\n        for image in self.frames:\n            elem = {}\n            im_rep, ir_list, imgnum = image.to_old()\n            elem[\"image\"] = [im_rep]\n            elem[\"imgnum\"] = [imgnum]\n            if 
ir_list:\n                elem[\"ignore_regions\"] = ir_list\n            elem[\"annorect\"] = []\n            for person in image.people:\n                elem[\"annorect\"].append(person.to_old())\n            if image.people:\n                elem['is_labeled'] = [1]\n            else:\n                elem['is_labeled'] = [0]\n            res[\"annolist\"].append(elem)\n        return res\n\n    @classmethod\n    def from_old(cls, track_data):\n        \"\"\"Parse a dictionary representation from the PoseTrack17 format.\"\"\"\n        assert \"annolist\" in track_data.keys(), \"Wrong format!\"\n        video = None\n        for image_info in track_data[\"annolist\"]:\n            image = Image.from_old(image_info)\n            if not video:\n                video = Video(\n                    path.basename(path.dirname(image.posetrack_filename)).split(\"_\")[0]\n                )\n            else:\n                assert (\n                    video.posetrack_video_id\n                    == path.basename(path.dirname(image.posetrack_filename)).split(\"_\")[\n                        0\n                    ]\n                )\n            video.frames.append(image)\n        return [video]\n\n    @classmethod\n    def from_new(cls, track_data):\n        \"\"\"Parse a dictionary representation from the PoseTrack18 format.\"\"\"\n        image_id_to_can_info = {}\n        video_id_to_video = {}\n        assert len(track_data[\"categories\"]) == 1\n        assert track_data[\"categories\"][0][\"name\"] == \"person\"\n        assert len(track_data[\"categories\"][0][\"keypoints\"]) in [15, 17]\n        conversion_table = []\n        for lm_name in track_data[\"categories\"][0][\"keypoints\"]:\n            if lm_name not in POSETRACK18_LM_NAMES:\n                conversion_table.append(None)\n            else:\n                conversion_table.append(POSETRACK18_LM_NAMES.index(lm_name))\n        for lm_idx, lm_name in enumerate(POSETRACK18_LM_NAMES):\n            
assert lm_idx in conversion_table, \"Landmark `%s` not found.\" % (lm_name)\n        videos = []\n        for image_id in [image[\"id\"] for image in track_data[\"images\"]]:\n            image = Image.from_new(track_data, image_id)\n            video_id = path.basename(path.dirname(image.posetrack_filename)).split(\n                \"_\"\n            )[0]\n            if video_id in video_id_to_video.keys():\n                video = video_id_to_video[video_id]\n            else:\n                video = Video(video_id)\n                video_id_to_video[video_id] = video\n                videos.append(video)\n            video.frames.append(image)\n            for person_info in track_data[\"annotations\"]:\n                if person_info[\"image_id\"] != image_id:\n                    continue\n                image.people.append(Person.from_new(person_info, conversion_table))\n        return videos\n\n\nclass Person:\n\n    \"\"\"\n    A PoseTrack annotated person.\n\n    Parameters\n    ==========\n\n    track_id: int\n      Unique integer representing a person track.\n    \"\"\"\n\n    def __init__(self, track_id):\n        self.track_id = track_id\n        self.landmarks = None  # None or list of dicts with 'score', 'x', 'y', 'id'.\n        self.rect_head = None  # None or dict with 'x1', 'x2', 'y1' and 'y2'.\n        self.rect = None  # None or dict with 'x1', 'x2', 'y1' and 'y2'.\n        self.score = None  # None or float.\n\n    def to_new(self):\n        \"\"\"\n        Return a dictionary representation for the PoseTrack18 format.\n\n        The fields 'image_id' and 'id' must be added to the result.\n        \"\"\"\n        keypoints = []\n        scores = []\n        write_scores = (\n            len([1 for lm_info in self.landmarks if \"score\" in lm_info.keys()]) > 0\n        )\n        for landmark_name in POSETRACK18_LM_NAMES_COCO_ORDER:\n            try:\n                try:\n                    lm_id = 
POSETRACK18_LM_NAMES.index(landmark_name)\n                except ValueError:\n                    lm_id = -1\n                landmark_info = [lm for lm in self.landmarks if lm[\"id\"] == lm_id][0]\n            except IndexError:\n                landmark_info = {\"x\": 0, \"y\": 0, \"is_visible\": 0}\n            is_visible = 1\n            if \"is_visible\" in landmark_info.keys():\n                is_visible = landmark_info[\"is_visible\"]\n            keypoints.extend([landmark_info[\"x\"], landmark_info[\"y\"], is_visible])\n            if \"score\" in landmark_info.keys():\n                scores.append(landmark_info[\"score\"])\n            elif write_scores:\n                LOGGER.warning(\"Landmark with missing score info detected. Using 0.\")\n                scores.append(0.)\n        ret = {\n            \"track_id\": self.track_id,\n            \"category_id\": 1,\n            \"keypoints\": keypoints,\n            \"scores\": scores,\n            # image_id and id added later.\n        }\n        if self.rect:\n            ret[\"bbox\"] = [\n                self.rect[\"x1\"],\n                self.rect[\"y1\"],\n                self.rect[\"x2\"] - self.rect[\"x1\"],\n                self.rect[\"y2\"] - self.rect[\"y1\"],\n            ]\n        if self.rect_head:\n            ret[\"bbox_head\"] = [\n                self.rect_head[\"x1\"],\n                self.rect_head[\"y1\"],\n                self.rect_head[\"x2\"] - self.rect_head[\"x1\"],\n                self.rect_head[\"y2\"] - self.rect_head[\"y1\"],\n            ]\n        return ret\n\n    def to_old(self):\n        \"\"\"Return a dictionary representation for the PoseTrack17 format.\"\"\"\n        keypoints = []\n        for landmark_info in self.landmarks:\n            if (\n                landmark_info[\"x\"] == 0\n                and landmark_info[\"y\"] == 0\n                and \"is_visible\" in landmark_info.keys()\n                and landmark_info[\"is_visible\"] == 0\n           
 ):\n                # The points in new format are stored like this if they're unannotated.\n                # Skip in that case.\n                continue\n            point = {\n                \"id\": [landmark_info[\"id\"]],\n                \"x\": [landmark_info[\"x\"]],\n                \"y\": [landmark_info[\"y\"]],\n            }\n            if \"score\" in landmark_info.keys():\n                point[\"score\"] = [landmark_info[\"score\"]]\n            if \"is_visible\" in landmark_info.keys():\n                point[\"is_visible\"] = [landmark_info[\"is_visible\"]]\n            keypoints.append(point)\n        # ret = {\"track_id\": [self.track_id], \"annopoints\": keypoints}\n        ret = {\"track_id\": [self.track_id], \"annopoints\": [{'point': keypoints}]}\n        if self.rect_head:\n            ret[\"x1\"] = [self.rect_head[\"x1\"]]\n            ret[\"x2\"] = [self.rect_head[\"x2\"]]\n            ret[\"y1\"] = [self.rect_head[\"y1\"]]\n            ret[\"y2\"] = [self.rect_head[\"y2\"]]\n        if self.score:\n            ret[\"score\"] = [self.score]\n        return ret\n\n    @classmethod\n    def from_old(cls, person_info):\n        \"\"\"Parse a dictionary representation from the PoseTrack17 format.\"\"\"\n        global SCORE_WARNING_EMITTED  # pylint: disable=global-statement\n        person = Person(person_info[\"track_id\"][0])\n        assert len(person_info[\"track_id\"]) == 1, \"Invalid format!\"\n        rect_head = {}\n        rect_head[\"x1\"] = person_info[\"x1\"][0]\n        assert len(person_info[\"x1\"]) == 1, \"Invalid format!\"\n        rect_head[\"x2\"] = person_info[\"x2\"][0]\n        assert len(person_info[\"x2\"]) == 1, \"Invalid format!\"\n        rect_head[\"y1\"] = person_info[\"y1\"][0]\n        assert len(person_info[\"y1\"]) == 1, \"Invalid format!\"\n        rect_head[\"y2\"] = person_info[\"y2\"][0]\n        assert len(person_info[\"y2\"]) == 1, \"Invalid format!\"\n        person.rect_head = rect_head\n        
try:\n            person.score = person_info[\"score\"][0]\n            assert len(person_info[\"score\"]) == 1, \"Invalid format!\"\n        except KeyError:\n            pass\n        person.landmarks = []\n        if \"annopoints\" not in person_info.keys() or not person_info[\"annopoints\"]:\n            return person\n        lm_x_values = []\n        lm_y_values = []\n        for landmark_info in person_info[\"annopoints\"][0][\"point\"]:\n            lm_dict = {\n                \"y\": landmark_info[\"y\"][0],\n                \"x\": landmark_info[\"x\"][0],\n                \"id\": landmark_info[\"id\"][0],\n            }\n            lm_x_values.append(lm_dict[\"x\"])\n            lm_y_values.append(lm_dict[\"y\"])\n            if \"score\" in landmark_info.keys():\n                lm_dict[\"score\"] = landmark_info[\"score\"][0]\n                assert len(landmark_info[\"score\"]) == 1, \"Invalid format!\"\n            elif not SCORE_WARNING_EMITTED:\n                LOGGER.warning(\"No landmark scoring information found!\")\n                LOGGER.warning(\"This will not be a valid submission file!\")\n                SCORE_WARNING_EMITTED = True\n            if \"is_visible\" in landmark_info.keys():\n                lm_dict[\"is_visible\"] = landmark_info[\"is_visible\"][0]\n            person.landmarks.append(lm_dict)\n            assert (\n                len(landmark_info[\"x\"]) == 1\n                and len(landmark_info[\"y\"]) == 1\n                and len(landmark_info[\"id\"]) == 1\n            ), \"Invalid format!\"\n        lm_x_values = np.array(lm_x_values)\n        lm_y_values = np.array(lm_y_values)\n        x_extent = lm_x_values.max() - lm_x_values.min()\n        y_extent = lm_y_values.max() - lm_y_values.min()\n        x_center = (lm_x_values.max() + lm_x_values.min()) / 2.\n        y_center = (lm_y_values.max() + lm_y_values.min()) / 2.\n        x1_final = x_center - x_extent * 0.65\n        x2_final = x_center + x_extent * 0.65\n   
     y1_final = y_center - y_extent * 0.65\n        y2_final = y_center + y_extent * 0.65\n        person.rect = {\"x1\": x1_final, \"x2\": x2_final, \"y1\": y1_final, \"y2\": y2_final}\n        return person\n\n    @classmethod\n    def from_new(cls, person_info, conversion_table):\n        \"\"\"Parse a dictionary representation from the PoseTrack18 format.\"\"\"\n        global SCORE_WARNING_EMITTED  # pylint: disable=global-statement\n        person = Person(person_info[\"track_id\"])\n        try:\n            rect_head = {}\n            rect_head[\"x1\"] = person_info[\"bbox_head\"][0]\n            rect_head[\"x2\"] = person_info[\"bbox_head\"][0] + person_info[\"bbox_head\"][2]\n            rect_head[\"y1\"] = person_info[\"bbox_head\"][1]\n            rect_head[\"y2\"] = person_info[\"bbox_head\"][1] + person_info[\"bbox_head\"][3]\n            person.rect_head = rect_head\n        except KeyError:\n            person.rect_head = None\n        try:\n            rect = {}\n            rect[\"x1\"] = person_info[\"bbox\"][0]\n            rect[\"x2\"] = person_info[\"bbox\"][0] + person_info[\"bbox\"][2]\n            rect[\"y1\"] = person_info[\"bbox\"][1]\n            rect[\"y2\"] = person_info[\"bbox\"][1] + person_info[\"bbox\"][3]\n            person.rect = rect\n        except KeyError:\n            person.rect = None\n        if \"score\" in person_info.keys():\n            person.score = person_info[\"score\"]\n        try:\n            landmark_scores = person_info[\"scores\"]\n        except KeyError:\n            landmark_scores = None\n            if not SCORE_WARNING_EMITTED:\n                LOGGER.warning(\"No landmark scoring information found!\")\n                LOGGER.warning(\"This will not be a valid submission file!\")\n                SCORE_WARNING_EMITTED = True\n        person.landmarks = []\n        for landmark_idx, landmark_info in enumerate(\n            np.array(person_info[\"keypoints\"]).reshape(len(conversion_table), 3)\n        
):\n            landmark_idx_can = conversion_table[landmark_idx]\n            if landmark_idx_can is not None:\n                lm_info = {\n                    \"y\": landmark_info[1],\n                    \"x\": landmark_info[0],\n                    \"id\": landmark_idx_can,\n                    \"is_visible\": landmark_info[2],\n                }\n                if landmark_scores:\n                    lm_info[\"score\"] = landmark_scores[landmark_idx]\n                person.landmarks.append(lm_info)\n        return person\n\n\nclass Image:\n\n    \"\"\"An image with annotated people on it.\"\"\"\n\n    def __init__(self, filename, frame_id):\n        self.posetrack_filename = filename\n        self.frame_id = frame_id\n        self.people = []\n        self.ignore_regions = None  # None or tuple of (regions_x, regions_y), each a\n        # list of lists of polygon coordinates.\n\n    def to_new(self):\n        \"\"\"\n        Return a dictionary representation for the PoseTrack18 format.\n\n        The field 'vid_id' must still be added.\n        \"\"\"\n        ret = {\n            \"file_name\": self.posetrack_filename,\n            \"has_no_densepose\": True,\n            \"is_labeled\": (len(self.people) > 0),\n            \"frame_id\": self.frame_id,\n            # vid_id and nframes are inserted later.\n        }\n        if self.ignore_regions:\n            ret[\"ignore_regions_x\"] = self.ignore_regions[0]\n            ret[\"ignore_regions_y\"] = self.ignore_regions[1]\n        return ret\n\n    def to_old(self):\n        \"\"\"\n        Return a dictionary representation for the PoseTrack17 format.\n\n        People are added later.\n        \"\"\"\n        ret = {\"name\": self.posetrack_filename}\n        if self.ignore_regions:\n            ir_list = []\n            for plist_x, plist_y in zip(self.ignore_regions[0], self.ignore_regions[1]):\n                r_list = []\n                for x_val, y_val in zip(plist_x, plist_y):\n                
    r_list.append({\"x\": [x_val], \"y\": [y_val]})\n                ir_list.append({\"point\": r_list})\n        else:\n            ir_list = None\n        imgnum = int(path.basename(self.posetrack_filename).split(\".\")[0]) + 1\n        return ret, ir_list, imgnum\n\n    @classmethod\n    def from_old(cls, json_data):\n        \"\"\"Parse a dictionary representation from the PoseTrack17 format.\"\"\"\n        posetrack_filename = json_data[\"image\"][0][\"name\"]\n        assert len(json_data[\"image\"]) == 1, \"Invalid format!\"\n        old_seq_fp = path.basename(path.dirname(posetrack_filename))\n        fp_wo_ending = path.basename(posetrack_filename).split(\".\")[0]\n        if \"_\" in fp_wo_ending:\n            fp_wo_ending = fp_wo_ending.split(\"_\")[0]\n        old_frame_id = int(fp_wo_ending)\n        try:\n            frame_id = posetrack18_fname2id(old_seq_fp, old_frame_id)\n        except:  # pylint: disable=bare-except\n            print(\"I stumbled over a strange sequence. 
Maybe you can have a look?\")\n            import pdb\n\n            pdb.set_trace()  # pylint: disable=no-member\n        image = Image(posetrack_filename, frame_id)\n        for person_info in json_data[\"annorect\"]:\n            image.people.append(Person.from_old(person_info))\n        if \"ignore_regions\" in json_data.keys():\n            ignore_regions_x = []\n            ignore_regions_y = []\n            for ignore_region in json_data[\"ignore_regions\"]:\n                x_values = []\n                y_values = []\n                for point in ignore_region[\"point\"]:\n                    x_values.append(point[\"x\"][0])\n                    y_values.append(point[\"y\"][0])\n                ignore_regions_x.append(x_values)\n                ignore_regions_y.append(y_values)\n            image.ignore_regions = (ignore_regions_x, ignore_regions_y)\n        return image\n\n    @classmethod\n    def from_new(cls, track_data, image_id):\n        \"\"\"Parse a dictionary representation from the PoseTrack18 format.\"\"\"\n        image_info = [\n            image_info\n            for image_info in track_data[\"images\"]\n            if image_info[\"id\"] == image_id\n        ][0]\n        posetrack_filename = image_info[\"file_name\"]\n        # license, coco_url, height, width, date_capture, flickr_url, id are lost.\n        old_seq_fp = path.basename(path.dirname(posetrack_filename))\n        old_frame_id = int(path.basename(posetrack_filename).split(\".\")[0])\n        frame_id = posetrack18_fname2id(old_seq_fp, old_frame_id)\n        image = Image(posetrack_filename, frame_id)\n        if (\n            \"ignore_regions_x\" in image_info.keys()\n            and \"ignore_regions_y\" in image_info.keys()\n        ):\n            image.ignore_regions = (\n                image_info[\"ignore_regions_x\"],\n                image_info[\"ignore_regions_y\"],\n            )\n        return image\n\n\n@click.command()\n@click.argument(\n    \"in_fp\", 
type=click.Path(exists=True, readable=True, dir_okay=True, file_okay=True)\n)\n@click.option(\n    \"--out_fp\",\n    type=click.Path(exists=False, writable=True, file_okay=False),\n    default=\"converted\",\n    help=\"Write the results to this folder (may not exist). Default: converted.\",\n)\ndef cli(in_fp, out_fp=\"converted\"):\n    \"\"\"Convert between PoseTrack18 and PoseTrack17 format.\"\"\"\n    LOGGER.info(\"Converting `%s` to `%s`...\", in_fp, out_fp)\n    if in_fp.endswith(\".zip\") and path.isfile(in_fp):\n        LOGGER.info(\"Unzipping...\")\n        import zipfile\n        import tempfile\n\n        unzip_dir = tempfile.mkdtemp()\n        with zipfile.ZipFile(in_fp, \"r\") as zip_ref:\n            zip_ref.extractall(unzip_dir)\n        in_fp = unzip_dir\n        LOGGER.info(\"Done.\")\n    else:\n        unzip_dir = None\n    if path.isfile(in_fp):\n        track_fps = [in_fp]\n    else:\n        track_fps = sorted(\n            [\n                path.join(in_fp, track_fp)\n                for track_fp in os.listdir(in_fp)\n                if track_fp.endswith(\".json\")\n            ]\n        )\n    LOGGER.info(\"Identified %d track files.\", len(track_fps))\n    assert path.isfile(track_fps[0]), \"`%s` is not a file!\" % (track_fps[0])\n    with open(track_fps[0], \"r\") as inf:\n        first_track = json.load(inf)\n    # Determine format.\n    old_to_new = False\n    if \"annolist\" in first_track.keys():\n        old_to_new = True\n        LOGGER.info(\"Detected PoseTrack17 format. Converting to 2018...\")\n    else:\n        assert \"images\" in first_track.keys(), \"Unknown image format. :(\"\n        LOGGER.info(\"Detected PoseTrack18 format. 
Converting to 2017...\")\n\n    videos = []\n    LOGGER.info(\"Parsing data...\")\n    for track_fp in tqdm.tqdm(track_fps):\n        with open(track_fp, \"r\") as inf:\n            track_data = json.load(inf)\n        if old_to_new:\n            videos.extend(Video.from_old(track_data))\n        else:\n            videos.extend(Video.from_new(track_data))\n    LOGGER.info(\"Writing data...\")\n    if not path.exists(out_fp):\n        os.mkdir(out_fp)\n    for video in tqdm.tqdm(videos):\n        target_fp = path.join(\n            out_fp, posetrack18_id2fname(video.frames[0].frame_id)[0] + \".json\"\n        )\n        if old_to_new:\n            converted_json = video.to_new()\n        else:\n            converted_json = video.to_old()\n        with open(target_fp, \"w\") as outf:\n            json.dump(converted_json, outf, default=json_default)\n    if unzip_dir:\n        import shutil\n\n        LOGGER.debug(\"Deleting temporary directory...\")\n        # os.unlink cannot remove a directory; the unzip target from\n        # tempfile.mkdtemp() must be removed recursively.\n        shutil.rmtree(unzip_dir)\n    LOGGER.info(\"Done.\")\n\ndef convert_videos(track_data):\n    \"\"\"Convert between PoseTrack18 and PoseTrack17 format.\"\"\"\n    if \"annolist\" in track_data.keys():\n        old_to_new = True\n        LOGGER.info(\"Detected PoseTrack17 format. Converting to 2018...\")\n    else:\n        old_to_new = False\n        assert \"images\" in track_data.keys(), \"Unknown image format. :(\"\n        LOGGER.info(\"Detected PoseTrack18 format. Converting to 2017...\")\n\n    if (old_to_new):\n        videos = Video.from_old(track_data)\n        videos_converted = [v.to_new() for v in videos]\n    else:\n        videos = Video.from_new(track_data)\n        videos_converted = [v.to_old() for v in videos]\n    return videos_converted\n\nif __name__ == '__main__':\n    logging.basicConfig(level=logging.DEBUG)\n    cli()  # pylint: disable=no-value-for-parameter\n"
  },
  {
    "path": "eval/poseval/poseval/eval_helpers.py",
    "content": "import numpy as np\nfrom shapely import geometry\nimport pdb\nimport sys\nimport os\nimport json\nimport glob\nfrom .convert import convert_videos\n\nMIN_SCORE = -9999\nMAX_TRACK_ID = 100000\n\nclass Joint:\n    def __init__(self):\n        self.count = 15\n        self.right_ankle = 0\n        self.right_knee = 1\n        self.right_hip = 2\n        self.left_hip = 3\n        self.left_knee = 4\n        self.left_ankle = 5\n        self.right_wrist = 6\n        self.right_elbow = 7\n        self.right_shoulder = 8\n        self.left_shoulder = 9\n        self.left_elbow = 10\n        self.left_wrist = 11\n        self.neck = 12\n        self.nose = 13\n        self.head_top = 14\n\n        self.name = {}\n        self.name[self.right_ankle]    = \"right_ankle\"\n        self.name[self.right_knee]     = \"right_knee\"\n        self.name[self.right_hip]      = \"right_hip\"\n        self.name[self.right_shoulder] = \"right_shoulder\"\n        self.name[self.right_elbow]    = \"right_elbow\"\n        self.name[self.right_wrist]    = \"right_wrist\"\n        self.name[self.left_ankle]     = \"left_ankle\"\n        self.name[self.left_knee]      = \"left_knee\"\n        self.name[self.left_hip]       = \"left_hip\"\n        self.name[self.left_shoulder]  = \"left_shoulder\"\n        self.name[self.left_elbow]     = \"left_elbow\"\n        self.name[self.left_wrist]     = \"left_wrist\"\n        self.name[self.neck]           = \"neck\"\n        self.name[self.nose]           = \"nose\"\n        self.name[self.head_top]       = \"head_top\"\n\n        self.symmetric_joint = {}\n        self.symmetric_joint[self.right_ankle]    = self.left_ankle\n        self.symmetric_joint[self.right_knee]     = self.left_knee\n        self.symmetric_joint[self.right_hip]      = self.left_hip\n        self.symmetric_joint[self.right_shoulder] = self.left_shoulder\n        self.symmetric_joint[self.right_elbow]    = self.left_elbow\n        
self.symmetric_joint[self.right_wrist]    = self.left_wrist\n        self.symmetric_joint[self.left_ankle]     = self.right_ankle\n        self.symmetric_joint[self.left_knee]      = self.right_knee\n        self.symmetric_joint[self.left_hip]       = self.right_hip\n        self.symmetric_joint[self.left_shoulder]  = self.right_shoulder\n        self.symmetric_joint[self.left_elbow]     = self.right_elbow\n        self.symmetric_joint[self.left_wrist]     = self.right_wrist\n        self.symmetric_joint[self.neck]           = -1\n        self.symmetric_joint[self.nose]           = -1\n        self.symmetric_joint[self.head_top]       = -1\n\n\ndef getPointGTbyID(points,pidx):\n\n    point = []\n    for i in range(len(points)):\n        if (points[i][\"id\"] is not None and points[i][\"id\"][0] == pidx): # if joint id matches\n            point = points[i]\n            break\n\n    return point\n\n\ndef getHeadSize(x1,y1,x2,y2):\n    headSize = 0.6*np.linalg.norm(np.subtract([x2,y2],[x1,y1]));\n    return headSize\n\n\ndef formatCell(val,delim):\n    return \"{:>5}\".format(\"%1.1f\" % val) + delim\n\n\ndef getHeader():\n    strHeader = \"&\"\n    strHeader += \" Head &\"\n    strHeader += \" Shou &\"\n    strHeader += \" Elb  &\"\n    strHeader += \" Wri  &\"\n    strHeader += \" Hip  &\"\n    strHeader += \" Knee &\"\n    strHeader += \" Ankl &\"\n    strHeader += \" Total%s\" % (\"\\\\\"+\"\\\\\")\n    return strHeader\n\n\ndef getMotHeader():\n    strHeader = \"&\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTA &\"\n    strHeader += \" MOTP &\"\n    strHeader += \" Prec &\"\n    strHeader += \" Rec &\"\n    strHeader += \" IDF1 &\"\n    strHeader += \" IDs  %s\\n\" % (\"\\\\\"+\"\\\\\")\n    strHeader += \"&\"\n    strHeader += \" Head &\"\n    strHeader += \" Shou &\"\n    
strHeader += \" Elb  &\"\n    strHeader += \" Wri  &\"\n    strHeader += \" Hip  &\"\n    strHeader += \" Knee &\"\n    strHeader += \" Ankl &\"\n    strHeader += \" Total&\"\n    strHeader += \" Total&\"\n    strHeader += \" Total&\"\n    strHeader += \" Total&\"\n    strHeader += \" Total&\"\n    strHeader += \" Total%s\" % (\"\\\\\"+\"\\\\\")\n\n    return strHeader\n\n\ndef getCum(vals):\n    cum = []; n = -1\n    cum += [(vals[[Joint().head_top,      Joint().neck,        Joint().nose],0].mean())]\n    cum += [(vals[[Joint().right_shoulder,Joint().left_shoulder],0].mean())]\n    cum += [(vals[[Joint().right_elbow,   Joint().left_elbow   ],0].mean())]\n    cum += [(vals[[Joint().right_wrist,   Joint().left_wrist   ],0].mean())]\n    cum += [(vals[[Joint().right_hip,     Joint().left_hip     ],0].mean())]\n    cum += [(vals[[Joint().right_knee,    Joint().left_knee    ],0].mean())]\n    cum += [(vals[[Joint().right_ankle,   Joint().left_ankle   ],0].mean())]\n    for i in range(Joint().count,len(vals)):\n        cum += [vals[i,0]]\n    return cum\n\n\ndef getFormatRow(cum):\n    row = \"&\"\n    for i in range(len(cum)-1):\n        row += formatCell(cum[i],\" &\")\n    row += formatCell(cum[len(cum)-1],(\" %s\" % \"\\\\\"+\"\\\\\"))\n    return row\n\n\ndef printTable(vals,motHeader=False):\n\n    cum = getCum(vals)\n    row = getFormatRow(cum)\n    if (motHeader):\n        header = getMotHeader()\n    else:\n        header = getHeader()\n    print(header)\n    print(row)\n    return header+\"\\n\", row+\"\\n\"\n\n\ndef printTableTracking(valsPerPart):\n\n    cum = getCum(valsPerPart)\n    row = getFormatRow(cum)\n    print(getHeader())\n    print(row)\n    return getHeader()+\"\\n\", row+\"\\n\"\n\n\n# compute recall/precision curve (RPC) values\ndef computeRPC(scores,labels,totalPos):\n\n    precision = np.zeros(len(scores))\n    recall    = np.zeros(len(scores))\n    npos = 0;\n\n    idxsSort = np.array(scores).argsort()[::-1]\n    labelsSort = 
labels[idxsSort];\n\n    for sidx in range(len(idxsSort)):\n        if (labelsSort[sidx] == 1):\n            npos += 1\n        # recall: how many true positives were found out of the total number of positives?\n        recall[sidx]    = 1.0*npos / totalPos\n        # precision: how many true positives were found out of the total number of samples?\n        precision[sidx] = 1.0*npos / (sidx + 1)\n\n    return precision, recall, idxsSort\n\n\n# compute Average Precision using recall/precision values\ndef VOCap(rec,prec):\n\n    mpre = np.zeros([1,2+len(prec)])\n    mpre[0,1:len(prec)+1] = prec\n    mrec = np.zeros([1,2+len(rec)])\n    mrec[0,1:len(rec)+1] = rec\n    mrec[0,len(rec)+1] = 1.0\n\n    for i in range(mpre.size-2,-1,-1):\n        mpre[0,i] = max(mpre[0,i],mpre[0,i+1])\n\n    i = np.argwhere( ~np.equal( mrec[0,1:], mrec[0,:mrec.shape[1]-1]) )+1\n    i = i.flatten()\n\n    # compute area under the curve\n    ap = np.sum( np.multiply( np.subtract( mrec[0,i], mrec[0,i-1]), mpre[0,i] ) )\n\n    return ap\n\ndef get_data_dir():\n  dataDir = \"./\"\n  return dataDir\n\ndef process_arguments(argv):\n\n  mode = 'multi'\n\n  if len(argv) > 3:\n    mode   = str.lower(argv[3])\n\n  gt_file = argv[1]\n  pred_file = argv[2]\n\n  if not os.path.exists(gt_file):\n    raise Exception('Given ground truth directory does not exist!')\n\n  if not os.path.exists(pred_file):\n    raise Exception('Given prediction directory does not exist!')\n\n  return gt_file, pred_file, mode\n\ndef process_arguments_server(argv):\n  mode = 'multi'\n\n  print(len(argv))\n  assert len(argv) == 10, \"Wrong number of arguments\"\n\n  gt_dir = argv[1]\n  pred_dir = argv[2]\n  mode   = str.lower(argv[3])\n  evaltrack = argv[4]\n  shortname = argv[5]\n  chl = argv[6]\n  shortname_uid = argv[7]\n  shakey = argv[8]\n  timestamp = argv[9]\n  if not os.path.exists(gt_dir):\n    help('Given ground truth does not exist!\\n')\n\n  if not os.path.exists(pred_dir):\n    help('Given prediction does not 
exist!\\n')\n\n  return gt_dir, pred_dir, mode, evaltrack, shortname, chl, shortname_uid, shakey, timestamp\n\n\ndef load_data(argv):\n\n  dataDir = get_data_dir()\n\n  gt_file, pred_file, mode = process_arguments(argv)\n  gtFilename = dataDir + gt_file\n  predFilename = dataDir + pred_file\n\n  # load ground truth (GT)\n  with open(gtFilename) as data_file:\n      data = json.load(data_file)\n  gtFramesAll = data\n\n  # load predictions\n  with open(predFilename) as data_file:\n      data = json.load(data_file)\n  prFramesAll = data\n\n  return gtFramesAll, prFramesAll\n\n\ndef cleanupData(gtFramesAll,prFramesAll):\n\n  # remove all GT frames with empty annorects and remove corresponding entries from predictions\n  imgidxs = []\n  for imgidx in range(len(gtFramesAll)):\n    if (len(gtFramesAll[imgidx][\"annorect\"]) > 0):\n      imgidxs += [imgidx]\n  gtFramesAll = [gtFramesAll[imgidx] for imgidx in imgidxs]\n  prFramesAll = [prFramesAll[imgidx] for imgidx in imgidxs]\n\n  # remove all gt rectangles that do not have annotations\n  for imgidx in range(len(gtFramesAll)):\n    gtFramesAll[imgidx][\"annorect\"] = removeRectsWithoutPoints(gtFramesAll[imgidx][\"annorect\"])\n    prFramesAll[imgidx][\"annorect\"] = removeRectsWithoutPoints(prFramesAll[imgidx][\"annorect\"])\n\n  return gtFramesAll, prFramesAll\n\n\ndef removeIgnoredPointsRects(rects,polyList):\n\n    ridxs = list(range(len(rects)))\n    for ridx in range(len(rects)):\n        points = rects[ridx][\"annopoints\"][0][\"point\"]\n        pidxs = list(range(len(points)))\n        for pidx in range(len(points)):\n            pt = geometry.Point(points[pidx][\"x\"][0], points[pidx][\"y\"][0])\n            bIgnore = False\n            for poidx in range(len(polyList)):\n                poly = polyList[poidx]\n                if (poly.contains(pt)):\n                    bIgnore = True\n                    break\n            if (bIgnore):\n                pidxs.remove(pidx)\n        points = [points[pidx] for 
pidx in pidxs]\n        if (len(points) > 0):\n            rects[ridx][\"annopoints\"][0][\"point\"] = points\n        else:\n            ridxs.remove(ridx)\n    rects = [rects[ridx] for ridx in ridxs]\n    return rects\n\n\ndef removeIgnoredPoints(gtFramesAll,prFramesAll):\n\n    imgidxs = []\n    for imgidx in range(len(gtFramesAll)):\n        if (\"ignore_regions\" in gtFramesAll[imgidx].keys() and\n            len(gtFramesAll[imgidx][\"ignore_regions\"]) > 0):\n            regions = gtFramesAll[imgidx][\"ignore_regions\"]\n            polyList = []\n            for ridx in range(len(regions)):\n                points = regions[ridx][\"point\"]\n                pointList = []\n                for pidx in range(len(points)):\n                    pt = geometry.Point(points[pidx][\"x\"][0], points[pidx][\"y\"][0])\n                    pointList += [pt]\n                poly = geometry.Polygon([[p.x, p.y] for p in pointList])\n                polyList += [poly]\n\n            rects = prFramesAll[imgidx][\"annorect\"]\n            prFramesAll[imgidx][\"annorect\"] = removeIgnoredPointsRects(rects,polyList)\n            rects = gtFramesAll[imgidx][\"annorect\"]\n            gtFramesAll[imgidx][\"annorect\"] = removeIgnoredPointsRects(rects,polyList)\n\n    return gtFramesAll, prFramesAll\n\n\ndef rectHasPoints(rect):\n    return ((\"annopoints\" in rect.keys()) and\n            (len(rect[\"annopoints\"]) > 0 and len(rect[\"annopoints\"][0]) > 0) and\n            (\"point\" in rect[\"annopoints\"][0].keys()))\n\n\ndef removeRectsWithoutPoints(rects):\n\n  idxsPr = []\n  for ridxPr in range(len(rects)):\n    if (rectHasPoints(rects[ridxPr])):\n        idxsPr += [ridxPr]\n  rects = [rects[ridx] for ridx in idxsPr]\n  return rects\n\ndef load_data_dir(argv):\n\n  gt_dir, pred_dir, mode = process_arguments(argv)\n  if not os.path.exists(gt_dir):\n    raise IOError('Given GT directory ' + gt_dir + ' does not exist!')\n  if not os.path.exists(pred_dir):\n    raise IOError('Given prediction directory ' + pred_dir + ' does not exist!')\n  filenames = glob.glob(gt_dir + \"/*.json\")\n  gtFramesAll = []\n  prFramesAll = []\n\n  for i in range(len(filenames)):\n    # load each annotation json file\n    with open(filenames[i]) as data_file:\n        data = json.load(data_file)\n    if (not \"annolist\" in data):\n        data = convert_videos(data)[0]\n    gt = data[\"annolist\"]\n    for imgidx in range(len(gt)):\n        gt[imgidx][\"seq_id\"] = i\n        gt[imgidx][\"seq_name\"] = os.path.basename(filenames[i]).split('.')[0]\n        for ridxGT in range(len(gt[imgidx][\"annorect\"])):\n            if (\"track_id\" in gt[imgidx][\"annorect\"][ridxGT].keys()):\n                # adjust track_ids to make them unique across all sequences\n                assert(gt[imgidx][\"annorect\"][ridxGT][\"track_id\"][0] < MAX_TRACK_ID)\n                gt[imgidx][\"annorect\"][ridxGT][\"track_id\"][0] += i*MAX_TRACK_ID\n    gtFramesAll += gt\n    gtBasename = os.path.basename(filenames[i])\n    predFilename = os.path.join(pred_dir, gtBasename)\n\n    if (not os.path.exists(predFilename)):\n        raise IOError('Prediction file ' + predFilename + ' does not exist')\n\n    # load predictions\n    with open(predFilename) as data_file:\n        data = json.load(data_file)\n    if (not \"annolist\" in data):\n        data = convert_videos(data)[0]\n    pr = data[\"annolist\"]\n    if len(pr) != len(gt):\n        raise Exception('# prediction frames %d <> # GT frames %d for %s' % (len(pr),len(gt),predFilename))\n    for imgidx in range(len(pr)):\n        track_id_frame = []\n        for ridxPr in range(len(pr[imgidx][\"annorect\"])):\n            if (\"track_id\" in pr[imgidx][\"annorect\"][ridxPr].keys()):\n                track_id = pr[imgidx][\"annorect\"][ridxPr][\"track_id\"][0]\n                track_id_frame += [track_id]\n                # adjust track_ids to make them unique across all sequences\n                assert(track_id < MAX_TRACK_ID)\n                
pr[imgidx][\"annorect\"][ridxPr][\"track_id\"][0] += i*MAX_TRACK_ID\n        track_id_frame_unique = np.unique(np.array(track_id_frame)).tolist()\n        if len(track_id_frame) != len(track_id_frame_unique):\n            # duplicate tracklet IDs within a frame are tolerated here;\n            # re-enable this check to enforce uniqueness:\n            #raise Exception('Non-unique tracklet IDs found in frame %s of prediction %s' % (pr[imgidx][\"image\"][0][\"name\"],predFilename))\n            pass\n    prFramesAll += pr\n\n  gtFramesAll,prFramesAll = cleanupData(gtFramesAll,prFramesAll)\n\n  gtFramesAll,prFramesAll = removeIgnoredPoints(gtFramesAll,prFramesAll)\n\n  return gtFramesAll, prFramesAll\n\n\ndef writeJson(val,fname):\n  with open(fname, 'w') as data_file:\n    json.dump(val, data_file)\n\n\ndef assignGTmulti(gtFrames, prFrames, distThresh):\n    assert (len(gtFrames) == len(prFrames))\n\n    nJoints = Joint().count\n    # part detection scores\n    scoresAll = {}\n    # positive / negative labels\n    labelsAll = {}\n    # number of annotated GT joints per image\n    nGTall = np.zeros([nJoints, len(gtFrames)])\n    for pidx in range(nJoints):\n        scoresAll[pidx] = {}\n        labelsAll[pidx] = {}\n        for imgidx in range(len(gtFrames)):\n            scoresAll[pidx][imgidx] = np.zeros([0, 0], dtype=np.float32)\n            labelsAll[pidx][imgidx] = np.zeros([0, 0], dtype=np.int8)\n\n    # GT track IDs\n    trackidxGT = []\n\n    # prediction track IDs\n    trackidxPr = []\n\n    # number of GT poses\n    nGTPeople = np.zeros((len(gtFrames), 1))\n    # number of predicted poses\n    nPrPeople = np.zeros((len(gtFrames), 1))\n\n    # container to save info for computing MOT metrics\n    motAll = {}\n\n    for imgidx in range(len(gtFrames)):\n        # distance between predicted and GT joints\n        dist = np.full((len(prFrames[imgidx][\"annorect\"]), len(gtFrames[imgidx][\"annorect\"]), nJoints), np.inf)\n        # score of the predicted joint\n        score = 
np.full((len(prFrames[imgidx][\"annorect\"]), nJoints), np.nan)\n        # body joint prediction exist\n        hasPr = np.zeros((len(prFrames[imgidx][\"annorect\"]), nJoints), dtype=bool)\n        # body joint is annotated\n        hasGT = np.zeros((len(gtFrames[imgidx][\"annorect\"]), nJoints), dtype=bool)\n\n        trackidxGT = []\n        trackidxPr = []\n        idxsPr = []\n        for ridxPr in range(len(prFrames[imgidx][\"annorect\"])):\n            if ((\"annopoints\" in prFrames[imgidx][\"annorect\"][ridxPr].keys()) and\n                (\"point\" in prFrames[imgidx][\"annorect\"][ridxPr][\"annopoints\"][0].keys())):\n                idxsPr += [ridxPr];\n        prFrames[imgidx][\"annorect\"] = [prFrames[imgidx][\"annorect\"][ridx] for ridx in idxsPr]\n\n        nPrPeople[imgidx, 0] = len(prFrames[imgidx][\"annorect\"])\n        nGTPeople[imgidx, 0] = len(gtFrames[imgidx][\"annorect\"])\n        # iterate over GT poses\n        for ridxGT in range(len(gtFrames[imgidx][\"annorect\"])):\n            # GT pose\n            rectGT = gtFrames[imgidx][\"annorect\"][ridxGT]\n            if (\"track_id\" in rectGT.keys()):\n                trackidxGT += [rectGT[\"track_id\"][0]]\n            pointsGT = []\n            if len(rectGT[\"annopoints\"]) > 0:\n                pointsGT = rectGT[\"annopoints\"][0][\"point\"]\n            # iterate over all possible body joints\n            for i in range(nJoints):\n                # GT joint in LSP format\n                ppGT = getPointGTbyID(pointsGT, i)\n                if len(ppGT) > 0:\n                    hasGT[ridxGT, i] = True\n\n        # iterate over predicted poses\n        for ridxPr in range(len(prFrames[imgidx][\"annorect\"])):\n            # predicted pose\n            rectPr = prFrames[imgidx][\"annorect\"][ridxPr]\n            if (\"track_id\" in rectPr.keys()):\n                trackidxPr += [rectPr[\"track_id\"][0]]\n            pointsPr = rectPr[\"annopoints\"][0][\"point\"]\n            for i in 
range(nJoints):\n                # predicted joint in LSP format\n                ppPr = getPointGTbyID(pointsPr, i)\n                if len(ppPr) > 0:\n                    if not (\"score\" in ppPr.keys()):\n                        # use minimum score if predicted score is missing\n                        if (imgidx == 0):\n                            print('WARNING: prediction score is missing. Setting fallback score={}'.format(MIN_SCORE))\n                        score[ridxPr, i] = MIN_SCORE\n                    else:\n                        score[ridxPr, i] = ppPr[\"score\"][0]\n                    hasPr[ridxPr, i] = True\n\n        if len(prFrames[imgidx][\"annorect\"]) and len(gtFrames[imgidx][\"annorect\"]):\n            # predictions and GT are present\n            # iterate over GT poses\n            for ridxGT in range(len(gtFrames[imgidx][\"annorect\"])):\n                # GT pose\n                rectGT = gtFrames[imgidx][\"annorect\"][ridxGT]\n                # compute reference distance as head size\n                headSize = getHeadSize(rectGT[\"x1\"][0], rectGT[\"y1\"][0],\n                                                    rectGT[\"x2\"][0], rectGT[\"y2\"][0])\n                pointsGT = []\n                if len(rectGT[\"annopoints\"]) > 0:\n                    pointsGT = rectGT[\"annopoints\"][0][\"point\"]\n                # iterate over predicted poses\n                for ridxPr in range(len(prFrames[imgidx][\"annorect\"])):\n                    # predicted pose\n                    rectPr = prFrames[imgidx][\"annorect\"][ridxPr]\n                    pointsPr = rectPr[\"annopoints\"][0][\"point\"]\n\n                    # iterate over all possible body joints\n                    for i in range(nJoints):\n                        # GT joint\n                        ppGT = getPointGTbyID(pointsGT, i)\n                        # predicted joint\n                        ppPr = getPointGTbyID(pointsPr, i)\n                        # compute 
distance between predicted and GT joint locations\n                        if hasPr[ridxPr, i] and hasGT[ridxGT, i]:\n                            pointGT = [ppGT[\"x\"][0], ppGT[\"y\"][0]]\n                            pointPr = [ppPr[\"x\"][0], ppPr[\"y\"][0]]\n                            dist[ridxPr, ridxGT, i] = np.linalg.norm(np.subtract(pointGT, pointPr)) / headSize\n\n            dist = np.array(dist)\n            hasGT = np.array(hasGT)\n\n            # number of annotated joints\n            nGTp = np.sum(hasGT, axis=1)\n            match = dist <= distThresh\n            pck = 1.0 * np.sum(match, axis=2)\n            for i in range(hasPr.shape[0]):\n                for j in range(hasGT.shape[0]):\n                    if nGTp[j] > 0:\n                        pck[i, j] = pck[i, j] / nGTp[j]\n\n            # preserve best GT match only\n            idx = np.argmax(pck, axis=1)\n            val = np.max(pck, axis=1)\n            for ridxPr in range(pck.shape[0]):\n                for ridxGT in range(pck.shape[1]):\n                    if (ridxGT != idx[ridxPr]):\n                        pck[ridxPr, ridxGT] = 0\n            prToGT = np.argmax(pck, axis=0)\n            val = np.max(pck, axis=0)\n            prToGT[val == 0] = -1\n\n            # info to compute MOT metrics\n            mot = {}\n            for i in range(nJoints):\n                mot[i] = {}\n\n            for i in range(nJoints):\n                ridxsGT = np.argwhere(hasGT[:,i]).flatten().tolist()\n                ridxsPr = np.argwhere(hasPr[:,i]).flatten().tolist()\n                mot[i][\"trackidxGT\"] = [trackidxGT[idx] for idx in ridxsGT]\n                mot[i][\"trackidxPr\"] = [trackidxPr[idx] for idx in ridxsPr]\n                mot[i][\"ridxsGT\"] = np.array(ridxsGT)\n                mot[i][\"ridxsPr\"] = np.array(ridxsPr)\n                mot[i][\"dist\"] = np.full((len(ridxsGT),len(ridxsPr)),np.nan)\n                for iPr in range(len(ridxsPr)):\n                    for iGT in 
range(len(ridxsGT)):\n                        if (match[ridxsPr[iPr], ridxsGT[iGT], i]):\n                            mot[i][\"dist\"][iGT,iPr] = dist[ridxsPr[iPr], ridxsGT[iGT], i]\n\n            # assign predicted poses to GT poses\n            for ridxPr in range(hasPr.shape[0]):\n                if (ridxPr in prToGT):  # pose matches to GT\n                    # GT pose that matches the predicted pose\n                    ridxGT = np.argwhere(prToGT == ridxPr)\n                    assert(ridxGT.size == 1)\n                    ridxGT = ridxGT[0,0]\n                    s = score[ridxPr, :]\n                    m = np.squeeze(match[ridxPr, ridxGT, :])\n                    hp = hasPr[ridxPr, :]\n                    for i in range(len(hp)):\n                        if (hp[i]):\n                            scoresAll[i][imgidx] = np.append(scoresAll[i][imgidx], s[i])\n                            labelsAll[i][imgidx] = np.append(labelsAll[i][imgidx], m[i])\n\n                else:  # no matching to GT\n                    s = score[ridxPr, :]\n                    m = np.zeros([match.shape[2], 1], dtype=bool)\n                    hp = hasPr[ridxPr, :]\n                    for i in range(len(hp)):\n                        if (hp[i]):\n                            scoresAll[i][imgidx] = np.append(scoresAll[i][imgidx], s[i])\n                            labelsAll[i][imgidx] = np.append(labelsAll[i][imgidx], m[i])\n        else:\n            if not len(gtFrames[imgidx][\"annorect\"]):\n                # No GT available. 
All predictions are false positives\n                for ridxPr in range(hasPr.shape[0]):\n                    s = score[ridxPr, :]\n                    m = np.zeros([nJoints, 1], dtype=bool)\n                    hp = hasPr[ridxPr, :]\n                    for i in range(len(hp)):\n                        if hp[i]:\n                            scoresAll[i][imgidx] = np.append(scoresAll[i][imgidx], s[i])\n                            labelsAll[i][imgidx] = np.append(labelsAll[i][imgidx], m[i])\n            mot = {}\n            for i in range(nJoints):\n                mot[i] = {}\n            for i in range(nJoints):\n                ridxsGT = [0]\n                ridxsPr = [0]\n                mot[i][\"trackidxGT\"] = [0]\n                mot[i][\"trackidxPr\"] = [0]\n                mot[i][\"ridxsGT\"] = np.array(ridxsGT)\n                mot[i][\"ridxsPr\"] = np.array(ridxsPr)\n                mot[i][\"dist\"] = np.full((len(ridxsGT),len(ridxsPr)),np.nan)\n\n        # save number of GT joints\n        for ridxGT in range(hasGT.shape[0]):\n            hg = hasGT[ridxGT, :]\n            for i in range(len(hg)):\n                nGTall[i, imgidx] += hg[i]\n\n        motAll[imgidx] = mot\n\n    return scoresAll, labelsAll, nGTall, motAll\n"
  },
  {
    "path": "eval/poseval/poseval/evaluateAP.py",
    "content": "import numpy as np\nimport json\nimport os\nimport sys\n\nfrom . import eval_helpers\nfrom .eval_helpers import Joint\n\ndef computeMetrics(scoresAll, labelsAll, nGTall):\n    apAll = np.zeros((nGTall.shape[0] + 1, 1))\n    recAll = np.zeros((nGTall.shape[0] + 1, 1))\n    preAll = np.zeros((nGTall.shape[0] + 1, 1))\n    # iterate over joints\n    for j in range(nGTall.shape[0]):\n        scores = np.zeros([0, 0], dtype=np.float32)\n        labels = np.zeros([0, 0], dtype=np.int8)\n        # iterate over images\n        for imgidx in range(nGTall.shape[1]):\n            scores = np.append(scores, scoresAll[j][imgidx])\n            labels = np.append(labels, labelsAll[j][imgidx])\n        # compute recall/precision values\n        nGT = sum(nGTall[j, :])\n        precision, recall, scoresSortedIdxs = eval_helpers.computeRPC(scores, labels, nGT)\n        if (len(precision) > 0):\n            apAll[j] = eval_helpers.VOCap(recall, precision) * 100\n            preAll[j] = precision[len(precision) - 1] * 100\n            recAll[j] = recall[len(recall) - 1] * 100\n    idxs = np.argwhere(~np.isnan(apAll[:nGTall.shape[0],0]))\n    apAll[nGTall.shape[0]] = apAll[idxs, 0].mean()\n    idxs = np.argwhere(~np.isnan(recAll[:nGTall.shape[0],0]))\n    recAll[nGTall.shape[0]] = recAll[idxs, 0].mean()\n    idxs = np.argwhere(~np.isnan(preAll[:nGTall.shape[0],0]))\n    preAll[nGTall.shape[0]] = preAll[idxs, 0].mean()\n\n    return apAll, preAll, recAll\n\n\ndef evaluateAP(gtFramesAll, prFramesAll, outputDir, bSaveAll=True, bSaveSeq=False):\n\n    distThresh = 0.5\n\n    seqidxs = []\n    for imgidx in range(len(gtFramesAll)):\n        seqidxs += [gtFramesAll[imgidx][\"seq_id\"]]\n    seqidxs = np.array(seqidxs)\n\n    seqidxsUniq = np.unique(seqidxs)\n    nSeq = len(seqidxsUniq)\n\n    names = Joint().name\n    names['15'] = 'total'\n\n    if (bSaveSeq):\n        for si in range(nSeq):\n            print(\"seqidx: %d/%d\" % (si+1,nSeq))\n\n            # extract frames 
IDs for the sequence\n            imgidxs = np.argwhere(seqidxs == seqidxsUniq[si])\n            seqName = gtFramesAll[imgidxs[0,0]][\"seq_name\"]\n\n            gtFrames = [gtFramesAll[imgidx] for imgidx in imgidxs.flatten().tolist()]\n            prFrames = [prFramesAll[imgidx] for imgidx in imgidxs.flatten().tolist()]\n\n            # assign predicted poses to GT poses\n            scores, labels, nGT, _ = eval_helpers.assignGTmulti(gtFrames, prFrames, distThresh)\n\n            # compute average precision (AP), precision and recall per part\n            ap, pre, rec = computeMetrics(scores, labels, nGT)\n            metricsSeq = {'ap': ap.flatten().tolist(), 'pre': pre.flatten().tolist(), 'rec': rec.flatten().tolist(), 'names': names}\n\n            filename = outputDir + '/' + seqName + '_AP_metrics.json'\n            print('saving results to', filename)\n            eval_helpers.writeJson(metricsSeq,filename)\n\n    # assign predicted poses to GT poses\n    scoresAll, labelsAll, nGTall, _ = eval_helpers.assignGTmulti(gtFramesAll, prFramesAll, distThresh)\n\n    # compute average precision (AP), precision and recall per part\n    apAll, preAll, recAll = computeMetrics(scoresAll, labelsAll, nGTall)\n    if (bSaveAll):\n        metrics = {'ap': apAll.flatten().tolist(), 'pre': preAll.flatten().tolist(), 'rec': recAll.flatten().tolist(),  'names': names}\n        filename = outputDir + '/total_AP_metrics.json'\n        print('saving results to', filename)\n        eval_helpers.writeJson(metrics,filename)\n\n    return apAll, preAll, recAll\n"
  },
  {
    "path": "eval/poseval/poseval/evaluatePCKh.py",
    "content": "import numpy as np\nimport json\nimport os\nimport sys\n\nfrom . import eval_helpers\n\ndef computeDist(gtFrames,prFrames):\n    assert(len(gtFrames) == len(prFrames))\n\n    nJoints = eval_helpers.Joint().count\n    distAll = {}\n    for pidx in range(nJoints):\n        distAll[pidx] = np.zeros([0,0])\n\n    for imgidx in range(len(gtFrames)):\n        # ground truth\n        gtFrame = gtFrames[imgidx]\n        # prediction\n        detFrame = prFrames[imgidx]\n        if (gtFrames[imgidx][\"annorect\"] is not None):\n            for ridx in range(len(gtFrames[imgidx][\"annorect\"])):\n                rectGT = gtFrames[imgidx][\"annorect\"][ridx]\n                rectPr = prFrames[imgidx][\"annorect\"][ridx]\n                if (\"annopoints\" in rectGT.keys() and rectGT[\"annopoints\"] is not None):\n                    pointsGT = rectGT[\"annopoints\"][0][\"point\"]\n                    pointsPr = rectPr[\"annopoints\"][0][\"point\"]\n                    for pidx in range(len(pointsGT)):\n                        pointGT = [pointsGT[pidx][\"x\"][0],pointsGT[pidx][\"y\"][0]]\n                        idxGT = pointsGT[pidx][\"id\"][0]\n                        p = eval_helpers.getPointGTbyID(pointsPr,idxGT)\n                        if (len(p) > 0 and\n                            isinstance(p[\"x\"][0], (int, float)) and\n                            isinstance(p[\"y\"][0], (int, float))):\n                            pointPr = [p[\"x\"][0],p[\"y\"][0]]\n                            # compute distance between GT and prediction\n                            d = np.linalg.norm(np.subtract(pointGT,pointPr))\n                            # compute head size for distance normalization\n                            headSize = eval_helpers.getHeadSize(rectGT[\"x1\"][0],rectGT[\"y1\"][0],\n                                                                rectGT[\"x2\"][0],rectGT[\"y2\"][0])\n                            # normalize distance\n                            dNorm 
= d/headSize\n                        else:\n                            dNorm = np.inf\n                        distAll[idxGT] = np.append(distAll[idxGT],[[dNorm]])\n\n    return distAll\n\n\ndef computePCK(distAll,distThresh):\n\n    pckAll = np.zeros([len(distAll)+1,1])\n    nCorrect = 0\n    nTotal = 0\n    for pidx in range(len(distAll)):\n        idxs = np.argwhere(distAll[pidx] <= distThresh)\n        pck = 100.0*len(idxs)/len(distAll[pidx])\n        pckAll[pidx,0] = pck\n        nCorrect += len(idxs)\n        nTotal   += len(distAll[pidx])\n    pckAll[len(distAll),0] = 100.0*nCorrect/nTotal\n\n    return pckAll\n\n\ndef evaluatePCKh(gtFramesAll,prFramesAll):\n\n    distThresh = 0.5\n\n    # compute distances\n    distAll = computeDist(gtFramesAll,prFramesAll)\n\n    # compute PCK metric\n    pckAll = computePCK(distAll,distThresh)\n\n    return pckAll\n"
  },
  {
    "path": "eval/poseval/poseval/evaluateTracking.py",
    "content": "import numpy as np\nimport json\nimport pdb\nimport os\nimport sys\n\nfrom . import eval_helpers\nfrom .eval_helpers import Joint\nimport motmetrics as mm\n\n\ndef computeMetrics(gtFramesAll, motAll, outputDir, bSaveAll, bSaveSeq):\n\n    assert(len(gtFramesAll) == len(motAll))\n\n    nJoints = Joint().count\n    seqidxs = []\n    for imgidx in range(len(gtFramesAll)):\n        seqidxs += [gtFramesAll[imgidx][\"seq_id\"]]\n    seqidxs = np.array(seqidxs)\n\n    seqidxsUniq = np.unique(seqidxs)\n\n    # intermediate metrics\n    metricsMidNames = ['num_misses', 'num_switches', 'num_false_positives', 'num_objects', 'num_detections', 'idf1']\n\n    # final metrics computed from intermediate metrics\n    metricsFinNames = ['mota','motp','pre','rec', 'num_switches', 'idf1']\n\n    # initialize intermediate metrics\n    metricsMidAll = {}\n    for name in metricsMidNames:\n        metricsMidAll[name] = np.zeros([1,nJoints])\n    metricsMidAll['sumD'] = np.zeros([1,nJoints])\n\n    # initialize final metrics\n    metricsFinAll = {}\n    for name in metricsFinNames:\n        metricsFinAll[name] = np.zeros([1,nJoints+1])\n\n    # create metrics\n    mh = mm.metrics.create()\n\n    imgidxfirst = 0\n    # iterate over tracking sequences\n    # seqidxsUniq = seqidxsUniq[:20]\n    nSeq = len(seqidxsUniq)\n\n    # initialize per-sequence metrics\n    metricsSeqAll = {}\n    for si in range(nSeq):\n        metricsSeqAll[si] = {}\n        for name in metricsFinNames:\n            metricsSeqAll[si][name] = np.zeros([1,nJoints+1])\n\n    names = Joint().name\n    names['15'] = 'total'\n\n    for si in range(nSeq):\n        print(\"seqidx: %d/%d\" % (si+1,nSeq))\n\n        # init per-joint metrics accumulator\n        accAll = {}\n        for i in range(nJoints):\n            accAll[i] = mm.MOTAccumulator(auto_id=True)\n\n        # extract frames IDs for the sequence\n        imgidxs = np.argwhere(seqidxs == seqidxsUniq[si])\n        imgidxs = imgidxs[:-1].copy()\n    
    seqName = gtFramesAll[imgidxs[0,0]][\"seq_name\"]\n        print(seqName)\n        # create an accumulator that will be updated during each frame\n        # iterate over frames\n        for j in range(len(imgidxs)):\n            imgidx = imgidxs[j,0]\n            # iterate over joints\n            for i in range(nJoints):\n                # GT tracking ID\n                trackidxGT = motAll[imgidx][i][\"trackidxGT\"]\n                # prediction tracking ID\n                trackidxPr = motAll[imgidx][i][\"trackidxPr\"]\n                # distance GT <-> pred part to compute MOT metrics\n                # 'NaN' means force no match\n                dist = motAll[imgidx][i][\"dist\"]\n                # Call update once per frame\n                accAll[i].update(\n                    trackidxGT,                 # Ground truth objects in this frame\n                    trackidxPr,                 # Detector hypotheses in this frame\n                    dist                        # Distances from objects to hypotheses\n                )\n\n        # compute intermediate metrics per joint per sequence\n        for i in range(nJoints):\n            metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')\n            for name in metricsMidNames:\n                metricsMidAll[name][0,i] += metricsMid[name]\n            s = accAll[i].events['D'].sum()\n            if (np.isnan(s)):\n                s = 0\n            metricsMidAll['sumD'][0,i] += s\n\n        if (bSaveSeq):\n            # compute metrics per joint per sequence\n            for i in range(nJoints):\n                metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')\n                # compute final metrics per sequence\n                if (metricsMid['num_objects'] > 0):\n                    numObj = metricsMid['num_objects']\n                else:\n                    numObj = np.nan\n                numFP = 
metricsMid['num_false_positives']\n                metricsSeqAll[si]['mota'][0,i] = 100*(1. - 1.*(metricsMid['num_misses'] +\n                                                    metricsMid['num_switches'] +\n                                                    numFP) /\n                                                    numObj)\n                numDet = metricsMid['num_detections']\n                s = accAll[i].events['D'].sum()\n                if (numDet == 0 or np.isnan(s)):\n                    metricsSeqAll[si]['motp'][0,i] = 0.0\n                else:\n                    metricsSeqAll[si]['motp'][0,i] = 100*(1. - (1.*s / numDet))\n                if (numFP+numDet > 0):\n                    totalDet = numFP+numDet\n                else:\n                    totalDet = np.nan\n                metricsSeqAll[si]['pre'][0,i]  = 100*(1.*numDet /\n                                                totalDet)\n                metricsSeqAll[si]['rec'][0,i]  = 100*(1.*numDet /\n                                        numObj)\n\n            # average metrics over all joints per sequence\n            idxs = np.argwhere(~np.isnan(metricsSeqAll[si]['mota'][0,:nJoints]))\n            metricsSeqAll[si]['mota'][0,nJoints] = metricsSeqAll[si]['mota'][0,idxs].mean()\n            idxs = np.argwhere(~np.isnan(metricsSeqAll[si]['motp'][0,:nJoints]))\n            metricsSeqAll[si]['motp'][0,nJoints] = metricsSeqAll[si]['motp'][0,idxs].mean()\n            idxs = np.argwhere(~np.isnan(metricsSeqAll[si]['pre'][0,:nJoints]))\n            metricsSeqAll[si]['pre'][0,nJoints]  = metricsSeqAll[si]['pre'][0,idxs].mean()\n            idxs = np.argwhere(~np.isnan(metricsSeqAll[si]['rec'][0,:nJoints]))\n            metricsSeqAll[si]['rec'][0,nJoints]  = metricsSeqAll[si]['rec'][0,idxs].mean()\n\n            metricsSeq = metricsSeqAll[si].copy()\n            metricsSeq['mota'] = metricsSeq['mota'].flatten().tolist()\n            metricsSeq['motp'] = metricsSeq['motp'].flatten().tolist()\n            metricsSeq['pre'] = metricsSeq['pre'].flatten().tolist()\n            metricsSeq['rec'] = metricsSeq['rec'].flatten().tolist()\n            metricsSeq['names'] = names\n\n            filename = outputDir + '/' + seqName + '_MOT_metrics.json'\n            print('saving results to', filename)\n            eval_helpers.writeJson(metricsSeq, filename)\n\n    # compute final metrics per joint for all sequences\n    for i in range(nJoints):\n        if (metricsMidAll['num_objects'][0,i] > 0):\n            numObj = metricsMidAll['num_objects'][0,i]\n        else:\n            numObj = np.nan\n        numFP = metricsMidAll['num_false_positives'][0,i]\n        metricsFinAll['mota'][0,i] = 100*(1. - (metricsMidAll['num_misses'][0,i] +\n                                                metricsMidAll['num_switches'][0,i] +\n                                                numFP) /\n                                                numObj)\n        numDet = metricsMidAll['num_detections'][0,i]\n        s = metricsMidAll['sumD'][0,i]\n        if (numDet == 0 or np.isnan(s)):\n            metricsFinAll['motp'][0,i] = 0.0\n        else:\n            metricsFinAll['motp'][0,i] = 100*(1. 
- (s / numDet))\n        if (numFP+numDet > 0):\n            totalDet = numFP+numDet\n        else:\n            totalDet = np.nan\n\n        metricsFinAll['pre'][0,i]  = 100*(1.*numDet /\n                                       totalDet)\n        metricsFinAll['rec'][0,i]  = 100*(1.*numDet /\n                                       numObj)\n        metricsFinAll['idf1'][0,i] = 100*metricsMidAll['idf1'][0,i] / nSeq\n        metricsFinAll['num_switches'][0,i] = metricsMidAll['num_switches'][0,i]\n\n    # average metrics over all joints over all sequences\n    idxs = np.argwhere(~np.isnan(metricsFinAll['mota'][0,:nJoints]))\n    metricsFinAll['mota'][0,nJoints] = metricsFinAll['mota'][0,idxs].mean()\n    idxs = np.argwhere(~np.isnan(metricsFinAll['motp'][0,:nJoints]))\n    metricsFinAll['motp'][0,nJoints] = metricsFinAll['motp'][0,idxs].mean()\n    idxs = np.argwhere(~np.isnan(metricsFinAll['pre'][0,:nJoints]))\n    metricsFinAll['pre'][0,nJoints]  = metricsFinAll['pre'][0,idxs].mean()\n    idxs = np.argwhere(~np.isnan(metricsFinAll['rec'][0,:nJoints]))\n    metricsFinAll['rec'][0,nJoints]  = metricsFinAll['rec'][0,idxs].mean()\n    idxs = np.argwhere(~np.isnan(metricsFinAll['idf1'][0,:nJoints]))\n    metricsFinAll['idf1'][0,nJoints] = metricsFinAll['idf1'][0,idxs].mean()\n    idxs = np.argwhere(~np.isnan(metricsFinAll['num_switches'][0,:nJoints]))\n    metricsFinAll['num_switches'][0,nJoints] = metricsFinAll['num_switches'][0,idxs].sum()\n\n    if (bSaveAll):\n        metricsFin = metricsFinAll.copy()\n        metricsFin['mota'] = metricsFin['mota'].flatten().tolist()\n        metricsFin['motp'] = metricsFin['motp'].flatten().tolist()\n        metricsFin['pre'] = metricsFin['pre'].flatten().tolist()\n        metricsFin['rec'] = metricsFin['rec'].flatten().tolist()\n        metricsFin['idf1'] = metricsFin['idf1'].flatten().tolist()\n        metricsFin['num_switches'] = metricsFin['num_switches'].flatten().tolist()\n        metricsFin['names'] = names\n\n        filename = outputDir + '/total_MOT_metrics.json'\n        print('saving results to', filename)\n        eval_helpers.writeJson(metricsFin, filename)\n\n    return metricsFinAll\n\n\ndef evaluateTracking(gtFramesAll, prFramesAll, outputDir, saveAll=True, saveSeq=False):\n\n    distThresh = 0.5\n    # assign predicted poses to GT poses\n    _, _, _, motAll = eval_helpers.assignGTmulti(gtFramesAll, prFramesAll, distThresh)\n\n    # compute MOT metrics per part\n    metricsAll = computeMetrics(gtFramesAll, motAll, outputDir, saveAll, saveSeq)\n\n    return metricsAll\n"
  },
  {
    "path": "eval/poseval/poseval/posetrack18_id2fname.py",
    "content": "import os\n\nposetrack17_train_sequences = set(\n    [\n        (1, 8838),\n        (1, 12218),\n        (1, 6852),\n        (1, 16530),\n        (1, 12507),\n        (1, 14073),\n        (1, 9488),\n        (1, 22683),\n        (1, 16637),\n        (1, 7861),\n        (1, 8968),\n        (1, 43),\n        (1, 8732),\n        (1, 13627),\n        (1, 7380),\n        (1, 13780),\n        (1, 2716),\n        (1, 98),\n        (1, 436),\n        (1, 14265),\n        (1, 17133),\n        (1, 16464),\n        (1, 9922),\n        (1, 10773),\n        (1, 7607),\n        (1, 228),\n        (1, 20924),\n        (1, 24635),\n        (1, 16571),\n        (1, 760),\n        (1, 14321),\n        (1, 16165),\n        (1, 8808),\n        (1, 23492),\n        (1, 866),\n        (1, 6265),\n        (1, 16882),\n        (1, 275),\n        (1, 24985),\n        (1, 2905),\n        (1, 20928),\n        (1, 7851),\n        (1, 3402),\n        (1, 16171),\n        (1, 15882),\n        (1, 823),\n        (1, 3498),\n        (1, 14344),\n        (1, 14354),\n        (1, 20900),\n        (1, 9533),\n        (2, 10),\n        (1, 14480),\n        (1, 15892),\n        (1, 3701),\n        (1, 15124),\n        (1, 16411),\n        (1, 9043),\n        (1, 9012),\n        (1, 8743),\n        (1, 12620),\n        (2, 28),\n        (1, 16440),\n        (1, 7855),\n        (1, 15130),\n        (1, 271),\n        (1, 8820),\n        (1, 15875),\n        (1, 23471),\n        (1, 8882),\n        (1, 2357),\n        (1, 24180),\n        (2, 15),\n        (1, 23695),\n        (1, 16883),\n        (1, 231),\n        (1, 15290),\n        (1, 13337),\n        (1, 9003),\n        (1, 13908),\n        (1, 3403),\n        (2, 1),\n        (1, 502),\n        (1, 9495),\n        (1, 14268),\n        (1, 8961),\n        (1, 15277),\n        (1, 8616),\n        (1, 14345),\n        (1, 14278),\n        (1, 985),\n        (1, 1243),\n        (1, 11989),\n        (1, 15125),\n        (1, 13515),\n   
     (2, 29),\n        (1, 2839),\n        (1, 15366),\n        (1, 13821),\n        (1, 9718),\n        (1, 12056),\n        (1, 14052),\n        (1, 21077),\n        (1, 12268),\n        (1, 5728),\n        (1, 21133),\n        (1, 8819),\n        (1, 13965),\n        (1, 5759),\n        (1, 17180),\n        (1, 14553),\n        (1, 9506),\n        (1, 22671),\n        (1, 5847),\n        (1, 10288),\n        (1, 22682),\n        (1, 14231),\n        (1, 8969),\n        (1, 14403),\n        (1, 20896),\n        (1, 7381),\n        (1, 14375),\n        (1, 14122),\n        (1, 16535),\n        (1, 1158),\n        (1, 14506),\n        (1, 9598),\n        (1, 14334),\n        (1, 17184),\n        (1, 2893),\n        (1, 23699),\n        (1, 10010),\n        (1, 3730),\n        (1, 12023),\n        (2, 48),\n        (1, 23454),\n        (1, 9499),\n        (1, 9654),\n        (1, 14235),\n        (1, 10111),\n        (1, 13795),\n        (1, 16496),\n        (1, 16313),\n        (1, 8795),\n        (1, 12732),\n        (1, 9534),\n        (2, 17),\n        (1, 439),\n        (1, 8744),\n        (1, 1682),\n        (1, 921),\n        (1, 8833),\n        (1, 14054),\n        (1, 4902),\n        (1, 24893),\n        (2, 3),\n        (1, 10715),\n        (1, 15309),\n        (1, 820),\n        (1, 10542),\n        (1, 285),\n        (1, 7467),\n        (1, 13271),\n        (1, 15406),\n        (1, 9487),\n        (1, 14763),\n        (1, 12155),\n        (1, 9398),\n        (1, 1686),\n        (1, 2255),\n        (1, 8837),\n        (1, 2787),\n        (1, 12911),\n        (1, 9054),\n        (1, 223),\n        (1, 14662),\n        (1, 12722),\n        (2, 27),\n        (1, 15585),\n        (1, 4833),\n        (1, 14551),\n        (1, 9504),\n        (1, 9555),\n        (1, 13787),\n        (1, 9993),\n        (1, 14363),\n        (1, 8803),\n        (1, 9411),\n        (1, 15189),\n        (1, 9617),\n        (1, 8725),\n        (1, 4891),\n        (1, 14390),\n        
(1, 20822),\n        (1, 5732),\n        (1, 12859),\n        (1, 474),\n        (1, 2552),\n        (1, 10774),\n        (1, 14367),\n        (2, 23),\n        (1, 520),\n        (1, 14272),\n        (1, 13527),\n        (1, 2234),\n        (1, 5841),\n        (1, 15899),\n        (1, 9538),\n        (1, 14266),\n        (1, 15537),\n        (1, 13671),\n        (1, 7387),\n        (1, 8906),\n        (2, 26),\n        (1, 10198),\n        (1, 7413),\n        (1, 2258),\n        (1, 14082),\n        (1, 16215),\n        (1, 15314),\n        (1, 12273),\n        (1, 17121),\n        (1, 20823),\n        (1, 14183),\n        (1, 15765),\n        (1, 11280),\n        (1, 16433),\n        (1, 23416),\n        (1, 8730),\n        (1, 21078),\n        (1, 352),\n        (1, 17197),\n        (1, 14121),\n        (1, 22676),\n        (1, 14505),\n        (1, 14280),\n        (1, 4836),\n        (2, 22),\n        (1, 16198),\n        (1, 7392),\n        (1, 9445),\n        (1, 13268),\n        (1, 16211),\n        (1, 2264),\n        (1, 24487),\n        (1, 14698),\n        (1, 10007),\n        (1, 8812),\n        (2, 36),\n        (1, 8796),\n        (1, 16330),\n        (1, 9938),\n        (1, 96),\n        (1, 17129),\n        (1, 8976),\n        (1, 22642),\n        (1, 7536),\n        (1, 385),\n        (1, 16417),\n        (1, 9597),\n        (1, 13512),\n        (1, 1157),\n        (1, 15293),\n        (1, 14178),\n        (1, 13557),\n        (1, 14129),\n        (1, 1491),\n        (1, 8877),\n        (1, 23484),\n        (2, 2),\n        (1, 10177),\n        (1, 10863),\n        (1, 8884),\n        (1, 8962),\n        (1, 5061),\n        (1, 2843),\n        (1, 9727),\n        (1, 24642),\n        (1, 14288),\n        (1, 1153),\n        (1, 15832),\n        (1, 1687),\n        (1, 2254),\n        (1, 15177),\n        (1, 2786),\n        (1, 12910),\n        (1, 22124),\n        (1, 1341),\n        (1, 16668),\n        (1, 14342),\n        (1, 5232),\n        
(1, 799),\n    ]\n)\n\nposetrack17_testval_sequences = set(\n    [\n        (1, 707),\n        (1, 11878),\n        (1, 16842),\n        (1, 5368),\n        (1, 286),\n        (1, 9528),\n        (1, 8993),\n        (1, 9038),\n        (1, 9468),\n        (1, 10127),\n        (1, 18900),\n        (1, 16611),\n        (1, 14361),\n        (1, 46),\n        (1, 15869),\n        (1, 1110),\n        (1, 23966),\n        (1, 475),\n        (1, 18906),\n        (1, 20856),\n        (1, 229),\n        (1, 3136),\n        (1, 13029),\n        (1, 1757),\n        (1, 24566),\n        (1, 24153),\n        (1, 16451),\n        (1, 2838),\n        (1, 2266),\n        (1, 903),\n        (1, 14703),\n        (1, 17496),\n        (1, 15755),\n        (1, 14027),\n        (1, 750),\n        (1, 3745),\n        (3, 2),\n        (1, 16235),\n        (1, 12967),\n        (1, 10130),\n        (1, 15294),\n        (1, 17447),\n        (1, 16517),\n        (1, 15521),\n        (1, 13601),\n        (1, 14376),\n        (1, 15149),\n        (1, 18719),\n        (1, 14313),\n        (1, 1970),\n        (1, 8894),\n        (1, 14292),\n        (1, 10309),\n        (1, 19980),\n        (1, 6503),\n        (1, 3504),\n        (1, 9472),\n        (1, 8826),\n        (1, 24177),\n        (1, 17434),\n        (3, 3),\n        (1, 2061),\n        (1, 24493),\n        (1, 6545),\n        (1, 3542),\n        (1, 24906),\n        (1, 9268),\n        (1, 18592),\n        (1, 9469),\n        (1, 17955),\n        (1, 21082),\n        (1, 22831),\n        (1, 21130),\n        (1, 2284),\n        (1, 808),\n        (1, 15868),\n        (1, 21084),\n        (1, 12046),\n        (1, 1733),\n        (1, 24149),\n        (1, 12332),\n        (1, 17984),\n        (1, 11526),\n        (1, 2928),\n        (1, 5803),\n        (1, 23411),\n        (1, 15941),\n        (1, 2777),\n        (1, 16556),\n        (1, 9301),\n        (1, 23746),\n        (1, 18159),\n        (1, 10303),\n        (1, 9523),\n        (1, 
22892),\n        (1, 10521),\n        (1, 18626),\n        (1, 7504),\n        (1, 18412),\n        (1, 1535),\n        (1, 14309),\n        (1, 1280),\n        (1, 15862),\n        (1, 2367),\n        (1, 22656),\n        (1, 3397),\n        (1, 14524),\n        (1, 18657),\n        (1, 9452),\n        (1, 8991),\n        (1, 5413),\n        (1, 3223),\n        (1, 9509),\n        (1, 8736),\n        (1, 10357),\n        (1, 20912),\n        (1, 161),\n        (1, 18296),\n        (1, 44),\n        (1, 2281),\n        (1, 20909),\n        (1, 7269),\n        (1, 16421),\n        (1, 22693),\n        (3, 1),\n        (1, 14522),\n        (1, 15375),\n        (1, 24564),\n        (1, 1940),\n        (1, 14297),\n        (1, 19078),\n        (1, 15908),\n        (1, 16419),\n        (1, 9477),\n        (1, 2273),\n        (1, 7952),\n        (1, 24573),\n        (1, 9460),\n        (3, 5),\n        (1, 16576),\n        (1, 14317),\n        (1, 11287),\n        (1, 16194),\n        (1, 7681),\n        (1, 9458),\n        (1, 12838),\n        (1, 5799),\n        (1, 18623),\n        (1, 8761),\n        (1, 24516),\n        (1, 8160),\n        (1, 9526),\n        (1, 15859),\n        (1, 20818),\n        (1, 9403),\n        (1, 2279),\n        (1, 3416),\n        (1, 202),\n        (1, 20820),\n        (1, 22699),\n        (1, 24156),\n        (1, 1545),\n        (1, 23730),\n        (1, 5336),\n        (1, 1242),\n        (1, 693),\n        (1, 14307),\n        (1, 15812),\n        (3, 4),\n        (1, 9602),\n        (1, 23444),\n        (1, 6818),\n        (1, 8847),\n        (1, 21086),\n        (1, 2286),\n        (1, 10517),\n        (1, 3546),\n        (1, 23965),\n        (1, 23736),\n        (1, 2852),\n        (1, 10350),\n        (1, 536),\n        (1, 9476),\n        (1, 811),\n        (1, 3224),\n        (1, 83),\n        (1, 24876),\n        (1, 9404),\n        (1, 9521),\n        (1, 23719),\n        (1, 7500),\n        (1, 20819),\n        (1, 9527),\n   
     (1, 13602),\n        (1, 1282),\n        (1, 21123),\n        (1, 15278),\n        (1, 8789),\n        (1, 1537),\n        (1, 5592),\n        (1, 13534),\n        (1, 15302),\n        (1, 24158),\n        (1, 24621),\n        (1, 7684),\n        (1, 3742),\n        (1, 16662),\n        (1, 2276),\n        (1, 1735),\n        (1, 2835),\n        (1, 16180),\n        (1, 23717),\n        (1, 20880),\n        (1, 522),\n        (1, 14102),\n        (1, 14384),\n        (1, 1001),\n        (1, 1486),\n        (1, 4622),\n        (1, 14531),\n        (1, 20910),\n        (1, 8827),\n        (1, 2277),\n        (1, 14293),\n        (1, 9883),\n        (1, 16239),\n        (1, 16236),\n        (1, 8760),\n        (1, 15860),\n        (1, 7128),\n        (1, 5833),\n        (1, 23653),\n        (1, 5067),\n        (1, 14523),\n        (1, 24165),\n        (1, 18725),\n        (1, 7496),\n        (1, 342),\n        (1, 17839),\n        (1, 15301),\n        (1, 24575),\n        (1, 2364),\n        (1, 1744),\n        (1, 13293),\n        (1, 14960),\n        (1, 22430),\n        (1, 23754),\n        (1, 3943),\n        (1, 12834),\n        (1, 22688),\n    ]\n)\n\nposetrack18_train_sequences = set(\n    [\n        (1, 15394),\n        (1, 16418),\n        (1, 20824),\n        (1, 12949),\n        (1, 24218),\n        (1, 14680),\n        (1, 9412),\n        (1, 14193),\n        (1, 11347),\n        (1, 16886),\n        (1, 15883),\n        (1, 1278),\n        (1, 12255),\n        (1, 23938),\n        (1, 23500),\n        (1, 18647),\n        (1, 5231),\n        (1, 22661),\n        (1, 24767),\n        (1, 24630),\n        (1, 22128),\n        (1, 15889),\n        (1, 24211),\n        (1, 17161),\n        (1, 24317),\n        (1, 24170),\n        (1, 23773),\n        (1, 13967),\n        (1, 9484),\n        (1, 1502),\n        (1, 18993),\n        (1, 23493),\n        (1, 20872),\n        (1, 24102),\n        (1, 18987),\n        (1, 20898),\n        (1, 18146),\n      
  (1, 11992),\n        (1, 17563),\n        (1, 23414),\n        (1, 16560),\n        (1, 14695),\n        (1, 10008),\n        (1, 5180),\n        (1, 17127),\n        (1, 10013),\n        (1, 13743),\n        (1, 8830),\n        (1, 24212),\n        (1, 7519),\n        (1, 12789),\n        (1, 5643),\n        (1, 13526),\n        (1, 23469),\n        (1, 18651),\n        (1, 24712),\n        (1, 12783),\n        (1, 14195),\n        (1, 14694),\n        (1, 16169),\n        (1, 9628),\n        (1, 9287),\n        (1, 20447),\n        (1, 12784),\n        (1, 10179),\n        (1, 22679),\n        (1, 9485),\n        (1, 14343),\n        (1, 18542),\n        (1, 1156),\n        (1, 17427),\n        (1, 22665),\n        (1, 16252),\n        (1, 8660),\n        (1, 24479),\n        (1, 5173),\n        (1, 10006),\n        (1, 19063),\n        (1, 24634),\n        (1, 20382),\n        (1, 17179),\n        (1, 20942),\n        (1, 10161),\n        (1, 13924),\n        (1, 14953),\n        (1, 220),\n        (1, 23882),\n        (1, 19998),\n        (1, 12980),\n        (1, 22911),\n        (1, 20258),\n        (1, 10486),\n        (1, 10005),\n        (1, 16461),\n        (1, 15022),\n        (1, 9615),\n        (1, 232),\n        (1, 23814),\n        (1, 15305),\n        (1, 23360),\n        (1, 21524),\n        (1, 24706),\n        (1, 23864),\n        (1, 16409),\n        (1, 24018),\n        (1, 24627),\n        (1, 20935),\n        (1, 22410),\n        (1, 14081),\n        (1, 17908),\n        (1, 24216),\n        (1, 20091),\n        (1, 15925),\n        (1, 10608),\n        (1, 14406),\n        (1, 24073),\n        (1, 13564),\n        (1, 12725),\n        (1, 24696),\n        (1, 24099),\n        (1, 8912),\n        (1, 14767),\n        (1, 13523),\n        (1, 24553),\n        (1, 20669),\n        (1, 1888),\n        (1, 24628),\n        (1, 8664),\n        (1, 24182),\n        (1, 17132),\n        (1, 343),\n        (1, 15821),\n        (1, 23472),\n        
(1, 21115),\n        (1, 9427),\n        (1, 21126),\n        (1, 14665),\n        (1, 20672),\n        (1, 12984),\n        (1, 18390),\n        (1, 17234),\n        (1, 5222),\n        (1, 8745),\n        (1, 16540),\n        (1, 16199),\n        (1, 23818),\n        (1, 16166),\n        (1, 23412),\n        (1, 24606),\n        (1, 13207),\n        (1, 14064),\n        (1, 23739),\n        (1, 24143),\n        (1, 10003),\n        (1, 13193),\n        (1, 14795),\n        (1, 16706),\n        (1, 13816),\n        (1, 18649),\n        (1, 24435),\n        (1, 10270),\n        (1, 22071),\n        (1, 13518),\n        (1, 15025),\n        (1, 23776),\n        (1, 8875),\n        (1, 23988),\n        (1, 1151),\n        (1, 15619),\n        (1, 12729),\n        (1, 11991),\n        (1, 23706),\n        (1, 23950),\n        (1, 13742),\n        (1, 11557),\n        (1, 20486),\n        (1, 7389),\n        (1, 24459),\n        (1, 24477),\n        (1, 16545),\n        (1, 10004),\n        (1, 1503),\n        (1, 10869),\n        (1, 12049),\n        (1, 12003),\n        (1, 11606),\n        (1, 20940),\n        (1, 20090),\n        (1, 15405),\n        (1, 13957),\n        (1, 486),\n        (1, 24489),\n        (1, 13511),\n        (1, 8842),\n        (1, 20256),\n        (1, 10484),\n        (1, 20547),\n        (1, 15926),\n        (1, 12981),\n        (1, 23949),\n        (1, 4334),\n        (1, 23812),\n        (1, 23503),\n        (1, 20008),\n        (1, 17405),\n        (1, 24704),\n        (1, 16473),\n        (1, 24258),\n        (1, 12182),\n        (1, 13047),\n        (1, 24179),\n        (1, 22876),\n        (1, 24222),\n        (1, 15893),\n        (1, 9432),\n        (1, 8835),\n        (1, 11351),\n        (1, 18096),\n        (1, 24644),\n        (1, 23942),\n        (1, 9289),\n        (1, 12860),\n        (1, 23488),\n        (1, 12790),\n        (1, 9055),\n        (1, 8918),\n        (1, 22957),\n        (1, 23694),\n        (1, 14319),\n        
(1, 16563),\n        (1, 15260),\n        (1, 14346),\n        (1, 20006),\n        (1, 12766),\n        (1, 13940),\n        (1, 23170),\n        (1, 24344),\n        (1, 13763),\n        (1, 12921),\n        (1, 18071),\n        (1, 17165),\n        (1, 23024),\n        (1, 14267),\n        (1, 24174),\n        (1, 23172),\n        (1, 17191),\n        (1, 20573),\n        (1, 17999),\n        (1, 14901),\n        (1, 15915),\n        (1, 9494),\n        (1, 13273),\n        (1, 13549),\n        (1, 14286),\n        (1, 24722),\n        (1, 16197),\n        (1, 19591),\n        (1, 23233),\n        (1, 16255),\n        (1, 12022),\n        (1, 15949),\n        (1, 10001),\n        (1, 15265),\n        (1, 15897),\n        (1, 13516),\n        (1, 15760),\n        (1, 24817),\n        (1, 21871),\n        (1, 10312),\n        (1, 9446),\n        (1, 24788),\n        (1, 1886),\n        (1, 9941),\n        (1, 22669),\n        (1, 16240),\n        (1, 1339),\n        (1, 21073),\n        (1, 13201),\n        (1, 24872),\n        (1, 21067),\n        (1, 24638),\n        (1, 24793),\n        (1, 24184),\n        (1, 15395),\n        (1, 22873),\n        (1, 23482),\n        (1, 15890),\n        (1, 12936),\n        (1, 14707),\n        (1, 12701),\n        (1, 18109),\n        (1, 20894),\n        (1, 24641),\n        (1, 18392),\n        (1, 15478),\n        (1, 9498),\n        (1, 23501),\n        (1, 10983),\n        (1, 24110),\n        (1, 23364),\n        (1, 689),\n        (1, 14322),\n        (1, 12422),\n        (1, 14762),\n        (1, 16413),\n        (1, 13451),\n        (1, 13182),\n        (1, 289),\n        (1, 13045),\n        (1, 13195),\n        (1, 21068),\n        (1, 24631),\n        (1, 14074),\n        (1, 22414),\n        (1, 23475),\n        (1, 21094),\n        (1, 24220),\n        (1, 17162),\n        (1, 22432),\n        (1, 24171),\n        (1, 23774),\n        (1, 19938),\n        (1, 11349),\n        (1, 10741),\n        (1, 10330),\n  
      (1, 23940),\n        (1, 4325),\n        (1, 9491),\n        (1, 13254),\n        (1, 15911),\n        (1, 14550),\n        (1, 14283),\n        (1, 22687),\n        (1, 11993),\n        (1, 17102),\n        (1, 16561),\n        (1, 16228),\n        (1, 13202),\n        (1, 8773),\n        (1, 20488),\n        (1, 24138),\n        (1, 14507),\n        (1, 10014),\n        (1, 23476),\n        (1, 24610),\n        (1, 24213),\n        (1, 17155),\n        (1, 23030),\n        (1, 24172),\n        (1, 12954),\n        (1, 14128),\n        (1, 23470),\n        (1, 13006),\n        (1, 3727),\n        (1, 13513),\n        (1, 16861),\n        (1, 16170),\n        (1, 15773),\n        (1, 24225),\n        (1, 16501),\n        (1, 17635),\n        (1, 14284),\n        (1, 9655),\n        (1, 21626),\n        (1, 22672),\n        (1, 14039),\n        (1, 22948),\n        (1, 23219),\n        (1, 13952),\n        (1, 23977),\n        (1, 22666),\n        (1, 12020),\n        (1, 23394),\n        (1, 24131),\n        (1, 5174),\n        (1, 15772),\n        (1, 13197),\n        (1, 10864),\n        (1, 12920),\n        (1, 24074),\n        (1, 21139),\n        (1, 13820),\n        (1, 14557),\n        (1, 20096),\n        (1, 20825),\n        (1, 15029),\n        (1, 16163),\n        (1, 23958),\n        (1, 18679),\n        (1, 13264),\n        (1, 14747),\n        (1, 16462),\n        (1, 15921),\n        (1, 9444),\n        (1, 10514),\n        (1, 14277),\n        (1, 22099),\n        (1, 24192),\n        (1, 1884),\n        (1, 24173),\n        (1, 15306),\n        (1, 20925),\n        (1, 20011),\n        (1, 12596),\n        (1, 14335),\n        (1, 15778),\n        (1, 728),\n        (1, 17051),\n        (1, 16246),\n        (1, 23865),\n        (1, 24636),\n        (1, 14055),\n        (1, 17181),\n        (1, 22411),\n        (1, 24190),\n        (1, 2262),\n        (1, 24217),\n        (1, 10157),\n        (1, 23771),\n        (1, 16164),\n        (1, 
19951),\n        (1, 2906),\n        (1, 13670),\n        (1, 18844),\n        (1, 5389),\n        (1, 13834),\n        (1, 23805),\n        (1, 2720),\n        (1, 9845),\n        (1, 9050),\n        (1, 14320),\n        (1, 24632),\n        (1, 24708),\n        (1, 9558),\n        (1, 13003),\n        (1, 1889),\n        (1, 15828),\n        (1, 24629),\n        (1, 14072),\n        (1, 23715),\n        (1, 24183),\n        (1, 15822),\n        (1, 23473),\n        (1, 24210),\n        (1, 13637),\n        (1, 24486),\n        (1, 24169),\n        (1, 23772),\n        (1, 12935),\n        (1, 21984),\n        (1, 15275),\n        (1, 12830),\n        (1, 9599),\n        (1, 17186),\n        (1, 12985),\n        (1, 8752),\n        (1, 13252),\n        (1, 114),\n        (1, 11999),\n        (1, 23413),\n        (1, 8908),\n        (1, 23853),\n        (1, 13200),\n        (1, 23716),\n        (1, 14804),\n        (1, 14108),\n        (1, 13261),\n        (1, 23165),\n        (1, 24608),\n        (1, 7518),\n        (1, 12788),\n        (1, 24699),\n        (1, 18650),\n        (1, 3440),\n        (1, 16465),\n        (1, 19972),\n        (1, 13519),\n        (1, 20837),\n        (1, 23955),\n        (1, 9286),\n        (1, 16636),\n        (1, 18087),\n        (1, 8885),\n        (1, 22912),\n        (1, 23777),\n        (1, 13140),\n        (1, 14274),\n        (1, 13749),\n        (1, 23820),\n        (1, 23707),\n        (1, 15791),\n        (1, 10298),\n        (1, 22664),\n        (1, 16243),\n        (1, 20487),\n        (1, 24710),\n        (1, 16683),\n        (1, 24478),\n        (1, 18594),\n        (1, 12053),\n        (1, 24633),\n        (1, 9559),\n        (1, 14797),\n        (1, 10160),\n        (1, 24187),\n        (1, 12047),\n        (1, 15269),\n        (1, 21096),\n        (1, 16595),\n        (1, 13958),\n        (1, 24797),\n        (1, 19997),\n        (1, 22910),\n        (1, 15764),\n        (1, 22937),\n        (1, 15927),\n        (1, 
23422),\n        (1, 20443),\n        (1, 9614),\n        (1, 16932),\n        (1, 23813),\n        (1, 23708),\n        (1, 9613),\n        (1, 10847),\n        (1, 7875),\n        (1, 18652),\n        (1, 12183),\n        (1, 24626),\n        (1, 15528),\n        (1, 24457),\n        (1, 13948),\n        (1, 349),\n        (1, 24320),\n        (1, 15311),\n        (1, 8801),\n        (1, 2260),\n        (1, 12940),\n        (1, 19949),\n        (1, 8731),\n        (1, 18097),\n        (1, 13249),\n        (1, 24098),\n        (1, 5228),\n        (1, 14766),\n        (1, 7876),\n        (1, 20456),\n        (1, 24147),\n        (1, 17097),\n        (1, 24822),\n        (1, 24318),\n        (1, 23713),\n        (1, 10114),\n        (1, 24181),\n        (1, 10009),\n        (1, 19066),\n        (1, 24208),\n        (1, 22127),\n        (1, 13522),\n        (1, 14664),\n        (1, 15283),\n        (1, 10317),\n        (1, 8806),\n        (1, 13825),\n        (1, 17652),\n        (1, 10197),\n        (1, 2229),\n        (1, 13274),\n        (1, 23495),\n        (1, 13550),\n        (1, 16248),\n        (1, 22675),\n        (1, 15356),\n        (1, 1152),\n        (1, 24480),\n        (1, 16565),\n        (1, 14200),\n        (1, 13206),\n        (1, 24588),\n        (1, 14071),\n        (1, 17594),\n        (1, 17261),\n        (1, 22094),\n        (1, 14511),\n        (1, 12050),\n        (1, 23738),\n        (1, 13192),\n        (1, 14374),\n        (1, 18153),\n        (1, 10321),\n        (1, 12746),\n        (1, 17054),\n        (1, 18648),\n        (1, 23926),\n        (1, 9421),\n        (1, 8880),\n        (1, 14715),\n        (1, 24818),\n        (1, 21864),\n        (1, 15315),\n        (1, 24649),\n        (1, 9447),\n        (1, 10002),\n        (1, 20182),\n        (1, 24570),\n        (1, 13741),\n        (1, 731),\n        (1, 22670),\n        (1, 24433),\n        (1, 21074),\n        (1, 24873),\n        (1, 10140),\n        (1, 13790),\n        (1, 
10868),\n        (1, 24639),\n        (1, 12002),\n        (1, 17176),\n        (1, 10166),\n        (1, 24185),\n        (1, 15404),\n        (1, 22874),\n        (1, 10862),\n        (1, 23887),\n        (1, 14794),\n        (1, 15891),\n        (1, 8841),\n        (1, 22908),\n        (1, 14751),\n        (1, 14402),\n        (1, 24117),\n        (1, 13560),\n        (1, 24583),\n        (1, 24805),\n        (1, 23502),\n        (1, 14323),\n        (1, 22663),\n        (1, 16414),\n        (1, 13046),\n        (1, 21069),\n        (1, 16432),\n        (1, 20007),\n        (1, 17169),\n        (1, 20932),\n        (1, 3698),\n        (1, 24009),\n        (1, 22662),\n        (1, 21095),\n        (1, 9536),\n        (1, 20385),\n        (1, 23775),\n        (1, 8834),\n        (1, 20853),\n        (1, 23638),\n        (1, 14873),\n        (1, 9594),\n        (1, 13838),\n        (1, 23941),\n        (1, 9288),\n        (1, 9500),\n        (1, 13517),\n        (1, 929),\n        (1, 14764),\n        (1, 23693),\n        (1, 17264),\n        (1, 14791),\n        (1, 24145),\n        (1, 9220),\n        (1, 15251),\n        (1, 24585),\n        (1, 14076),\n        (1, 17021),\n        (1, 17258),\n        (1, 12765),\n        (1, 20512),\n        (1, 19064),\n        (1, 24214),\n        (1, 8972),\n        (1, 16314),\n        (1, 11614),\n        (1, 13520),\n        (1, 14199),\n        (1, 14057),\n        (1, 11329),\n        (1, 10495),\n        (1, 24226),\n        (1, 9493),\n        (1, 13272),\n        (1, 14723),\n        (1, 2903),\n        (1, 14285),\n        (1, 12392),\n        (1, 9939),\n        (1, 23232),\n        (1, 16254),\n        (1, 9505),\n        (1, 8799),\n        (1, 14338),\n        (1, 17050),\n        (1, 8662),\n        (1, 5175),\n        (1, 22701),\n        (1, 15895),\n        (1, 8817),\n        (1, 14663),\n        (1, 9994),\n        (1, 7540),\n        (1, 10934),\n        (1, 16172),\n        (1, 5819),\n        (1, 
7534),\n        (1, 23505),\n        (1, 20868),\n        (1, 18511),\n        (1, 8872),\n        (1, 18653),\n        (1, 24810),\n        (1, 13014),\n        (1, 11520),\n        (1, 15307),\n        (1, 23711),\n        (1, 16442),\n        (1, 22668),\n        (1, 16247),\n        (1, 21072),\n        (1, 9060),\n        (1, 15763),\n        (1, 24646),\n        (1, 10164),\n        (1, 24191),\n    ]\n)\n\nposetrack18_testval_sequences = set(\n    [\n        (1, 9475),\n        (1, 14326),\n        (1, 3738),\n        (1, 15241),\n        (1, 24593),\n        (1, 23748),\n        (1, 7973),\n        (1, 14312),\n        (1, 7686),\n        (1, 9508),\n        (1, 14301),\n        (1, 10230),\n        (1, 2059),\n        (1, 7934),\n        (1, 24329),\n        (1, 18911),\n        (1, 20909),\n        (1, 21785),\n        (1, 5068),\n        (1, 3511),\n        (1, 22650),\n        (1, 45),\n        (1, 3419),\n        (1, 7950),\n        (1, 20915),\n        (1, 18913),\n        (1, 14546),\n        (1, 2376),\n        (1, 691),\n        (1, 12201),\n        (1, 10992),\n        (1, 14535),\n        (1, 15498),\n        (1, 23933),\n        (1, 10016),\n        (1, 23646),\n        (1, 24577),\n        (1, 756),\n        (1, 809),\n        (1, 16578),\n        (1, 18092),\n        (1, 16443),\n        (1, 3310),\n        (1, 5827),\n        (1, 12331),\n        (1, 14437),\n        (1, 17448),\n        (1, 3219),\n        (1, 2374),\n        (1, 18628),\n        (1, 7676),\n        (1, 2269),\n        (1, 21120),\n        (1, 24907),\n        (1, 2366),\n        (1, 14305),\n        (1, 14530),\n        (1, 9080),\n        (1, 803),\n        (1, 20374),\n        (1, 15372),\n        (1, 2060),\n        (1, 16677),\n        (1, 698),\n        (1, 14302),\n        (1, 18896),\n        (1, 23963),\n        (1, 16658),\n        (1, 17446),\n        (1, 3420),\n        (1, 9479),\n        (1, 5066),\n        (1, 1395),\n        (1, 4929),\n        (1, 22836),\n  
      (1, 1240),\n        (1, 3203),\n        (1, 11638),\n        (1, 14316),\n        (1, 1111),\n        (1, 14300),\n        (1, 5296),\n        (1, 23931),\n        (1, 814),\n        (1, 10129),\n        (1, 4338),\n        (1, 18627),\n        (1, 21065),\n        (1, 5290),\n        (1, 3503),\n        (1, 21118),\n        (1, 12833),\n        (1, 863),\n        (1, 7938),\n        (1, 24338),\n        (1, 4707),\n        (1, 5420),\n        (1, 16204),\n        (1, 8901),\n        (1, 14330),\n        (1, 9470),\n        (1, 2267),\n        (1, 804),\n        (1, 1007),\n        (1, 23752),\n        (1, 2227),\n        (1, 21862),\n        (1, 20777),\n        (1, 16455),\n        (1, 806),\n        (1, 9609),\n        (1, 2271),\n        (1, 18910),\n        (1, 752),\n        (1, 20031),\n        (1, 2274),\n        (1, 18712),\n        (1, 19395),\n        (1, 6807),\n        (1, 3418),\n        (1, 3950),\n        (1, 14545),\n        (1, 19076),\n        (1, 9039),\n        (1, 15845),\n        (1, 5830),\n        (1, 23932),\n        (1, 807),\n        (1, 14314),\n        (1, 24157),\n        (1, 18713),\n        (1, 9454),\n        (1, 22691),\n        (1, 6540),\n        (1, 2277),\n        (1, 2285),\n        (1, 16823),\n        (1, 1969),\n        (1, 6703),\n        (1, 2373),\n        (1, 9520),\n        (1, 14611),\n        (1, 2246),\n        (1, 1588),\n        (1, 24199),\n        (1, 15690),\n        (1, 18630),\n        (1, 14304),\n        (1, 583),\n        (1, 14529),\n        (1, 23732),\n        (1, 15753),\n        (1, 802),\n        (1, 758),\n        (1, 24624),\n        (1, 14295),\n        (1, 23962),\n        (1, 24334),\n        (1, 7216),\n        (1, 14296),\n        (1, 20420),\n        (1, 9478),\n        (1, 6820),\n        (1, 17450),\n        (1, 1022),\n        (1, 735),\n        (1, 3445),\n        (1, 18915),\n        (1, 754),\n        (1, 15239),\n        (1, 21083),\n        (1, 15374),\n        (1, 753),\n      
  (1, 707),\n        (1, 18898),\n        (1, 901),\n        (1, 15944),\n        (1, 7219),\n        (1, 21788),\n        (1, 24007),\n        (1, 6538),\n        (1, 22653),\n        (1, 7728),\n        (1, 2283),\n        (1, 23390),\n        (1, 9473),\n        (1, 1634),\n        (1, 2371),\n        (1, 10524),\n        (1, 4524),\n        (1, 18061),\n        (1, 9531),\n        (1, 6510),\n        (1, 4815),\n        (1, 12045),\n        (1, 20776),\n        (1, 3747),\n        (1, 16234),\n        (1, 18909),\n        (1, 11648),\n        (1, 737),\n        (1, 5337),\n        (1, 24514),\n        (1, 9653),\n        (1, 15907),\n        (1, 3417),\n        (1, 24332),\n        (1, 15863),\n        (1, 862),\n        (1, 22833),\n        (1, 20911),\n        (1, 2245),\n        (1, 24200),\n        (1, 12147),\n        (1, 7974),\n        (1, 9460),\n        (1, 9453),\n        (1, 23754),\n        (1, 1323),\n        (1, 3124),\n        (1, 9471),\n        (1, 24341),\n        (1, 5592),\n        (1, 630),\n        (1, 21561),\n        (1, 15084),\n        (1, 18657),\n        (1, 18381),\n        (1, 10516),\n        (1, 22651),\n        (1, 18090),\n        (1, 4712),\n        (1, 16195),\n        (1, 9532),\n        (1, 13287),\n        (1, 1962),\n        (1, 17437),\n        (1, 23744),\n        (1, 23934),\n        (1, 10529),\n        (1, 24159),\n        (1, 2),\n        (1, 757),\n        (1, 14520),\n        (1, 23510),\n        (1, 5292),\n        (1, 23749),\n        (1, 5370),\n        (1, 24336),\n        (1, 1934),\n        (1, 7793),\n        (1, 10017),\n        (1, 24499),\n        (1, 24617),\n        (1, 2375),\n        (1, 23718),\n        (1, 1417),\n        (1, 2270),\n        (1, 20609),\n        (1, 8789),\n        (1, 2243),\n        (1, 22905),\n        (1, 812),\n        (1, 5365),\n        (1, 24154),\n        (1, 4621),\n        (1, 15088),\n        (1, 749),\n        (1, 16238),\n        (1, 4687),\n        (1, 9266),\n       
 (1, 21066),\n        (1, 12968),\n        (1, 14099),\n        (1, 6537),\n        (1, 21116),\n        (1, 16423),\n        (1, 2282),\n        (1, 23649),\n        (1, 2772),\n        (1, 17212),\n        (1, 20502),\n        (1, 1241),\n        (1, 14140),\n        (1, 9079),\n        (1, 22842),\n        (1, 14317),\n        (1, 24616),\n        (1, 18625),\n        (1, 20882),\n        (1, 9459),\n        (1, 2214),\n        (1, 14772),\n        (1, 18908),\n        (1, 9607),\n        (1, 15946),\n        (1, 15933),\n        (1, 18903),\n        (1, 14035),\n        (1, 3508),\n        (1, 19583),\n    ]\n)\n\n\ndef idx2seqtype(idx):\n    if idx == 1:\n        return \"mpii\"\n    elif idx == 2:\n        return \"bonn\"\n    elif idx == 3:\n        return \"mpiinew\"\n    else:\n        assert False\n\n\ndef seqtype2idx(seqtype):\n    if seqtype == \"mpii\":\n        return 1\n    elif seqtype == \"bonn\":\n        return 2\n    elif seqtype in [\"mpiinew\"]:\n        return 3\n    else:\n        print(\"unknown sequence type:\", seqtype)\n        assert False\n\n\ndef posetrack18_id2fname(image_id):\n    \"\"\"Generates filename given image id\n\n    Args:\n      id: integer in the format TSSSSSSFFFF,\n          T encodes the sequence source (1: 'mpii', 2: 'bonn', 3: 'mpiinew')\n          SSSSSS is 6-digit index of the sequence\n          FFFF is 4-digit index of the image frame\n\n    Returns:\n      name of the video sequence\n    \"\"\"\n    seqtype_idx = image_id // 10000000000\n    seqidx = (image_id % 10000000000) // 10000\n    frameidx = image_id % 10000\n\n    fname = \"{:06}_{}\".format(seqidx, idx2seqtype(seqtype_idx))\n\n    if (seqtype_idx, seqidx) in posetrack17_testval_sequences or (\n        seqtype_idx,\n        seqidx,\n    ) in posetrack18_testval_sequences:\n        fname += \"_test\"\n    else:\n        assert (seqtype_idx, seqidx) in posetrack17_train_sequences or (\n            seqtype_idx,\n            seqidx,\n        ) in 
posetrack18_train_sequences\n        fname += \"_train\"\n\n    return fname, frameidx\n\n\ndef posetrack18_fname2id(fname, frameidx):\n    \"\"\"Generates image id\n\n    Args:\n      fname: name of the PoseTrack sequence\n      frameidx: index of the frame within the sequence\n\n    Returns:\n      integer image id in the format TSSSSSSFFFF (the inverse of posetrack18_id2fname)\n    \"\"\"\n    tok = os.path.basename(fname).split(\"_\")\n    seqidx = int(tok[0])\n    seqtype_idx = seqtype2idx(tok[1])\n\n    assert 0 <= frameidx < 1e4\n    image_id = seqtype_idx * 10000000000 + seqidx * 10000 + frameidx\n    return image_id\n"
  },
  {
    "path": "eval/trackeval/__init__.py",
    "content": "from .eval import Evaluator\nfrom . import datasets\nfrom . import metrics\nfrom . import plotting\nfrom . import utils\n"
  },
  {
    "path": "eval/trackeval/_timing.py",
    "content": "from functools import wraps\nfrom time import perf_counter\nimport inspect\n\nDO_TIMING = False\nDISPLAY_LESS_PROGRESS = False\ntimer_dict = {}\n\n\ndef time(f):\n    @wraps(f)\n    def wrap(*args, **kw):\n        if DO_TIMING:\n            # Run function with timing\n            ts = perf_counter()\n            result = f(*args, **kw)\n            te = perf_counter()\n            tt = te-ts\n\n            # Get function name\n            arg_names = inspect.getfullargspec(f)[0]\n            if arg_names[0] == 'self' and DISPLAY_LESS_PROGRESS:\n                return result\n            elif arg_names[0] == 'self':\n                method_name = type(args[0]).__name__ + '.' + f.__name__\n            else:\n                method_name = f.__name__\n\n            # Record accumulative time in each function for analysis\n            if method_name in timer_dict.keys():\n                timer_dict[method_name] += tt\n            else:\n                timer_dict[method_name] = tt\n\n            # If code is finished, display timing summary\n            if method_name == \"Evaluator.evaluate\":\n                print(\"\")\n                print(\"Timing analysis:\")\n                for key, value in timer_dict.items():\n                    print('%-70s %2.4f sec' % (key, value))\n            else:\n                # Get function argument values for printing special arguments of interest\n                arg_titles = ['tracker', 'seq', 'cls']\n                arg_vals = []\n                for i, a in enumerate(arg_names):\n                    if a in arg_titles:\n                        arg_vals.append(args[i])\n                arg_text = '(' + ', '.join(arg_vals) + ')'\n\n                # Display methods and functions with different indentation.\n                if arg_names[0] == 'self':\n                    print('%-74s %2.4f sec' % (' '*4 + method_name + arg_text, tt))\n                else:\n                    print('%-70s %2.4f sec' % 
(method_name + arg_text, tt))\n\n            return result\n        else:\n            # If config[\"TIME_PROGRESS\"] is false, or config[\"USE_PARALLEL\"] is true, run functions normally without timing.\n            return f(*args, **kw)\n    return wrap\n"
  },
  {
    "path": "eval/trackeval/datasets/__init__.py",
    "content": "from .kitti_2d_box import Kitti2DBox\nfrom .kitti_mots import KittiMOTS\nfrom .mot_challenge_2d_box import MotChallenge2DBox\nfrom .mots_challenge import MOTSChallenge\nfrom .bdd100k import BDD100K\nfrom .davis import DAVIS\nfrom .tao import TAO\nfrom .youtube_vis import YouTubeVIS\n"
  },
  {
    "path": "eval/trackeval/datasets/_base_dataset.py",
    "content": "import csv\nimport io\nimport zipfile\nimport os\nimport traceback\nimport numpy as np\nfrom copy import deepcopy\nfrom abc import ABC, abstractmethod\nfrom .. import _timing\nfrom ..utils import TrackEvalException\n\n\nclass _BaseDataset(ABC):\n    @abstractmethod\n    def __init__(self):\n        self.tracker_list = None\n        self.seq_list = None\n        self.class_list = None\n        self.output_fol = None\n        self.output_sub_fol = None\n\n    # Functions to implement:\n\n    @staticmethod\n    @abstractmethod\n    def get_default_dataset_config():\n        ...\n\n    @abstractmethod\n    def _load_raw_file(self, tracker, seq, is_gt):\n        ...\n\n    @_timing.time\n    @abstractmethod\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        ...\n\n    @abstractmethod\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        ...\n\n    # Helper functions for all datasets:\n\n    @classmethod\n    def get_name(cls):\n        return cls.__name__\n\n    def get_output_fol(self, tracker):\n        return os.path.join(self.output_fol, tracker, self.output_sub_fol)\n\n    def get_display_name(self, tracker):\n        \"\"\" Can be overwritten if the trackers name (in files) is different to how it should be displayed.\n        By default this method just returns the trackers name as is.\n        \"\"\"\n        return tracker\n\n    def get_eval_info(self):\n        \"\"\"Return info about the dataset needed for the Evaluator\"\"\"\n        return self.tracker_list, self.seq_list, self.class_list\n\n    @_timing.time\n    def get_raw_seq_data(self, tracker, seq):\n        \"\"\" Loads raw data (tracker and ground-truth) for a single tracker on a single sequence.\n        Raw data includes all of the information needed for both preprocessing and evaluation, for all classes.\n        A later function (get_processed_seq_data) will perform such preprocessing and extract relevant information for\n        the 
evaluation of each class.\n\n        This returns a dict which contains the fields:\n        [num_timesteps]: integer\n        [gt_ids, tracker_ids, gt_classes, tracker_classes, tracker_confidences]:\n                                                                list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets, tracker_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.\n        [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        [gt_extras]: dict (for each extra) of lists (for each timestep) of 1D NDArrays (for each det).\n\n        gt_extras contains dataset specific information used for preprocessing such as occlusion and truncation levels.\n\n        Note that similarities are extracted as part of the dataset and not the metric, because almost all metrics are\n        independent of the exact method of calculating the similarity. However datasets are not (e.g. segmentation\n        masks vs 2D boxes vs 3D boxes).\n        We calculate the similarity before preprocessing because often both preprocessing and evaluation require it and\n        we don't wish to calculate this twice.\n        We calculate similarity between all gt and tracker classes (not just each class individually) to allow for\n        calculation of metrics such as class confusion matrices. 
Typically the impact of this on performance is low.\n        \"\"\"\n        # Load raw data.\n        raw_gt_data = self._load_raw_file(tracker, seq, is_gt=True)\n        raw_tracker_data = self._load_raw_file(tracker, seq, is_gt=False)\n        raw_data = {**raw_tracker_data, **raw_gt_data}  # Merges dictionaries\n\n        # Calculate similarities for each timestep.\n        similarity_scores = []\n        for t, (gt_dets_t, tracker_dets_t) in enumerate(zip(raw_data['gt_dets'], raw_data['tracker_dets'])):\n            ious = self._calculate_similarities(gt_dets_t, tracker_dets_t)\n            similarity_scores.append(ious)\n        raw_data['similarity_scores'] = similarity_scores\n        return raw_data\n\n    @staticmethod\n    def _load_simple_text_file(file, time_col=0, id_col=None, remove_negative_ids=False, valid_filter=None,\n                               crowd_ignore_filter=None, convert_filter=None, is_zipped=False, zip_file=None,\n                               force_delimiters=None):\n        \"\"\" Function that loads data which is in a commonly used text file format.\n        Assumes each det is given by one row of a text file.\n        There is no limit to the number or meaning of each column,\n        however one column needs to give the timestep of each det (time_col), which defaults to col 0.\n\n        The file dialect (delimiter, num cols, etc) is determined automatically.\n        This function automatically separates dets by timestep,\n        and is much faster than alternatives such as np.loadtxt or pandas.\n\n        If remove_negative_ids is True and id_col is not None, dets with negative values in id_col are excluded.\n        These are not excluded from ignore data.\n\n        valid_filter can be used to only include certain classes.\n        It is a dict with ints as keys, and lists as values,\n        such that a row is included if \"row[key].lower() is in value\" for all key/value pairs in the dict.\n        If None, all classes 
are included.\n\n        crowd_ignore_filter can be used to read crowd_ignore regions separately. It has the same format as valid filter.\n\n        convert_filter can be used to convert value read to another format.\n        This is used most commonly to convert classes given as string to a class id.\n        This is a dict such that the key is the column to convert, and the value is another dict giving the mapping.\n\n        Optionally, input files could be a zip of multiple text files for storage efficiency.\n\n        Returns read_data and ignore_data.\n        Each is a dict (with keys as timesteps as strings) of lists (over dets) of lists (over column values).\n        Note that all data is returned as strings, and must be converted to float/int later if needed.\n        Note that timesteps will not be present in the returned dict keys if there are no dets for them\n        \"\"\"\n\n        if remove_negative_ids and id_col is None:\n            raise TrackEvalException('remove_negative_ids is True, but id_col is not given.')\n        if crowd_ignore_filter is None:\n            crowd_ignore_filter = {}\n        if convert_filter is None:\n            convert_filter = {}\n        try:\n            if is_zipped:  # Either open file directly or within a zip.\n                if zip_file is None:\n                    raise TrackEvalException('is_zipped set to True, but no zip_file is given.')\n                archive = zipfile.ZipFile(os.path.join(zip_file), 'r')\n                fp = io.TextIOWrapper(archive.open(file, 'r'))\n            else:\n                fp = open(file)\n            read_data = {}\n            crowd_ignore_data = {}\n            fp.seek(0, os.SEEK_END)\n            # check if file is empty\n            if fp.tell():\n                fp.seek(0)\n                dialect = csv.Sniffer().sniff(fp.readline(), delimiters=force_delimiters)  # Auto determine structure.\n                dialect.skipinitialspace = True  # Deal with extra spaces 
between columns\n                fp.seek(0)\n                reader = csv.reader(fp, dialect)\n                for row in reader:\n                    try:\n                        # Deal with extra trailing spaces at the end of rows\n                        if row[-1] == '':\n                            row = row[:-1]\n                        timestep = str(int(float(row[time_col])))\n                        # Read ignore regions separately.\n                        is_ignored = False\n                        for ignore_key, ignore_value in crowd_ignore_filter.items():\n                            if row[ignore_key].lower() in ignore_value:\n                                # Convert values in one column (e.g. string to id)\n                                for convert_key, convert_value in convert_filter.items():\n                                    row[convert_key] = convert_value[row[convert_key].lower()]\n                                # Save data separated by timestep.\n                                if timestep in crowd_ignore_data.keys():\n                                    crowd_ignore_data[timestep].append(row)\n                                else:\n                                    crowd_ignore_data[timestep] = [row]\n                                is_ignored = True\n                        if is_ignored:  # if det is an ignore region, it cannot be a normal det.\n                            continue\n                        # Exclude dets that fail any valid_filter check (a row must match every key/value pair).\n                        if valid_filter is not None:\n                            if not all(row[key].lower() in value for key, value in valid_filter.items()):\n                                continue\n                        if remove_negative_ids:\n                            if int(float(row[id_col])) < 0:\n                                continue\n                        # Convert values in one column (e.g. 
string to id)\n                        for convert_key, convert_value in convert_filter.items():\n                            row[convert_key] = convert_value[row[convert_key].lower()]\n                        # Save data separated by timestep.\n                        if timestep in read_data.keys():\n                            read_data[timestep].append(row)\n                        else:\n                            read_data[timestep] = [row]\n                    except Exception:\n                        exc_str_init = 'In file %s the following line cannot be read correctly: \\n' % os.path.basename(\n                            file)\n                        exc_str = ' '.join([exc_str_init]+row)\n                        raise TrackEvalException(exc_str)\n            fp.close()\n        except Exception:\n            print('Error loading file: %s, printing traceback.' % file)\n            traceback.print_exc()\n            raise TrackEvalException(\n                'File %s cannot be read because it is either not present or invalidly formatted' % os.path.basename(\n                    file))\n        return read_data, crowd_ignore_data\n\n    @staticmethod\n    def _calculate_mask_ious(masks1, masks2, is_encoded=False, do_ioa=False):\n        \"\"\" Calculates the IOU (intersection over union) between two arrays of segmentation masks.\n        If is_encoded a run length encoding with pycocotools is assumed as input format, otherwise an input of numpy\n        arrays of the shape (num_masks, height, width) is assumed and the encoding is performed.\n        If do_ioa (intersection over area) , then calculates the intersection over the area of masks1 - this is commonly\n        used to determine if detections are within crowd ignore region.\n        :param masks1:  first set of masks (numpy array of shape (num_masks, height, width) if not encoded,\n                        else pycocotools rle encoded format)\n        :param masks2:  second set of masks (numpy 
array of shape (num_masks, height, width) if not encoded,\n                        else pycocotools rle encoded format)\n        :param is_encoded: whether the input is in pycocotools rle encoded format\n        :param do_ioa: whether to perform IoA computation\n        :return: the IoU/IoA scores\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        # use pycocotools for run length encoding of masks\n        if not is_encoded:\n            masks1 = mask_utils.encode(np.array(np.transpose(masks1, (1, 2, 0)), order='F'))\n            masks2 = mask_utils.encode(np.array(np.transpose(masks2, (1, 2, 0)), order='F'))\n\n        # use pycocotools for iou computation of rle encoded masks\n        ious = mask_utils.iou(masks1, masks2, [do_ioa]*len(masks2))\n        if len(masks1) == 0 or len(masks2) == 0:\n            ious = np.asarray(ious).reshape(len(masks1), len(masks2))\n        assert (ious >= 0 - np.finfo('float').eps).all()\n        assert (ious <= 1 + np.finfo('float').eps).all()\n\n        return ious\n\n    @staticmethod\n    def _calculate_box_ious(bboxes1, bboxes2, box_format='xywh', do_ioa=False):\n        \"\"\" Calculates the IOU (intersection over union) between two arrays of boxes.\n        Allows variable box formats ('xywh' and 'x0y0x1y1').\n        If do_ioa (intersection over area) , then calculates the intersection over the area of boxes1 - this is commonly\n        used to determine if detections are within crowd ignore region.\n        \"\"\"\n        if box_format in 'xywh':\n            # layout: (x0, y0, w, h)\n            bboxes1 = deepcopy(bboxes1)\n            bboxes2 = deepcopy(bboxes2)\n\n            bboxes1[:, 2] = bboxes1[:, 0] + bboxes1[:, 2]\n            bboxes1[:, 3] = bboxes1[:, 1] + bboxes1[:, 3]\n            bboxes2[:, 2] = bboxes2[:, 0] + bboxes2[:, 2]\n            bboxes2[:, 3] = bboxes2[:, 1] + bboxes2[:, 3]\n        elif box_format not in 
'x0y0x1y1':\n            raise (TrackEvalException('box_format %s is not implemented' % box_format))\n\n        # layout: (x0, y0, x1, y1)\n        min_ = np.minimum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])\n        max_ = np.maximum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])\n        intersection = np.maximum(min_[..., 2] - max_[..., 0], 0) * np.maximum(min_[..., 3] - max_[..., 1], 0)\n        area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (bboxes1[..., 3] - bboxes1[..., 1])\n\n        if do_ioa:\n            ioas = np.zeros_like(intersection)\n            valid_mask = area1 > 0 + np.finfo('float').eps\n            ioas[valid_mask, :] = intersection[valid_mask, :] / area1[valid_mask][:, np.newaxis]\n\n            return ioas\n        else:\n            area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (bboxes2[..., 3] - bboxes2[..., 1])\n            union = area1[:, np.newaxis] + area2[np.newaxis, :] - intersection\n            intersection[area1 <= 0 + np.finfo('float').eps, :] = 0\n            intersection[:, area2 <= 0 + np.finfo('float').eps] = 0\n            intersection[union <= 0 + np.finfo('float').eps] = 0\n            union[union <= 0 + np.finfo('float').eps] = 1\n            ious = intersection / union\n            return ious\n\n    @staticmethod\n    def _check_unique_ids(data, after_preproc=False):\n        \"\"\"Check the requirement that the tracker_ids and gt_ids are unique per timestep\"\"\"\n        gt_ids = data['gt_ids']\n        tracker_ids = data['tracker_ids']\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(gt_ids, tracker_ids)):\n            if len(tracker_ids_t) > 0:\n                unique_ids, counts = np.unique(tracker_ids_t, return_counts=True)\n                if np.max(counts) != 1:\n                    duplicate_ids = unique_ids[counts > 1]\n                    exc_str_init = 'Tracker predicts the same ID more than once in a single timestep ' \\\n                                   '(seq: %s, frame: %i, 
ids:' % (data['seq'], t+1)\n                    exc_str = ' '.join([exc_str_init] + [str(d) for d in duplicate_ids]) + ')'\n                    if after_preproc:\n                        exc_str_init += '\\n Note that this error occurred after preprocessing (but not before), ' \\\n                                        'so ids may not be as in file, and something seems wrong with preproc.'\n                    raise TrackEvalException(exc_str)\n            if len(gt_ids_t) > 0:\n                unique_ids, counts = np.unique(gt_ids_t, return_counts=True)\n                if np.max(counts) != 1:\n                    duplicate_ids = unique_ids[counts > 1]\n                    exc_str_init = 'Ground-truth has the same ID more than once in a single timestep ' \\\n                                   '(seq: %s, frame: %i, ids:' % (data['seq'], t+1)\n                    exc_str = ' '.join([exc_str_init] + [str(d) for d in duplicate_ids]) + ')'\n                    if after_preproc:\n                        exc_str_init += '\\n Note that this error occurred after preprocessing (but not before), ' \\\n                                        'so ids may not be as in file, and something seems wrong with preproc.'\n                    raise TrackEvalException(exc_str)\n"
  },
  {
    "path": "eval/trackeval/datasets/bdd100k.py",
    "content": "\nimport os\nimport json\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ..utils import TrackEvalException\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom .. import _timing\n\n\nclass BDD100K(_BaseDataset):\n    \"\"\"Dataset class for BDD100K tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/bdd100k/bdd100k_val'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/bdd100k/bdd100k_val'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': ['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle'],\n            # Valid: ['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle']\n            'SPLIT_TO_EVAL': 'val',  # Valid: 'training', 'val',\n            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, 
self.get_default_dataset_config(), self.get_name())\n        self.gt_fol = self.config['GT_FOLDER']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n        self.should_classes_combine = True\n        self.use_super_categories = True\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        # Get classes to eval\n        self.valid_classes = ['pedestrian', 'rider', 'car', 'bus', 'truck', 'train', 'motorcycle', 'bicycle']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                           for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. Only classes [pedestrian, rider, car, '\n                                     'bus, truck, train, motorcycle, bicycle] are valid.')\n        self.super_categories = {\"HUMAN\": [cls for cls in [\"pedestrian\", \"rider\"] if cls in self.class_list],\n                                 \"VEHICLE\": [cls for cls in [\"car\", \"truck\", \"bus\", \"train\"] if cls in self.class_list],\n                                 \"BIKE\": [cls for cls in [\"motorcycle\", \"bicycle\"] if cls in self.class_list]}\n        self.distractor_classes = ['other person', 'trailer', 'other vehicle']\n        self.class_name_to_class_id = {'pedestrian': 1, 'rider': 2, 'other person': 3, 'car': 4, 'bus': 5, 'truck': 6,\n                                       'train': 7, 'trailer': 8, 'other vehicle': 9, 'motorcycle': 10, 'bicycle': 11}\n\n        # Get sequences to eval\n        self.seq_list = []\n        self.seq_lengths = {}\n\n        self.seq_list = [seq_file.replace('.json', '') for seq_file in os.listdir(self.gt_fol)]\n\n        # Get trackers to eval\n        if 
self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        for tracker in self.tracker_list:\n            for seq in self.seq_list:\n                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.json')\n                if not os.path.isfile(curr_file):\n                    print('Tracker file not found: ' + curr_file)\n                    raise TrackEvalException(\n                        'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(\n                            curr_file))\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the BDD100K format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"\n        # File location\n        if is_gt:\n 
           file = os.path.join(self.gt_fol, seq + '.json')\n        else:\n            file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.json')\n\n        with open(file) as f:\n            data = json.load(f)\n\n        # sort data by frame index\n        data = sorted(data, key=lambda x: x['index'])\n\n        # check sequence length\n        if is_gt:\n            self.seq_lengths[seq] = len(data)\n            num_timesteps = len(data)\n        else:\n            num_timesteps = self.seq_lengths[seq]\n            if num_timesteps != len(data):\n                raise TrackEvalException('Number of ground truth and tracker timesteps do not match for sequence %s'\n                                         % seq)\n\n        # Convert data to required format\n        data_keys = ['ids', 'classes', 'dets']\n        if is_gt:\n            data_keys += ['gt_crowd_ignore_regions']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n        for t in range(num_timesteps):\n            ig_ids = []\n            keep_ids = []\n            for i in range(len(data[t]['labels'])):\n                ann = data[t]['labels'][i]\n                if is_gt and (ann['category'] in self.distractor_classes or 'attributes' in ann.keys()\n                              and ann['attributes']['Crowd']):\n                    ig_ids.append(i)\n                else:\n                    keep_ids.append(i)\n\n            if keep_ids:\n                raw_data['dets'][t] = np.atleast_2d([[data[t]['labels'][i]['box2d']['x1'],\n                                                      data[t]['labels'][i]['box2d']['y1'],\n                                                      data[t]['labels'][i]['box2d']['x2'],\n                                                      data[t]['labels'][i]['box2d']['y2']\n                                                      ] for i in keep_ids]).astype(float)\n                raw_data['ids'][t] = 
np.atleast_1d([data[t]['labels'][i]['id'] for i in keep_ids]).astype(int)\n                raw_data['classes'][t] = np.atleast_1d([self.class_name_to_class_id[data[t]['labels'][i]['category']]\n                                                        for i in keep_ids]).astype(int)\n            else:\n                raw_data['dets'][t] = np.empty((0, 4)).astype(float)\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n\n            if is_gt:\n                if ig_ids:\n                    raw_data['gt_crowd_ignore_regions'][t] = np.atleast_2d([[data[t]['labels'][i]['box2d']['x1'],\n                                                                             data[t]['labels'][i]['box2d']['y1'],\n                                                                             data[t]['labels'][i]['box2d']['x2'],\n                                                                             data[t]['labels'][i]['box2d']['y2']\n                                                                             ] for i in ig_ids]).astype(float)\n                else:\n                    raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 4)).astype(float)\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data['num_timesteps'] = num_timesteps\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by 
get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. 
It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n\n        BDD100K:\n            In BDD100K, the 4 preproc steps are as follows:\n                1) There are eight classes (pedestrian, rider, car, bus, truck, train, motorcycle, bicycle)\n                    which are evaluated separately.\n                2) For BDD100K there is no removal of matched tracker dets.\n                3) Crowd ignore regions are used to remove unmatched detections.\n                4) No removal of gt dets.\n        \"\"\"\n        cls_id = self.class_name_to_class_id[cls]\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for preproc and eval (cls)\n            gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = raw_data['gt_dets'][t][gt_class_mask]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = raw_data['tracker_dets'][t][tracker_class_mask]\n            similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            # Match tracker and gt dets (with Hungarian algorithm)\n            unmatched_indices = np.arange(tracker_ids.shape[0])\n            if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = 
similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n                match_cols = match_cols[actually_matched_mask]\n                unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0)\n\n            # For unmatched tracker dets, remove those that are greater than 50% within a crowd ignore region.\n            unmatched_tracker_dets = tracker_dets[unmatched_indices, :]\n            crowd_ignore_regions = raw_data['gt_crowd_ignore_regions'][t]\n            intersection_with_ignore_region = self._calculate_box_ious(unmatched_tracker_dets, crowd_ignore_regions,\n                                                                       box_format='x0y0x1y1', do_ioa=True)\n            is_within_crowd_ignore_region = np.any(intersection_with_ignore_region > 0.5 + np.finfo('float').eps,\n                                                   axis=1)\n\n            # Apply preprocessing to remove unwanted tracker dets.\n            to_remove_tracker = unmatched_indices[is_within_crowd_ignore_region]\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)\n\n            data['gt_ids'][t] = gt_ids\n            data['gt_dets'][t] = gt_dets\n            data['similarity_scores'][t] = similarity_scores\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that there are no empty IDs\n        
if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n\n        # Ensure that ids are unique per timestep.\n        self._check_unique_ids(data)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t, box_format='x0y0x1y1')\n        return similarity_scores\n"
  },
  {
    "path": "eval/trackeval/datasets/davis.py",
    "content": "import os\nimport csv\nimport numpy as np\nfrom ._base_dataset import _BaseDataset\nfrom ..utils import TrackEvalException\nfrom .. import utils\nfrom .. import _timing\n\n\nclass DAVIS(_BaseDataset):\n    \"\"\"Dataset class for DAVIS tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/davis/davis_unsupervised_val/'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/davis/davis_unsupervised_val/'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'SPLIT_TO_EVAL': 'val',  # Valid: 'val', 'train'\n            'CLASSES_TO_EVAL': ['general'],\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n            'SEQMAP_FILE': None,  # Specify seqmap file\n            'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps\n            # '{gt_folder}/Annotations_unsupervised/480p/{seq}'\n            'MAX_DETECTIONS': 0  # Maximum number of allowed detections per sequence (0 for no threshold)\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        
self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n        # defining a default class since there are no classes in DAVIS\n        self.should_classes_combine = False\n        self.use_super_categories = False\n\n        self.gt_fol = self.config['GT_FOLDER']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.config['TRACKERS_FOLDER']\n\n        self.max_det = self.config['MAX_DETECTIONS']\n\n        # Get classes to eval\n        self.valid_classes = ['general']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                           for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. 
Only general class is valid.')\n\n        # Get sequences to eval\n        if self.config[\"SEQ_INFO\"]:\n            self.seq_list = list(self.config[\"SEQ_INFO\"].keys())\n            self.seq_lengths = self.config[\"SEQ_INFO\"]\n        elif self.config[\"SEQMAP_FILE\"]:\n            self.seq_list = []\n            seqmap_file = self.config[\"SEQMAP_FILE\"]\n            if not os.path.isfile(seqmap_file):\n                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))\n            with open(seqmap_file) as fp:\n                reader = csv.reader(fp)\n                for i, row in enumerate(reader):\n                    if row[0] == '':\n                        continue\n                    seq = row[0]\n                    self.seq_list.append(seq)\n        else:\n            self.seq_list = os.listdir(self.gt_fol)\n\n        # Do not overwrite sequence lengths that were supplied directly via SEQ_INFO\n        if not self.config[\"SEQ_INFO\"]:\n            self.seq_lengths = {seq: len(os.listdir(os.path.join(self.gt_fol, seq))) for seq in self.seq_list}\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n        for tracker in self.tracker_list:\n            for seq in self.seq_list:\n                curr_dir = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq)\n                if not os.path.isdir(curr_dir):\n                    print('Tracker directory not found: ' + curr_dir)\n                    raise TrackEvalException('Tracker directory not found: ' +\n                                             os.path.join(tracker, self.tracker_sub_fol, seq))\n                tr_timesteps = len(os.listdir(curr_dir))\n                if self.seq_lengths[seq] != tr_timesteps:\n                    raise TrackEvalException('GT folder and tracker folder have a different number of '\n                                             'timesteps for tracker %s and sequence %s' % (tracker, 
seq))\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the DAVIS format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets]: list (for each timestep) of lists of detections.\n        [masks_void]: list of masks with void pixels (pixels to be ignored during evaluation)\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n        from PIL import Image\n\n        # File location\n        if is_gt:\n            seq_dir = os.path.join(self.gt_fol, seq)\n        else:\n            seq_dir = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq)\n\n        num_timesteps = self.seq_lengths[seq]\n        data_keys = ['ids', 'dets', 'masks_void']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n\n        # read frames\n        frames = [os.path.join(seq_dir, im_name) for im_name in sorted(os.listdir(seq_dir))]\n\n        id_list = []\n        for t in range(num_timesteps):\n            frame = np.array(Image.open(frames[t]))\n            if is_gt:\n                void = 
frame == 255\n                frame[void] = 0\n                raw_data['masks_void'][t] = mask_utils.encode(np.asfortranarray(void.astype(np.uint8)))\n            id_values = np.unique(frame)\n            id_values = id_values[id_values != 0]\n            id_list += list(id_values)\n            tmp = np.ones((len(id_values), *frame.shape))\n            tmp = tmp * id_values[:, None, None]\n            masks = np.array(tmp == frame[None, ...]).astype(np.uint8)\n            raw_data['dets'][t] = mask_utils.encode(np.array(np.transpose(masks, (1, 2, 0)), order='F'))\n            raw_data['ids'][t] = id_values.astype(int)\n        num_objects = len(np.unique(id_list))\n\n        if not is_gt and num_objects > self.max_det > 0:\n            raise TrackEvalException('Number of proposals (%i) for sequence %s exceeds the maximum number of allowed '\n                                     'proposals (%i).' % (num_objects, seq, self.max_det))\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data[\"num_timesteps\"] = num_timesteps\n        raw_data['mask_shape'] = np.array(Image.open(frames[0])).shape\n        if is_gt:\n            raw_data['num_gt_ids'] = num_objects\n        else:\n            raw_data['num_tracker_ids'] = num_objects\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It 
contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detection masks.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n\n        DAVIS:\n            In DAVIS, the 4 preproc steps are as follows:\n                1) There are no classes, all detections are evaluated jointly.\n                2) No matched tracker detections are removed.\n                3) No unmatched tracker detections are removed.\n                4) There are no ground truth detections (e.g. 
those of distractor classes) to be removed.\n            Preprocessing special to DAVIS: Pixels which are marked as void in the ground truth are set to zero in the\n                tracker detections since they are not considered during evaluation.\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_timesteps = raw_data['num_timesteps']\n\n        # count detections\n        for t in range(num_timesteps):\n            num_gt_dets += len(raw_data['gt_dets'][t])\n            num_tracker_dets += len(raw_data['tracker_dets'][t])\n            unique_gt_ids += list(np.unique(raw_data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(raw_data['tracker_ids'][t]))\n\n        data['gt_ids'] = raw_data['gt_ids']\n        data['gt_dets'] = raw_data['gt_dets']\n        data['similarity_scores'] = raw_data['similarity_scores']\n        data['tracker_ids'] = raw_data['tracker_ids']\n\n        # set void pixels in tracker detections to zero\n        for t in range(num_timesteps):\n            void_mask = raw_data['masks_void'][t]\n            if mask_utils.area(void_mask) > 0:\n                void_mask_ious = np.atleast_1d(mask_utils.iou(raw_data['tracker_dets'][t], [void_mask], [False]))\n                if void_mask_ious.any():\n                    rows, columns = np.where(void_mask_ious > 0)\n                    for r in rows:\n                        det = mask_utils.decode(raw_data['tracker_dets'][t][r])\n                        void = mask_utils.decode(void_mask).astype(bool)\n                        det[void] = 0\n                        det = mask_utils.encode(np.array(det, 
order='F').astype(np.uint8))\n                        raw_data['tracker_dets'][t][r] = det\n        data['tracker_dets'] = raw_data['tracker_dets']\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = raw_data['num_tracker_ids']\n        data['num_gt_ids'] = raw_data['num_gt_ids']\n        data['mask_shape'] = raw_data['mask_shape']\n        data['num_timesteps'] = num_timesteps\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False)\n        return similarity_scores\n"
  },
  {
    "path": "eval/trackeval/datasets/kitti_2d_box.py",
    "content": "\nimport os\nimport csv\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom ..utils import TrackEvalException\nfrom .. import _timing\n\n\nclass Kitti2DBox(_BaseDataset):\n    \"\"\"Dataset class for KITTI 2D bounding box tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/kitti/kitti_2d_box_train'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/kitti/kitti_2d_box_train/'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': ['car', 'pedestrian'],  # Valid: ['car', 'pedestrian']\n            'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val', 'training_minus_val', 'test'\n            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n        self.gt_fol = 
self.config['GT_FOLDER']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n        self.should_classes_combine = False\n        self.use_super_categories = False\n        self.data_is_zipped = self.config['INPUT_AS_ZIP']\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        self.max_occlusion = 2\n        self.max_truncation = 0\n        self.min_height = 25\n\n        # Get classes to eval\n        self.valid_classes = ['car', 'pedestrian']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                           for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. Only classes [car, pedestrian] are valid.')\n        self.class_name_to_class_id = {'car': 1, 'van': 2, 'truck': 3, 'pedestrian': 4, 'person': 5,  # person sitting\n                                       'cyclist': 6, 'tram': 7, 'misc': 8, 'dontcare': 9, 'car_2': 1}\n\n        # Get sequences to eval and check gt files exist\n        self.seq_list = []\n        self.seq_lengths = {}\n        seqmap_name = 'evaluate_tracking.seqmap.' 
+ self.config['SPLIT_TO_EVAL']\n        seqmap_file = os.path.join(self.gt_fol, seqmap_name)\n        if not os.path.isfile(seqmap_file):\n            raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))\n        with open(seqmap_file) as fp:\n            dialect = csv.Sniffer().sniff(fp.read(1024))\n            fp.seek(0)\n            reader = csv.reader(fp, dialect)\n            for row in reader:\n                if len(row) >= 4:\n                    seq = row[0]\n                    self.seq_list.append(seq)\n                    self.seq_lengths[seq] = int(row[3])\n                    if not self.data_is_zipped:\n                        curr_file = os.path.join(self.gt_fol, 'label_02', seq + '.txt')\n                        if not os.path.isfile(curr_file):\n                            raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))\n            if self.data_is_zipped:\n                curr_file = os.path.join(self.gt_fol, 'data.zip')\n                if not os.path.isfile(curr_file):\n                    raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        for tracker in self.tracker_list:\n            if self.data_is_zipped:\n           
     curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n                if not os.path.isfile(curr_file):\n                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))\n            else:\n                for seq in self.seq_list:\n                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n                    if not os.path.isfile(curr_file):\n                        raise TrackEvalException(\n                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(\n                                curr_file))\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the kitti 2D box format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.\n        [gt_extras] : list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"\n        # File location\n        if self.data_is_zipped:\n            if is_gt:\n                zip_file = os.path.join(self.gt_fol, 'data.zip')\n            else:\n                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n            file = seq + '.txt'\n        else:\n            zip_file = None\n            if is_gt:\n                file = os.path.join(self.gt_fol, 'label_02', seq + '.txt')\n      
      else:\n                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n\n        # Ignore regions\n        if is_gt:\n            crowd_ignore_filter = {2: ['dontcare']}\n        else:\n            crowd_ignore_filter = None\n\n        # Valid classes\n        valid_filter = {2: [x for x in self.class_list]}\n        if is_gt:\n            if 'car' in self.class_list:\n                valid_filter[2].append('van')\n            if 'pedestrian' in self.class_list:\n                valid_filter[2] += ['person']\n\n        # Convert kitti class strings to class ids\n        convert_filter = {2: self.class_name_to_class_id}\n\n        # Load raw data from text file\n        read_data, ignore_data = self._load_simple_text_file(file, time_col=0, id_col=1, remove_negative_ids=True,\n                                                             valid_filter=valid_filter,\n                                                             crowd_ignore_filter=crowd_ignore_filter,\n                                                             convert_filter=convert_filter,\n                                                             is_zipped=self.data_is_zipped, zip_file=zip_file)\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq]\n        data_keys = ['ids', 'classes', 'dets']\n        if is_gt:\n            data_keys += ['gt_crowd_ignore_regions', 'gt_extras']\n        else:\n            data_keys += ['tracker_confidences']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n\n        # Check for any extra time keys\n        extra_time_keys = [x for x in read_data.keys() if x not in [str(t) for t in range(num_timesteps)]]\n        if len(extra_time_keys) > 0:\n            if is_gt:\n                text = 'Ground-truth'\n            else:\n                text = 'Tracking'\n            raise TrackEvalException(\n                text + ' data contains the following invalid timesteps 
in seq %s: ' % seq + ', '.join(\n                    [str(x) + ', ' for x in extra_time_keys]))\n\n        for t in range(num_timesteps):\n            time_key = str(t)\n            if time_key in read_data.keys():\n                time_data = np.asarray(read_data[time_key], dtype=float)\n                raw_data['dets'][t] = np.atleast_2d(time_data[:, 6:10])\n                raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)\n                raw_data['classes'][t] = np.atleast_1d(time_data[:, 2]).astype(int)\n                if is_gt:\n                    gt_extras_dict = {'truncation': np.atleast_1d(time_data[:, 3].astype(int)),\n                                      'occlusion': np.atleast_1d(time_data[:, 4].astype(int))}\n                    raw_data['gt_extras'][t] = gt_extras_dict\n                else:\n                    if time_data.shape[1] > 17:\n                        raw_data['tracker_confidences'][t] = np.atleast_1d(time_data[:, 17])\n                    else:\n                        raw_data['tracker_confidences'][t] = np.ones(time_data.shape[0])\n            else:\n                raw_data['dets'][t] = np.empty((0, 4))\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n                if is_gt:\n                    gt_extras_dict = {'truncation': np.empty(0),\n                                      'occlusion': np.empty(0)}\n                    raw_data['gt_extras'][t] = gt_extras_dict\n                else:\n                    raw_data['tracker_confidences'][t] = np.empty(0)\n            if is_gt:\n                if time_key in ignore_data.keys():\n                    time_ignore = np.asarray(ignore_data[time_key], dtype=float)\n                    raw_data['gt_crowd_ignore_regions'][t] = np.atleast_2d(time_ignore[:, 6:10])\n                else:\n                    raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 4))\n\n        if is_gt:\n    
        key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data['num_timesteps'] = num_timesteps\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. 
Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n\n        KITTI:\n            In KITTI, the 4 preproc steps are as follows:\n                1) There are two classes (pedestrian and car) which are evaluated separately.\n                2) For the pedestrian class, the 'person' class contains distractor objects (people sitting).\n                    For the car class, the 'van' class contains distractor objects.\n                    GT boxes marked as having occlusion level > 2 or truncation level > 0 are also treated as\n                        distractors.\n                3) Crowd ignore regions are used to remove unmatched detections. 
Also unmatched detections with\n                    height <= 25 pixels are removed.\n                4) Distractor gt dets (including truncated and occluded) are removed.\n        \"\"\"\n        if cls == 'pedestrian':\n            distractor_classes = [self.class_name_to_class_id['person']]\n        elif cls == 'car':\n            distractor_classes = [self.class_name_to_class_id['van']]\n        else:\n            raise TrackEvalException('Class %s is not evaluable' % cls)\n        cls_id = self.class_name_to_class_id[cls]\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for preproc and eval (cls + distractor classes)\n            gt_class_mask = np.sum([raw_data['gt_classes'][t] == c for c in [cls_id] + distractor_classes], axis=0)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = raw_data['gt_dets'][t][gt_class_mask]\n            gt_classes = raw_data['gt_classes'][t][gt_class_mask]\n            gt_occlusion = raw_data['gt_extras'][t]['occlusion'][gt_class_mask]\n            gt_truncation = raw_data['gt_extras'][t]['truncation'][gt_class_mask]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = raw_data['tracker_dets'][t][tracker_class_mask]\n            tracker_confidences = raw_data['tracker_confidences'][t][tracker_class_mask]\n            similarity_scores = 
raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            # Match tracker and gt dets (with Hungarian algorithm) and remove tracker dets which match with gt dets\n            # which are labeled as truncated, occluded, or belonging to a distractor class.\n            to_remove_matched = np.array([], int)\n            unmatched_indices = np.arange(tracker_ids.shape[0])\n            if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n                match_rows = match_rows[actually_matched_mask]\n                match_cols = match_cols[actually_matched_mask]\n\n                is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)\n                is_occluded_or_truncated = np.logical_or(\n                    gt_occlusion[match_rows] > self.max_occlusion + np.finfo('float').eps,\n                    gt_truncation[match_rows] > self.max_truncation + np.finfo('float').eps)\n                to_remove_matched = np.logical_or(is_distractor_class, is_occluded_or_truncated)\n                to_remove_matched = match_cols[to_remove_matched]\n                unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0)\n\n            # For unmatched tracker dets, also remove those smaller than a minimum height.\n            unmatched_tracker_dets = tracker_dets[unmatched_indices, :]\n            unmatched_heights = unmatched_tracker_dets[:, 3] - unmatched_tracker_dets[:, 1]\n            is_too_small = unmatched_heights <= self.min_height + np.finfo('float').eps\n\n            # For unmatched tracker dets, also remove those that are greater than 50% within a crowd ignore region.\n        
    crowd_ignore_regions = raw_data['gt_crowd_ignore_regions'][t]\n            intersection_with_ignore_region = self._calculate_box_ious(unmatched_tracker_dets, crowd_ignore_regions,\n                                                                       box_format='x0y0x1y1', do_ioa=True)\n            is_within_crowd_ignore_region = np.any(intersection_with_ignore_region > 0.5 + np.finfo('float').eps, axis=1)\n\n            # Apply preprocessing to remove all unwanted tracker dets.\n            to_remove_unmatched = unmatched_indices[np.logical_or(is_too_small, is_within_crowd_ignore_region)]\n            to_remove_tracker = np.concatenate((to_remove_matched, to_remove_unmatched), axis=0)\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)\n\n            # Also remove gt dets that were only useful for preprocessing and are not needed for evaluation.\n            # These are those that are occluded, truncated and from distractor objects.\n            gt_to_keep_mask = (np.less_equal(gt_occlusion, self.max_occlusion)) & \\\n                              (np.less_equal(gt_truncation, self.max_truncation)) & \\\n                              (np.equal(gt_classes, cls_id))\n            data['gt_ids'][t] = gt_ids[gt_to_keep_mask]\n            data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]\n            data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that 
there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n\n        # Ensure that ids are unique per timestep.\n        self._check_unique_ids(data)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t, box_format='x0y0x1y1')\n        return similarity_scores\n"
  },
  {
    "path": "eval/trackeval/datasets/kitti_mots.py",
    "content": "import os\nimport csv\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom .. import _timing\nfrom ..utils import TrackEvalException\n\n\nclass KittiMOTS(_BaseDataset):\n    \"\"\"Dataset class for KITTI MOTS tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/kitti/kitti_mots_val'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/kitti/kitti_mots_val'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': ['car', 'pedestrian'],  # Valid: ['car', 'pedestrian']\n            'SPLIT_TO_EVAL': 'val',  # Valid: 'training', 'val'\n            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n            'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER)\n            'SEQMAP_FILE': None,    # Directly specify seqmap file (if none use seqmap_folder/split_to_eval.seqmap)\n            'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps\n            'GT_LOC_FORMAT': '{gt_folder}/label_02/{seq}.txt',  # format of gt localization\n        }\n        return 
default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n        self.gt_fol = self.config['GT_FOLDER']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n        self.split_to_eval = self.config['SPLIT_TO_EVAL']\n        self.should_classes_combine = False\n        self.use_super_categories = False\n        self.data_is_zipped = self.config['INPUT_AS_ZIP']\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        # Get classes to eval\n        self.valid_classes = ['car', 'pedestrian']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                           for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. 
'\n                                     'Only classes [car, pedestrian] are valid.')\n        self.class_name_to_class_id = {'car': '1', 'pedestrian': '2', 'ignore': '10'}\n\n        # Get sequences to eval and check gt files exist\n        self.seq_list, self.seq_lengths = self._get_seq_info()\n        if len(self.seq_list) < 1:\n            raise TrackEvalException('No sequences are selected to be evaluated.')\n\n        # Check gt files exist\n        for seq in self.seq_list:\n            if not self.data_is_zipped:\n                curr_file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n                if not os.path.isfile(curr_file):\n                    print('GT file not found ' + curr_file)\n                    raise TrackEvalException('GT file not found for sequence: ' + seq)\n        if self.data_is_zipped:\n            curr_file = os.path.join(self.gt_fol, 'data.zip')\n            if not os.path.isfile(curr_file):\n                raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        for tracker in self.tracker_list:\n            if self.data_is_zipped:\n                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n       
         if not os.path.isfile(curr_file):\n                    print('Tracker file not found: ' + curr_file)\n                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))\n            else:\n                for seq in self.seq_list:\n                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n                    if not os.path.isfile(curr_file):\n                        print('Tracker file not found: ' + curr_file)\n                        raise TrackEvalException(\n                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(\n                                curr_file))\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _get_seq_info(self):\n        seq_list = []\n        seq_lengths = {}\n        seqmap_name = 'evaluate_mots.seqmap.' + self.config['SPLIT_TO_EVAL']\n\n        if self.config[\"SEQ_INFO\"]:\n            seq_list = list(self.config[\"SEQ_INFO\"].keys())\n            seq_lengths = self.config[\"SEQ_INFO\"]\n        else:\n            if self.config[\"SEQMAP_FILE\"]:\n                seqmap_file = self.config[\"SEQMAP_FILE\"]\n            else:\n                if self.config[\"SEQMAP_FOLDER\"] is None:\n                    seqmap_file = os.path.join(self.config['GT_FOLDER'], seqmap_name)\n                else:\n                    seqmap_file = os.path.join(self.config[\"SEQMAP_FOLDER\"], seqmap_name)\n            if not os.path.isfile(seqmap_file):\n                print('no seqmap found: ' + seqmap_file)\n                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))\n            with open(seqmap_file) as fp:\n                # Sniff the delimiter once from the start of the file, then parse all rows.\n                dialect = csv.Sniffer().sniff(fp.read(1024))\n                fp.seek(0)\n                reader = csv.reader(fp, dialect)\n                for row in reader:\n                    if len(row) >= 4:\n                        seq = \"%04d\" % int(row[0])\n                        seq_list.append(seq)\n                        seq_lengths[seq] = int(row[3]) + 1\n        return seq_list, seq_lengths\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the KITTI MOTS format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets]: list (for each timestep) of lists of detections.\n        [gt_ignore_region]: list (for each timestep) of masks for the ignore regions\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"\n\n        # Imported here rather than at module level to keep pycocotools an optional dependency\n        from pycocotools import mask as mask_utils\n\n        # File location\n        if self.data_is_zipped:\n            if is_gt:\n                zip_file = os.path.join(self.gt_fol, 'data.zip')\n            else:\n                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n            file = seq + '.txt'\n        else:\n            zip_file = None\n            if is_gt:\n                file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n            else:\n                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n\n        # Ignore regions\n        if is_gt:\n            crowd_ignore_filter = {2: ['10']}\n        else:\n            crowd_ignore_filter = None\n\n        # Load raw data from text file\n        read_data, ignore_data = self._load_simple_text_file(file, 
crowd_ignore_filter=crowd_ignore_filter,\n                                                             is_zipped=self.data_is_zipped, zip_file=zip_file,\n                                                             force_delimiters=' ')\n\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq]\n        data_keys = ['ids', 'classes', 'dets']\n        if is_gt:\n            data_keys += ['gt_ignore_region']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n\n        # Check for any extra time keys\n        extra_time_keys = [x for x in read_data.keys() if x not in [str(t) for t in range(num_timesteps)]]\n        if len(extra_time_keys) > 0:\n            if is_gt:\n                text = 'Ground-truth'\n            else:\n                text = 'Tracking'\n            raise TrackEvalException(\n                text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(\n                    [str(x) + ', ' for x in extra_time_keys]))\n\n        for t in range(num_timesteps):\n            time_key = str(t)\n            # list to collect all masks of a timestep to check for overlapping areas\n            all_masks = []\n            if time_key in read_data.keys():\n                try:\n                    raw_data['dets'][t] = [{'size': [int(region[3]), int(region[4])],\n                                            'counts': region[5].encode(encoding='UTF-8')}\n                                           for region in read_data[time_key]]\n                    raw_data['ids'][t] = np.atleast_1d([region[1] for region in read_data[time_key]]).astype(int)\n                    raw_data['classes'][t] = np.atleast_1d([region[2] for region in read_data[time_key]]).astype(int)\n                    all_masks += raw_data['dets'][t]\n                except IndexError:\n                    self._raise_index_error(is_gt, tracker, seq)\n                except ValueError:\n                    
self._raise_value_error(is_gt, tracker, seq)\n            else:\n                raw_data['dets'][t] = []\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n            if is_gt:\n                if time_key in ignore_data.keys():\n                    try:\n                        time_ignore = [{'size': [int(region[3]), int(region[4])],\n                                        'counts': region[5].encode(encoding='UTF-8')}\n                                       for region in ignore_data[time_key]]\n                        raw_data['gt_ignore_region'][t] = mask_utils.merge([mask for mask in time_ignore],\n                                                                           intersect=False)\n                        all_masks += [raw_data['gt_ignore_region'][t]]\n                    except IndexError:\n                        self._raise_index_error(is_gt, tracker, seq)\n                    except ValueError:\n                        self._raise_value_error(is_gt, tracker, seq)\n                else:\n                    raw_data['gt_ignore_region'][t] = mask_utils.merge([], intersect=False)\n\n            # check for overlapping masks\n            if all_masks:\n                masks_merged = all_masks[0]\n                for mask in all_masks[1:]:\n                    if mask_utils.area(mask_utils.merge([masks_merged, mask], intersect=True)) != 0.0:\n                        raise TrackEvalException(\n                            'Tracker has overlapping masks. 
Tracker: ' + tracker + ' Seq: ' + seq + ' Timestep: ' + str(\n                                t))\n                    masks_merged = mask_utils.merge([masks_merged, mask], intersect=False)\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data[\"num_timesteps\"] = num_timesteps\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detection masks.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. 
Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n\n        KITTI MOTS:\n            In KITTI MOTS, the 4 preproc steps are as follows:\n                1) There are two classes (car and pedestrian) which are evaluated separately.\n                2) There are no ground truth detections marked as to be removed/distractor classes.\n                    Therefore no matched tracker detections are removed either.\n                3) Ignore regions are used to remove unmatched detections (at least 50% overlap with ignore region).\n                4) There are no ground truth detections (e.g. 
those of distractor classes) to be removed.\n        \"\"\"\n        # Check that input data has unique ids\n        self._check_unique_ids(raw_data)\n\n        cls_id = int(self.class_name_to_class_id[cls])\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for preproc and eval (cls)\n            gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = [raw_data['gt_dets'][t][ind] for ind in range(len(gt_class_mask)) if gt_class_mask[ind]]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = [raw_data['tracker_dets'][t][ind] for ind in range(len(tracker_class_mask)) if\n                            tracker_class_mask[ind]]\n            similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            # Match tracker and gt dets (with Hungarian algorithm)\n            unmatched_indices = np.arange(tracker_ids.shape[0])\n            if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = -10000\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n       
         match_cols = match_cols[actually_matched_mask]\n\n                unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0)\n\n            # For unmatched tracker dets, remove those that are greater than 50% within a crowd ignore region.\n            unmatched_tracker_dets = [tracker_dets[i] for i in range(len(tracker_dets)) if i in unmatched_indices]\n            ignore_region = raw_data['gt_ignore_region'][t]\n            intersection_with_ignore_region = self._calculate_mask_ious(unmatched_tracker_dets, [ignore_region],\n                                                                        is_encoded=True, do_ioa=True)\n            is_within_ignore_region = np.any(intersection_with_ignore_region > 0.5 + np.finfo('float').eps, axis=1)\n\n            # Apply preprocessing to remove unwanted tracker dets.\n            to_remove_tracker = unmatched_indices[is_within_ignore_region]\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)\n\n            # Keep all ground truth detections\n            data['gt_ids'][t] = gt_ids\n            data['gt_dets'][t] = gt_dets\n            data['similarity_scores'][t] = similarity_scores\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if 
len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(np.int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(np.int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n        data['cls'] = cls\n\n        # Ensure again that ids are unique per timestep after preproc.\n        self._check_unique_ids(data, after_preproc=True)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False)\n        return similarity_scores\n\n    @staticmethod\n    def _raise_index_error(is_gt, tracker, seq):\n        \"\"\"\n        Auxiliary method to raise an evaluation error in case of an index error while reading files.\n        :param is_gt: whether gt or tracker data is read\n        :param tracker: the name of the tracker\n        :param seq: the name of the seq\n        :return: None\n        \"\"\"\n        if is_gt:\n            err = 'Cannot load gt data from sequence %s, because there are not enough ' \\\n                  'columns in the data.' 
% seq\n            raise TrackEvalException(err)\n        else:\n            err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \\\n                  'columns in the data.' % (tracker, seq)\n            raise TrackEvalException(err)\n\n    @staticmethod\n    def _raise_value_error(is_gt, tracker, seq):\n        \"\"\"\n        Auxiliary method to raise an evaluation error in case of an value error while reading files.\n        :param is_gt: whether gt or tracker data is read\n        :param tracker: the name of the tracker\n        :param seq: the name of the seq\n        :return: None\n        \"\"\"\n        if is_gt:\n            raise TrackEvalException(\n                'GT data for sequence %s cannot be converted to the right format. Is data corrupted?' % seq)\n        else:\n            raise TrackEvalException(\n                'Tracking data from tracker %s, sequence %s cannot be converted to the right format. '\n                'Is data corrupted?' % (tracker, seq))\n"
  },
  {
    "path": "eval/trackeval/datasets/mot_challenge_2d_box.py",
    "content": "import os\nimport csv\nimport configparser\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom .. import _timing\nfrom ..utils import TrackEvalException\n\n\nclass MotChallenge2DBox(_BaseDataset):\n    \"\"\"Dataset class for MOT Challenge 2D bounding box tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']\n            'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'\n            'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'\n            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'DO_PREPROC': True,  # Whether to perform preprocessing (never done for MOT15)\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n            'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)\n            'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)\n            'SEQ_INFO': None,  # 
If not None, directly specify sequences to eval and their number of timesteps\n            'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'\n            'SKIP_SPLIT_FOL': False,  # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in\n                                      # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/\n                                      # If True, then the middle 'benchmark-split' folder is skipped for both.\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n\n        self.benchmark = self.config['BENCHMARK']\n        gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']\n        self.gt_set = gt_set\n        if not self.config['SKIP_SPLIT_FOL']:\n            split_fol = gt_set\n        else:\n            split_fol = ''\n        if self.benchmark == 'MOT16':\n            split_fol = ''\n        self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)\n        self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)\n        self.should_classes_combine = False\n        self.use_super_categories = False\n        self.data_is_zipped = self.config['INPUT_AS_ZIP']\n        self.do_preproc = self.config['DO_PREPROC']\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        # Get classes to eval\n        self.valid_classes = ['pedestrian']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                     
      for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. Only pedestrian class is valid.')\n        self.class_name_to_class_id = {'pedestrian': 1, 'person_on_vehicle': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,\n                                       'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,\n                                       'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}\n\n        # Get sequences to eval and check gt files exist\n        self.seq_list, self.seq_lengths = self._get_seq_info()\n        if len(self.seq_list) < 1:\n            raise TrackEvalException('No sequences are selected to be evaluated.')\n\n        # Check gt files exist\n        for seq in self.seq_list:\n            if not self.data_is_zipped:\n                curr_file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n                if not os.path.isfile(curr_file):\n                    print('GT file not found ' + curr_file)\n                    raise TrackEvalException('GT file not found for sequence: ' + seq)\n        if self.data_is_zipped:\n            curr_file = os.path.join(self.gt_fol, 'data.zip')\n            if not os.path.isfile(curr_file):\n                print('GT file not found ' + curr_file)\n                raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) 
== len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        for tracker in self.tracker_list:\n            if self.data_is_zipped:\n                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n                if not os.path.isfile(curr_file):\n                    print('Tracker file not found: ' + curr_file)\n                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))\n            else:\n                for seq in self.seq_list:\n                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n                    if not os.path.isfile(curr_file):\n                        print('Tracker file not found: ' + curr_file)\n                        raise TrackEvalException(\n                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(\n                                curr_file))\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _get_seq_info(self):\n        seq_list = []\n        seq_lengths = {}\n        if self.config[\"SEQ_INFO\"]:\n            seq_list = list(self.config[\"SEQ_INFO\"].keys())\n            seq_lengths = self.config[\"SEQ_INFO\"]\n\n            # If sequence length is 'None' tries to read sequence length from .ini files.\n            for seq, seq_length in seq_lengths.items():\n                if seq_length is None:\n                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')\n                    if not os.path.isfile(ini_file):\n                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))\n                    ini_data = 
configparser.ConfigParser()\n                    ini_data.read(ini_file)\n                    seq_lengths[seq] = int(ini_data['Sequence']['seqLength'])\n\n        else:\n            if self.config[\"SEQMAP_FILE\"]:\n                seqmap_file = self.config[\"SEQMAP_FILE\"]\n            else:\n                if self.config[\"SEQMAP_FOLDER\"] is None:\n                    seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')\n                else:\n                    seqmap_file = os.path.join(self.config[\"SEQMAP_FOLDER\"], self.gt_set + '.txt')\n            if not os.path.isfile(seqmap_file):\n                print('no seqmap found: ' + seqmap_file)\n                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))\n            with open(seqmap_file) as fp:\n                reader = csv.reader(fp)\n                for i, row in enumerate(reader):\n                    if i == 0 or row[0] == '':\n                        continue\n                    seq = row[0]\n                    seq_list.append(seq)\n                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')\n                    if not os.path.isfile(ini_file):\n                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))\n                    ini_data = configparser.ConfigParser()\n                    ini_data.read(ini_file)\n                    seq_lengths[seq] = int(ini_data['Sequence']['seqLength'])\n        return seq_list, seq_lengths\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the MOT Challenge 2D box format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.\n        [gt_extras] : list (for each timestep) of dicts (for 
each extra) of 1D NDArrays (for each det).\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"
\n        # File location\n        if self.data_is_zipped:\n            if is_gt:\n                zip_file = os.path.join(self.gt_fol, 'data.zip')\n            else:\n                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n            file = seq + '.txt'\n        else:\n            zip_file = None\n            if is_gt:\n                file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n            else:\n                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
\n\n        # Load raw data from text file\n        read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
\n\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq]\n        data_keys = ['ids', 'classes', 'dets']\n        if is_gt:\n            data_keys += ['gt_crowd_ignore_regions', 'gt_extras']\n        else:\n            data_keys += ['tracker_confidences']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}
\n\n        # Check for any extra time keys\n        extra_time_keys = [x for x in read_data.keys() if x not in [str(t+1) for t in range(num_timesteps)]]\n        if len(extra_time_keys) > 0:\n            if is_gt:\n                text = 'Ground-truth'\n            else:\n                text = 'Tracking'\n            raise TrackEvalException(\n                text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(\n                    [str(x) for x in extra_time_keys]))
\n\n        for t in range(num_timesteps):\n            time_key = str(t+1)\n            if time_key in read_data.keys():\n                try:\n                    time_data = np.asarray(read_data[time_key], dtype=float)\n                except ValueError:\n                    if is_gt:\n                        raise TrackEvalException(\n                            'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)\n                    else:\n                        raise TrackEvalException(\n                            'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (\n                                tracker, seq))
\n                try:\n                    raw_data['dets'][t] = np.atleast_2d(time_data[:, 2:6])\n                    raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)\n                except IndexError:\n                    if is_gt:\n                        err = 'Cannot load gt data from sequence %s, because there are not enough ' \\\n                              'columns in the data.' % seq\n                        raise TrackEvalException(err)\n                    else:\n                        err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \\\n                              'columns in the data.' % (tracker, seq)\n                        raise TrackEvalException(err)
\n                if time_data.shape[1] >= 8:\n                    raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)\n                else:\n                    if not is_gt:\n                        raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])\n                    else:\n                        raise TrackEvalException(\n                            'GT data is not in a valid format, there are not enough columns in seq %s, timestep %i.' 
% (\n                                seq, t))\n                if is_gt:\n                    gt_extras_dict = {'zero_marked': np.atleast_1d(time_data[:, 6].astype(int))}\n                    raw_data['gt_extras'][t] = gt_extras_dict\n                else:\n                    raw_data['tracker_confidences'][t] = np.atleast_1d(time_data[:, 6])\n            else:\n                raw_data['dets'][t] = np.empty((0, 4))\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n                if is_gt:\n                    gt_extras_dict = {'zero_marked': np.empty(0)}\n                    raw_data['gt_extras'][t] = gt_extras_dict\n                else:\n                    raw_data['tracker_confidences'][t] = np.empty(0)\n            if is_gt:\n                raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 4))\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data['num_timesteps'] = num_timesteps\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, 
num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.
\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet some\n                    other criterion (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.
\n\n        MOT Challenge:\n            In MOT Challenge, the 4 preproc steps are as follows:\n                1) There is only one class (pedestrian) to be evaluated, but all other classes are used for preproc.\n                2) Predictions are matched against all gt boxes (regardless of class); those matched to distractor\n                    objects are removed.\n                3) There are no crowd ignore regions.\n                4) All gt dets except pedestrians are removed; pedestrian gt dets marked as zero_marked are also removed.\n        \"\"\"
\n        # Check that input data has unique ids\n        self._check_unique_ids(raw_data)\n\n        distractor_class_names = ['person_on_vehicle', 'static_person', 'distractor', 'reflection']\n        if self.benchmark == 'MOT20':\n            distractor_class_names.append('non_mot_vehicle')\n        distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]\n        cls_id = self.class_name_to_class_id[cls]
\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):
\n\n            # Get all data\n            gt_ids = raw_data['gt_ids'][t]\n            gt_dets = raw_data['gt_dets'][t]\n            gt_classes = raw_data['gt_classes'][t]\n            gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
\n\n            tracker_ids = raw_data['tracker_ids'][t]\n            tracker_dets = raw_data['tracker_dets'][t]\n            tracker_classes = raw_data['tracker_classes'][t]\n            tracker_confidences = raw_data['tracker_confidences'][t]\n            similarity_scores = raw_data['similarity_scores'][t]
\n\n            # Evaluation is ONLY valid for pedestrian class\n            if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:\n                raise TrackEvalException(\n                    'Evaluation is only valid for pedestrian class. Non pedestrian class (%i) found in sequence %s at '\n                    'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
\n\n            # Match tracker and gt dets (with hungarian algorithm) and remove tracker dets which match with gt dets\n            # which are labeled as belonging to a distractor class.\n            to_remove_tracker = np.array([], int)\n            if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n                match_rows = match_rows[actually_matched_mask]\n                match_cols = match_cols[actually_matched_mask]\n\n                is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)\n                to_remove_tracker = match_cols[is_distractor_class]
\n\n            # Apply preprocessing to remove all unwanted tracker dets.\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
\n\n            # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in pedestrian\n            # class (not applicable for MOT15)\n            if self.benchmark != 'MOT15':\n                gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \\\n                                  (np.equal(gt_classes, cls_id))\n            else:\n                # There are no classes for MOT15\n                gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)\n            data['gt_ids'][t] = gt_ids[gt_to_keep_mask]\n            data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]\n            data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])
\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = 
len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n\n        # Ensure again that ids are unique per timestep after preproc.\n        self._check_unique_ids(data, after_preproc=True)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t, box_format='xywh')\n        return similarity_scores\n"
  },
  {
    "path": "eval/trackeval/datasets/mots_challenge.py",
    "content": "import os\nimport csv\nimport configparser\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom .. import _timing\nfrom ..utils import TrackEvalException\n\n\nclass MOTSChallenge(_BaseDataset):\n    \"\"\"Dataset class for MOTS Challenge tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': ['pedestrian'],  # Valid: ['pedestrian']\n            'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test'\n            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n            'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)\n            'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/MOTS-split_to_eval)\n            'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps\n            'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'\n            'SKIP_SPLIT_FOL': 
False,  # If False, data is in GT_FOLDER/MOTS-SPLIT_TO_EVAL/ and in\n                                      # TRACKERS_FOLDER/MOTS-SPLIT_TO_EVAL/tracker/\n                                      # If True, then the middle 'MOTS-split' folder is skipped for both.\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n\n        self.benchmark = 'MOTS'\n        self.gt_set = self.benchmark + '-' + self.config['SPLIT_TO_EVAL']\n        if not self.config['SKIP_SPLIT_FOL']:\n            split_fol = self.gt_set\n        else:\n            split_fol = ''\n        self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)\n        self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)\n        self.should_classes_combine = False\n        self.use_super_categories = False\n        self.data_is_zipped = self.config['INPUT_AS_ZIP']\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        # Get classes to eval\n        self.valid_classes = ['pedestrian']\n        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                           for cls in self.config['CLASSES_TO_EVAL']]\n        if not all(self.class_list):\n            raise TrackEvalException('Attempted to evaluate an invalid class. 
Only pedestrian class is valid.')\n        self.class_name_to_class_id = {'pedestrian': '2', 'ignore': '10'}\n\n        # Get sequences to eval and check gt files exist\n        self.seq_list, self.seq_lengths = self._get_seq_info()\n        if len(self.seq_list) < 1:\n            raise TrackEvalException('No sequences are selected to be evaluated.')\n\n        # Check gt files exist\n        for seq in self.seq_list:\n            if not self.data_is_zipped:\n                curr_file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n                if not os.path.isfile(curr_file):\n                    print('GT file not found ' + curr_file)\n                    raise TrackEvalException('GT file not found for sequence: ' + seq)\n        if self.data_is_zipped:\n            curr_file = os.path.join(self.gt_fol, 'data.zip')\n            if not os.path.isfile(curr_file):\n                print('GT file not found ' + curr_file)\n                raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        for tracker in self.tracker_list:\n            if self.data_is_zipped:\n                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n             
   if not os.path.isfile(curr_file):\n                    print('Tracker file not found: ' + curr_file)\n                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))\n            else:\n                for seq in self.seq_list:\n                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n                    if not os.path.isfile(curr_file):\n                        print('Tracker file not found: ' + curr_file)\n                        raise TrackEvalException(\n                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(\n                                curr_file))\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _get_seq_info(self):\n        seq_list = []\n        seq_lengths = {}\n        if self.config[\"SEQ_INFO\"]:\n            seq_list = list(self.config[\"SEQ_INFO\"].keys())\n            seq_lengths = self.config[\"SEQ_INFO\"]\n\n            # If sequence length is 'None' tries to read sequence length from .ini files.\n            for seq, seq_length in seq_lengths.items():\n                if seq_length is None:\n                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')\n                    if not os.path.isfile(ini_file):\n                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))\n                    ini_data = configparser.ConfigParser()\n                    ini_data.read(ini_file)\n                    seq_lengths[seq] = int(ini_data['Sequence']['seqLength'])\n\n        else:\n            if self.config[\"SEQMAP_FILE\"]:\n                seqmap_file = self.config[\"SEQMAP_FILE\"]\n            else:\n                if self.config[\"SEQMAP_FOLDER\"] is None:\n                    seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + 
'.txt')\n                else:\n                    seqmap_file = os.path.join(self.config[\"SEQMAP_FOLDER\"], self.gt_set + '.txt')\n            if not os.path.isfile(seqmap_file):\n                print('no seqmap found: ' + seqmap_file)\n                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))\n            with open(seqmap_file) as fp:\n                reader = csv.reader(fp)\n                for i, row in enumerate(reader):\n                    if i == 0 or row[0] == '':\n                        continue\n                    seq = row[0]\n                    seq_list.append(seq)\n                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')\n                    if not os.path.isfile(ini_file):\n                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))\n                    ini_data = configparser.ConfigParser()\n                    ini_data.read(ini_file)\n                    seq_lengths[seq] = int(ini_data['Sequence']['seqLength'])\n        return seq_list, seq_lengths\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the MOTS Challenge format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets]: list (for each timestep) of lists of detections.\n        [gt_ignore_region]: list (for each timestep) of masks for the ignore regions\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        # File location\n        if self.data_is_zipped:\n            if is_gt:\n 
               zip_file = os.path.join(self.gt_fol, 'data.zip')\n            else:\n                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')\n            file = seq + '.txt'\n        else:\n            zip_file = None\n            if is_gt:\n                file = self.config[\"GT_LOC_FORMAT\"].format(gt_folder=self.gt_fol, seq=seq)\n            else:\n                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')\n\n        # Ignore regions\n        if is_gt:\n            crowd_ignore_filter = {2: ['10']}\n        else:\n            crowd_ignore_filter = None\n\n        # Load raw data from text file\n        read_data, ignore_data = self._load_simple_text_file(file, crowd_ignore_filter=crowd_ignore_filter,\n                                                             is_zipped=self.data_is_zipped, zip_file=zip_file,\n                                                             force_delimiters=' ')\n\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq]\n        data_keys = ['ids', 'classes', 'dets']\n        if is_gt:\n            data_keys += ['gt_ignore_region']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n\n        # Check for any extra time keys\n        extra_time_keys = [x for x in read_data.keys() if x not in [str(t+1) for t in range(num_timesteps)]]\n        if len(extra_time_keys) > 0:\n            if is_gt:\n                text = 'Ground-truth'\n            else:\n                text = 'Tracking'\n            raise TrackEvalException(\n                text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(\n                    [str(x) for x in extra_time_keys]))\n\n        for t in range(num_timesteps):\n            time_key = str(t+1)\n            # list to collect all masks of a timestep to check for overlapping areas\n            all_masks = []\n            if time_key in 
read_data.keys():\n                try:\n                    raw_data['dets'][t] = [{'size': [int(region[3]), int(region[4])],\n                                            'counts': region[5].encode(encoding='UTF-8')}\n                                           for region in read_data[time_key]]\n                    raw_data['ids'][t] = np.atleast_1d([region[1] for region in read_data[time_key]]).astype(int)\n                    raw_data['classes'][t] = np.atleast_1d([region[2] for region in read_data[time_key]]).astype(int)\n                    all_masks += raw_data['dets'][t]\n                except IndexError:\n                    self._raise_index_error(is_gt, tracker, seq)\n                except ValueError:\n                    self._raise_value_error(is_gt, tracker, seq)\n            else:\n                raw_data['dets'][t] = []\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n            if is_gt:\n                if time_key in ignore_data.keys():\n                    try:\n                        time_ignore = [{'size': [int(region[3]), int(region[4])],\n                                        'counts': region[5].encode(encoding='UTF-8')}\n                                       for region in ignore_data[time_key]]\n                        raw_data['gt_ignore_region'][t] = mask_utils.merge([mask for mask in time_ignore],\n                                                                           intersect=False)\n                        all_masks += [raw_data['gt_ignore_region'][t]]\n                    except IndexError:\n                        self._raise_index_error(is_gt, tracker, seq)\n                    except ValueError:\n                        self._raise_value_error(is_gt, tracker, seq)\n                else:\n                    raw_data['gt_ignore_region'][t] = mask_utils.merge([], intersect=False)\n\n            # check for overlapping masks\n            if 
all_masks:\n                masks_merged = all_masks[0]\n                for mask in all_masks[1:]:\n                    if mask_utils.area(mask_utils.merge([masks_merged, mask], intersect=True)) != 0.0:\n                        raise TrackEvalException(\n                            'Tracker has overlapping masks. Tracker: ' + tracker + ' Seq: ' + seq + ' Timestep: ' + str(\n                                t))\n                    masks_merged = mask_utils.merge([masks_merged, mask], intersect=False)\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n        raw_data['num_timesteps'] = num_timesteps\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detection masks.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 
4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n\n        MOTS Challenge:\n            In MOTS Challenge, the 4 preproc steps are as follows:\n                1) There is only one class (pedestrians) to be evaluated.\n                2) There are no ground truth detections marked as to be removed/distractor classes.\n                    Therefore, no matched tracker detections are removed either.\n                3) Ignore regions are used to remove unmatched detections (at least 50% overlap with ignore region).\n                4) There are no ground truth detections (e.g. 
those of distractor classes) to be removed.\n        \"\"\"\n        # Check that input data has unique ids\n        self._check_unique_ids(raw_data)\n\n        cls_id = int(self.class_name_to_class_id[cls])\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for preproc and eval (cls)\n            gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = [raw_data['gt_dets'][t][ind] for ind in range(len(gt_class_mask)) if gt_class_mask[ind]]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = [raw_data['tracker_dets'][t][ind] for ind in range(len(tracker_class_mask)) if\n                            tracker_class_mask[ind]]\n            similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            # Match tracker and gt dets (with Hungarian algorithm)\n            unmatched_indices = np.arange(tracker_ids.shape[0])\n            if gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = -10000\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n       
         match_cols = match_cols[actually_matched_mask]\n\n                unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0)\n\n            # For unmatched tracker dets, remove those that are greater than 50% within a crowd ignore region.\n            unmatched_tracker_dets = [tracker_dets[i] for i in range(len(tracker_dets)) if i in unmatched_indices]\n            ignore_region = raw_data['gt_ignore_region'][t]\n            intersection_with_ignore_region = self._calculate_mask_ious(unmatched_tracker_dets, [ignore_region],\n                                                                        is_encoded=True, do_ioa=True)\n            is_within_ignore_region = np.any(intersection_with_ignore_region > 0.5 + np.finfo('float').eps, axis=1)\n\n            # Apply preprocessing to remove unwanted tracker dets.\n            to_remove_tracker = unmatched_indices[is_within_ignore_region]\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)\n\n            # Keep all ground truth detections\n            data['gt_ids'][t] = gt_ids\n            data['gt_dets'][t] = gt_dets\n            data['similarity_scores'][t] = similarity_scores\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if 
len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n\n        # Ensure again that ids are unique per timestep after preproc.\n        self._check_unique_ids(data, after_preproc=True)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False)\n        return similarity_scores\n\n    @staticmethod\n    def _raise_index_error(is_gt, tracker, seq):\n        \"\"\"\n        Auxiliary method to raise an evaluation error in case of an index error while reading files.\n        :param is_gt: whether gt or tracker data is read\n        :param tracker: the name of the tracker\n        :param seq: the name of the seq\n        :return: None\n        \"\"\"\n        if is_gt:\n            err = 'Cannot load gt data from sequence %s, because there are not enough ' \\\n                  'columns in the data.' 
% seq\n            raise TrackEvalException(err)\n        else:\n            err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \\\n                  'columns in the data.' % (tracker, seq)\n            raise TrackEvalException(err)\n\n    @staticmethod\n    def _raise_value_error(is_gt, tracker, seq):\n        \"\"\"\n        Auxiliary method to raise an evaluation error in case of a value error while reading files.\n        :param is_gt: whether gt or tracker data is read\n        :param tracker: the name of the tracker\n        :param seq: the name of the seq\n        :return: None\n        \"\"\"\n        if is_gt:\n            raise TrackEvalException(\n                'GT data for sequence %s cannot be converted to the right format. Is data corrupted?' % seq)\n        else:\n            raise TrackEvalException(\n                'Tracking data from tracker %s, sequence %s cannot be converted to the right format. '\n                'Is data corrupted?' % (tracker, seq))\n"
  },
  {
    "path": "eval/trackeval/datasets/tao.py",
    "content": "import os\nimport numpy as np\nimport json\nimport itertools\nfrom collections import defaultdict\nfrom scipy.optimize import linear_sum_assignment\nfrom ..utils import TrackEvalException\nfrom ._base_dataset import _BaseDataset\nfrom .. import utils\nfrom .. import _timing\n\n\nclass TAO(_BaseDataset):\n    \"\"\"Dataset class for TAO tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/tao/tao_training'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/tao/tao_training'),  # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': None,  # Classes to eval (if None, all classes)\n            'SPLIT_TO_EVAL': 'training',  # Valid: 'training', 'val'\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n            'MAX_DETECTIONS': 300,  # Number of maximal allowed detections per image (0 for unlimited)\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n        self.gt_fol = 
self.config['GT_FOLDER']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n        self.should_classes_combine = True\n        self.use_super_categories = False\n\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n\n        gt_dir_files = [file for file in os.listdir(self.gt_fol) if file.endswith('.json')]\n        if len(gt_dir_files) != 1:\n            raise TrackEvalException(self.gt_fol + ' does not contain exactly one json file.')\n\n        with open(os.path.join(self.gt_fol, gt_dir_files[0])) as f:\n            self.gt_data = json.load(f)\n\n        # merge categories marked with a merged tag in TAO dataset\n        self._merge_categories(self.gt_data['annotations'] + self.gt_data['tracks'])\n\n        # Get sequences to eval and sequence information\n        self.seq_list = [vid['name'].replace('/', '-') for vid in self.gt_data['videos']]\n        self.seq_name_to_seq_id = {vid['name'].replace('/', '-'): vid['id'] for vid in self.gt_data['videos']}\n        # compute mappings from videos to annotation data\n        self.videos_to_gt_tracks, self.videos_to_gt_images = self._compute_vid_mappings(self.gt_data['annotations'])\n        # compute sequence lengths\n        self.seq_lengths = {vid['id']: 0 for vid in self.gt_data['videos']}\n        for img in self.gt_data['images']:\n            self.seq_lengths[img['video_id']] += 1\n        self.seq_to_images_to_timestep = self._compute_image_to_timestep_mappings()\n        self.seq_to_classes = {vid['id']: {'pos_cat_ids': list({track['category_id'] for track\n                                                                in self.videos_to_gt_tracks[vid['id']]}),\n                                           'neg_cat_ids': vid['neg_category_ids'],\n                                       
    'not_exhaustively_labeled_cat_ids': vid['not_exhaustive_category_ids']}\n                               for vid in self.gt_data['videos']}\n\n        # Get classes to eval\n        considered_vid_ids = [self.seq_name_to_seq_id[vid] for vid in self.seq_list]\n        seen_cats = set([cat_id for vid_id in considered_vid_ids for cat_id\n                         in self.seq_to_classes[vid_id]['pos_cat_ids']])\n        # only classes with ground truth are evaluated in TAO\n        self.valid_classes = [cls['name'] for cls in self.gt_data['categories'] if cls['id'] in seen_cats]\n        cls_name_to_cls_id_map = {cls['name']: cls['id'] for cls in self.gt_data['categories']}\n\n        if self.config['CLASSES_TO_EVAL']:\n            self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                               for cls in self.config['CLASSES_TO_EVAL']]\n            if not all(self.class_list):\n                raise TrackEvalException('Attempted to evaluate an invalid class. 
Only classes ' +\n                                         ', '.join(self.valid_classes) +\n                                         ' are valid (classes present in ground truth data).')\n        else:\n            self.class_list = [cls for cls in self.valid_classes]\n        self.class_name_to_class_id = {k: v for k, v in cls_name_to_cls_id_map.items() if k in self.class_list}\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        self.tracker_data = {tracker: dict() for tracker in self.tracker_list}\n\n        for tracker in self.tracker_list:\n            tr_dir_files = [file for file in os.listdir(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol))\n                            if file.endswith('.json')]\n            if len(tr_dir_files) != 1:\n                raise TrackEvalException(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol)\n                                         + ' does not contain exactly one json file.')\n            with open(os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, tr_dir_files[0])) as f:\n                curr_data = json.load(f)\n\n            # limit detections if MAX_DETECTIONS > 0\n            if self.config['MAX_DETECTIONS']:\n                curr_data = self._limit_dets_per_image(curr_data)\n\n            
# fill missing video ids\n            self._fill_video_ids_inplace(curr_data)\n\n            # make track ids unique over whole evaluation set\n            self._make_track_ids_unique(curr_data)\n\n            # merge categories marked with a merged tag in TAO dataset\n            self._merge_categories(curr_data)\n\n            # get tracker sequence information\n            curr_videos_to_tracker_tracks, curr_videos_to_tracker_images = self._compute_vid_mappings(curr_data)\n            self.tracker_data[tracker]['vids_to_tracks'] = curr_videos_to_tracker_tracks\n            self.tracker_data[tracker]['vids_to_images'] = curr_videos_to_tracker_images\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the TAO format\n\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets]: list (for each timestep) of lists of detections.\n        [classes_to_gt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as\n                                keys and corresponding segmentations as values) for each track\n        [classes_to_gt_track_ids, classes_to_gt_track_areas, classes_to_gt_track_lengths]: dictionary with class values\n                                as keys and lists (for each track) as values\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        [classes_to_dt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as\n                                keys and corresponding segmentations as values) for each track\n        
[classes_to_dt_track_ids, classes_to_dt_track_areas, classes_to_dt_track_lengths]: dictionary with class values\n                                                                                           as keys and lists as values\n        [classes_to_dt_track_scores]: dictionary with class values as keys and 1D numpy arrays as values\n        \"\"\"\n        seq_id = self.seq_name_to_seq_id[seq]\n        # File location\n        if is_gt:\n            imgs = self.videos_to_gt_images[seq_id]\n        else:\n            imgs = self.tracker_data[tracker]['vids_to_images'][seq_id]\n\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq_id]\n        img_to_timestep = self.seq_to_images_to_timestep[seq_id]\n        data_keys = ['ids', 'classes', 'dets']\n        if not is_gt:\n            data_keys += ['tracker_confidences']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n        for img in imgs:\n            # some tracker data contains images without any ground truth information, these are ignored\n            try:\n                t = img_to_timestep[img['id']]\n            except KeyError:\n                continue\n            annotations = img['annotations']\n            raw_data['dets'][t] = np.atleast_2d([ann['bbox'] for ann in annotations]).astype(float)\n            raw_data['ids'][t] = np.atleast_1d([ann['track_id'] for ann in annotations]).astype(int)\n            raw_data['classes'][t] = np.atleast_1d([ann['category_id'] for ann in annotations]).astype(int)\n            if not is_gt:\n                raw_data['tracker_confidences'][t] = np.atleast_1d([ann['score'] for ann in annotations]).astype(float)\n\n        for t, d in enumerate(raw_data['dets']):\n            if d is None:\n                raw_data['dets'][t] = np.empty((0, 4)).astype(float)\n                raw_data['ids'][t] = np.empty(0).astype(int)\n                raw_data['classes'][t] = np.empty(0).astype(int)\n                if not 
is_gt:\n                    raw_data['tracker_confidences'][t] = np.empty(0)\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n\n        all_classes = [self.class_name_to_class_id[cls] for cls in self.class_list]\n        if is_gt:\n            classes_to_consider = all_classes\n            all_tracks = self.videos_to_gt_tracks[seq_id]\n        else:\n            classes_to_consider = self.seq_to_classes[seq_id]['pos_cat_ids'] \\\n                                  + self.seq_to_classes[seq_id]['neg_cat_ids']\n            all_tracks = self.tracker_data[tracker]['vids_to_tracks'][seq_id]\n\n        classes_to_tracks = {cls: [track for track in all_tracks if track['category_id'] == cls]\n                             if cls in classes_to_consider else [] for cls in all_classes}\n\n        # mapping from classes to track information\n        raw_data['classes_to_tracks'] = {cls: [{det['image_id']: np.atleast_1d(det['bbox'])\n                                                for det in track['annotations']} for track in tracks]\n                                         for cls, tracks in classes_to_tracks.items()}\n        raw_data['classes_to_track_ids'] = {cls: [track['id'] for track in tracks]\n                                            for cls, tracks in classes_to_tracks.items()}\n        raw_data['classes_to_track_areas'] = {cls: [track['area'] for track in tracks]\n                                              for cls, tracks in classes_to_tracks.items()}\n        raw_data['classes_to_track_lengths'] = {cls: [len(track['annotations']) for track in tracks]\n                                                for cls, tracks in 
classes_to_tracks.items()}\n\n        if not is_gt:\n            raw_data['classes_to_dt_track_scores'] = {cls: np.array([np.mean([float(x['score'])\n                                                                              for x in track['annotations']])\n                                                                     for track in tracks])\n                                                      for cls, tracks in classes_to_tracks.items()}\n\n        if is_gt:\n            key_map = {'classes_to_tracks': 'classes_to_gt_tracks',\n                       'classes_to_track_ids': 'classes_to_gt_track_ids',\n                       'classes_to_track_lengths': 'classes_to_gt_track_lengths',\n                       'classes_to_track_areas': 'classes_to_gt_track_areas'}\n        else:\n            key_map = {'classes_to_tracks': 'classes_to_dt_tracks',\n                       'classes_to_track_ids': 'classes_to_dt_track_ids',\n                       'classes_to_track_lengths': 'classes_to_dt_track_lengths',\n                       'classes_to_track_areas': 'classes_to_dt_track_areas'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n\n        raw_data['num_timesteps'] = num_timesteps\n        raw_data['neg_cat_ids'] = self.seq_to_classes[seq_id]['neg_cat_ids']\n        raw_data['not_exhaustively_labeled_cls'] = self.seq_to_classes[seq_id]['not_exhaustively_labeled_cat_ids']\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n             - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the 
following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n        TAO:\n            In TAO, the 4 preproc steps are as follows:\n                1) All classes present in the ground truth data are evaluated separately.\n                2) No matched tracker detections are removed.\n                3) Unmatched tracker detections are removed if there is no ground truth data and the class does not\n                    belong to the categories marked as negative for this sequence. 
Additionally, unmatched tracker\n                    detections for classes which are marked as not exhaustively labeled are removed.\n                4) No gt detections are removed.\n            Further, for TrackMAP computation, track representations for the given class are accessed from a dictionary\n            and the tracks from the tracker data are sorted according to the tracker confidence.\n        \"\"\"\n        cls_id = self.class_name_to_class_id[cls]\n        is_not_exhaustively_labeled = cls_id in raw_data['not_exhaustively_labeled_cls']\n        is_neg_category = cls_id in raw_data['neg_cat_ids']\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for preproc and eval (cls)\n            gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = raw_data['gt_dets'][t][gt_class_mask]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = raw_data['tracker_dets'][t][tracker_class_mask]\n            tracker_confidences = raw_data['tracker_confidences'][t][tracker_class_mask]\n            similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            # Match tracker and gt dets (with the Hungarian algorithm).\n            unmatched_indices = np.arange(tracker_ids.shape[0])\n            if 
gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:\n                matching_scores = similarity_scores.copy()\n                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0\n                match_rows, match_cols = linear_sum_assignment(-matching_scores)\n                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps\n                match_cols = match_cols[actually_matched_mask]\n                unmatched_indices = np.delete(unmatched_indices, match_cols, axis=0)\n\n            if gt_ids.shape[0] == 0 and not is_neg_category:\n                to_remove_tracker = unmatched_indices\n            elif is_not_exhaustively_labeled:\n                to_remove_tracker = unmatched_indices\n            else:\n                to_remove_tracker = np.array([], dtype=int)\n\n            # remove all unwanted unmatched tracker detections\n            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)\n            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)\n            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)\n            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)\n\n            data['gt_ids'][t] = gt_ids\n            data['gt_dets'][t] = gt_dets\n            data['similarity_scores'][t] = similarity_scores\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in 
range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n\n        # get track representations\n        data['gt_tracks'] = raw_data['classes_to_gt_tracks'][cls_id]\n        data['gt_track_ids'] = raw_data['classes_to_gt_track_ids'][cls_id]\n        data['gt_track_lengths'] = raw_data['classes_to_gt_track_lengths'][cls_id]\n        data['gt_track_areas'] = raw_data['classes_to_gt_track_areas'][cls_id]\n        data['dt_tracks'] = raw_data['classes_to_dt_tracks'][cls_id]\n        data['dt_track_ids'] = raw_data['classes_to_dt_track_ids'][cls_id]\n        data['dt_track_lengths'] = raw_data['classes_to_dt_track_lengths'][cls_id]\n        data['dt_track_areas'] = raw_data['classes_to_dt_track_areas'][cls_id]\n        data['dt_track_scores'] = raw_data['classes_to_dt_track_scores'][cls_id]\n        data['not_exhaustively_labeled'] = is_not_exhaustively_labeled\n        data['iou_type'] = 'bbox'\n\n        # sort tracker data tracks by tracker confidence scores\n        if data['dt_tracks']:\n            idx = np.argsort([-score for score in 
data['dt_track_scores']], kind=\"mergesort\")\n            data['dt_track_scores'] = [data['dt_track_scores'][i] for i in idx]\n            data['dt_tracks'] = [data['dt_tracks'][i] for i in idx]\n            data['dt_track_ids'] = [data['dt_track_ids'][i] for i in idx]\n            data['dt_track_lengths'] = [data['dt_track_lengths'][i] for i in idx]\n            data['dt_track_areas'] = [data['dt_track_areas'][i] for i in idx]\n        # Ensure that ids are unique per timestep.\n        self._check_unique_ids(data)\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t)\n        return similarity_scores\n\n    def _merge_categories(self, annotations):\n        \"\"\"\n        Merges categories with a merged tag. Adapted from https://github.com/TAO-Dataset\n        :param annotations: the annotations in which the classes should be merged\n        :return: None\n        \"\"\"\n        merge_map = {}\n        for category in self.gt_data['categories']:\n            if 'merged' in category:\n                for to_merge in category['merged']:\n                    merge_map[to_merge['id']] = category['id']\n\n        for ann in annotations:\n            ann['category_id'] = merge_map.get(ann['category_id'], ann['category_id'])\n\n    def _compute_vid_mappings(self, annotations):\n        \"\"\"\n        Computes mappings from videos to corresponding tracks and images.\n        :param annotations: the annotations for which the mapping should be generated\n        :return: the video-to-track-mapping, the video-to-image-mapping\n        \"\"\"\n        vids_to_tracks = {}\n        vids_to_imgs = {}\n        vid_ids = [vid['id'] for vid in self.gt_data['videos']]\n\n        # compute a mapping from image IDs to images\n        images = {}\n        for image in self.gt_data['images']:\n            images[image['id']] = image\n\n        for ann in 
annotations:\n            ann[\"area\"] = ann[\"bbox\"][2] * ann[\"bbox\"][3]\n\n            vid = ann[\"video_id\"]\n            if ann[\"video_id\"] not in vids_to_tracks.keys():\n                vids_to_tracks[ann[\"video_id\"]] = list()\n            if ann[\"video_id\"] not in vids_to_imgs.keys():\n                vids_to_imgs[ann[\"video_id\"]] = list()\n\n            # Fill in vids_to_tracks\n            tid = ann[\"track_id\"]\n            exist_tids = [track[\"id\"] for track in vids_to_tracks[vid]]\n            try:\n                index1 = exist_tids.index(tid)\n            except ValueError:\n                index1 = -1\n            if index1 == -1:\n                curr_track = {\"id\": tid, \"category_id\": ann['category_id'],\n                              \"video_id\": vid, \"annotations\": [ann]}\n                vids_to_tracks[vid].append(curr_track)\n            else:\n                vids_to_tracks[vid][index1][\"annotations\"].append(ann)\n\n            # Fill in vids_to_imgs\n            img_id = ann['image_id']\n            exist_img_ids = [img[\"id\"] for img in vids_to_imgs[vid]]\n            try:\n                index2 = exist_img_ids.index(img_id)\n            except ValueError:\n                index2 = -1\n            if index2 == -1:\n                curr_img = {\"id\": img_id, \"annotations\": [ann]}\n                vids_to_imgs[vid].append(curr_img)\n            else:\n                vids_to_imgs[vid][index2][\"annotations\"].append(ann)\n\n        # sort annotations by frame index and compute track area\n        for vid, tracks in vids_to_tracks.items():\n            for track in tracks:\n                track[\"annotations\"] = sorted(\n                    track['annotations'],\n                    key=lambda x: images[x['image_id']]['frame_index'])\n                # Compute average area\n                track[\"area\"] = (sum(x['area'] for x in track['annotations']) / len(track['annotations']))\n\n        # Ensure 
all videos are present\n        for vid_id in vid_ids:\n            if vid_id not in vids_to_tracks.keys():\n                vids_to_tracks[vid_id] = []\n            if vid_id not in vids_to_imgs.keys():\n                vids_to_imgs[vid_id] = []\n\n        return vids_to_tracks, vids_to_imgs\n\n    def _compute_image_to_timestep_mappings(self):\n        \"\"\"\n        Computes a mapping from images to the corresponding timestep in the sequence.\n        :return: the image-to-timestep-mapping\n        \"\"\"\n        images = {}\n        for image in self.gt_data['images']:\n            images[image['id']] = image\n\n        seq_to_imgs_to_timestep = {vid['id']: dict() for vid in self.gt_data['videos']}\n        for vid in seq_to_imgs_to_timestep:\n            curr_imgs = [img['id'] for img in self.videos_to_gt_images[vid]]\n            curr_imgs = sorted(curr_imgs, key=lambda x: images[x]['frame_index'])\n            seq_to_imgs_to_timestep[vid] = {curr_imgs[i]: i for i in range(len(curr_imgs))}\n\n        return seq_to_imgs_to_timestep\n\n    def _limit_dets_per_image(self, annotations):\n        \"\"\"\n        Limits the number of detections for each image to config['MAX_DETECTIONS']. 
Adapted from\n        https://github.com/TAO-Dataset/\n        :param annotations: the annotations in which the detections should be limited\n        :return: the annotations with limited detections\n        \"\"\"\n        max_dets = self.config['MAX_DETECTIONS']\n        img_ann = defaultdict(list)\n        for ann in annotations:\n            img_ann[ann[\"image_id\"]].append(ann)\n\n        for img_id, _anns in img_ann.items():\n            if len(_anns) <= max_dets:\n                continue\n            _anns = sorted(_anns, key=lambda x: x[\"score\"], reverse=True)\n            img_ann[img_id] = _anns[:max_dets]\n\n        return [ann for anns in img_ann.values() for ann in anns]\n\n    def _fill_video_ids_inplace(self, annotations):\n        \"\"\"\n        Fills in missing video IDs in place. Adapted from https://github.com/TAO-Dataset/\n        :param annotations: the annotations for which the video IDs should be filled in place\n        :return: None\n        \"\"\"\n        missing_video_id = [x for x in annotations if 'video_id' not in x]\n        if missing_video_id:\n            image_id_to_video_id = {\n                x['id']: x['video_id'] for x in self.gt_data['images']\n            }\n            for x in missing_video_id:\n                x['video_id'] = image_id_to_video_id[x['image_id']]\n\n    @staticmethod\n    def _make_track_ids_unique(annotations):\n        \"\"\"\n        Makes the track IDs unique over the whole annotation set. 
Adapted from https://github.com/TAO-Dataset/\n        :param annotations: the annotation set\n        :return: the number of updated IDs\n        \"\"\"\n        track_id_videos = {}\n        track_ids_to_update = set()\n        max_track_id = 0\n        for ann in annotations:\n            t = ann['track_id']\n            if t not in track_id_videos:\n                track_id_videos[t] = ann['video_id']\n\n            if ann['video_id'] != track_id_videos[t]:\n                # Track id is assigned to multiple videos\n                track_ids_to_update.add(t)\n            max_track_id = max(max_track_id, t)\n\n        if track_ids_to_update:\n            next_id = itertools.count(max_track_id + 1)\n            new_track_ids = defaultdict(lambda: next(next_id))\n            for ann in annotations:\n                t = ann['track_id']\n                v = ann['video_id']\n                if t in track_ids_to_update:\n                    ann['track_id'] = new_track_ids[t, v]\n        return len(track_ids_to_update)\n"
  },
  {
    "path": "eval/trackeval/datasets/youtube_vis.py",
    "content": "import os\nimport numpy as np\nimport json\nfrom ._base_dataset import _BaseDataset\nfrom ..utils import TrackEvalException\nfrom .. import utils\nfrom .. import _timing\n\n\nclass YouTubeVIS(_BaseDataset):\n    \"\"\"Dataset class for YouTubeVIS tracking\"\"\"\n\n    @staticmethod\n    def get_default_dataset_config():\n        \"\"\"Default class config values\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'GT_FOLDER': os.path.join(code_path, 'data/gt/youtube_vis/'),  # Location of GT data\n            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/youtube_vis/'),\n            # Trackers location\n            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n            'CLASSES_TO_EVAL': None,  # Classes to eval (if None, all classes)\n            'SPLIT_TO_EVAL': 'train_sub_split',  # Valid: 'train', 'val', 'train_sub_split'\n            'PRINT_CONFIG': True,  # Whether to print current config\n            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER\n            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER\n            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise dataset, checking that all required files are present\"\"\"\n        super().__init__()\n        # Fill non-given config values with defaults\n        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())\n        self.gt_fol = self.config['GT_FOLDER'] + 'youtube_vis_' + self.config['SPLIT_TO_EVAL']\n        self.tracker_fol = self.config['TRACKERS_FOLDER']\n        self.use_super_categories = False\n    
    self.should_classes_combine = True\n\n        self.output_fol = self.config['OUTPUT_FOLDER']\n        if self.output_fol is None:\n            self.output_fol = self.tracker_fol\n        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']\n        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']\n\n        if not os.path.exists(self.gt_fol):\n            print(\"GT folder not found: \" + self.gt_fol)\n            raise TrackEvalException(\"GT folder not found: \" + os.path.basename(self.gt_fol))\n        gt_dir_files = [file for file in os.listdir(self.gt_fol) if file.endswith('.json')]\n        if len(gt_dir_files) != 1:\n            raise TrackEvalException(self.gt_fol + ' does not contain exactly one json file.')\n\n        with open(os.path.join(self.gt_fol, gt_dir_files[0])) as f:\n            self.gt_data = json.load(f)\n\n        # Get classes to eval\n        self.valid_classes = [cls['name'] for cls in self.gt_data['categories']]\n        cls_name_to_cls_id_map = {cls['name']: cls['id'] for cls in self.gt_data['categories']}\n\n        if self.config['CLASSES_TO_EVAL']:\n            self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None\n                               for cls in self.config['CLASSES_TO_EVAL']]\n            if not all(self.class_list):\n                raise TrackEvalException('Attempted to evaluate an invalid class. 
Only classes ' +\n                                         ', '.join(self.valid_classes) + ' are valid.')\n        else:\n            self.class_list = [cls['name'] for cls in self.gt_data['categories']]\n        self.class_name_to_class_id = {k: v for k, v in cls_name_to_cls_id_map.items() if k in self.class_list}\n\n        # Get sequences to eval and check gt files exist\n        self.seq_list = [vid['file_names'][0].split('/')[0] for vid in self.gt_data['videos']]\n        self.seq_name_to_seq_id = {vid['file_names'][0].split('/')[0]: vid['id'] for vid in self.gt_data['videos']}\n        self.seq_lengths = {vid['id']: len(vid['file_names']) for vid in self.gt_data['videos']}\n\n        # encode masks and compute track areas\n        self._prepare_gt_annotations()\n\n        # Get trackers to eval\n        if self.config['TRACKERS_TO_EVAL'] is None:\n            self.tracker_list = os.listdir(self.tracker_fol)\n        else:\n            self.tracker_list = self.config['TRACKERS_TO_EVAL']\n\n        if self.config['TRACKER_DISPLAY_NAMES'] is None:\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))\n        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (\n                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):\n            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))\n        else:\n            raise TrackEvalException('List of tracker files and tracker display names do not match.')\n\n        # counter for globally unique track IDs\n        self.global_tid_counter = 0\n\n        self.tracker_data = dict()\n        for tracker in self.tracker_list:\n            tracker_dir_path = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol)\n            # tracker results are expected in a single file named <tracker>.json\n            tr_dir_files = [tracker + '.json']\n            if len(tr_dir_files) != 1:\n                
raise TrackEvalException(tracker_dir_path + ' does not contain exactly one json file.')\n\n            with open(os.path.join(tracker_dir_path, '..', tr_dir_files[0])) as f:\n                curr_data = json.load(f)\n\n            self.tracker_data[tracker] = curr_data\n\n    def get_display_name(self, tracker):\n        return self.tracker_to_disp[tracker]\n\n    def _load_raw_file(self, tracker, seq, is_gt):\n        \"\"\"Load a file (gt or tracker) in the YouTubeVIS format\n        If is_gt, this returns a dict which contains the fields:\n        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).\n        [gt_dets]: list (for each timestep) of lists of detections.\n        [classes_to_gt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as\n                                keys and corresponding segmentations as values) for each track\n        [classes_to_gt_track_ids, classes_to_gt_track_areas, classes_to_gt_track_iscrowd]: dictionary with class values\n                                as keys and lists (for each track) as values\n\n        if not is_gt, this returns a dict which contains the fields:\n        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).\n        [tracker_dets]: list (for each timestep) of lists of detections.\n        [classes_to_dt_tracks]: dictionary with class values as keys and list of dictionaries (with frame indices as\n                                keys and corresponding segmentations as values) for each track\n        [classes_to_dt_track_ids, classes_to_dt_track_areas]: dictionary with class values as keys and lists as values\n        [classes_to_dt_track_scores]: dictionary with class values as keys and 1D numpy arrays as values\n        \"\"\"\n        # select sequence tracks\n        seq_id = self.seq_name_to_seq_id[seq]\n        if is_gt:\n            tracks = [ann for ann in 
self.gt_data['annotations'] if ann['video_id'] == seq_id]\n        else:\n            tracks = self._get_tracker_seq_tracks(tracker, seq_id)\n\n        # Convert data to required format\n        num_timesteps = self.seq_lengths[seq_id]\n        data_keys = ['ids', 'classes', 'dets']\n        if not is_gt:\n            data_keys += ['tracker_confidences']\n        raw_data = {key: [None] * num_timesteps for key in data_keys}\n        for t in range(num_timesteps):\n            raw_data['dets'][t] = [track['segmentations'][t] for track in tracks if track['segmentations'][t]]\n            raw_data['ids'][t] = np.atleast_1d([track['id'] for track in tracks\n                                                if track['segmentations'][t]]).astype(int)\n            raw_data['classes'][t] = np.atleast_1d([track['category_id'] for track in tracks\n                                                    if track['segmentations'][t]]).astype(int)\n            if not is_gt:\n                raw_data['tracker_confidences'][t] = np.atleast_1d([track['score'] for track in tracks\n                                                                    if track['segmentations'][t]]).astype(float)\n\n        if is_gt:\n            key_map = {'ids': 'gt_ids',\n                       'classes': 'gt_classes',\n                       'dets': 'gt_dets'}\n        else:\n            key_map = {'ids': 'tracker_ids',\n                       'classes': 'tracker_classes',\n                       'dets': 'tracker_dets'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n\n        all_cls_ids = {self.class_name_to_class_id[cls] for cls in self.class_list}\n        classes_to_tracks = {cls: [track for track in tracks if track['category_id'] == cls] for cls in all_cls_ids}\n\n        # mapping from classes to track representations and track information\n        raw_data['classes_to_tracks'] = {cls: [{i: track['segmentations'][i]\n                                                
for i in range(len(track['segmentations']))} for track in tracks]\n                                         for cls, tracks in classes_to_tracks.items()}\n        raw_data['classes_to_track_ids'] = {cls: [track['id'] for track in tracks]\n                                            for cls, tracks in classes_to_tracks.items()}\n        raw_data['classes_to_track_areas'] = {cls: [track['area'] for track in tracks]\n                                              for cls, tracks in classes_to_tracks.items()}\n\n        if is_gt:\n            raw_data['classes_to_gt_track_iscrowd'] = {cls: [track['iscrowd'] for track in tracks]\n                                                       for cls, tracks in classes_to_tracks.items()}\n        else:\n            raw_data['classes_to_dt_track_scores'] = {cls: np.array([track['score'] for track in tracks])\n                                                      for cls, tracks in classes_to_tracks.items()}\n\n        if is_gt:\n            key_map = {'classes_to_tracks': 'classes_to_gt_tracks',\n                       'classes_to_track_ids': 'classes_to_gt_track_ids',\n                       'classes_to_track_areas': 'classes_to_gt_track_areas'}\n        else:\n            key_map = {'classes_to_tracks': 'classes_to_dt_tracks',\n                       'classes_to_track_ids': 'classes_to_dt_track_ids',\n                       'classes_to_track_areas': 'classes_to_dt_track_areas'}\n        for k, v in key_map.items():\n            raw_data[v] = raw_data.pop(k)\n\n        raw_data['num_timesteps'] = num_timesteps\n        raw_data['seq'] = seq\n        return raw_data\n\n    @_timing.time\n    def get_preprocessed_seq_data(self, raw_data, cls):\n        \"\"\" Preprocess data for a single sequence for a single class ready for evaluation.\n        Inputs:\n             - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().\n             - cls is the class to be evaluated.\n        Outputs:\n  
           - data is a dict containing all of the information that metrics need to perform evaluation.\n                It contains the following fields:\n                    [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.\n                    [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).\n                    [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.\n                    [similarity_scores]: list (for each timestep) of 2D NDArrays.\n        Notes:\n            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.\n                1) Extract only detections relevant for the class to be evaluated (including distractor detections).\n                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a\n                    distractor class, or otherwise marked as to be removed.\n                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain\n                    other criteria (e.g. are too small).\n                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.\n            After the above preprocessing steps, this function also calculates the number of gt and tracker detections\n                and unique track ids. 
It also relabels gt and tracker ids to be contiguous and checks that ids are\n                unique within each timestep.\n        YouTubeVIS:\n            In YouTubeVIS, the 4 preproc steps are as follows:\n                1) There are 40 classes which are evaluated separately.\n                2) No matched tracker dets are removed.\n                3) No unmatched tracker dets are removed.\n                4) No gt dets are removed.\n            Further, for TrackMAP computation, track representations for the given class are accessed from a dictionary\n            and the tracks from the tracker data are sorted according to the tracker confidence.\n        \"\"\"\n        cls_id = self.class_name_to_class_id[cls]\n\n        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'similarity_scores']\n        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}\n        unique_gt_ids = []\n        unique_tracker_ids = []\n        num_gt_dets = 0\n        num_tracker_dets = 0\n\n        for t in range(raw_data['num_timesteps']):\n\n            # Only extract relevant dets for this class for eval (cls)\n            gt_class_mask = np.atleast_1d(raw_data['gt_classes'][t] == cls_id)\n            gt_class_mask = gt_class_mask.astype(bool)\n            gt_ids = raw_data['gt_ids'][t][gt_class_mask]\n            gt_dets = [raw_data['gt_dets'][t][ind] for ind in range(len(gt_class_mask)) if gt_class_mask[ind]]\n\n            tracker_class_mask = np.atleast_1d(raw_data['tracker_classes'][t] == cls_id)\n            tracker_class_mask = tracker_class_mask.astype(bool)\n            tracker_ids = raw_data['tracker_ids'][t][tracker_class_mask]\n            tracker_dets = [raw_data['tracker_dets'][t][ind] for ind in range(len(tracker_class_mask)) if\n                            tracker_class_mask[ind]]\n            similarity_scores = raw_data['similarity_scores'][t][gt_class_mask, :][:, tracker_class_mask]\n\n            data['tracker_ids'][t] = 
tracker_ids\n            data['tracker_dets'][t] = tracker_dets\n            data['gt_ids'][t] = gt_ids\n            data['gt_dets'][t] = gt_dets\n            data['similarity_scores'][t] = similarity_scores\n\n            unique_gt_ids += list(np.unique(data['gt_ids'][t]))\n            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))\n            num_tracker_dets += len(data['tracker_ids'][t])\n            num_gt_dets += len(data['gt_ids'][t])\n\n        # Re-label IDs such that there are no empty IDs\n        if len(unique_gt_ids) > 0:\n            unique_gt_ids = np.unique(unique_gt_ids)\n            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))\n            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['gt_ids'][t]) > 0:\n                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)\n        if len(unique_tracker_ids) > 0:\n            unique_tracker_ids = np.unique(unique_tracker_ids)\n            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))\n            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))\n            for t in range(raw_data['num_timesteps']):\n                if len(data['tracker_ids'][t]) > 0:\n                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)\n\n        # Ensure that ids are unique per timestep.\n        self._check_unique_ids(data)\n\n        # Record overview statistics.\n        data['num_tracker_dets'] = num_tracker_dets\n        data['num_gt_dets'] = num_gt_dets\n        data['num_tracker_ids'] = len(unique_tracker_ids)\n        data['num_gt_ids'] = len(unique_gt_ids)\n        data['num_timesteps'] = raw_data['num_timesteps']\n        data['seq'] = raw_data['seq']\n\n        # get track representations\n        data['gt_tracks'] = raw_data['classes_to_gt_tracks'][cls_id]\n        data['gt_track_ids'] = 
raw_data['classes_to_gt_track_ids'][cls_id]\n        data['gt_track_areas'] = raw_data['classes_to_gt_track_areas'][cls_id]\n        data['gt_track_iscrowd'] = raw_data['classes_to_gt_track_iscrowd'][cls_id]\n        data['dt_tracks'] = raw_data['classes_to_dt_tracks'][cls_id]\n        data['dt_track_ids'] = raw_data['classes_to_dt_track_ids'][cls_id]\n        data['dt_track_areas'] = raw_data['classes_to_dt_track_areas'][cls_id]\n        data['dt_track_scores'] = raw_data['classes_to_dt_track_scores'][cls_id]\n        data['iou_type'] = 'mask'\n\n        # sort tracker data tracks by tracker confidence scores\n        if data['dt_tracks']:\n            idx = np.argsort([-score for score in data['dt_track_scores']], kind=\"mergesort\")\n            data['dt_track_scores'] = [data['dt_track_scores'][i] for i in idx]\n            data['dt_tracks'] = [data['dt_tracks'][i] for i in idx]\n            data['dt_track_ids'] = [data['dt_track_ids'][i] for i in idx]\n            data['dt_track_areas'] = [data['dt_track_areas'][i] for i in idx]\n\n        return data\n\n    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):\n        similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False)\n        return similarity_scores\n\n    def _prepare_gt_annotations(self):\n        \"\"\"\n        Prepares GT data by rle encoding segmentations and computing the average track area.\n        :return: None\n        \"\"\"\n        # only loaded when needed to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        for track in self.gt_data['annotations']:\n            h = track['height']\n            w = track['width']\n            for i, seg in enumerate(track['segmentations']):\n                if seg:\n                    track['segmentations'][i] = mask_utils.frPyObjects(seg, h, w)\n            areas = [a for a in track['areas'] if a]\n            if len(areas) == 0:\n                
track['area'] = 0\n            else:\n                track['area'] = np.array(areas).mean()\n\n    def _get_tracker_seq_tracks(self, tracker, seq_id):\n        \"\"\"\n        Prepares tracker data for a given sequence. Extracts all annotations for given sequence ID, computes\n        average track area and assigns a track ID.\n        :param tracker: the given tracker\n        :param seq_id: the sequence ID\n        :return: the extracted tracks\n        \"\"\"\n        # only loaded when needed to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        tracks = [ann for ann in self.tracker_data[tracker] if ann['video_id'] == seq_id]\n        for track in tracks:\n            track['areas'] = []\n            for seg in track['segmentations']:\n                if seg:\n                    track['areas'].append(mask_utils.area(seg))\n                else:\n                    track['areas'].append(None)\n            areas = [a for a in track['areas'] if a]\n            if len(areas) == 0:\n                track['area'] = 0\n            else:\n                track['area'] = np.array(areas).mean()\n            track['id'] = self.global_tid_counter\n            self.global_tid_counter += 1\n        return tracks\n"
  },
  {
    "path": "eval/trackeval/eval.py",
    "content": "import time\nimport traceback\nfrom multiprocessing.pool import Pool\nfrom functools import partial\nimport os\nfrom . import utils\nfrom .utils import TrackEvalException\nfrom . import _timing\nfrom .metrics import Count\n\n\nclass Evaluator:\n    \"\"\"Evaluator class for evaluating different metrics for different datasets\"\"\"\n\n    @staticmethod\n    def get_default_eval_config():\n        \"\"\"Returns the default config values for evaluation\"\"\"\n        code_path = utils.get_code_path()\n        default_config = {\n            'USE_PARALLEL': False,\n            'NUM_PARALLEL_CORES': 8,\n            'BREAK_ON_ERROR': True,  # Raises exception and exits with error\n            'RETURN_ON_ERROR': False,  # if not BREAK_ON_ERROR, then returns from function on error\n            'LOG_ON_ERROR': os.path.join(code_path, 'error_log.txt'),  # if not None, save any errors into a log file.\n\n            'PRINT_RESULTS': True,\n            'PRINT_ONLY_COMBINED': False,\n            'PRINT_CONFIG': True,\n            'TIME_PROGRESS': True,\n            'DISPLAY_LESS_PROGRESS': True,\n\n            'OUTPUT_SUMMARY': True,\n            'OUTPUT_EMPTY_CLASSES': True,  # If False, summary files are not output for classes with no detections\n            'OUTPUT_DETAILED': True,\n            'PLOT_CURVES': True,\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        \"\"\"Initialise the evaluator with a config file\"\"\"\n        self.config = utils.init_config(config, self.get_default_eval_config(), 'Eval')\n        # Only run timing analysis if not run in parallel.\n        if self.config['TIME_PROGRESS'] and not self.config['USE_PARALLEL']:\n            _timing.DO_TIMING = True\n            if self.config['DISPLAY_LESS_PROGRESS']:\n                _timing.DISPLAY_LESS_PROGRESS = True\n\n    @_timing.time\n    def evaluate(self, dataset_list, metrics_list):\n        \"\"\"Evaluate a set of metrics on a set of 
datasets\"\"\"\n        config = self.config\n        metrics_list = metrics_list + [Count()]  # Count metrics are always run\n        metric_names = utils.validate_metrics_list(metrics_list)\n        dataset_names = [dataset.get_name() for dataset in dataset_list]\n        output_res = {}\n        output_msg = {}\n\n        for dataset, dataset_name in zip(dataset_list, dataset_names):\n            # Get dataset info about what to evaluate\n            output_res[dataset_name] = {}\n            output_msg[dataset_name] = {}\n            tracker_list, seq_list, class_list = dataset.get_eval_info()\n            print('\\nEvaluating %i tracker(s) on %i sequence(s) for %i class(es) on %s dataset using the following '\n                  'metrics: %s\\n' % (len(tracker_list), len(seq_list), len(class_list), dataset_name,\n                                     ', '.join(metric_names)))\n\n            # Evaluate each tracker\n            for tracker in tracker_list:\n                # if not config['BREAK_ON_ERROR'] then go to next tracker without breaking\n                try:\n                    # Evaluate each sequence in parallel or in series.\n                    # returns a nested dict (res), indexed like: res[seq][class][metric_name][sub_metric field]\n                    # e.g. 
res[seq_0001][pedestrian][hota][DetA]\n                    print('\\nEvaluating %s\\n' % tracker)\n                    time_start = time.time()\n                    if config['USE_PARALLEL']:\n                        with Pool(config['NUM_PARALLEL_CORES']) as pool:\n                            _eval_sequence = partial(eval_sequence, dataset=dataset, tracker=tracker,\n                                                     class_list=class_list, metrics_list=metrics_list,\n                                                     metric_names=metric_names)\n                            results = pool.map(_eval_sequence, seq_list)\n                            res = dict(zip(seq_list, results))\n                    else:\n                        res = {}\n                        for curr_seq in sorted(seq_list):\n                            res[curr_seq] = eval_sequence(curr_seq, dataset, tracker, class_list, metrics_list,\n                                                          metric_names)\n\n                    # Combine results over all sequences and then over all classes\n\n                    # collecting combined cls keys (cls averaged, det averaged, super classes)\n                    combined_cls_keys = []\n                    res['COMBINED_SEQ'] = {}\n                    # combine sequences for each class\n                    for c_cls in class_list:\n                        res['COMBINED_SEQ'][c_cls] = {}\n                        for metric, metric_name in zip(metrics_list, metric_names):\n                            curr_res = {seq_key: seq_value[c_cls][metric_name] for seq_key, seq_value in res.items() if\n                                        seq_key != 'COMBINED_SEQ'}\n                            res['COMBINED_SEQ'][c_cls][metric_name] = metric.combine_sequences(curr_res)\n                    # combine classes\n                    if dataset.should_classes_combine:\n                        combined_cls_keys += ['cls_comb_cls_av', 'cls_comb_det_av']\n    
                    res['COMBINED_SEQ']['cls_comb_cls_av'] = {}\n                        res['COMBINED_SEQ']['cls_comb_det_av'] = {}\n                        for metric, metric_name in zip(metrics_list, metric_names):\n                            cls_res = {cls_key: cls_value[metric_name] for cls_key, cls_value in\n                                       res['COMBINED_SEQ'].items() if cls_key not in combined_cls_keys}\n                            res['COMBINED_SEQ']['cls_comb_cls_av'][metric_name] = \\\n                                metric.combine_classes_class_averaged(cls_res)\n                            res['COMBINED_SEQ']['cls_comb_det_av'][metric_name] = \\\n                                metric.combine_classes_det_averaged(cls_res)\n                    # combine classes to super classes\n                    if dataset.use_super_categories:\n                        for cat, sub_cats in dataset.super_categories.items():\n                            combined_cls_keys.append(cat)\n                            res['COMBINED_SEQ'][cat] = {}\n                            for metric, metric_name in zip(metrics_list, metric_names):\n                                cat_res = {cls_key: cls_value[metric_name] for cls_key, cls_value in\n                                           res['COMBINED_SEQ'].items() if cls_key in sub_cats}\n                                res['COMBINED_SEQ'][cat][metric_name] = metric.combine_classes_det_averaged(cat_res)\n\n                    # Print and output results in various formats\n                    if config['TIME_PROGRESS']:\n                        print('\\nAll sequences for %s finished in %.2f seconds' % (tracker, time.time() - time_start))\n                    output_fol = dataset.get_output_fol(tracker)\n                    tracker_display_name = dataset.get_display_name(tracker)\n                    for c_cls in res['COMBINED_SEQ'].keys():  # class_list + combined classes if calculated\n                        summaries = []\n   
                     details = []\n                        num_dets = res['COMBINED_SEQ'][c_cls]['Count']['Dets']\n                        if config['OUTPUT_EMPTY_CLASSES'] or num_dets > 0:\n                            for metric, metric_name in zip(metrics_list, metric_names):\n                                # for combined classes there is no per sequence evaluation\n                                if c_cls in combined_cls_keys:\n                                    table_res = {'COMBINED_SEQ': res['COMBINED_SEQ'][c_cls][metric_name]}\n                                else:\n                                    table_res = {seq_key: seq_value[c_cls][metric_name] for seq_key, seq_value\n                                                 in res.items()}\n                                if config['PRINT_RESULTS'] and config['PRINT_ONLY_COMBINED']:\n                                    metric.print_table({'COMBINED_SEQ': table_res['COMBINED_SEQ']},\n                                                       tracker_display_name, c_cls)\n                                elif config['PRINT_RESULTS']:\n                                    metric.print_table(table_res, tracker_display_name, c_cls)\n                                if config['OUTPUT_SUMMARY']:\n                                    summaries.append(metric.summary_results(table_res))\n                                if config['OUTPUT_DETAILED']:\n                                    details.append(metric.detailed_results(table_res))\n                                if config['PLOT_CURVES']:\n                                    metric.plot_single_tracker_results(table_res, tracker_display_name, c_cls,\n                                                                       output_fol)\n                            if config['OUTPUT_SUMMARY']:\n                                utils.write_summary_results(summaries, c_cls, output_fol)\n                            if config['OUTPUT_DETAILED']:\n                                
utils.write_detailed_results(details, c_cls, output_fol)\n\n                    # Output for returning from function\n                    output_res[dataset_name][tracker] = res\n                    output_msg[dataset_name][tracker] = 'Success'\n\n                except Exception as err:\n                    output_res[dataset_name][tracker] = None\n                    if type(err) == TrackEvalException:\n                        output_msg[dataset_name][tracker] = str(err)\n                    else:\n                        output_msg[dataset_name][tracker] = 'Unknown error occurred.'\n                    print('Tracker %s was unable to be evaluated.' % tracker)\n                    print(err)\n                    traceback.print_exc()\n                    if config['LOG_ON_ERROR'] is not None:\n                        with open(config['LOG_ON_ERROR'], 'a') as f:\n                            print(dataset_name, file=f)\n                            print(tracker, file=f)\n                            print(traceback.format_exc(), file=f)\n                            print('\\n\\n\\n', file=f)\n                    if config['BREAK_ON_ERROR']:\n                        raise err\n                    elif config['RETURN_ON_ERROR']:\n                        return output_res, output_msg\n\n        return output_res, output_msg\n\n\n@_timing.time\ndef eval_sequence(seq, dataset, tracker, class_list, metrics_list, metric_names):\n    \"\"\"Function for evaluating a single sequence\"\"\"\n    raw_data = dataset.get_raw_seq_data(tracker, seq)\n    seq_res = {}\n    for cls in class_list:\n        seq_res[cls] = {}\n        data = dataset.get_preprocessed_seq_data(raw_data, cls)\n        for metric, met_name in zip(metrics_list, metric_names):\n            seq_res[cls][met_name] = metric.eval_sequence(data)\n    return seq_res\n"
  },
  {
    "path": "eval/trackeval/metrics/__init__.py",
    "content": "from .hota import HOTA\nfrom .clear import CLEAR\nfrom .identity import Identity\nfrom .count import Count\nfrom .j_and_f import JAndF\nfrom .track_map import TrackMAP\nfrom .vace import VACE\n"
  },
  {
    "path": "eval/trackeval/metrics/_base_metric.py",
    "content": "\nimport numpy as np\nfrom abc import ABC, abstractmethod\nfrom .. import _timing\nfrom ..utils import TrackEvalException\n\n\nclass _BaseMetric(ABC):\n    @abstractmethod\n    def __init__(self):\n        self.plottable = False\n        self.integer_fields = []\n        self.float_fields = []\n        self.array_labels = []\n        self.integer_array_fields = []\n        self.float_array_fields = []\n        self.fields = []\n        self.summary_fields = []\n        self.registered = False\n\n    #####################################################################\n    # Abstract functions for subclasses to implement\n\n    @_timing.time\n    @abstractmethod\n    def eval_sequence(self, data):\n        ...\n\n    @abstractmethod\n    def combine_sequences(self, all_res):\n        ...\n\n    @abstractmethod\n    def combine_classes_class_averaged(self, all_res):\n        ...\n\n    @abstractmethod\n    def combine_classes_det_averaged(self, all_res):\n        ...\n\n    def plot_single_tracker_results(self, all_res, tracker, cls, output_folder):\n        \"\"\"Plot results of metrics, only valid for metrics with self.plottable\"\"\"\n        if self.plottable:\n            raise NotImplementedError('plot_results is not implemented for metric %s' % self.get_name())\n        else:\n            pass\n\n    #####################################################################\n    # Helper functions which are useful for all metrics:\n\n    @classmethod\n    def get_name(cls):\n        return cls.__name__\n\n    @staticmethod\n    def _combine_sum(all_res, field):\n        \"\"\"Combine sequence results via sum\"\"\"\n        return sum([all_res[k][field] for k in all_res.keys()])\n\n    @staticmethod\n    def _combine_weighted_av(all_res, field, comb_res, weight_field):\n        \"\"\"Combine sequence results via weighted average\"\"\"\n        return sum([all_res[k][field] * all_res[k][weight_field] for k in all_res.keys()]) / np.maximum(1.0, 
comb_res[\n            weight_field])\n\n    def print_table(self, table_res, tracker, cls):\n        \"\"\"Prints table of results for all sequences\"\"\"\n        print('')\n        metric_name = self.get_name()\n        self._row_print([metric_name + ': ' + tracker + '-' + cls] + self.summary_fields)\n        for seq, results in sorted(table_res.items()):\n            if seq == 'COMBINED_SEQ':\n                continue\n            summary_res = self._summary_row(results)\n            self._row_print([seq] + summary_res)\n        summary_res = self._summary_row(table_res['COMBINED_SEQ'])\n        self._row_print(['COMBINED'] + summary_res)\n\n    def _summary_row(self, results_):\n        vals = []\n        for h in self.summary_fields:\n            if h in self.float_array_fields:\n                vals.append(\"{0:1.5g}\".format(100 * np.mean(results_[h])))\n            elif h in self.float_fields:\n                vals.append(\"{0:1.5g}\".format(100 * float(results_[h])))\n            elif h in self.integer_fields:\n                vals.append(\"{0:d}\".format(int(results_[h])))\n            else:\n                raise NotImplementedError(\"Summary function not implemented for this field type.\")\n        return vals\n\n    @staticmethod\n    def _row_print(*argv):\n        \"\"\"Prints results in an evenly spaced rows, with more space in first row\"\"\"\n        if len(argv) == 1:\n            argv = argv[0]\n        to_print = '%-35s' % argv[0]\n        for v in argv[1:]:\n            to_print += '%-10s' % str(v)\n        print(to_print)\n\n    def summary_results(self, table_res):\n        \"\"\"Returns a simple summary of final results for a tracker\"\"\"\n        return dict(zip(self.summary_fields, self._summary_row(table_res['COMBINED_SEQ'])))\n\n    def detailed_results(self, table_res):\n        \"\"\"Returns detailed final results for a tracker\"\"\"\n        # Get detailed field information\n        detailed_fields = self.float_fields + 
self.integer_fields\n        for h in self.float_array_fields + self.integer_array_fields:\n            for alpha in [int(100*x) for x in self.array_labels]:\n                detailed_fields.append(h + '___' + str(alpha))\n            detailed_fields.append(h + '___AUC')\n\n        # Get detailed results\n        detailed_results = {}\n        for seq, res in table_res.items():\n            detailed_row = self._detailed_row(res)\n            if len(detailed_row) != len(detailed_fields):\n                raise TrackEvalException(\n                    'Field names and data have different sizes (%i and %i)' % (len(detailed_row), len(detailed_fields)))\n            detailed_results[seq] = dict(zip(detailed_fields, detailed_row))\n        return detailed_results\n\n    def _detailed_row(self, res):\n        detailed_row = []\n        for h in self.float_fields + self.integer_fields:\n            detailed_row.append(res[h])\n        for h in self.float_array_fields + self.integer_array_fields:\n            for i, alpha in enumerate([int(100 * x) for x in self.array_labels]):\n                detailed_row.append(res[h][i])\n            detailed_row.append(np.mean(res[h]))\n        return detailed_row\n"
  },
  {
    "path": "eval/trackeval/metrics/clear.py",
    "content": "\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass CLEAR(_BaseMetric):\n    \"\"\"Class which implements the CLEAR metrics\"\"\"\n    def __init__(self):\n        super().__init__()\n        main_integer_fields = ['CLR_TP', 'CLR_FN', 'CLR_FP', 'IDSW', 'MT', 'PT', 'ML', 'Frag']\n        extra_integer_fields = ['CLR_Frames']\n        self.integer_fields = main_integer_fields + extra_integer_fields\n        main_float_fields = ['MOTA', 'MOTP', 'MODA', 'CLR_Re', 'CLR_Pr', 'MTR', 'PTR', 'MLR', 'sMOTA']\n        extra_float_fields = ['CLR_F1', 'FP_per_frame', 'MOTAL', 'MOTP_sum']\n        self.float_fields = main_float_fields + extra_float_fields\n        self.fields = self.float_fields + self.integer_fields\n        self.summed_fields = self.integer_fields + ['MOTP_sum']\n        self.summary_fields = main_float_fields + main_integer_fields\n\n        self.threshold = 0.5\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Calculates CLEAR metrics for one sequence\"\"\"\n        # Initialise results\n        res = {}\n        for field in self.fields:\n            res[field] = 0\n\n        # Return result quickly if tracker or gt sequence is empty\n        if data['num_tracker_dets'] == 0:\n            res['CLR_FN'] = data['num_gt_dets']\n            res['ML'] = data['num_gt_ids']\n            res['MLR'] = 1.0\n            return res\n        if data['num_gt_dets'] == 0:\n            res['CLR_FP'] = data['num_tracker_dets']\n            res['MLR'] = 1.0\n            return res\n\n        # Variables counting global association\n        num_gt_ids = data['num_gt_ids']\n        gt_id_count = np.zeros(num_gt_ids)  # For MT/ML/PT\n        gt_matched_count = np.zeros(num_gt_ids)  # For MT/ML/PT\n        gt_frag_count = np.zeros(num_gt_ids)  # For Frag\n\n        # Note that IDSWs are counted based on the last time each gt_id was present (any 
number of frames previously),\n        # but are only used in matching to continue current tracks based on the gt_id in the single previous timestep.\n        prev_tracker_id = np.nan * np.zeros(num_gt_ids)  # For scoring IDSW\n        prev_timestep_tracker_id = np.nan * np.zeros(num_gt_ids)  # For matching IDSW\n\n        # Calculate scores for each timestep\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n            # Deal with the case that there are no gt_det/tracker_det in a timestep.\n            if len(gt_ids_t) == 0:\n                res['CLR_FP'] += len(tracker_ids_t)\n                continue\n            if len(tracker_ids_t) == 0:\n                res['CLR_FN'] += len(gt_ids_t)\n                gt_id_count[gt_ids_t] += 1\n                continue\n\n            # Calc score matrix to first minimise IDSWs from previous frame, and then maximise MOTP secondarily\n            similarity = data['similarity_scores'][t]\n            score_mat = (tracker_ids_t[np.newaxis, :] == prev_timestep_tracker_id[gt_ids_t[:, np.newaxis]])\n            score_mat = 1000 * score_mat + similarity\n            score_mat[similarity < self.threshold - np.finfo('float').eps] = 0\n\n            # Hungarian algorithm to find best matches\n            match_rows, match_cols = linear_sum_assignment(-score_mat)\n            actually_matched_mask = score_mat[match_rows, match_cols] > 0 + np.finfo('float').eps\n            match_rows = match_rows[actually_matched_mask]\n            match_cols = match_cols[actually_matched_mask]\n\n            matched_gt_ids = gt_ids_t[match_rows]\n            matched_tracker_ids = tracker_ids_t[match_cols]\n\n            # Calc IDSW for MOTA\n            prev_matched_tracker_ids = prev_tracker_id[matched_gt_ids]\n            is_idsw = (np.logical_not(np.isnan(prev_matched_tracker_ids))) & (\n                np.not_equal(matched_tracker_ids, prev_matched_tracker_ids))\n            res['IDSW'] += 
np.sum(is_idsw)\n\n            # Update counters for MT/ML/PT/Frag and record for IDSW/Frag for next timestep\n            gt_id_count[gt_ids_t] += 1\n            gt_matched_count[matched_gt_ids] += 1\n            not_previously_tracked = np.isnan(prev_timestep_tracker_id)\n            prev_tracker_id[matched_gt_ids] = matched_tracker_ids\n            prev_timestep_tracker_id[:] = np.nan\n            prev_timestep_tracker_id[matched_gt_ids] = matched_tracker_ids\n            currently_tracked = np.logical_not(np.isnan(prev_timestep_tracker_id))\n            gt_frag_count += np.logical_and(not_previously_tracked, currently_tracked)\n\n            # Calculate and accumulate basic statistics\n            num_matches = len(matched_gt_ids)\n            res['CLR_TP'] += num_matches\n            res['CLR_FN'] += len(gt_ids_t) - num_matches\n            res['CLR_FP'] += len(tracker_ids_t) - num_matches\n            if num_matches > 0:\n                res['MOTP_sum'] += sum(similarity[match_rows, match_cols])\n\n        # Calculate MT/ML/PT/Frag/MOTP\n        tracked_ratio = gt_matched_count[gt_id_count > 0] / gt_id_count[gt_id_count > 0]\n        res['MT'] = np.sum(np.greater(tracked_ratio, 0.8))\n        res['PT'] = np.sum(np.greater_equal(tracked_ratio, 0.2)) - res['MT']\n        res['ML'] = num_gt_ids - res['MT'] - res['PT']\n        res['Frag'] = np.sum(np.subtract(gt_frag_count[gt_frag_count > 0], 1))\n        res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP'])\n\n        res['CLR_Frames'] = data['num_timesteps']\n\n        # Calculate final CLEAR scores\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {}\n        for field in self.summed_fields:\n            res[field] = self._combine_sum(all_res, field)\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_classes_det_averaged(self, 
all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {}\n        for field in self.summed_fields:\n            res[field] = self._combine_sum(all_res, field)\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(\n                {k: v for k, v in all_res.items() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0}, field)\n        for field in self.float_fields:\n            res[field] = np.mean(\n                [v[field] for v in all_res.values() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0], axis=0)\n        return res\n\n    @staticmethod\n    def _compute_final_fields(res):\n        \"\"\"Calculate sub-metric ('field') values which only depend on other sub-metric values.\n        This function is used both for both per-sequence calculation, and in combining values across sequences.\n        \"\"\"\n        num_gt_ids = res['MT'] + res['ML'] + res['PT']\n        res['MTR'] = res['MT'] / np.maximum(1.0, num_gt_ids)\n        res['MLR'] = res['ML'] / np.maximum(1.0, num_gt_ids)\n        res['PTR'] = res['PT'] / np.maximum(1.0, num_gt_ids)\n        res['CLR_Re'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])\n        res['CLR_Pr'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FP'])\n        res['MODA'] = (res['CLR_TP'] - res['CLR_FP']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])\n        res['MOTA'] = (res['CLR_TP'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])\n        res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP'])\n        res['sMOTA'] = (res['MOTP_sum'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])\n\n        
res['CLR_F1'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + 0.5*res['CLR_FN'] + 0.5*res['CLR_FP'])\n        res['FP_per_frame'] = res['CLR_FP'] / np.maximum(1.0, res['CLR_Frames'])\n        safe_log_idsw = np.log10(res['IDSW']) if res['IDSW'] > 0 else res['IDSW']\n        res['MOTAL'] = (res['CLR_TP'] - res['CLR_FP'] - safe_log_idsw) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])\n        return res\n"
  },
  {
    "path": "eval/trackeval/metrics/count.py",
    "content": "\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass Count(_BaseMetric):\n    \"\"\"Class which simply counts the number of tracker and gt detections and ids.\"\"\"\n    def __init__(self):\n        super().__init__()\n        self.integer_fields = ['Dets', 'GT_Dets', 'IDs', 'GT_IDs']\n        self.fields = self.integer_fields\n        self.summary_fields = self.fields\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Returns counts for one sequence\"\"\"\n        # Get results\n        res = {'Dets': data['num_tracker_dets'],\n               'GT_Dets': data['num_gt_dets'],\n               'IDs': data['num_tracker_ids'],\n               'GT_IDs': data['num_gt_ids'],\n               'Frames': data['num_timesteps']}\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(all_res, field)\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(all_res, field)\n        return res\n\n    def combine_classes_det_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(all_res, field)\n        return res\n"
  },
  {
    "path": "eval/trackeval/metrics/hota.py",
    "content": "\nimport os\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass HOTA(_BaseMetric):\n    \"\"\"Class which implements the HOTA metrics.\n    See: https://link.springer.com/article/10.1007/s11263-020-01375-2\n    \"\"\"\n    def __init__(self):\n        super().__init__()\n        self.plottable = True\n        self.array_labels = np.arange(0.05, 0.99, 0.05)\n        self.integer_array_fields = ['HOTA_TP', 'HOTA_FN', 'HOTA_FP']\n        self.float_array_fields = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA', 'RHOTA']\n        self.float_fields = ['HOTA(0)', 'LocA(0)', 'HOTALocA(0)']\n        self.fields = self.float_array_fields + self.integer_array_fields + self.float_fields\n        self.summary_fields = self.float_array_fields + self.float_fields\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Calculates the HOTA metrics for one sequence\"\"\"\n\n        # Initialise results\n        res = {}\n        for field in self.float_array_fields + self.integer_array_fields:\n            res[field] = np.zeros((len(self.array_labels)), dtype=np.float)\n        for field in self.float_fields:\n            res[field] = 0\n\n        # Return result quickly if tracker or gt sequence is empty\n        if data['num_tracker_dets'] == 0:\n            res['HOTA_FN'] = data['num_gt_dets'] * np.ones((len(self.array_labels)), dtype=np.float)\n            res['LocA'] = np.ones((len(self.array_labels)), dtype=np.float)\n            res['LocA(0)'] = 1.0\n            return res\n        if data['num_gt_dets'] == 0:\n            res['HOTA_FP'] = data['num_tracker_dets'] * np.ones((len(self.array_labels)), dtype=np.float)\n            res['LocA'] = np.ones((len(self.array_labels)), dtype=np.float)\n            res['LocA(0)'] = 1.0\n            return res\n\n        # Variables counting global association\n        potential_matches_count = 
np.zeros((data['num_gt_ids'], data['num_tracker_ids']))\n        gt_id_count = np.zeros((data['num_gt_ids'], 1))\n        tracker_id_count = np.zeros((1, data['num_tracker_ids']))\n\n        # First loop through each timestep and accumulate global track information.\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n            # Count the potential matches between ids in each timestep\n            # These are normalised, weighted by the match similarity.\n            similarity = data['similarity_scores'][t]\n            sim_iou_denom = similarity.sum(0)[np.newaxis, :] + similarity.sum(1)[:, np.newaxis] - similarity\n            sim_iou = np.zeros_like(similarity)\n            sim_iou_mask = sim_iou_denom > 0 + np.finfo('float').eps\n            sim_iou[sim_iou_mask] = similarity[sim_iou_mask] / sim_iou_denom[sim_iou_mask]\n            potential_matches_count[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] += sim_iou\n\n            # Calculate the total number of dets for each gt_id and tracker_id.\n            gt_id_count[gt_ids_t] += 1\n            tracker_id_count[0, tracker_ids_t] += 1\n\n        # Calculate overall jaccard alignment score (before unique matching) between IDs\n        global_alignment_score = potential_matches_count / (gt_id_count + tracker_id_count - potential_matches_count)\n        matches_counts = [np.zeros_like(potential_matches_count) for _ in self.array_labels]\n\n        # Calculate scores for each timestep\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n            # Deal with the case that there are no gt_det/tracker_det in a timestep.\n            if len(gt_ids_t) == 0:\n                for a, alpha in enumerate(self.array_labels):\n                    res['HOTA_FP'][a] += len(tracker_ids_t)\n                continue\n            if len(tracker_ids_t) == 0:\n                for a, alpha in enumerate(self.array_labels):\n         
           res['HOTA_FN'][a] += len(gt_ids_t)\n                continue\n\n            # Get matching scores between pairs of dets for optimizing HOTA\n            similarity = data['similarity_scores'][t]\n            score_mat = global_alignment_score[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] * similarity\n\n            # Hungarian algorithm to find best matches\n            match_rows, match_cols = linear_sum_assignment(-score_mat)\n\n            # Calculate and accumulate basic statistics\n            for a, alpha in enumerate(self.array_labels):\n                actually_matched_mask = similarity[match_rows, match_cols] >= alpha - np.finfo('float').eps\n                alpha_match_rows = match_rows[actually_matched_mask]\n                alpha_match_cols = match_cols[actually_matched_mask]\n                num_matches = len(alpha_match_rows)\n                res['HOTA_TP'][a] += num_matches\n                res['HOTA_FN'][a] += len(gt_ids_t) - num_matches\n                res['HOTA_FP'][a] += len(tracker_ids_t) - num_matches\n                if num_matches > 0:\n                    res['LocA'][a] += sum(similarity[alpha_match_rows, alpha_match_cols])\n                    matches_counts[a][gt_ids_t[alpha_match_rows], tracker_ids_t[alpha_match_cols]] += 1\n\n        # Calculate association scores (AssA, AssRe, AssPr) for the alpha value.\n        # First calculate scores per gt_id/tracker_id combo and then average over the number of detections.\n        for a, alpha in enumerate(self.array_labels):\n            matches_count = matches_counts[a]\n            ass_a = matches_count / np.maximum(1, gt_id_count + tracker_id_count - matches_count)\n            res['AssA'][a] = np.sum(matches_count * ass_a) / np.maximum(1, res['HOTA_TP'][a])\n            ass_re = matches_count / np.maximum(1, gt_id_count)\n            res['AssRe'][a] = np.sum(matches_count * ass_re) / np.maximum(1, res['HOTA_TP'][a])\n            ass_pr = matches_count / np.maximum(1, 
tracker_id_count)\n            res['AssPr'][a] = np.sum(matches_count * ass_pr) / np.maximum(1, res['HOTA_TP'][a])\n\n        # Calculate final scores\n        res['LocA'] = np.maximum(1e-10, res['LocA']) / np.maximum(1e-10, res['HOTA_TP'])\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {}\n        for field in self.integer_array_fields:\n            res[field] = self._combine_sum(all_res, field)\n        for field in ['AssRe', 'AssPr', 'AssA']:\n            res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP')\n        loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()])\n        res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP'])\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.integer_array_fields:\n            res[field] = self._combine_sum(\n                {k: v for k, v in all_res.items()\n                 if (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()}, field)\n        for field in self.float_fields:\n            res[field] = np.mean([v[field] for v in all_res.values()\n                                  if (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()],\n                                 axis=0)\n        for field in self.float_array_fields:\n            res[field] = np.mean([v[field] for v in all_res.values()\n                                  if (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()],\n                                 axis=0)\n        return res\n\n    def combine_classes_det_averaged(self, 
all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {}\n        for field in self.integer_array_fields:\n            res[field] = self._combine_sum(all_res, field)\n        for field in ['AssRe', 'AssPr', 'AssA']:\n            res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP')\n        loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()])\n        res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP'])\n        res = self._compute_final_fields(res)\n        return res\n\n    @staticmethod\n    def _compute_final_fields(res):\n        \"\"\"Calculate sub-metric ('field') values which only depend on other sub-metric values.\n        This function is used both for per-sequence calculation and for combining values across sequences.\n        \"\"\"\n        res['DetRe'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN'])\n        res['DetPr'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FP'])\n        res['DetA'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN'] + res['HOTA_FP'])\n        res['HOTA'] = np.sqrt(res['DetA'] * res['AssA'])\n        res['RHOTA'] = np.sqrt(res['DetRe'] * res['AssA'])\n\n        res['HOTA(0)'] = res['HOTA'][0]\n        res['LocA(0)'] = res['LocA'][0]\n        res['HOTALocA(0)'] = res['HOTA(0)']*res['LocA(0)']\n        return res\n\n    def plot_single_tracker_results(self, table_res, tracker, cls, output_folder):\n        \"\"\"Create plot of results\"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from matplotlib import pyplot as plt\n\n        res = table_res['COMBINED_SEQ']\n        styles_to_plot = ['r', 'b', 'g', 'b--', 'b:', 'g--', 'g:', 'm']\n        for name, style in zip(self.float_array_fields, styles_to_plot):\n            plt.plot(self.array_labels, res[name], style)\n        plt.xlabel('alpha')\n        plt.ylabel('score')\n        plt.title(tracker + ' - ' + cls)\n        plt.axis([0, 1, 0, 1])\n        legend = []\n        for name in self.float_array_fields:\n            legend += [name + ' (' + str(np.round(np.mean(res[name]), 2)) + ')']\n        plt.legend(legend, loc='lower left')\n        out_file = os.path.join(output_folder, cls + '_plot.pdf')\n        os.makedirs(os.path.dirname(out_file), exist_ok=True)\n        plt.savefig(out_file)\n        plt.savefig(out_file.replace('.pdf', '.png'))\n        plt.clf()\n"
  },
  {
    "path": "eval/trackeval/metrics/identity.py",
    "content": "\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass Identity(_BaseMetric):\n    \"\"\"Class which implements the ID metrics\"\"\"\n    def __init__(self):\n        super().__init__()\n        self.integer_fields = ['IDTP', 'IDFN', 'IDFP']\n        self.float_fields = ['IDF1', 'IDR', 'IDP']\n        self.fields = self.float_fields + self.integer_fields\n        self.summary_fields = self.fields\n\n        self.threshold = 0.5\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Calculates ID metrics for one sequence\"\"\"\n        # Initialise results\n        res = {}\n        for field in self.fields:\n            res[field] = 0\n\n        # Return result quickly if tracker or gt sequence is empty\n        if data['num_tracker_dets'] == 0:\n            res['IDFN'] = data['num_gt_dets']\n            return res\n        if data['num_gt_dets'] == 0:\n            res['IDFP'] = data['num_tracker_dets']\n            return res\n\n        # Variables counting global association\n        potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids']))\n        gt_id_count = np.zeros(data['num_gt_ids'])\n        tracker_id_count = np.zeros(data['num_tracker_ids'])\n\n        # First loop through each timestep and accumulate global track information.\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n            # Count the potential matches between ids in each timestep\n            matches_mask = np.greater_equal(data['similarity_scores'][t], self.threshold)\n            match_idx_gt, match_idx_tracker = np.nonzero(matches_mask)\n            potential_matches_count[gt_ids_t[match_idx_gt], tracker_ids_t[match_idx_tracker]] += 1\n\n            # Calculate the total number of dets for each gt_id and tracker_id.\n            gt_id_count[gt_ids_t] += 1\n            
tracker_id_count[tracker_ids_t] += 1\n\n        # Calculate optimal assignment cost matrix for ID metrics\n        num_gt_ids = data['num_gt_ids']\n        num_tracker_ids = data['num_tracker_ids']\n        fp_mat = np.zeros((num_gt_ids + num_tracker_ids, num_gt_ids + num_tracker_ids))\n        fn_mat = np.zeros((num_gt_ids + num_tracker_ids, num_gt_ids + num_tracker_ids))\n        fp_mat[num_gt_ids:, :num_tracker_ids] = 1e10\n        fn_mat[:num_gt_ids, num_tracker_ids:] = 1e10\n        for gt_id in range(num_gt_ids):\n            fn_mat[gt_id, :num_tracker_ids] = gt_id_count[gt_id]\n            fn_mat[gt_id, num_tracker_ids + gt_id] = gt_id_count[gt_id]\n        for tracker_id in range(num_tracker_ids):\n            fp_mat[:num_gt_ids, tracker_id] = tracker_id_count[tracker_id]\n            fp_mat[tracker_id + num_gt_ids, tracker_id] = tracker_id_count[tracker_id]\n        fn_mat[:num_gt_ids, :num_tracker_ids] -= potential_matches_count\n        fp_mat[:num_gt_ids, :num_tracker_ids] -= potential_matches_count\n\n        # Hungarian algorithm\n        match_rows, match_cols = linear_sum_assignment(fn_mat + fp_mat)\n\n        # Accumulate basic statistics\n        res['IDFN'] = fn_mat[match_rows, match_cols].sum().astype(int)\n        res['IDFP'] = fp_mat[match_rows, match_cols].sum().astype(int)\n        res['IDTP'] = (gt_id_count.sum() - res['IDFN']).astype(int)\n\n        # Calculate final ID scores\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum({k: v for k, v in all_res.items()\n                                            if v['IDTP'] + v['IDFN'] + v['IDFP'] > 0 + np.finfo('float').eps}, field)\n        for field in self.float_fields:\n            res[field] = np.mean([v[field] for v in all_res.values()\n                                  if v['IDTP'] + v['IDFN'] + v['IDFP'] > 0 + np.finfo('float').eps], axis=0)\n        return res\n\n    def combine_classes_det_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(all_res, field)\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {}\n        for field in self.integer_fields:\n            res[field] = self._combine_sum(all_res, field)\n        res = self._compute_final_fields(res)\n        return res\n\n    @staticmethod\n    def _compute_final_fields(res):\n        \"\"\"Calculate sub-metric ('field') values which only depend on other sub-metric values.\n        This function is used both for per-sequence calculation and for combining values across sequences.\n        \"\"\"\n        res['IDR'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + res['IDFN'])\n        res['IDP'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + res['IDFP'])\n        res['IDF1'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + 0.5*res['IDFP'] + 0.5*res['IDFN'])\n        return res\n"
  },
  {
    "path": "eval/trackeval/metrics/j_and_f.py",
    "content": "\nimport numpy as np\nimport math\nfrom scipy.optimize import linear_sum_assignment\nfrom ..utils import TrackEvalException\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass JAndF(_BaseMetric):\n    \"\"\"Class which implements the J&F metrics\"\"\"\n    def __init__(self):\n        super().__init__()\n        self.integer_fields = ['num_gt_tracks']\n        self.float_fields = ['J-Mean', 'J-Recall', 'J-Decay', 'F-Mean', 'F-Recall', 'F-Decay', 'J&F']\n        self.fields = self.float_fields + self.integer_fields\n        self.summary_fields = self.float_fields\n        self.optim_type = 'J'  # possible values J, J&F\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Returns J&F metrics for one sequence\"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        num_timesteps = data['num_timesteps']\n        num_tracker_ids = data['num_tracker_ids']\n        num_gt_ids = data['num_gt_ids']\n        gt_dets = data['gt_dets']\n        tracker_dets = data['tracker_dets']\n        gt_ids = data['gt_ids']\n        tracker_ids = data['tracker_ids']\n\n        # get shape of frames\n        frame_shape = None\n        if num_gt_ids > 0:\n            for t in range(num_timesteps):\n                if len(gt_ids[t]) > 0:\n                    frame_shape = gt_dets[t][0]['size']\n                    break\n        elif num_tracker_ids > 0:\n            for t in range(num_timesteps):\n                if len(tracker_ids[t]) > 0:\n                    frame_shape = tracker_dets[t][0]['size']\n                    break\n\n        if frame_shape:\n            # append all zero masks for timesteps in which tracks do not have a detection\n            zero_padding = np.zeros((frame_shape), order= 'F').astype(np.uint8)\n            padding_mask = mask_utils.encode(zero_padding)\n            for t in range(num_timesteps):\n                
gt_id_det_mapping = {gt_ids[t][i]: gt_dets[t][i] for i in range(len(gt_ids[t]))}\n                gt_dets[t] = [gt_id_det_mapping[index] if index in gt_ids[t] else padding_mask for index\n                              in range(num_gt_ids)]\n                tracker_id_det_mapping = {tracker_ids[t][i]: tracker_dets[t][i] for i in range(len(tracker_ids[t]))}\n                tracker_dets[t] = [tracker_id_det_mapping[index] if index in tracker_ids[t] else padding_mask for index\n                                   in range(num_tracker_ids)]\n            # also perform zero padding if number of tracker IDs < number of ground truth IDs\n            if num_tracker_ids < num_gt_ids:\n                diff = num_gt_ids - num_tracker_ids\n                for t in range(num_timesteps):\n                    tracker_dets[t] = tracker_dets[t] + [padding_mask for _ in range(diff)]\n                num_tracker_ids += diff\n\n        j = self._compute_j(gt_dets, tracker_dets, num_gt_ids, num_tracker_ids, num_timesteps)\n\n        # boundary threshold for F computation\n        bound_th = 0.008\n\n        # perform matching\n        if self.optim_type == 'J&F':\n            f = np.zeros_like(j)\n            for k in range(num_tracker_ids):\n                for i in range(num_gt_ids):\n                    f[k, i, :] = self._compute_f(gt_dets, tracker_dets, k, i, bound_th)\n            optim_metrics = (np.mean(j, axis=2) + np.mean(f, axis=2)) / 2\n            row_ind, col_ind = linear_sum_assignment(- optim_metrics)\n            j_m = j[row_ind, col_ind, :]\n            f_m = f[row_ind, col_ind, :]\n        elif self.optim_type == 'J':\n            optim_metrics = np.mean(j, axis=2)\n            row_ind, col_ind = linear_sum_assignment(- optim_metrics)\n            j_m = j[row_ind, col_ind, :]\n            f_m = np.zeros_like(j_m)\n            for i, (tr_ind, gt_ind) in enumerate(zip(row_ind, col_ind)):\n                f_m[i] = self._compute_f(gt_dets, tracker_dets, tr_ind, gt_ind, 
bound_th)\n        else:\n            raise TrackEvalException('Unsupported optimization type %s for J&F metric.' % self.optim_type)\n\n        # append zeros for false negatives\n        if j_m.shape[0] < data['num_gt_ids']:\n            diff = data['num_gt_ids'] - j_m.shape[0]\n            j_m = np.concatenate((j_m, np.zeros((diff, j_m.shape[1]))), axis=0)\n            f_m = np.concatenate((f_m, np.zeros((diff, f_m.shape[1]))), axis=0)\n\n        # compute the metrics for each ground truth track\n        res = {\n            'J-Mean': [np.nanmean(j_m[i, :]) for i in range(j_m.shape[0])],\n            'J-Recall': [np.nanmean(j_m[i, :] > 0.5 + np.finfo('float').eps) for i in range(j_m.shape[0])],\n            'F-Mean': [np.nanmean(f_m[i, :]) for i in range(f_m.shape[0])],\n            'F-Recall': [np.nanmean(f_m[i, :] > 0.5 + np.finfo('float').eps) for i in range(f_m.shape[0])],\n            'J-Decay': [],\n            'F-Decay': []\n        }\n        n_bins = 4\n        ids = np.round(np.linspace(1, data['num_timesteps'], n_bins + 1) + 1e-10) - 1\n        ids = ids.astype(int)  # plain int indices; uint8 would overflow for sequences longer than 255 frames\n\n        for k in range(j_m.shape[0]):\n            d_bins_j = [j_m[k][ids[i]:ids[i + 1] + 1] for i in range(0, n_bins)]\n            res['J-Decay'].append(np.nanmean(d_bins_j[0]) - np.nanmean(d_bins_j[3]))\n        for k in range(f_m.shape[0]):\n            d_bins_f = [f_m[k][ids[i]:ids[i + 1] + 1] for i in range(0, n_bins)]\n            res['F-Decay'].append(np.nanmean(d_bins_f[0]) - np.nanmean(d_bins_f[3]))\n\n        # count number of tracks for weighting of the result\n        res['num_gt_tracks'] = len(res['J-Mean'])\n        for field in ['J-Mean', 'J-Recall', 'J-Decay', 'F-Mean', 'F-Recall', 'F-Decay']:\n            res[field] = np.mean(res[field])\n        res['J&F'] = (res['J-Mean'] + res['F-Mean']) / 2\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {'num_gt_tracks': 
self._combine_sum(all_res, 'num_gt_tracks')}\n        for field in self.summary_fields:\n            res[field] = self._combine_weighted_av(all_res, field, res, weight_field='num_gt_tracks')\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {'num_gt_tracks': self._combine_sum(all_res, 'num_gt_tracks')}\n        for field in self.float_fields:\n            res[field] = np.mean([v[field] for v in all_res.values()])\n        return res\n\n    def combine_classes_det_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {'num_gt_tracks': self._combine_sum(all_res, 'num_gt_tracks')}\n        for field in self.float_fields:\n            res[field] = np.mean([v[field] for v in all_res.values()])\n        return res\n\n    @staticmethod\n    def _seg2bmap(seg, width=None, height=None):\n        \"\"\"\n        From a segmentation, compute a binary boundary map with 1 pixel wide\n        boundaries.  
The boundary pixels are offset by 1/2 pixel towards the\n        origin from the actual segment boundary.\n        Arguments:\n            seg     : Segments labeled from 1..k.\n            width\t  :\tWidth of desired bmap  <= seg.shape[1]\n            height  :\tHeight of desired bmap <= seg.shape[0]\n        Returns:\n            bmap (ndarray):\tBinary boundary map.\n         David Martin <dmartin@eecs.berkeley.edu>\n         January 2003\n        \"\"\"\n\n        seg = seg.astype(bool)\n        seg[seg > 0] = 1\n\n        assert np.atleast_3d(seg).shape[2] == 1\n\n        width = seg.shape[1] if width is None else width\n        height = seg.shape[0] if height is None else height\n\n        h, w = seg.shape[:2]\n\n        ar1 = float(width) / float(height)\n        ar2 = float(w) / float(h)\n\n        assert not (\n                (width > w) or (height > h) or (abs(ar1 - ar2) > 0.01)\n        ), \"Can't convert %dx%d seg to %dx%d bmap.\" % (w, h, width, height)\n\n        e = np.zeros_like(seg)\n        s = np.zeros_like(seg)\n        se = np.zeros_like(seg)\n\n        e[:, :-1] = seg[:, 1:]\n        s[:-1, :] = seg[1:, :]\n        se[:-1, :-1] = seg[1:, 1:]\n\n        b = seg ^ e | seg ^ s | seg ^ se\n        b[-1, :] = seg[-1, :] ^ e[-1, :]\n        b[:, -1] = seg[:, -1] ^ s[:, -1]\n        b[-1, -1] = 0\n\n        if w == width and h == height:\n            bmap = b\n        else:\n            bmap = np.zeros((height, width))\n            for x in range(w):\n                for y in range(h):\n                    if b[y, x]:\n                        # map boundary pixels proportionally into the target resolution\n                        j = 1 + math.floor((y - 1) * height / h)\n                        i = 1 + math.floor((x - 1) * width / w)\n                        bmap[j, i] = 1\n\n        return bmap\n\n    @staticmethod\n    def _compute_f(gt_data, tracker_data, tracker_data_id, gt_id, bound_th):\n        \"\"\"\n        Perform F computation for a given gt and a given tracker ID. 
Adapted from\n        https://github.com/davisvideochallenge/davis2017-evaluation\n        :param gt_data: the encoded gt masks\n        :param tracker_data: the encoded tracker masks\n        :param tracker_data_id: the tracker ID\n        :param gt_id: the ground truth ID\n        :param bound_th: boundary threshold parameter\n        :return: the F value for the given tracker and gt ID\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n        from skimage.morphology import disk\n        import cv2\n\n        f = np.zeros(len(gt_data))\n\n        for t, (gt_masks, tracker_masks) in enumerate(zip(gt_data, tracker_data)):\n            curr_tracker_mask = mask_utils.decode(tracker_masks[tracker_data_id])\n            curr_gt_mask = mask_utils.decode(gt_masks[gt_id])\n            \n            bound_pix = bound_th if bound_th >= 1 - np.finfo('float').eps else \\\n                np.ceil(bound_th * np.linalg.norm(curr_tracker_mask.shape))\n\n            # Get the pixel boundaries of both masks\n            fg_boundary = JAndF._seg2bmap(curr_tracker_mask)\n            gt_boundary = JAndF._seg2bmap(curr_gt_mask)\n\n            # fg_dil = binary_dilation(fg_boundary, disk(bound_pix))\n            fg_dil = cv2.dilate(fg_boundary.astype(np.uint8), disk(bound_pix).astype(np.uint8))\n            # gt_dil = binary_dilation(gt_boundary, disk(bound_pix))\n            gt_dil = cv2.dilate(gt_boundary.astype(np.uint8), disk(bound_pix).astype(np.uint8))\n\n            # Get the intersection\n            gt_match = gt_boundary * fg_dil\n            fg_match = fg_boundary * gt_dil\n\n            # Area of the intersection\n            n_fg = np.sum(fg_boundary)\n            n_gt = np.sum(gt_boundary)\n\n            # % Compute precision and recall\n            if n_fg == 0 and n_gt > 0:\n                precision = 1\n                recall = 0\n            elif n_fg > 0 and n_gt == 0:\n               
 precision = 0\n                recall = 1\n            elif n_fg == 0 and n_gt == 0:\n                precision = 1\n                recall = 1\n            else:\n                precision = np.sum(fg_match) / float(n_fg)\n                recall = np.sum(gt_match) / float(n_gt)\n\n            # Compute F measure\n            if precision + recall == 0:\n                f_val = 0\n            else:\n                f_val = 2 * precision * recall / (precision + recall)\n\n            f[t] = f_val\n\n        return f\n\n    @staticmethod\n    def _compute_j(gt_data, tracker_data, num_gt_ids, num_tracker_ids, num_timesteps):\n        \"\"\"\n        Computation of J value for all ground truth IDs and all tracker IDs in the given sequence. Adapted from\n        https://github.com/davisvideochallenge/davis2017-evaluation\n        :param gt_data: the ground truth masks\n        :param tracker_data: the tracker masks\n        :param num_gt_ids: the number of ground truth IDs\n        :param num_tracker_ids: the number of tracker IDs\n        :param num_timesteps: the number of timesteps\n        :return: the J values\n        \"\"\"\n\n        # Only loaded when run to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        j = np.zeros((num_tracker_ids, num_gt_ids, num_timesteps))\n\n        for t, (time_gt, time_data) in enumerate(zip(gt_data, tracker_data)):\n            # run length encoded masks with pycocotools\n            area_gt = mask_utils.area(time_gt)\n            area_tr = mask_utils.area(time_data)\n\n            area_tr = np.repeat(area_tr[:, np.newaxis], len(area_gt), axis=1)\n            area_gt = np.repeat(area_gt[np.newaxis, :], len(area_tr), axis=0)\n\n            # mask iou computation with pycocotools\n            ious = mask_utils.iou(time_data, time_gt, [0]*len(time_gt))\n            # set iou to 1 if both masks are close to 0 (no ground truth and no predicted mask in timestep)\n            
ious[np.isclose(area_tr, 0) & np.isclose(area_gt, 0)] = 1\n            assert (ious >= 0 - np.finfo('float').eps).all()\n            assert (ious <= 1 + np.finfo('float').eps).all()\n\n            j[..., t] = ious\n\n        return j\n"
  },
  {
    "path": "eval/trackeval/metrics/track_map.py",
    "content": "import numpy as np\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\nfrom functools import partial\nfrom .. import utils\nfrom ..utils import TrackEvalException\n\n\nclass TrackMAP(_BaseMetric):\n    \"\"\"Class which implements the TrackMAP metrics\"\"\"\n\n    @staticmethod\n    def get_default_metric_config():\n        \"\"\"Default class config values\"\"\"\n        default_config = {\n            'USE_AREA_RANGES': True,  # whether to evaluate for certain area ranges\n            'AREA_RANGES': [[0 ** 2, 32 ** 2],  # additional area range sets for which TrackMAP is evaluated\n                            [32 ** 2, 96 ** 2],  # (all area range always included), default values for TAO\n                            [96 ** 2, 1e5 ** 2]],  # evaluation\n            'AREA_RANGE_LABELS': [\"area_s\", \"area_m\", \"area_l\"],  # the labels for the area ranges\n            'USE_TIME_RANGES': True,  # whether to evaluate for certain time ranges (length of tracks)\n            'TIME_RANGES': [[0, 3], [3, 10], [10, 1e5]],  # additional time range sets for which TrackMAP is evaluated\n            # (all time range always included) , default values for TAO evaluation\n            'TIME_RANGE_LABELS': [\"time_s\", \"time_m\", \"time_l\"],  # the labels for the time ranges\n            'IOU_THRESHOLDS': np.arange(0.5, 0.96, 0.05),  # the IoU thresholds\n            'RECALL_THRESHOLDS': np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01) + 1), endpoint=True),\n            # recall thresholds at which precision is evaluated\n            'MAX_DETECTIONS': 0,  # limit the maximum number of considered tracks per sequence (0 for unlimited)\n            'PRINT_CONFIG': True\n        }\n        return default_config\n\n    def __init__(self, config=None):\n        super().__init__()\n        self.config = utils.init_config(config, self.get_default_metric_config(), self.get_name())\n\n        self.num_ig_masks = 1\n        self.lbls = ['all']\n        
self.use_area_rngs = self.config['USE_AREA_RANGES']\n        if self.use_area_rngs:\n            self.area_rngs = self.config['AREA_RANGES']\n            self.area_rng_lbls = self.config['AREA_RANGE_LABELS']\n            self.num_ig_masks += len(self.area_rng_lbls)\n            self.lbls += self.area_rng_lbls\n\n        self.use_time_rngs = self.config['USE_TIME_RANGES']\n        if self.use_time_rngs:\n            self.time_rngs = self.config['TIME_RANGES']\n            self.time_rng_lbls = self.config['TIME_RANGE_LABELS']\n            self.num_ig_masks += len(self.time_rng_lbls)\n            self.lbls += self.time_rng_lbls\n\n        self.array_labels = self.config['IOU_THRESHOLDS']\n        self.rec_thrs = self.config['RECALL_THRESHOLDS']\n\n        self.maxDet = self.config['MAX_DETECTIONS']\n        self.float_array_fields = ['AP_' + lbl for lbl in self.lbls] + ['AR_' + lbl for lbl in self.lbls]\n        self.fields = self.float_array_fields\n        self.summary_fields = self.float_array_fields\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Calculates GT and Tracker matches for one sequence for TrackMAP metrics. 
Adapted from\n        https://github.com/TAO-Dataset/\"\"\"\n\n        # Initialise results to zero for each sequence as the fields are only defined over the set of all sequences\n        res = {}\n        for field in self.fields:\n            res[field] = [0 for _ in self.array_labels]\n\n        gt_ids, dt_ids = data['gt_track_ids'], data['dt_track_ids']\n\n        if len(gt_ids) == 0 and len(dt_ids) == 0:\n            for idx in range(self.num_ig_masks):\n                res[idx] = None\n            return res\n\n        # get track data\n        gt_tr_areas = data.get('gt_track_areas', None) if self.use_area_rngs else None\n        gt_tr_lengths = data.get('gt_track_lengths', None) if self.use_time_rngs else None\n        gt_tr_iscrowd = data.get('gt_track_iscrowd', None)\n        dt_tr_areas = data.get('dt_track_areas', None) if self.use_area_rngs else None\n        dt_tr_lengths = data.get('dt_track_lengths', None) if self.use_time_rngs else None\n        is_nel = data.get('not_exhaustively_labeled', False)\n\n        # compute ignore masks for different track sets to eval\n        gt_ig_masks = self._compute_track_ig_masks(len(gt_ids), track_lengths=gt_tr_lengths, track_areas=gt_tr_areas,\n                                                   iscrowd=gt_tr_iscrowd)\n        dt_ig_masks = self._compute_track_ig_masks(len(dt_ids), track_lengths=dt_tr_lengths, track_areas=dt_tr_areas,\n                                                   is_not_exhaustively_labeled=is_nel, is_gt=False)\n\n        boxformat = data.get('boxformat', 'xywh')\n        ious = self._compute_track_ious(data['dt_tracks'], data['gt_tracks'], iou_function=data['iou_type'],\n                                        boxformat=boxformat)\n\n        for mask_idx in range(self.num_ig_masks):\n            gt_ig_mask = gt_ig_masks[mask_idx]\n\n            # Sort gt ignore last\n            gt_idx = np.argsort([g for g in gt_ig_mask], kind=\"mergesort\")\n            gt_ids = [gt_ids[i] for i in 
gt_idx]\n\n            ious_sorted = ious[:, gt_idx] if len(ious) > 0 else ious\n\n            num_thrs = len(self.array_labels)\n            num_gt = len(gt_ids)\n            num_dt = len(dt_ids)\n\n            # Array to store the \"id\" of the matched dt/gt\n            gt_m = np.zeros((num_thrs, num_gt)) - 1\n            dt_m = np.zeros((num_thrs, num_dt)) - 1\n\n            gt_ig = np.array([gt_ig_mask[idx] for idx in gt_idx])\n            dt_ig = np.zeros((num_thrs, num_dt))\n\n            for iou_thr_idx, iou_thr in enumerate(self.array_labels):\n                if len(ious_sorted) == 0:\n                    break\n\n                for dt_idx, _dt in enumerate(dt_ids):\n                    iou = min([iou_thr, 1 - 1e-10])\n                    # information about best match so far (m=-1 -> unmatched)\n                    # store the gt_idx which matched for _dt\n                    m = -1\n                    for gt_idx, _ in enumerate(gt_ids):\n                        # if this gt already matched continue\n                        if gt_m[iou_thr_idx, gt_idx] > 0:\n                            continue\n                        # if _dt matched to reg gt, and on ignore gt, stop\n                        if m > -1 and gt_ig[m] == 0 and gt_ig[gt_idx] == 1:\n                            break\n                        # continue to next gt unless better match made\n                        if ious_sorted[dt_idx, gt_idx] < iou - np.finfo('float').eps:\n                            continue\n                        # if match successful and best so far, store appropriately\n                        iou = ious_sorted[dt_idx, gt_idx]\n                        m = gt_idx\n\n                    # No match found for _dt, go to next _dt\n                    if m == -1:\n                        continue\n\n                    # if gt to ignore for some reason update dt_ig.\n                    # Should not be used in evaluation.\n                    dt_ig[iou_thr_idx, dt_idx] = 
gt_ig[m]\n                    # _dt match found, update gt_m, and dt_m with \"id\"\n                    dt_m[iou_thr_idx, dt_idx] = gt_ids[m]\n                    gt_m[iou_thr_idx, m] = _dt\n\n            dt_ig_mask = dt_ig_masks[mask_idx]\n\n            dt_ig_mask = np.array(dt_ig_mask).reshape((1, num_dt))  # 1 X num_dt\n            dt_ig_mask = np.repeat(dt_ig_mask, num_thrs, 0)  # num_thrs X num_dt\n\n            # Based on dt_ig_mask ignore any unmatched detection by updating dt_ig\n            dt_ig = np.logical_or(dt_ig, np.logical_and(dt_m == -1, dt_ig_mask))\n            # store results for given video and category\n            res[mask_idx] = {\n                \"dt_ids\": dt_ids,\n                \"gt_ids\": gt_ids,\n                \"dt_matches\": dt_m,\n                \"gt_matches\": gt_m,\n                \"dt_scores\": data['dt_track_scores'],\n                \"gt_ignore\": gt_ig,\n                \"dt_ignore\": dt_ig,\n            }\n\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences. 
Computes precision and recall values based on track matches.\n        Adapted from https://github.com/TAO-Dataset/\n        \"\"\"\n        num_thrs = len(self.array_labels)\n        num_recalls = len(self.rec_thrs)\n\n        # -1 for absent categories\n        precision = -np.ones(\n            (num_thrs, num_recalls, self.num_ig_masks)\n        )\n        recall = -np.ones((num_thrs, self.num_ig_masks))\n\n        for ig_idx in range(self.num_ig_masks):\n            ig_idx_results = [res[ig_idx] for res in all_res.values() if res[ig_idx] is not None]\n\n            # Remove elements which are None\n            if len(ig_idx_results) == 0:\n                continue\n\n            # Append all scores: shape (N,)\n            # limit considered tracks for each sequence if maxDet > 0\n            if self.maxDet == 0:\n                dt_scores = np.concatenate([res[\"dt_scores\"] for res in ig_idx_results], axis=0)\n\n                dt_idx = np.argsort(-dt_scores, kind=\"mergesort\")\n\n                dt_m = np.concatenate([e[\"dt_matches\"] for e in ig_idx_results],\n                                      axis=1)[:, dt_idx]\n                dt_ig = np.concatenate([e[\"dt_ignore\"] for e in ig_idx_results],\n                                       axis=1)[:, dt_idx]\n            elif self.maxDet > 0:\n                dt_scores = np.concatenate([res[\"dt_scores\"][0:self.maxDet] for res in ig_idx_results], axis=0)\n\n                dt_idx = np.argsort(-dt_scores, kind=\"mergesort\")\n\n                dt_m = np.concatenate([e[\"dt_matches\"][:, 0:self.maxDet] for e in ig_idx_results],\n                                      axis=1)[:, dt_idx]\n                dt_ig = np.concatenate([e[\"dt_ignore\"][:, 0:self.maxDet] for e in ig_idx_results],\n                                       axis=1)[:, dt_idx]\n            else:\n                raise Exception(\"Number of maximum detections must be >= 0, but is set to %i\" % self.maxDet)\n\n            gt_ig = 
np.concatenate([res[\"gt_ignore\"] for res in ig_idx_results])\n            # num gt anns to consider\n            num_gt = np.count_nonzero(gt_ig == 0)\n\n            if num_gt == 0:\n                continue\n\n            tps = np.logical_and(dt_m != -1, np.logical_not(dt_ig))\n            fps = np.logical_and(dt_m == -1, np.logical_not(dt_ig))\n\n            tp_sum = np.cumsum(tps, axis=1).astype(dtype=float)\n            fp_sum = np.cumsum(fps, axis=1).astype(dtype=float)\n\n            for iou_thr_idx, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):\n                tp = np.array(tp)\n                fp = np.array(fp)\n                num_tp = len(tp)\n                rc = tp / num_gt\n                if num_tp:\n                    recall[iou_thr_idx, ig_idx] = rc[-1]\n                else:\n                    recall[iou_thr_idx, ig_idx] = 0\n\n                # np.spacing(1) ~= eps\n                pr = tp / (fp + tp + np.spacing(1))\n                pr = pr.tolist()\n\n                # Ensure precision values are monotonically decreasing\n                for i in range(num_tp - 1, 0, -1):\n                    if pr[i] > pr[i - 1]:\n                        pr[i - 1] = pr[i]\n\n                # find indices at the predefined recall values\n                rec_thrs_insert_idx = np.searchsorted(rc, self.rec_thrs, side=\"left\")\n\n                pr_at_recall = [0.0] * num_recalls\n\n                try:\n                    for _idx, pr_idx in enumerate(rec_thrs_insert_idx):\n                        pr_at_recall[_idx] = pr[pr_idx]\n                except IndexError:\n                    pass\n\n                precision[iou_thr_idx, :, ig_idx] = (np.array(pr_at_recall))\n\n        res = {'precision': precision, 'recall': recall}\n\n        # compute the precision and recall averages for the respective alpha thresholds and ignore masks\n        for lbl in self.lbls:\n            res['AP_' + lbl] = np.zeros((len(self.array_labels)), dtype=float)\n            res['AR_' + lbl] = np.zeros((len(self.array_labels)), dtype=float)\n\n        for a_id, alpha in enumerate(self.array_labels):\n            for lbl_idx, lbl in enumerate(self.lbls):\n                p = precision[a_id, :, lbl_idx]\n                if len(p[p > -1]) == 0:\n                    mean_p = -1\n                else:\n                    mean_p = np.mean(p[p > -1])\n                res['AP_' + lbl][a_id] = mean_p\n                res['AR_' + lbl][a_id] = recall[a_id, lbl_idx]\n\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.fields:\n            res[field] = np.zeros((len(self.array_labels)), dtype=float)\n            field_stacked = np.array([res[field] for res in all_res.values()])\n\n            for a_id, alpha in enumerate(self.array_labels):\n                values = field_stacked[:, a_id]\n                if len(values[values > -1]) == 0:\n                    mean = -1\n                else:\n                    mean = np.mean(values[values > -1])\n                res[field][a_id] = mean\n        return res\n\n    def combine_classes_det_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n\n        res = {}\n        for field in self.fields:\n            res[field] = np.zeros((len(self.array_labels)), dtype=float)\n            field_stacked = np.array([res[field] for res in all_res.values()])\n\n            for a_id, alpha in enumerate(self.array_labels):\n                values = field_stacked[:, a_id]\n                if len(values[values > -1]) == 0:\n                    mean = -1\n                else:\n                    mean = np.mean(values[values > -1])\n                res[field][a_id] = mean\n        return res\n\n    def _compute_track_ig_masks(self, num_ids, track_lengths=None, 
track_areas=None, iscrowd=None,\n                                is_not_exhaustively_labeled=False, is_gt=True):\n        \"\"\"\n        Computes ignore masks for different track sets to evaluate\n        :param num_ids: the number of track IDs\n        :param track_lengths: the lengths of the tracks (number of timesteps)\n        :param track_areas: the average area of a track\n        :param iscrowd: whether a track is marked as crowd\n        :param is_not_exhaustively_labeled: whether the track category is not exhaustively labeled\n        :param is_gt: whether it is gt\n        :return: the track ignore masks\n        \"\"\"\n        # for TAO tracks for classes which are not exhaustively labeled are not evaluated\n        if not is_gt and is_not_exhaustively_labeled:\n            track_ig_masks = [[1 for _ in range(num_ids)] for i in range(self.num_ig_masks)]\n        else:\n            # consider all tracks\n            track_ig_masks = [[0 for _ in range(num_ids)]]\n\n            # consider tracks with certain area\n            if self.use_area_rngs:\n                for rng in self.area_rngs:\n                    track_ig_masks.append([0 if rng[0] - np.finfo('float').eps <= area <= rng[1] + np.finfo('float').eps\n                                           else 1 for area in track_areas])\n\n            # consider tracks with certain duration\n            if self.use_time_rngs:\n                for rng in self.time_rngs:\n                    track_ig_masks.append([0 if rng[0] - np.finfo('float').eps <= length\n                                                <= rng[1] + np.finfo('float').eps else 1 for length in track_lengths])\n\n        # for YouTubeVIS evaluation tracks with crowd tag are not evaluated\n        if is_gt and iscrowd:\n            track_ig_masks = [np.logical_or(mask, iscrowd) for mask in track_ig_masks]\n\n        return track_ig_masks\n\n    @staticmethod\n    def _compute_bb_track_iou(dt_track, gt_track, boxformat='xywh'):\n        
\"\"\"\n        Calculates the track IoU for one detected track and one ground truth track for bounding boxes\n        :param dt_track: the detected track (format: dictionary with frame index as keys and\n                            numpy arrays as values)\n        :param gt_track: the ground truth track (format: dictionary with frame index as keys and\n                        numpy array as values)\n        :param boxformat: the format of the boxes\n        :return: the track IoU\n        \"\"\"\n        intersect = 0\n        union = 0\n        image_ids = set(gt_track.keys()) | set(dt_track.keys())\n        for image in image_ids:\n            g = gt_track.get(image, None)\n            d = dt_track.get(image, None)\n            if boxformat == 'xywh':\n                if d is not None and g is not None:\n                    dx, dy, dw, dh = d\n                    gx, gy, gw, gh = g\n                    w = max(min(dx + dw, gx + gw) - max(dx, gx), 0)\n                    h = max(min(dy + dh, gy + gh) - max(dy, gy), 0)\n                    i = w * h\n                    u = dw * dh + gw * gh - i\n                    intersect += i\n                    union += u\n                elif d is None and g is not None:\n                    union += g[2] * g[3]\n                elif d is not None and g is None:\n                    union += d[2] * d[3]\n            elif boxformat == 'x0y0x1y1':\n                if d is not None and g is not None:\n                    dx0, dy0, dx1, dy1 = d\n                    gx0, gy0, gx1, gy1 = g\n                    w = max(min(dx1, gx1) - max(dx0, gx0), 0)\n                    h = max(min(dy1, gy1) - max(dy0, gy0), 0)\n                    i = w * h\n                    u = (dx1 - dx0) * (dy1 - dy0) + (gx1 - gx0) * (gy1 - gy0) - i\n                    intersect += i\n                    union += u\n                elif d is None and g is not None:\n                    union += (g[2] - g[0]) * (g[3] - g[1])\n                elif d is 
not None and g is None:\n                    union += (d[2] - d[0]) * (d[3] - d[1])\n            else:\n                raise TrackEvalException('BoxFormat not implemented')\n        if intersect > union:\n            raise TrackEvalException(\"Intersection value > union value. Are the box values corrupted?\")\n        return intersect / union if union > 0 else 0\n\n    @staticmethod\n    def _compute_mask_track_iou(dt_track, gt_track):\n        \"\"\"\n        Calculates the track IoU for one detected track and one ground truth track for segmentation masks\n        :param dt_track: the detected track (format: dictionary with frame index as keys and\n                            pycocotools rle encoded masks as values)\n        :param gt_track: the ground truth track (format: dictionary with frame index as keys and\n                            pycocotools rle encoded masks as values)\n        :return: the track IoU\n        \"\"\"\n        # only loaded when needed to reduce minimum requirements\n        from pycocotools import mask as mask_utils\n\n        intersect = .0\n        union = .0\n        image_ids = set(gt_track.keys()) | set(dt_track.keys())\n        for image in image_ids:\n            g = gt_track.get(image, None)\n            d = dt_track.get(image, None)\n            if d and g:\n                intersect += mask_utils.area(mask_utils.merge([d, g], True))\n                union += mask_utils.area(mask_utils.merge([d, g], False))\n            elif not d and g:\n                union += mask_utils.area(g)\n            elif d and not g:\n                union += mask_utils.area(d)\n        if union < 0.0 - np.finfo('float').eps:\n            raise TrackEvalException(\"Union value < 0. Are the segmentations corrupted?\")\n        if intersect > union:\n            raise TrackEvalException(\"Intersection value > union value. Are the segmentations corrupted?\")\n        iou = intersect / union if union > 0.0 + np.finfo('float').eps else 0.0\n        return iou\n\n    @staticmethod\n    def _compute_track_ious(dt, gt, iou_function='bbox', boxformat='xywh'):\n        \"\"\"\n        Calculate track IoUs for a set of ground truth tracks and a set of detected tracks\n        \"\"\"\n\n        if len(gt) == 0 and len(dt) == 0:\n            return []\n\n        if iou_function == 'bbox':\n            track_iou_function = partial(TrackMAP._compute_bb_track_iou, boxformat=boxformat)\n        elif iou_function == 'mask':\n            track_iou_function = partial(TrackMAP._compute_mask_track_iou)\n        else:\n            raise Exception('IoU function not implemented')\n\n        ious = np.zeros([len(dt), len(gt)])\n        for i, j in np.ndindex(ious.shape):\n            ious[i, j] = track_iou_function(dt[i], gt[j])\n        return ious\n\n    @staticmethod\n    def _row_print(*argv):\n        \"\"\"Prints results in evenly spaced rows, with more space in the first row\"\"\"\n        if len(argv) == 1:\n            argv = argv[0]\n        to_print = '%-40s' % argv[0]\n        for v in argv[1:]:\n            to_print += '%-12s' % str(v)\n        print(to_print)\n"
  },
  {
    "path": "eval/trackeval/metrics/vace.py",
    "content": "import numpy as np\nfrom scipy.optimize import linear_sum_assignment\nfrom ._base_metric import _BaseMetric\nfrom .. import _timing\n\n\nclass VACE(_BaseMetric):\n    \"\"\"Class which implements the VACE metrics.\n\n    The metrics are described in:\n    Manohar et al. (2006) \"Performance Evaluation of Object Detection and Tracking in Video\"\n    https://link.springer.com/chapter/10.1007/11612704_16\n\n    This implementation uses the \"relaxed\" variant of the metrics,\n    where an overlap threshold is applied in each frame.\n    \"\"\"\n\n    def __init__(self):\n        super().__init__()\n        self.integer_fields = ['VACE_IDs', 'VACE_GT_IDs', 'num_non_empty_timesteps']\n        self.float_fields = ['STDA', 'ATA', 'FDA', 'SFDA']\n        self.fields = self.integer_fields + self.float_fields\n        self.summary_fields = ['SFDA', 'ATA']\n\n        # Fields that are accumulated over multiple videos.\n        self._additive_fields = self.integer_fields + ['STDA', 'FDA']\n\n        self.threshold = 0.5\n\n    @_timing.time\n    def eval_sequence(self, data):\n        \"\"\"Calculates VACE metrics for one sequence.\n\n        Depends on the fields:\n            data['num_gt_ids']\n            data['num_tracker_ids']\n            data['gt_ids']\n            data['tracker_ids']\n            data['similarity_scores']\n        \"\"\"\n        res = {}\n\n        # Obtain Average Tracking Accuracy (ATA) using track correspondence.\n        # Obtain counts necessary to compute temporal IOU.\n        # Assume that integer counts can be represented exactly as floats.\n        potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids']))\n        gt_id_count = np.zeros(data['num_gt_ids'])\n        tracker_id_count = np.zeros(data['num_tracker_ids'])\n        both_present_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids']))\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n  
          # Count the number of frames in which two tracks satisfy the overlap criterion.\n            matches_mask = np.greater_equal(data['similarity_scores'][t], self.threshold)\n            match_idx_gt, match_idx_tracker = np.nonzero(matches_mask)\n            potential_matches_count[gt_ids_t[match_idx_gt], tracker_ids_t[match_idx_tracker]] += 1\n            # Count the number of frames in which the tracks are present.\n            gt_id_count[gt_ids_t] += 1\n            tracker_id_count[tracker_ids_t] += 1\n            both_present_count[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] += 1\n        # Number of frames in which either track is present (union of the two sets of frames).\n        union_count = (gt_id_count[:, np.newaxis]\n                       + tracker_id_count[np.newaxis, :]\n                       - both_present_count)\n        # The denominator should always be non-zero if all tracks are non-empty.\n        with np.errstate(divide='raise', invalid='raise'):\n            temporal_iou = potential_matches_count / union_count\n        # Find assignment that maximizes temporal IOU.\n        match_rows, match_cols = linear_sum_assignment(-temporal_iou)\n        res['STDA'] = temporal_iou[match_rows, match_cols].sum()\n        res['VACE_IDs'] = data['num_tracker_ids']\n        res['VACE_GT_IDs'] = data['num_gt_ids']\n\n        # Obtain Frame Detection Accuracy (FDA) using per-frame correspondence.\n        non_empty_count = 0\n        fda = 0\n        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):\n            n_g = len(gt_ids_t)\n            n_d = len(tracker_ids_t)\n            if not (n_g or n_d):\n                continue\n            # n_g > 0 or n_d > 0\n            non_empty_count += 1\n            if not (n_g and n_d):\n                continue\n            # n_g > 0 and n_d > 0\n            spatial_overlap = data['similarity_scores'][t]\n            match_rows, match_cols = 
linear_sum_assignment(-spatial_overlap)\n            overlap_ratio = spatial_overlap[match_rows, match_cols].sum()\n            fda += overlap_ratio / (0.5 * (n_g + n_d))\n        res['FDA'] = fda\n        res['num_non_empty_timesteps'] = non_empty_count\n\n        res.update(self._compute_final_fields(res))\n        return res\n\n    def combine_classes_class_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the class values\"\"\"\n        res = {}\n        for field in self.fields:\n            res[field] = np.mean([v[field] for v in all_res.values()\n                                  if v['VACE_GT_IDs'] > 0 or v['VACE_IDs'] > 0], axis=0)\n        return res\n\n    def combine_classes_det_averaged(self, all_res):\n        \"\"\"Combines metrics across all classes by averaging over the detection values\"\"\"\n        res = {}\n        for field in self._additive_fields:\n            res[field] = _BaseMetric._combine_sum(all_res, field)\n        res = self._compute_final_fields(res)\n        return res\n\n    def combine_sequences(self, all_res):\n        \"\"\"Combines metrics across all sequences\"\"\"\n        res = {}\n        for header in self._additive_fields:\n            res[header] = _BaseMetric._combine_sum(all_res, header)\n        res.update(self._compute_final_fields(res))\n        return res\n\n    @staticmethod\n    def _compute_final_fields(additive):\n        final = {}\n        with np.errstate(invalid='ignore'):  # Permit nan results.\n            final['ATA'] = (additive['STDA'] /\n                            (0.5 * (additive['VACE_IDs'] + additive['VACE_GT_IDs'])))\n            final['SFDA'] = additive['FDA'] / additive['num_non_empty_timesteps']\n        return final\n"
  },
  {
    "path": "eval/trackeval/plotting.py",
    "content": "\nimport os\nimport numpy as np\nfrom .utils import TrackEvalException\n\n\ndef plot_compare_trackers(tracker_folder, tracker_list, cls, output_folder, plots_list=None):\n    \"\"\"Create plots which compare metrics across different trackers.\"\"\"\n    # Define what to plot\n    if plots_list is None:\n        plots_list = get_default_plots_list()\n\n    # Load data\n    data = load_multiple_tracker_summaries(tracker_folder, tracker_list, cls)\n    out_loc = os.path.join(output_folder, cls)\n\n    # Plot\n    for args in plots_list:\n        create_comparison_plot(data, out_loc, *args)\n\n\ndef get_default_plots_list():\n    # y_label, x_label, sort_label, bg_label, bg_function\n    plots_list = [\n        ['AssA', 'DetA', 'HOTA', 'HOTA', 'geometric_mean'],\n        ['AssPr', 'AssRe', 'HOTA', 'AssA', 'jaccard'],\n        ['DetPr', 'DetRe', 'HOTA', 'DetA', 'jaccard'],\n        ['HOTA(0)', 'LocA(0)', 'HOTA', 'HOTALocA(0)', 'multiplication'],\n        ['HOTA', 'LocA', 'HOTA', None, None],\n\n        ['HOTA', 'MOTA', 'HOTA', None, None],\n        ['HOTA', 'IDF1', 'HOTA', None, None],\n        ['IDF1', 'MOTA', 'HOTA', None, None],\n    ]\n    return plots_list\n\n\ndef load_multiple_tracker_summaries(tracker_folder, tracker_list, cls):\n    \"\"\"Loads summary data for multiple trackers.\"\"\"\n    data = {}\n    for tracker in tracker_list:\n        with open(os.path.join(tracker_folder, tracker, cls + '_summary.txt')) as f:\n            keys = next(f).split(' ')\n            done = False\n            while not done:\n                values = next(f).split(' ')\n                if len(values) == len(keys):\n                    done = True\n            data[tracker] = dict(zip(keys, map(float, values)))\n    return data\n\n\ndef create_comparison_plot(data, out_loc, y_label, x_label, sort_label, bg_label=None, bg_function=None, settings=None):\n    \"\"\" Creates a scatter plot comparing multiple trackers between two metric fields, with one on the 
x-axis and the\n    other on the y-axis. Adds pareto optimal lines and (optionally) a background contour.\n\n    Inputs:\n        data: dict of dicts such that data[tracker_name][metric_field_name] = float\n        y_label: the metric_field_name to be plotted on the y-axis\n        x_label: the metric_field_name to be plotted on the x-axis\n        sort_label: the metric_field_name by which trackers are ordered and ranked\n        bg_label: the metric_field_name by which (optional) background contours are plotted\n        bg_function: the (optional) function bg_function(x,y) which converts the x_label / y_label values into bg_label.\n        settings: dict of plot settings with keys:\n            'gap_val': gap between axis ticks and bg curves.\n            'num_to_plot': maximum number of trackers to plot\n    \"\"\"\n\n    # Only loaded when run to reduce minimum requirements\n    from matplotlib import pyplot as plt\n\n    # Get plot settings\n    if settings is None:\n        gap_val = 2\n        num_to_plot = 20\n    else:\n        gap_val = settings['gap_val']\n        num_to_plot = settings['num_to_plot']\n\n    if (bg_label is None) != (bg_function is None):\n        raise TrackEvalException('bg_function and bg_label must either be both given or neither given.')\n\n    # Extract data\n    tracker_names = np.array(list(data.keys()))\n    sort_index = np.array([data[t][sort_label] for t in tracker_names]).argsort()[::-1]\n    x_values = np.array([data[t][x_label] for t in tracker_names])[sort_index][:num_to_plot]\n    y_values = np.array([data[t][y_label] for t in tracker_names])[sort_index][:num_to_plot]\n\n    # Print info on what is being plotted\n    tracker_names = tracker_names[sort_index][:num_to_plot]\n    print('\\nPlotting %s vs %s, for the following (ordered) trackers:' % (y_label, x_label))\n    for i, name in enumerate(tracker_names):\n        print('%i: %s' % (i+1, name))\n\n    # Find best fitting boundaries for data\n    boundaries = 
_get_boundaries(x_values, y_values, round_val=gap_val/2)\n\n    fig = plt.figure()\n\n    # Plot background contour\n    if bg_function is not None:\n        _plot_bg_contour(bg_function, boundaries, gap_val)\n\n    # Plot pareto optimal lines\n    _plot_pareto_optimal_lines(x_values, y_values)\n\n    # Plot data points with number labels\n    labels = np.arange(len(y_values)) + 1\n    plt.plot(x_values, y_values, 'b.', markersize=15)\n    for xx, yy, l in zip(x_values, y_values, labels):\n        plt.text(xx, yy, str(l), color=\"red\", fontsize=15)\n\n    # Add extra explanatory text to plots\n    plt.text(0, -0.11, 'label order:\\nHOTA', horizontalalignment='left', verticalalignment='center',\n             transform=fig.axes[0].transAxes, color=\"red\", fontsize=12)\n    if bg_label is not None:\n        plt.text(1, -0.11, 'curve values:\\n' + bg_label, horizontalalignment='right', verticalalignment='center',\n                 transform=fig.axes[0].transAxes, color=\"grey\", fontsize=12)\n\n    plt.xlabel(x_label, fontsize=15)\n    plt.ylabel(y_label, fontsize=15)\n    title = y_label + ' vs ' + x_label\n    if bg_label is not None:\n        title += ' (' + bg_label + ')'\n    plt.title(title, fontsize=17)\n    plt.xticks(np.arange(0, 100, gap_val))\n    plt.yticks(np.arange(0, 100, gap_val))\n    min_x, max_x, min_y, max_y = boundaries\n    plt.xlim(min_x, max_x)\n    plt.ylim(min_y, max_y)\n    plt.gca().set_aspect('equal', adjustable='box')\n    plt.tight_layout()\n\n    os.makedirs(out_loc, exist_ok=True)\n    filename = os.path.join(out_loc, title.replace(' ', '_'))\n    plt.savefig(filename + '.pdf', bbox_inches='tight', pad_inches=0.05)\n    plt.savefig(filename + '.png', bbox_inches='tight', pad_inches=0.05)\n\n\ndef _get_boundaries(x_values, y_values, round_val):\n    x1 = np.min(np.floor((x_values - 0.5) / round_val) * round_val)\n    x2 = np.max(np.ceil((x_values + 0.5) / round_val) * round_val)\n    y1 = np.min(np.floor((y_values - 0.5) / round_val) * 
round_val)\n    y2 = np.max(np.ceil((y_values + 0.5) / round_val) * round_val)\n    x_range = x2 - x1\n    y_range = y2 - y1\n    max_range = max(x_range, y_range)\n    x_center = (x1 + x2) / 2\n    y_center = (y1 + y2) / 2\n    min_x = max(x_center - max_range / 2, 0)\n    max_x = min(x_center + max_range / 2, 100)\n    min_y = max(y_center - max_range / 2, 0)\n    max_y = min(y_center + max_range / 2, 100)\n    return min_x, max_x, min_y, max_y\n\n\ndef geometric_mean(x, y):\n    return np.sqrt(x * y)\n\n\ndef jaccard(x, y):\n    x = x / 100\n    y = y / 100\n    return 100 * (x * y) / (x + y - x * y)\n\n\ndef multiplication(x, y):\n    return x * y / 100\n\n\nbg_function_dict = {\n    \"geometric_mean\": geometric_mean,\n    \"jaccard\": jaccard,\n    \"multiplication\": multiplication,\n    }\n\n\ndef _plot_bg_contour(bg_function, plot_boundaries, gap_val):\n    \"\"\" Plot background contour. \"\"\"\n\n    # Only loaded when run to reduce minimum requirements\n    from matplotlib import pyplot as plt\n\n    # Plot background contour\n    min_x, max_x, min_y, max_y = plot_boundaries\n    x = np.arange(min_x, max_x, 0.1)\n    y = np.arange(min_y, max_y, 0.1)\n    x_grid, y_grid = np.meshgrid(x, y)\n    if bg_function in bg_function_dict.keys():\n        z_grid = bg_function_dict[bg_function](x_grid, y_grid)\n    else:\n        raise TrackEvalException(\"background plotting function '%s' is not defined.\" % bg_function)\n    levels = np.arange(0, 100, gap_val)\n    con = plt.contour(x_grid, y_grid, z_grid, levels, colors='grey')\n\n    def bg_format(val):\n        s = '{:1f}'.format(val)\n        return '{:.0f}'.format(val) if s[-1] == '0' else s\n\n    con.levels = [bg_format(val) for val in con.levels]\n    plt.clabel(con, con.levels, inline=True, fmt='%r', fontsize=8)\n\n\ndef _plot_pareto_optimal_lines(x_values, y_values):\n    \"\"\" Plot pareto optimal lines \"\"\"\n\n    # Only loaded when run to reduce minimum requirements\n    from matplotlib import 
pyplot as plt\n\n    # Plot pareto optimal lines\n    cxs = x_values\n    cys = y_values\n    best_y = np.argmax(cys)\n    x_pareto = [0, cxs[best_y]]\n    y_pareto = [cys[best_y], cys[best_y]]\n    t = 2\n    remaining = cxs > x_pareto[t - 1]\n    cys = cys[remaining]\n    cxs = cxs[remaining]\n    while len(cxs) > 0 and len(cys) > 0:\n        best_y = np.argmax(cys)\n        x_pareto += [x_pareto[t - 1], cxs[best_y]]\n        y_pareto += [cys[best_y], cys[best_y]]\n        t += 2\n        remaining = cxs > x_pareto[t - 1]\n        cys = cys[remaining]\n        cxs = cxs[remaining]\n    x_pareto.append(x_pareto[t - 1])\n    y_pareto.append(0)\n    plt.plot(np.array(x_pareto), np.array(y_pareto), '--r')\n"
  },
  {
    "path": "eval/trackeval/utils.py",
    "content": "\nimport os\nimport csv\nfrom collections import OrderedDict\n\n\ndef init_config(config, default_config, name=None):\n    \"\"\"Initialise non-given config values with defaults\"\"\"\n    if config is None:\n        config = default_config\n    else:\n        for k in default_config.keys():\n            if k not in config.keys():\n                config[k] = default_config[k]\n    if name and config['PRINT_CONFIG']:\n        print('\\n%s Config:' % name)\n        for c in config.keys():\n            print('%-20s : %-30s' % (c, config[c]))\n    return config\n\n\ndef get_code_path():\n    \"\"\"Get base path where code is\"\"\"\n    return os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))\n\n\ndef validate_metrics_list(metrics_list):\n    \"\"\"Gets the names of the metric classes and ensures they are unique; further checks that the fields within each\n    metric class do not have overlapping names.\n    \"\"\"\n    metric_names = [metric.get_name() for metric in metrics_list]\n    # check metric names are unique\n    if len(metric_names) != len(set(metric_names)):\n        raise TrackEvalException('Code being run with multiple metrics of the same name')\n    fields = []\n    for m in metrics_list:\n        fields += m.fields\n    # check metric fields are unique\n    if len(fields) != len(set(fields)):\n        raise TrackEvalException('Code being run with multiple metrics with fields of the same name')\n    return metric_names\n\n\ndef write_summary_results(summaries, cls, output_folder):\n    \"\"\"Write summary results to file\"\"\"\n\n    fields = sum([list(s.keys()) for s in summaries], [])\n    values = sum([list(s.values()) for s in summaries], [])\n\n    # In order to remain consistent when new fields are added, each of the following fields, if present,\n    # will be output in the summary first in the order below. 
Any further fields will be output in the order each\n    # metric family is called, and within each family either in the order they were added to the dict (python >= 3.6) or\n    # randomly (python < 3.6).\n    default_order = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA', 'RHOTA', 'HOTA(0)', 'LocA(0)',\n                     'HOTALocA(0)', 'MOTA', 'MOTP', 'MODA', 'CLR_Re', 'CLR_Pr', 'MTR', 'PTR', 'MLR', 'CLR_TP', 'CLR_FN',\n                     'CLR_FP', 'IDSW', 'MT', 'PT', 'ML', 'Frag', 'sMOTA', 'IDF1', 'IDR', 'IDP', 'IDTP', 'IDFN', 'IDFP',\n                     'Dets', 'GT_Dets', 'IDs', 'GT_IDs']\n    default_ordered_dict = OrderedDict(zip(default_order, [None for _ in default_order]))\n    for f, v in zip(fields, values):\n        default_ordered_dict[f] = v\n    for df in default_order:\n        if default_ordered_dict[df] is None:\n            del default_ordered_dict[df]\n    fields = list(default_ordered_dict.keys())\n    values = list(default_ordered_dict.values())\n\n    out_file = os.path.join(output_folder, cls + '_summary.txt')\n    os.makedirs(os.path.dirname(out_file), exist_ok=True)\n    with open(out_file, 'w', newline='') as f:\n        writer = csv.writer(f, delimiter=' ')\n        writer.writerow(fields)\n        writer.writerow(values)\n\n\ndef write_detailed_results(details, cls, output_folder):\n    \"\"\"Write detailed results to file\"\"\"\n    sequences = details[0].keys()\n    fields = ['seq'] + sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])\n    out_file = os.path.join(output_folder, cls + '_detailed.csv')\n    os.makedirs(os.path.dirname(out_file), exist_ok=True)\n    with open(out_file, 'w', newline='') as f:\n        writer = csv.writer(f)\n        writer.writerow(fields)\n        for seq in sorted(sequences):\n            if seq == 'COMBINED_SEQ':\n                continue\n            writer.writerow([seq] + sum([list(s[seq].values()) for s in details], []))\n        writer.writerow(['COMBINED'] + 
sum([list(s['COMBINED_SEQ'].values()) for s in details], []))\n\n\ndef load_detail(file):\n    \"\"\"Loads detailed data for a tracker.\"\"\"\n    data = {}\n    with open(file) as f:\n        for i, row_text in enumerate(f):\n            row = row_text.replace('\\r', '').replace('\\n', '').split(',')\n            if i == 0:\n                keys = row[1:]\n                continue\n            current_values = row[1:]\n            seq = row[0]\n            if seq == 'COMBINED':\n                seq = 'COMBINED_SEQ'\n            if (len(current_values) == len(keys)) and seq != '':\n                data[seq] = {}\n                for key, value in zip(keys, current_values):\n                    data[seq][key] = float(value)\n    return data\n\n\nclass TrackEvalException(Exception):\n    \"\"\"Custom exception for catching expected errors.\"\"\"\n    ...\n"
  },
  {
    "path": "eval.sh",
    "content": "#!/bin/bash\n###################################################################\n# File Name: eval.sh\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Wed Mar 31 15:51:59 2021\n###################################################################\n\nEXP_NAME=$1\nCFG_PATH=config/${EXP_NAME}.yaml\nSMRY_ROOT=results/summary/${EXP_NAME}\nmkdir -p \"$SMRY_ROOT\"\n#CUDA_VISIBLE_DEVICES=$2 python -u test/test_sot_siamfc.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/sot_siamfc.log\n#CUDA_VISIBLE_DEVICES=$2 python -u test/test_sot_cfnet.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/sot_cfnet.log\nCUDA_VISIBLE_DEVICES=$2 python -u test/test_vos.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/vos.log\n#CUDA_VISIBLE_DEVICES=$2 python -u test/test_mot.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/mot.log\n#CUDA_VISIBLE_DEVICES=$2 python -u test/test_mots.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/mots.log\n#CUDA_VISIBLE_DEVICES=$2 python -u test/test_posetrack.py --config $CFG_PATH 2>&1 | tee results/summary/${EXP_NAME}/posetrack.log\n"
  },
  {
    "path": "model/__init__.py",
    "content": "###################################################################\n# File Name: __init__.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Thu Dec 24 14:24:44 2020\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nfrom .model import *\nfrom .resnet import *\n"
  },
  {
    "path": "model/functional.py",
    "content": "###################################################################\n# File Name: functional.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon Jun 21 21:04:09 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\ndef hard_prop(pred):\n    pred_max = pred.max(axis=0)[0]\n    pred[pred <  pred_max] = 0\n    pred[pred >= pred_max] = 1\n    pred /= pred.sum(0)[None]\n    return pred\n\ndef context_index_bank(n_context, long_mem, N):\n    '''\n    Construct a bank of source frame indices for each target frame\n    '''\n    ll = []   # \"long term\" context (i.e. first frame)\n    for t in long_mem:\n        assert 0 <= t < N, 'context frame out of bounds'\n        idx = torch.zeros(N, 1).long()\n        if t > 0:\n            idx += t + (n_context+1)\n            idx[:n_context+t+1] = 0\n        ll.append(idx)\n    # \"short\" context\n    ss = [(torch.arange(n_context)[None].repeat(N, 1) +  \\\n            torch.arange(N)[:, None])[:, :]]\n    return ll + ss\n\n\ndef mem_efficient_batched_affinity(\n        query, keys, mask, temperature, topk, long_mem, device):\n    '''\n    Mini-batched computation of affinity, for memory efficiency\n    '''\n    bsize, pbsize = 10, 100 #keys.shape[2] // 2\n    Ws, Is = [], []\n\n    for b in range(0, keys.shape[2], bsize):\n        _k, _q = keys[:, :, b:b+bsize].to(device), query[:, :, b:b+bsize].to(device)\n        w_s, i_s = [], []\n\n        for pb in range(0, _k.shape[-1], pbsize):\n            A = torch.einsum('ijklm,ijkn->iklmn', _k, _q[..., pb:pb+pbsize]) \n            A[0, :, len(long_mem):] += mask[..., pb:pb+pbsize].to(device)\n\n            _, N, T, h1w1, hw = A.shape\n            A = A.view(N, T*h1w1, hw)\n            A /= temperature\n\n            weights, 
ids = torch.topk(A, topk, dim=-2)\n            weights = F.softmax(weights, dim=-2)\n            \n            w_s.append(weights.cpu())\n            i_s.append(ids.cpu())\n\n        weights = torch.cat(w_s, dim=-1)\n        ids = torch.cat(i_s, dim=-1)\n        Ws += [w for w in weights]\n        Is += [ii for ii in ids]\n\n    return Ws, Is\n\ndef batched_affinity(query, keys, mask, temperature, topk, long_mem, device):\n    '''\n    Mini-batched computation of affinity, for memory efficiency\n    (less aggressively mini-batched)\n    '''\n    bsize = 2\n    Ws, Is = [], []\n    for b in range(0, keys.shape[2], bsize):\n        _k, _q = keys[:, :, b:b+bsize].to(device), query[:, :, b:b+bsize].to(device)\n        w_s, i_s = [], []\n\n        A = torch.einsum('ijklmn,ijkop->iklmnop', _k, _q)\n        \n        # Mask, then scale by temperature once (as in mem_efficient_batched_affinity)\n        A[0, :, len(long_mem):] += mask.to(device)\n\n        _, N, T, h1w1, hw = A.shape\n        A = A.view(N, T*h1w1, hw)\n        A /= temperature\n\n        weights, ids = torch.topk(A, topk, dim=-2)\n        weights = F.softmax(weights, dim=-2)\n            \n        Ws += [w for w in weights]\n        Is += [ii for ii in ids]\n    \n    return Ws, Is\n\ndef process_pose(pred, lbl_set, topk=3):\n    # generate the coordinates:\n    pred = pred[..., 1:]\n    flatlbls = pred.flatten(0,1)\n    topk = min(flatlbls.shape[0], topk)\n    \n    vals, ids = torch.topk(flatlbls, k=topk, dim=0)\n    vals /= vals.sum(0)[None]\n    xx, yy = ids % pred.shape[1], ids // pred.shape[1]\n\n    current_coord = torch.stack([(xx * vals).sum(0), (yy * vals).sum(0)], dim=0)\n    current_coord[:, flatlbls.sum(0) == 0] = -1\n\n    pred_val_sharp = np.zeros((*pred.shape[:2], 3))\n\n    for t in range(len(lbl_set) - 1):\n        x = int(current_coord[0, t])\n        y = int(current_coord[1, t])\n\n        if x >= 0 and y >= 0:\n            pred_val_sharp[y, x, :] = lbl_set[t + 1]\n\n    return current_coord.cpu(), pred_val_sharp\n\nclass 
MaskedAttention(nn.Module):\n    '''\n    A module that implements masked attention based on spatial locality\n    TODO implement in a more efficient way (torch sparse or correlation filter)\n    '''\n    def __init__(self, radius, flat=True):\n        super(MaskedAttention, self).__init__()\n        self.radius = radius\n        self.flat = flat\n        self.masks = {}\n        self.indices = {}\n\n    def mask(self, H, W):\n        if not ('%s-%s' %(H,W) in self.masks):\n            self.make(H, W)\n        return self.masks['%s-%s' %(H,W)]\n\n    def index(self, H, W):\n        if not ('%s-%s' %(H,W) in self.indices):\n            self.make_index(H, W)\n        return self.indices['%s-%s' %(H,W)]\n\n    def make(self, H, W):\n        if self.flat:\n            H = int(H**0.5)\n            W = int(W**0.5)\n        \n        gx, gy = torch.meshgrid(torch.arange(0, H), torch.arange(0, W))\n        D = ( (gx[None, None, :, :] - gx[:, :, None, None])**2 + (gy[None, None, :, :] - gy[:, :, None, None])**2 ).float() ** 0.5\n        D = (D < self.radius)[None].float()\n\n        if self.flat:\n            D = self.flatten(D)\n        self.masks['%s-%s' %(H,W)] = D\n\n        return D\n\n    def flatten(self, D):\n        return torch.flatten(torch.flatten(D, 1, 2), -2, -1)\n\n    def make_index(self, H, W, pad=False):\n        mask = self.mask(H, W).view(1, -1).bool()\n        idx = torch.arange(0, mask.numel())[mask[0]][None]\n\n        self.indices['%s-%s' %(H,W)] = idx\n\n        return idx\n        \n    def forward(self, x):\n        H, W = x.shape[-2:]\n        sid = '%s-%s' % (H,W)\n        if sid not in self.masks:\n            self.masks[sid] = self.make(H, W).to(x.device)\n        mask = self.masks[sid]\n        return x * mask[0]\n"
  },
  {
    "path": "model/hrnet.py",
    "content": "# ------------------------------------------------------------------------------\n# Copyright (c) Microsoft\n# Licensed under the MIT License.\n# Written by Bin Xiao (Bin.Xiao@microsoft.com)\n# Modified by Ke Sun (sunk@mail.ustc.edu.cn)\n# Modified by Zhongdao Wang(wcd17@mails.tsinghua.edu.cn)\n# ------------------------------------------------------------------------------\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport pdb\nimport logging\nimport functools\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch._utils\nimport torch.nn.functional as F\n\nBN_MOMENTUM = 0.1\nlogger = logging.getLogger(__name__)\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=1, bias=False)\n\n\nclass BasicBlock(nn.Module):\n    expansion = 1\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(BasicBlock, self).__init__()\n        self.conv1 = conv3x3(inplanes, planes, stride)\n        self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)\n        self.relu = nn.ReLU(inplace=True)\n        self.conv2 = conv3x3(planes, planes)\n        self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, 
self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)\n        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1,\n                               bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * self.expansion,\n                               momentum=BN_MOMENTUM)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\n\nclass HighResolutionModule(nn.Module):\n    def __init__(self, num_branches, blocks, num_blocks, num_inchannels,\n                 num_channels, fuse_method, multi_scale_output=True):\n        super(HighResolutionModule, self).__init__()\n        self._check_branches(\n            num_branches, blocks, num_blocks, num_inchannels, num_channels)\n\n        self.num_inchannels = num_inchannels\n        self.fuse_method = fuse_method\n        self.num_branches = num_branches\n\n        self.multi_scale_output = multi_scale_output\n\n        self.branches = self._make_branches(\n            num_branches, blocks, num_blocks, num_channels)\n        self.fuse_layers = self._make_fuse_layers()\n        self.relu = nn.ReLU(False)\n\n    def _check_branches(self, num_branches, blocks, num_blocks,\n                        
num_inchannels, num_channels):\n        if num_branches != len(num_blocks):\n            error_msg = 'NUM_BRANCHES({}) <> NUM_BLOCKS({})'.format(\n                num_branches, len(num_blocks))\n            logger.error(error_msg)\n            raise ValueError(error_msg)\n\n        if num_branches != len(num_channels):\n            error_msg = 'NUM_BRANCHES({}) <> NUM_CHANNELS({})'.format(\n                num_branches, len(num_channels))\n            logger.error(error_msg)\n            raise ValueError(error_msg)\n\n        if num_branches != len(num_inchannels):\n            error_msg = 'NUM_BRANCHES({}) <> NUM_INCHANNELS({})'.format(\n                num_branches, len(num_inchannels))\n            logger.error(error_msg)\n            raise ValueError(error_msg)\n\n    def _make_one_branch(self, branch_index, block, num_blocks, num_channels,\n                         stride=1):\n        downsample = None\n        if stride != 1 or \\\n           self.num_inchannels[branch_index] != num_channels[branch_index] * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.num_inchannels[branch_index],\n                          num_channels[branch_index] * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(num_channels[branch_index] * block.expansion,\n                            momentum=BN_MOMENTUM),\n            )\n\n        layers = []\n        layers.append(block(self.num_inchannels[branch_index],\n                            num_channels[branch_index], stride, downsample))\n        self.num_inchannels[branch_index] = \\\n            num_channels[branch_index] * block.expansion\n        for i in range(1, num_blocks[branch_index]):\n            layers.append(block(self.num_inchannels[branch_index],\n                                num_channels[branch_index]))\n\n        return nn.Sequential(*layers)\n\n    def _make_branches(self, num_branches, block, num_blocks, 
num_channels):\n        branches = []\n\n        for i in range(num_branches):\n            branches.append(\n                self._make_one_branch(i, block, num_blocks, num_channels))\n\n        return nn.ModuleList(branches)\n\n    def _make_fuse_layers(self):\n        if self.num_branches == 1:\n            return None\n\n        num_branches = self.num_branches\n        num_inchannels = self.num_inchannels\n        fuse_layers = []\n        for i in range(num_branches if self.multi_scale_output else 1):\n            fuse_layer = []\n            for j in range(num_branches):\n                if j > i:\n                    fuse_layer.append(nn.Sequential(\n                        nn.Conv2d(num_inchannels[j],\n                                  num_inchannels[i],\n                                  1,\n                                  1,\n                                  0,\n                                  bias=False),\n                        nn.BatchNorm2d(num_inchannels[i], \n                                       momentum=BN_MOMENTUM),\n                        nn.Upsample(scale_factor=2**(j-i), mode='nearest')))\n                elif j == i:\n                    fuse_layer.append(None)\n                else:\n                    conv3x3s = []\n                    for k in range(i-j):\n                        if k == i - j - 1:\n                            num_outchannels_conv3x3 = num_inchannels[i]\n                            conv3x3s.append(nn.Sequential(\n                                nn.Conv2d(num_inchannels[j],\n                                          num_outchannels_conv3x3,\n                                          3, 2, 1, bias=False),\n                                nn.BatchNorm2d(num_outchannels_conv3x3, \n                                            momentum=BN_MOMENTUM)))\n                        else:\n                            num_outchannels_conv3x3 = num_inchannels[j]\n                            conv3x3s.append(nn.Sequential(\n        
                        nn.Conv2d(num_inchannels[j],\n                                          num_outchannels_conv3x3,\n                                          3, 2, 1, bias=False),\n                                nn.BatchNorm2d(num_outchannels_conv3x3,\n                                            momentum=BN_MOMENTUM),\n                                nn.ReLU(False)))\n                    fuse_layer.append(nn.Sequential(*conv3x3s))\n            fuse_layers.append(nn.ModuleList(fuse_layer))\n\n        return nn.ModuleList(fuse_layers)\n\n    def get_num_inchannels(self):\n        return self.num_inchannels\n\n    def forward(self, x):\n        if self.num_branches == 1:\n            return [self.branches[0](x[0])]\n\n        for i in range(self.num_branches):\n            x[i] = self.branches[i](x[i])\n\n        x_fuse = []\n        for i in range(len(self.fuse_layers)):\n            y = x[0] if i == 0 else self.fuse_layers[i][0](x[0])\n            for j in range(1, self.num_branches):\n                if i == j:\n                    y = y + x[j]\n                else:\n                    fused = self.fuse_layers[i][j](x[j])\n                    fh, fw = fused.shape[-2:]\n                    yh, yw = y.shape[-2:]\n                    if fh > yh:\n                        fused = fused[:,:,(fh-yh)//2:-(fh-yh)//2,:]\n                    if fw > yw:\n                        fused = fused[:,:,:,(fw-yw)//2:-(fw-yw)//2]\n                    y = y + fused\n            x_fuse.append(self.relu(y))\n\n        return x_fuse\n\n\nblocks_dict = {\n    'BASIC': BasicBlock,\n    'BOTTLENECK': Bottleneck\n}\n\n\nclass HighResolutionNet(nn.Module):\n\n    def __init__(self, cfg, **kwargs):\n        super(HighResolutionNet, self).__init__()\n        self.cfg = cfg\n\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM)\n        self.conv2 = nn.Conv2d(64, 
64, kernel_size=3, stride=2, padding=1,\n                               bias=False)\n        self.bn2 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM)\n        self.relu = nn.ReLU(inplace=True)\n\n        self.stage1_cfg = cfg['MODEL']['EXTRA']['STAGE1']\n        num_channels = self.stage1_cfg['NUM_CHANNELS'][0]\n        block = blocks_dict[self.stage1_cfg['BLOCK']]\n        num_blocks = self.stage1_cfg['NUM_BLOCKS'][0]\n        self.layer1 = self._make_layer(block, 64, num_channels, num_blocks)\n        stage1_out_channel = block.expansion*num_channels\n\n        self.stage2_cfg = cfg['MODEL']['EXTRA']['STAGE2']\n        num_channels = self.stage2_cfg['NUM_CHANNELS']\n        block = blocks_dict[self.stage2_cfg['BLOCK']]\n        num_channels = [\n            num_channels[i] * block.expansion for i in range(len(num_channels))]\n        self.transition1 = self._make_transition_layer(\n            [stage1_out_channel], num_channels)\n        self.stage2, pre_stage_channels = self._make_stage(\n            self.stage2_cfg, num_channels)\n\n        self.stage3_cfg = cfg['MODEL']['EXTRA']['STAGE3']\n        num_channels = self.stage3_cfg['NUM_CHANNELS']\n        block = blocks_dict[self.stage3_cfg['BLOCK']]\n        num_channels = [\n            num_channels[i] * block.expansion for i in range(len(num_channels))]\n        self.transition2 = self._make_transition_layer(\n            pre_stage_channels, num_channels)\n        self.stage3, pre_stage_channels = self._make_stage(\n            self.stage3_cfg, num_channels)\n\n        self.stage4_cfg = cfg['MODEL']['EXTRA']['STAGE4']\n        num_channels = self.stage4_cfg['NUM_CHANNELS']\n        block = blocks_dict[self.stage4_cfg['BLOCK']]\n        num_channels = [\n            num_channels[i] * block.expansion for i in range(len(num_channels))]\n        self.transition3 = self._make_transition_layer(\n            pre_stage_channels, num_channels)\n        self.stage4, pre_stage_channels = self._make_stage(\n            
self.stage4_cfg, num_channels, multi_scale_output=True)\n\n        # Classification Head\n        self.incre_modules, self.downsamp_modules, \\\n            self.final_layer = self._make_head(pre_stage_channels)\n\n        self.classifier = nn.Linear(2048, 1000)\n\n    def _make_head(self, pre_stage_channels):\n        head_block = Bottleneck\n        head_channels = [32, 64, 128, 256]\n\n        # Increasing the #channels on each resolution \n        # from C, 2C, 4C, 8C to 128, 256, 512, 1024\n        incre_modules = []\n        for i, channels  in enumerate(pre_stage_channels):\n            incre_module = self._make_layer(head_block,\n                                            channels,\n                                            head_channels[i],\n                                            1,\n                                            stride=1)\n            incre_modules.append(incre_module)\n        incre_modules = nn.ModuleList(incre_modules)\n            \n        # downsampling modules\n        downsamp_modules = []\n        for i in range(len(pre_stage_channels)-1):\n            in_channels = head_channels[i] * head_block.expansion\n            out_channels = head_channels[i+1] * head_block.expansion\n\n            downsamp_module = nn.Sequential(\n                nn.Conv2d(in_channels=in_channels,\n                          out_channels=out_channels,\n                          kernel_size=3,\n                          stride=2,\n                          padding=1),\n                nn.BatchNorm2d(out_channels, momentum=BN_MOMENTUM),\n                nn.ReLU(inplace=True)\n            )\n\n            downsamp_modules.append(downsamp_module)\n        downsamp_modules = nn.ModuleList(downsamp_modules)\n\n        final_layer = nn.Sequential(\n            nn.Conv2d(\n                in_channels=head_channels[3] * head_block.expansion,\n                out_channels=2048,\n                kernel_size=1,\n                stride=1,\n                
padding=0\n            ),\n            nn.BatchNorm2d(2048, momentum=BN_MOMENTUM),\n            nn.ReLU(inplace=True)\n        )\n\n        return incre_modules, downsamp_modules, final_layer\n\n    def _make_transition_layer(\n            self, num_channels_pre_layer, num_channels_cur_layer):\n        num_branches_cur = len(num_channels_cur_layer)\n        num_branches_pre = len(num_channels_pre_layer)\n\n        transition_layers = []\n        for i in range(num_branches_cur):\n            if i < num_branches_pre:\n                if num_channels_cur_layer[i] != num_channels_pre_layer[i]:\n                    transition_layers.append(nn.Sequential(\n                        nn.Conv2d(num_channels_pre_layer[i],\n                                  num_channels_cur_layer[i],\n                                  3,\n                                  1,\n                                  1,\n                                  bias=False),\n                        nn.BatchNorm2d(\n                            num_channels_cur_layer[i], momentum=BN_MOMENTUM),\n                        nn.ReLU(inplace=True)))\n                else:\n                    transition_layers.append(None)\n            else:\n                conv3x3s = []\n                for j in range(i+1-num_branches_pre):\n                    inchannels = num_channels_pre_layer[-1]\n                    outchannels = num_channels_cur_layer[i] \\\n                        if j == i-num_branches_pre else inchannels\n                    conv3x3s.append(nn.Sequential(\n                        nn.Conv2d(\n                            inchannels, outchannels, 3, 2, 1, bias=False),\n                        nn.BatchNorm2d(outchannels, momentum=BN_MOMENTUM),\n                        nn.ReLU(inplace=True)))\n                transition_layers.append(nn.Sequential(*conv3x3s))\n\n        return nn.ModuleList(transition_layers)\n\n    def _make_layer(self, block, inplanes, planes, blocks, stride=1):\n        downsample = None\n    
    if stride != 1 or inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM),\n            )\n\n        layers = []\n        layers.append(block(inplanes, planes, stride, downsample))\n        inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def _make_stage(self, layer_config, num_inchannels,\n                    multi_scale_output=True):\n        num_modules = layer_config['NUM_MODULES']\n        num_branches = layer_config['NUM_BRANCHES']\n        num_blocks = layer_config['NUM_BLOCKS']\n        num_channels = layer_config['NUM_CHANNELS']\n        block = blocks_dict[layer_config['BLOCK']]\n        fuse_method = layer_config['FUSE_METHOD']\n\n        modules = []\n        for i in range(num_modules):\n            # multi_scale_output is only used last module\n            if not multi_scale_output and i == num_modules - 1:\n                reset_multi_scale_output = False\n            else:\n                reset_multi_scale_output = True\n\n            modules.append(\n                HighResolutionModule(num_branches,\n                                      block,\n                                      num_blocks,\n                                      num_inchannels,\n                                      num_channels,\n                                      fuse_method,\n                                      reset_multi_scale_output)\n            )\n            num_inchannels = modules[-1].get_num_inchannels()\n\n        return nn.Sequential(*modules), num_inchannels\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.conv2(x)\n        x 
= self.bn2(x)\n        x = self.relu(x)\n        x = self.layer1(x)\n\n        x_list = []\n        for i in range(self.stage2_cfg['NUM_BRANCHES']):\n            if self.transition1[i] is not None:\n                x_list.append(self.transition1[i](x))\n            else:\n                x_list.append(x)\n        y_list = self.stage2(x_list)\n\n        x_list = []\n        for i in range(self.stage3_cfg['NUM_BRANCHES']):\n            if self.transition2[i] is not None:\n                x_list.append(self.transition2[i](y_list[-1]))\n            else:\n                x_list.append(y_list[i])\n        y_list = self.stage3(x_list)\n\n        x_list = []\n        for i in range(self.stage4_cfg['NUM_BRANCHES']):\n            if self.transition3[i] is not None:\n                x_list.append(self.transition3[i](y_list[-1]))\n            else:\n                x_list.append(y_list[i])\n        y_list = self.stage4(x_list)\n\n        # Classification Head\n        y_list_out = {}\n        y_list_out[0] = self.incre_modules[0](y_list[0])\n        for i in range(len(self.downsamp_modules)):\n            y_list_out[i+1] = self.incre_modules[i+1](y_list[i+1]) + \\\n                        self.downsamp_modules[i](y_list_out[i])\n\n        #y = self.final_layer(y)\n\n        ret = y_list_out[self.cfg['MODEL']['RETURN_STAGE']]\n\n        ret_size = y_list_out[1].shape[-2:]\n        ret = F.interpolate(ret, ret_size, mode='bilinear')\n        return ret\n\n    def init_weights(self, pretrained='',):\n        print('=> init weights from normal distribution')\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                nn.init.kaiming_normal_(\n                    m.weight, mode='fan_out', nonlinearity='relu')\n            elif isinstance(m, nn.BatchNorm2d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n        if os.path.isfile(pretrained):\n            pretrained_dict = torch.load(pretrained)\n      
      print('=> loading pretrained model {}'.format(pretrained))\n            model_dict = self.state_dict()\n            pretrained_dict = {k: v for k, v in pretrained_dict.items()\n                               if k in model_dict.keys()}\n            for k, _ in pretrained_dict.items():\n                print(\n                    '=> loading {} pretrained model {}'.format(k, pretrained))\n            model_dict.update(pretrained_dict)\n            self.load_state_dict(model_dict)\n\n\nconfig = {\n'hrnet_w18': {\n    'MODEL':{\n        'EXTRA':{\n            'STAGE1':{\n                'NUM_MODULES':1,\n                'NUM_BRANCHES':1,\n                'BLOCK': 'BOTTLENECK',\n                'NUM_BLOCKS':[4,],\n                'NUM_CHANNELS':[64,],\n                'FUSE_METHOD': 'SUM',\n                },\n            'STAGE2':{\n                'NUM_MODULES':1,\n                'NUM_BRANCHES':2,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,],\n                'NUM_CHANNELS':[18, 36],\n                'FUSE_METHOD': 'SUM',\n                },\n            'STAGE3':{\n                'NUM_MODULES':4,\n                'NUM_BRANCHES':3,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,4],\n                'NUM_CHANNELS':[18, 36, 72],\n                'FUSE_METHOD': 'SUM',\n                },\n            'STAGE4':{\n                'NUM_MODULES':3,\n                'NUM_BRANCHES':4,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,4,4],\n                'NUM_CHANNELS':[18, 36, 72, 144],\n                'FUSE_METHOD': 'SUM',\n                },\n            }\n        } \n    },\n'hrnet_w32': {\n    'MODEL':{\n        'EXTRA':{\n            'STAGE1':{\n                'NUM_MODULES':1,\n                'NUM_BRANCHES':1,\n                'BLOCK': 'BOTTLENECK',\n                'NUM_BLOCKS':[4,],\n                'NUM_CHANNELS':[64,],\n                'FUSE_METHOD': 'SUM',\n                },\n       
     'STAGE2':{\n                'NUM_MODULES':1,\n                'NUM_BRANCHES':2,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,],\n                'NUM_CHANNELS':[32, 64],\n                'FUSE_METHOD': 'SUM',\n                },\n            'STAGE3':{\n                'NUM_MODULES':4,\n                'NUM_BRANCHES':3,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,4],\n                'NUM_CHANNELS':[32, 64, 128],\n                'FUSE_METHOD': 'SUM',\n                },\n            'STAGE4':{\n                'NUM_MODULES':3,\n                'NUM_BRANCHES':4,\n                'BLOCK': 'BASIC',\n                'NUM_BLOCKS':[4,4,4,4],\n                'NUM_CHANNELS':[32, 64, 128, 256],\n                'FUSE_METHOD': 'SUM',\n                },\n            }\n        } \n    }\n}\n\ndef get_cls_net(c, **kwargs):\n    cfg = config[c]\n    cfg['MODEL']['RETURN_STAGE'] = kwargs['return_stage']\n    model = HighResolutionNet(cfg, **kwargs)\n    model.init_weights(pretrained=kwargs['pretrained'])\n    return model\n\nif __name__ == '__main__':\n    net = get_cls_net('hrnet_w18', return_stage=2, pretrained='../weights/hrnetv2_w18_imagenet.pth')\n    pdb.set_trace()\n"
  },
  {
    "path": "model/model.py",
    "content": "import pdb\nimport os.path as osp\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nfrom torchvision import transforms\nimport numpy as np\n\nimport utils\nimport model.resnet as resnet\nimport model.hrnet as hrnet\nimport model.random_feat_generator as random_feat_generator\n\nclass AppearanceModel(nn.Module):\n    def __init__(self, args):\n        super(AppearanceModel, self).__init__()\n        self.args = args\n        \n        self.model = make_encoder(args).to(self.args.device)\n    def forward(self, x):\n        z = self.model(x)\n        return z\n\ndef partial_load(pretrained_dict, model, skip_keys=[], log=False):\n    model_dict = model.state_dict()\n    \n    # 1. filter out unnecessary keys\n    filtered_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict and not any([sk in k for sk in skip_keys])}\n    skipped_keys = [k for k in pretrained_dict if k not in filtered_dict]\n    unload_keys = [k for k in model_dict if k not in pretrained_dict]\n    \n    # 2. overwrite entries in the existing state dict\n    model_dict.update(filtered_dict)\n\n    # 3. 
load the new state dict\n    model.load_state_dict(model_dict)\n\n    if log:\n        print('\\nSkipped keys: ', skipped_keys)\n        print('\\nLoading keys: ', filtered_dict.keys())\n        print('\\nUnLoaded keys: ', unload_keys)\n\ndef load_vince_model(path):\n    checkpoint = torch.load(path, map_location={'cuda:0': 'cpu'})\n    checkpoint = {k.replace('feature_extractor.module.model.', ''): checkpoint[k] for k in checkpoint if 'feature_extractor' in k}\n    return checkpoint\n\n\ndef load_uvc_model(ckpt_path):\n    net = resnet.resnet18()\n    net.avgpool, net.fc = None, None\n\n    ckpt = torch.load(ckpt_path, map_location='cpu')\n    state_dict = {k.replace('module.gray_encoder.', ''):v for k,v in ckpt['state_dict'].items() if 'gray_encoder' in k}\n    partial_load(state_dict, net)\n\n    return net\n\n\ndef load_tc_model(ckpt_path):\n    model_state = torch.load(ckpt_path, map_location='cpu')['state_dict']\n    \n    net = resnet.resnet50()\n    net_state = net.state_dict()\n\n    for k in [k for k in model_state.keys() if 'encoderVideo' in k]:\n        kk = k.replace('module.encoderVideo.', '')\n        tmp = model_state[k]\n        if net_state[kk].shape != model_state[k].shape and net_state[kk].dim() == 4 and model_state[k].dim() == 5:\n            tmp = model_state[k].squeeze(2)\n        net_state[kk][:] = tmp[:]\n        \n    net.load_state_dict(net_state)\n\n    return net\n\nclass From3D(nn.Module):\n    ''' Use a 2D convnet as a 3D convnet '''\n    def __init__(self, resnet):\n        super(From3D, self).__init__()\n        self.model = resnet\n    \n    def forward(self, x):\n        N, C, T, h, w = x.shape\n        xx = x.permute(0, 2, 1, 3, 4).contiguous().view(-1, C, h, w)\n        m = self.model(xx)\n\n        return m.view(N, T, *m.shape[-3:]).permute(0, 2, 1, 3, 4)\n\n\ndef make_encoder(args):\n    SSL_MODELS = ['byol', 'deepcluster-v2', 'infomin', 'insdis', 'moco-v1', 'moco-v2',\n            'pcl-v1', 'pcl-v2','pirl', 'sela-v2', 'swav', 
'simclr-v1', 'simclr-v2',\n            'pixpro', 'detco', 'barlowtwins']\n    model_type = args.model_type\n    if model_type == 'crw':\n        net = resnet.resnet18()\n        if osp.isfile(args.resume):\n            ckpt = torch.load(args.resume)\n            state = {}\n            for k, v in ckpt['model'].items():\n                if 'conv1.1.weight' in k or 'conv2.1.weight' in k:\n                    state[k.replace('.1.weight', '.weight')] = v\n                if 'encoder.model' in k:\n                    state[k.replace('encoder.model.', '')] = v\n                else:\n                    state[k] = v\n            partial_load(state, net, skip_keys=['head',])\n            del ckpt\n    elif model_type == 'random18':\n        net = resnet.resnet18(pretrained=False)\n    elif model_type == 'random50': \n        net = resnet.resnet50(pretrained=False)\n    elif model_type == 'imagenet18':\n        net = resnet.resnet18(pretrained=True)\n    elif model_type == 'imagenet50':\n        net = resnet.resnet50(pretrained=True)\n    elif model_type == 'imagenet101':\n        net = resnet.resnet101(pretrained=True)\n    elif model_type == 'imagenet_resnext50':\n        net = resnet.resnext50_32x4d(pretrained=True)\n    elif model_type == 'imagenet_resnext101':\n        net = resnet.resnext101_32x8d(pretrained=True)\n    elif model_type == 'mocov2':\n        net = resnet.resnet50(pretrained=False)\n        net_ckpt = torch.load(args.resume)\n        net_state = {k.replace('module.encoder_q.', ''):v for k,v in net_ckpt['state_dict'].items() \\\n                if 'module.encoder_q' in k}\n        partial_load(net_state, net)\n    elif model_type == 'ssib':\n        net = resnet.resnet50(pretrained=False)\n        net_ckpt = torch.load(args.resume)\n        net_state = {k.replace('module.encoder.', ''):v for k,v in net_ckpt.items() \\\n                if 'module.encoder' in k}\n        partial_load(net_state, net)\n    elif model_type == 'uvc':\n        net = 
load_uvc_model(args.resume)\n    elif model_type == 'timecycle':\n        net = load_tc_model(args.resume)\n    elif model_type in SSL_MODELS:\n        net = resnet.resnet50(pretrained=False)\n        net_ckpt = torch.load(args.resume)\n        partial_load(net_ckpt, net)\n    elif 'hrnet' in model_type:\n        net = hrnet.get_cls_net(model_type, return_stage=args.return_stage, pretrained=args.resume)\n    elif model_type == 'random':\n        net = random_feat_generator.RandomFeatGenerator(args)\n    else:\n        raise ValueError('Invalid model_type: {}'.format(model_type))\n    if hasattr(net, 'modify'):\n        net.modify(remove_layers=args.remove_layers)\n\n    if 'Conv2d' in str(net) and not args.infer2D:\n        net = From3D(net)\n    return net\n"
  },
  {
    "path": "model/random_feat_generator.py",
    "content": "###################################################################\n# File Name: random_feat_generator.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Mon May 10 16:13:46 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nimport torch\nimport torch.nn as nn\n\nclass RandomFeatGenerator(nn.Module):\n    def __init__(self, args):\n        super(RandomFeatGenerator, self).__init__()\n        self.df = args.down_factor\n        self.dim = args.dim\n        self.dummy = nn.Linear(2,3)\n    def forward(self, x):\n        if len(x.shape) == 4:\n            N,C,H,W = x.shape\n        elif len(x.shape) == 5:\n            N,C,T,H,W = x.shape\n        else:\n            raise ValueError('expected a 4D or 5D input, got {}D'.format(len(x.shape)))\n        c, h, w = self.dim, round(H/self.df), round(W/self.df)\n\n        # Generate features on the same device as the input rather than\n        # assuming CUDA is available.\n        if len(x.shape) == 4:\n            feat = torch.rand(N,c,h,w).to(x.device)\n        elif len(x.shape) == 5:\n            feat = torch.rand(N,c,T,h,w).to(x.device)\n        return feat\n\n    def __str__(self):\n        return ''\n"
  },
  {
    "path": "model/resnet.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\ntry:\n    from torch.hub import load_state_dict_from_url\nexcept ImportError:\n    from torch.utils.model_zoo import load_url as load_state_dict_from_url\n\nimport torchvision.models.resnet as torch_resnet\nfrom torchvision.models.resnet import BasicBlock, Bottleneck\n\nmodel_urls = {'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\n    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\n    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\n    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\n    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\n    'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',\n    'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',\n    'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',\n    'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',\n}\n\nclass ResNet(torch_resnet.ResNet):\n    def __init__(self, *args, **kwargs):\n        super(ResNet, self).__init__(*args, **kwargs)\n\n    def modify(self, remove_layers=[], padding=''):\n        # Set stride of layer3 and layer 4 to 1 (from 2)\n        filter_layers = lambda x: [l for l in x if getattr(self, l) is not None]\n        for layer in filter_layers(['layer3', 'layer4']):\n            for m in getattr(self, layer).modules():\n                if isinstance(m, torch.nn.Conv2d):\n                    m.stride = tuple(1 for _ in m.stride)\n        # Set padding (zeros or reflect, doesn't change much; \n        # zeros requires lower temperature)\n        if padding != '' and padding != 'no':\n            for m in self.modules():\n                if isinstance(m, torch.nn.Conv2d) and sum(m.padding) > 0:\n                    m.padding_mode = 
padding\n        elif padding == 'no':\n            for m in self.modules():\n                if isinstance(m, torch.nn.Conv2d) and sum(m.padding) > 0:\n                    m.padding = (0,0)\n\n        # Remove extraneous layers\n        remove_layers += ['fc', 'avgpool']\n        for layer in filter_layers(remove_layers):\n            setattr(self, layer, None)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = x if self.maxpool is None else self.maxpool(x) \n\n        x = self.layer1(x)\n        x = F.avg_pool2d(x,(2,2)) if self.layer2 is None else self.layer2(x)\n        x = x if self.layer3 is None else self.layer3(x) \n        x = x if self.layer4 is None else self.layer4(x) \n    \n        return x        \n\n\ndef _resnet(arch, block, layers, pretrained, progress, **kwargs):\n    model = ResNet(block, layers, **kwargs)\n    if pretrained:\n        state_dict = load_state_dict_from_url(model_urls[arch],\n                                              progress=progress)\n        model.load_state_dict(state_dict)\n    return model\n\ndef resnet18(pretrained=False, progress=True, **kwargs):\n    return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,\n                   **kwargs)\n\ndef resnet50(pretrained=False, progress=True, **kwargs) -> ResNet:\n    return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,\n                   **kwargs)\n\ndef resnet101(pretrained=False, progress=True, **kwargs): \n    return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,\n                   **kwargs)\n\ndef resnet152(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNet-152 model from\n    `\"Deep Residual Learning for Image Recognition\" <https://arxiv.org/pdf/1512.03385.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download 
to stderr\n    \"\"\"\n    return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,\n                   **kwargs)\n\n\ndef resnext50_32x4d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-50 32x4d model from\n    `\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 4\n    return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef resnext101_32x8d(pretrained=False, progress=True, **kwargs):\n    r\"\"\"ResNeXt-101 32x8d model from\n    `\"Aggregated Residual Transformation for Deep Neural Networks\" <https://arxiv.org/pdf/1611.05431.pdf>`_\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['groups'] = 32\n    kwargs['width_per_group'] = 8\n    return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet50_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-50-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. 
last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],\n                   pretrained, progress, **kwargs)\n\n\ndef wide_resnet101_2(pretrained=False, progress=True, **kwargs):\n    r\"\"\"Wide ResNet-101-2 model from\n    `\"Wide Residual Networks\" <https://arxiv.org/pdf/1605.07146.pdf>`_\n\n    The model is the same as ResNet except for the bottleneck number of channels\n    which is twice larger in every block. The number of channels in outer 1x1\n    convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048\n    channels, and in Wide ResNet-50-2 has 2048-1024-2048.\n\n    Args:\n        pretrained (bool): If True, returns a model pre-trained on ImageNet\n        progress (bool): If True, displays a progress bar of the download to stderr\n    \"\"\"\n    kwargs['width_per_group'] = 64 * 2\n    return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],\n                   pretrained, progress, **kwargs)\n"
  },
  {
    "path": "requirements.txt",
    "content": "scikit-learn==0.22\nopencv-python>=4.0\npyyaml==5.3.1\neasydict==1.9\nimageio==2.6.1\ntqdm==4.42.0\nshapely==1.7.1\nmatplotlib==2.2.2\npycocotools==2.0\nlap==0.4.0\nclick==7.1.2\nmotmetrics==1.2.0\nscikit-image==0.16.2\ncolorama\nloguru\nthop\ntabulate\n"
  },
  {
    "path": "setup.py",
    "content": "###################################################################\n# File Name: setup.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Wed Jul  7 20:23:56 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nimport os\n\nif __name__ == '__main__':\n    os.chdir('./tracker/sot/lib/eval_toolkit/pysot/utils/')\n    os.system('python setup.py build_ext --inplace')\n"
  },
  {
    "path": "test/test_mot.py",
    "content": "import os\r\nimport sys\r\nimport cv2\r\nimport yaml\r\nimport logging\r\nimport argparse\r\nimport os.path as osp\r\n\r\nimport numpy as np\r\n\r\nsys.path[0] = os.getcwd()\r\nfrom utils.log import logger\r\nfrom utils.meter import Timer\r\nimport data.video as videodataset\r\nfrom utils import visualize as vis\r\nfrom utils.io import write_mot_results, mkdir_if_missing\r\nfrom eval import trackeval\r\n\r\nfrom tracker.mot.box import BoxAssociationTracker\r\n\r\n\r\ndef eval_seq(opt, dataloader, result_filename, save_dir=None,\r\n             show_image=True, frame_rate=30):\r\n    if save_dir:\r\n        mkdir_if_missing(save_dir)\r\n    opt.frame_rate = frame_rate\r\n    tracker = BoxAssociationTracker(opt)\r\n    timer = Timer()\r\n    results = []\r\n    for frame_id, (img, obs, img0, _) in enumerate(dataloader):\r\n        if frame_id % 20 == 0:\r\n            logger.info('Processing frame {} ({:.2f} fps)'.format(\r\n                frame_id, 1./max(1e-5, timer.average_time)))\r\n\r\n        # run tracking\r\n        timer.tic()\r\n        online_targets = tracker.update(img, img0, obs)\r\n        online_tlwhs = []\r\n        online_ids = []\r\n        for t in online_targets:\r\n            tlwh = t.tlwh\r\n            tid = t.track_id\r\n            vertical = tlwh[2] / tlwh[3] > 1.6\r\n            if tlwh[2] * tlwh[3] > opt.min_box_area and not vertical:\r\n                online_tlwhs.append(tlwh)\r\n                online_ids.append(tid)\r\n        timer.toc()\r\n        # save results\r\n        results.append((frame_id + 1, online_tlwhs, online_ids))\r\n        if show_image or save_dir is not None:\r\n            online_im = vis.plot_tracking(\r\n                    img0, online_tlwhs, online_ids, frame_id=frame_id,\r\n                    fps=1. 
/ timer.average_time)\r\n        if show_image:\r\n            cv2.imshow('online_im', online_im)\r\n        if save_dir is not None:\r\n            cv2.imwrite(os.path.join(\r\n                save_dir, '{:05d}.jpg'.format(frame_id)), online_im)\r\n    # save results\r\n    write_mot_results(result_filename, results)\r\n    return frame_id, timer.average_time, timer.calls\r\n\r\n\r\ndef main(opt, data_root, seqs=('MOT16-05',), exp_name='demo',\r\n         save_images=False, save_videos=False, show_image=True):\r\n    logger.setLevel(logging.INFO)\r\n    result_root = os.path.join('results/mot', exp_name, 'quantitive')\r\n    mkdir_if_missing(result_root)\r\n\r\n    # run tracking\r\n    n_frame = 0\r\n    timer_avgs, timer_calls = [], []\r\n    for seq in seqs:\r\n        output_dir = os.path.join('results/mot', exp_name, 'qualitative', seq)\\\r\n                    if save_images or save_videos else None\r\n\r\n        logger.info('start seq: {}'.format(seq))\r\n        dataloader = videodataset.LoadImagesAndObs(\r\n                osp.join(data_root, seq, 'img1'), opt)\r\n        result_filename = os.path.join(result_root, '{}.txt'.format(seq))\r\n        meta_info = open(os.path.join(data_root, seq, 'seqinfo.ini')).read()\r\n        frame_rate = int(meta_info[meta_info.find('frameRate')+10:\r\n                                   meta_info.find('\\nseqLength')])\r\n        nf, ta, tc = eval_seq(opt, dataloader, result_filename,\r\n                              save_dir=output_dir, show_image=show_image,\r\n                              frame_rate=frame_rate)\r\n        n_frame += nf\r\n        timer_avgs.append(ta)\r\n        timer_calls.append(tc)\r\n\r\n        # eval\r\n        logger.info('Evaluate seq: {}'.format(seq))\r\n        if save_videos:\r\n            output_video_path = osp.join(output_dir, '{}.mp4'.format(seq))\r\n            cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(\r\n                    output_dir, output_video_path)\r\n  
          os.system(cmd_str)\r\n    timer_avgs = np.asarray(timer_avgs)\r\n    timer_calls = np.asarray(timer_calls)\r\n    all_time = np.dot(timer_avgs, timer_calls)\r\n    avg_time = all_time / np.sum(timer_calls)\r\n    logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(\r\n        all_time, 1.0 / avg_time))\r\n\r\n    eval_config = trackeval.Evaluator.get_default_eval_config()\r\n    dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()\r\n    metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity']}\r\n\r\n    eval_config['LOG_ON_ERROR'] = osp.join(result_root, 'error.log')\r\n    eval_config['PLOT_CURVES'] = False\r\n    dataset_config['GT_FOLDER'] = data_root\r\n    dataset_config['SEQMAP_FOLDER'] = osp.join(data_root, '../../seqmaps')\r\n    dataset_config['SPLIT_TO_EVAL'] = 'train'\r\n    dataset_config['TRACKERS_FOLDER'] = osp.join(result_root, '..')\r\n    dataset_config['TRACKER_SUB_FOLDER'] = ''\r\n    dataset_config['TRACKERS_TO_EVAL'] = ['quantitive']\r\n    dataset_config['BENCHMARK'] = 'MOT16'\r\n\r\n    evaluator = trackeval.Evaluator(eval_config)\r\n    dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]\r\n    metrics_list = []\r\n    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR,\r\n                   trackeval.metrics.Identity, trackeval.metrics.VACE]:\r\n        if metric.get_name() in metrics_config['METRICS']:\r\n            metrics_list.append(metric())\r\n    if len(metrics_list) == 0:\r\n        raise Exception('No metrics selected for evaluation')\r\n    evaluator.evaluate(dataset_list, metrics_list)\r\n\r\n\r\nif __name__ == '__main__':\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    parser.add_argument('--obid', default='FairMOT', type=str)\r\n    opt = parser.parse_args()\r\n    with open(opt.config) as f:\r\n        common_args = yaml.safe_load(f)\r\n    for k, v in 
common_args['common'].items():\r\n        setattr(opt, k, v)\r\n    for k, v in common_args['mot'].items():\r\n        setattr(opt, k, v)\r\n    print(opt, end='\\n\\n')\r\n\r\n    if not opt.test_mot16:\r\n        seqs_str = '''MOT16-02\r\n                      MOT16-04\r\n                      MOT16-05\r\n                      MOT16-09\r\n                      MOT16-10\r\n                      MOT16-11\r\n                      MOT16-13\r\n                    '''\r\n        data_root = '{}/images/train'.format(opt.mot_root)\r\n    else:\r\n        seqs_str = '''MOT16-01\r\n                     MOT16-03\r\n                     MOT16-06\r\n                     MOT16-07\r\n                     MOT16-08\r\n                     MOT16-12\r\n                     MOT16-14'''\r\n        data_root = '{}/images/test'.format(opt.mot_root)\r\n    seqs = [seq.strip() for seq in seqs_str.split()]\r\n\r\n    main(opt,\r\n         data_root=data_root,\r\n         seqs=seqs,\r\n         exp_name=opt.exp_name,\r\n         show_image=False,\r\n         save_images=opt.save_images,\r\n         save_videos=opt.save_videos)\r\n"
  },
  {
    "path": "test/test_mots.py",
    "content": "import os\r\nimport sys\r\nimport pdb\r\nimport cv2\r\nimport yaml\r\nimport logging\r\nimport argparse\r\nimport os.path as osp\r\nimport pycocotools.mask as mask_utils\r\n\r\nimport numpy as np\r\nimport torch\r\nfrom torchvision.transforms import transforms as T\r\n\r\nsys.path[0] = os.getcwd()\r\nfrom utils.log import logger\r\nfrom utils.meter import Timer\r\nimport data.video as videodataset\r\n\r\nfrom eval import trackeval\r\nfrom eval.mots.MOTSVisualization import MOTSVisualizer\r\nfrom utils import visualize as vis\r\nfrom utils.io import write_mots_results, mkdir_if_missing\r\nfrom tracker.mot.mask import MaskAssociationTracker\r\n\r\ndef eval_seq(opt, dataloader, result_filename, save_dir=None, \r\n        show_image=True, frame_rate=30):\r\n    if save_dir:\r\n        mkdir_if_missing(save_dir)\r\n    opt.frame_rate = frame_rate\r\n    tracker = MaskAssociationTracker(opt) \r\n    timer = Timer()\r\n    results = []\r\n    for frame_id, (img, obs, img0, _) in enumerate(dataloader):\r\n        if frame_id % 20 == 0:\r\n            logger.info('Processing frame {} ({:.2f} fps)'.format(\r\n                frame_id, 1./max(1e-5, timer.average_time)))\r\n\r\n        # run tracking\r\n        timer.tic()\r\n        online_targets = tracker.update(img, img0, obs)\r\n        online_tlwhs = []\r\n        online_ids = []\r\n        online_masks = []\r\n        for t in online_targets:\r\n            tlwh = t.tlwh * opt.down_factor\r\n            tid = t.track_id\r\n            mask = t.mask.astype(np.uint8)\r\n            mask = mask_utils.encode(np.asfortranarray(mask))\r\n            mask['counts'] = mask['counts'].decode('ascii')\r\n            online_tlwhs.append(tlwh)\r\n            online_ids.append(tid)\r\n            online_masks.append(mask)\r\n        timer.toc()\r\n        # save results\r\n        results.append((frame_id + 1, online_tlwhs, online_masks, online_ids))\r\n        if  save_dir is not None:\r\n            online_im = 
vis.plot_tracking(img0, online_tlwhs, \r\n                    online_ids, frame_id=frame_id, fps=1. / timer.average_time)\r\n        if save_dir is not None:\r\n            cv2.imwrite(os.path.join(\r\n                save_dir, '{:05d}.jpg'.format(frame_id)), online_im)\r\n\r\n    write_mots_results(result_filename, results)\r\n    return frame_id, timer.average_time, timer.calls\r\n\r\n\r\ndef main(opt, data_root='/data/MOT16/train', det_root=None, seqs=('MOT16-05',), exp_name='demo', \r\n         save_images=False, save_videos=False, show_image=True):\r\n    logger.setLevel(logging.INFO)\r\n    result_root = os.path.join('results/mots', exp_name, 'quantitive')\r\n    mkdir_if_missing(result_root)\r\n\r\n\r\n    # run tracking\r\n    accs = []\r\n    n_frame = 0\r\n    timer_avgs, timer_calls = [], []\r\n    for seq in seqs:\r\n        output_dir = os.path.join('results/mots', exp_name, 'qualitative', seq) if save_images or save_videos else None\r\n        img_dir = osp.join(data_root, seq, 'img1')\r\n        logger.info('start seq: {}'.format(seq))\r\n        dataloader = videodataset.LoadImagesAndMaskObsMOTS(img_dir, opt)\r\n        result_filename = os.path.join(result_root, '{}.txt'.format(seq))\r\n        meta_info = open(os.path.join(data_root, seq, 'seqinfo.ini')).read() \r\n        frame_rate = int(meta_info[meta_info.find('frameRate')+10:meta_info.find('\\nseqLength')])\r\n        nf, ta, tc = eval_seq(opt, dataloader, result_filename,\r\n                save_dir=output_dir, show_image=show_image, frame_rate=frame_rate)\r\n        n_frame += nf\r\n        timer_avgs.append(ta)\r\n        timer_calls.append(tc)\r\n\r\n        if save_videos: \r\n            visualizer = MOTSVisualizer(seq, None, result_filename, output_dir, img_dir)\r\n            visualizer.generateVideo()\r\n\r\n    timer_avgs = np.asarray(timer_avgs)\r\n    timer_calls = np.asarray(timer_calls)\r\n    all_time = np.dot(timer_avgs, timer_calls)\r\n    avg_time = all_time / 
np.sum(timer_calls)\r\n    logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(all_time, 1.0 / avg_time))\r\n\r\n    eval_config = trackeval.Evaluator.get_default_eval_config()\r\n    dataset_config = trackeval.datasets.MOTSChallenge.get_default_dataset_config()\r\n    metrics_config = {'METRICS':['HOTA','CLEAR','Identity']}\r\n\r\n    eval_config['LOG_ON_ERROR'] = osp.join(result_root,'error.log')\r\n    eval_config['PLOT_CURVES'] = False\r\n    dataset_config['GT_FOLDER'] = data_root \r\n    dataset_config['SEQMAP_FOLDER'] = osp.join(data_root, '../../seqmaps')\r\n    dataset_config['SPLIT_TO_EVAL'] = 'train'\r\n    dataset_config['TRACKERS_FOLDER'] = osp.join(result_root, '..') \r\n    dataset_config['TRACKER_SUB_FOLDER'] = '' \r\n    dataset_config['TRACKERS_TO_EVAL'] = ['quantitive'] \r\n    dataset_config['BENCHMARK'] = 'MOTS20'\r\n    dataset_config['SKIP_SPLIT_FOL'] = True\r\n\r\n    evaluator = trackeval.Evaluator(eval_config)\r\n    dataset_list = [trackeval.datasets.MOTSChallenge(dataset_config)]\r\n    metrics_list = []\r\n    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, \r\n            trackeval.metrics.Identity, trackeval.metrics.VACE,\r\n            trackeval.metrics.JAndF]:\r\n        if metric.get_name() in metrics_config['METRICS']:\r\n            metrics_list.append(metric())\r\n    if len(metrics_list) == 0:\r\n        raise Exception('No metrics selected for evaluation')\r\n    evaluator.evaluate(dataset_list, metrics_list)\r\n\r\n\r\nif __name__ == '__main__':\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    opt = parser.parse_args()\r\n    with open(opt.config) as f:\r\n        common_args = yaml.safe_load(f)\r\n    for k, v in common_args['common'].items():\r\n        setattr(opt, k, v)    \r\n    for k, v in common_args['mots'].items():\r\n        setattr(opt, k, v)    \r\n\r\n    print(opt, end='\\n\\n')\r\n\r\n    if not opt.test:\r\n     
   seqs_str = '''MOTS20-02\r\n                      MOTS20-05\r\n                      MOTS20-09\r\n                      MOTS20-11\r\n                    '''\r\n        data_root = '{}/images/train'.format(opt.mots_root)\r\n    else:\r\n        seqs_str = '''MOTS20-01\r\n                      MOTS20-06\r\n                      MOTS20-07\r\n                      MOTS20-12\r\n                    '''\r\n        data_root = '{}/images/test'.format(opt.mots_root)\r\n    seqs = [seq.strip() for seq in seqs_str.split()]\r\n\r\n    main(opt,\r\n         data_root=data_root,\r\n         seqs=seqs,\r\n         exp_name=opt.exp_name,\r\n         show_image=False,\r\n         save_images=opt.save_images, \r\n         save_videos=opt.save_videos)\r\n\r\n"
  },
  {
    "path": "test/test_poseprop.py",
    "content": "from __future__ import print_function\n\nimport os\nimport pdb\nimport sys\nsys.path[0] = os.getcwd()\nimport time\nimport yaml\nimport imageio\nimport argparse\nimport os.path as osp\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.backends.cudnn as cudnn\n\nfrom data import vos, jhmdb\nfrom model import AppearanceModel\nfrom model.functional import *\nimport utils\nfrom utils.visualize import dump_predictions \n\ndef main(args, vis):\n    model = AppearanceModel(args).to(args.device)\n    args.mapScale = [args.down_factor, args.down_factor] \n\n    dataset = jhmdb.JhmdbSet(args)\n    val_loader = torch.utils.data.DataLoader(dataset,\n        batch_size=1, shuffle=False, num_workers=args.workers, pin_memory=True)\n\n    print('Total params: %.2fM' % (sum(p.numel() for p in model.parameters())/1000000.0))\n \n    model.eval()\n    model = model.to(args.device)\n\n    if not os.path.exists(args.save_path):\n        os.makedirs(args.save_path)\n    \n    with torch.no_grad():\n        test_loss = test(val_loader, model, args)\n            \n\ndef test(loader, model, args):\n    n_context = args.videoLen\n    D = None    # Radius mask\n    \n    for vid_idx, (imgs, imgs_orig, lbls, lbls_orig, lbl_map, meta) in enumerate(loader):\n        t_vid = time.time()\n        imgs = imgs.to(args.device)\n        B, N = imgs.shape[:2]\n        assert(B == 1)\n\n        print('******* Vid %s (%s frames) *******' % (vid_idx, N))\n        with torch.no_grad():\n            t00 = time.time()\n\n            ##################################################################\n            # Compute image features (batched for memory efficiency)\n            ##################################################################\n            bsize = 5   # minibatch size for computing features\n            feats = []\n            for b in range(0, imgs.shape[1], bsize):\n                feat = model(imgs[:, b:b+bsize].transpose(1,2).to(args.device))\n    
            feats.append(feat.cpu())\n            feats = torch.cat(feats, dim=2).squeeze(1)\n\n            if not args.no_l2:\n                feats = torch.nn.functional.normalize(feats, dim=1)\n\n            print('computed features', time.time()-t00)\n\n\n            ##################################################################\n            # Compute affinities\n            ##################################################################\n            torch.cuda.empty_cache()\n            t03 = time.time()\n            \n            # Prepare source (keys) and target (query) frame features\n            key_indices = context_index_bank(n_context, args.long_mem, N - n_context)\n            key_indices = torch.cat(key_indices, dim=-1)           \n            keys, query = feats[:, :, key_indices], feats[:, :, n_context:]\n\n            # Make spatial radius mask TODO use torch.sparse\n            restrict = MaskedAttention(args.radius, flat=False)\n            D = restrict.mask(*feats.shape[-2:])[None]\n            D = D.flatten(-4, -3).flatten(-2)\n            D[D==0] = -1e10; D[D==1] = 0\n\n            # Flatten source frame features to make context feature set\n            keys, query = keys.flatten(-2), query.flatten(-2)\n\n            print('computing affinity')\n            Ws, Is = mem_efficient_batched_affinity(query, keys, D, \n                        args.temperature, args.topk, args.long_mem, args.device)\n\n            if torch.cuda.is_available():\n                print(time.time()-t03, 'affinity forward, max mem', torch.cuda.max_memory_allocated() / (1024**2))\n\n            ##################################################################\n            # Propagate Labels and Save Predictions\n            ###################################################################\n\n            maps, keypts = [], []\n            lbls[0, n_context:] *= 0 \n            lbl_map, lbls = lbl_map[0], lbls[0]\n\n            for t in 
range(key_indices.shape[0]):\n                # Soft labels of source nodes\n                ctx_lbls = lbls[key_indices[t]].to(args.device)\n                ctx_lbls = ctx_lbls.flatten(0, 2).transpose(0, 1)\n\n                # Weighted sum of top-k neighbours (Is is index, Ws is weight) \n                pred = (ctx_lbls[:, Is[t]] * Ws[t].to(args.device)[None]).sum(1)\n                pred = pred.view(-1, *feats.shape[-2:])\n                pred = pred.permute(1,2,0)\n                \n                if t > 0:\n                    lbls[t + n_context] = pred\n                else:\n                    pred = lbls[0]\n                    lbls[t + n_context] = pred\n\n                if args.norm_mask:\n                    pred[:, :, :] -= pred.min(-1)[0][:, :, None]\n                    pred[:, :, :] /= pred.max(-1)[0][:, :, None]\n\n                # Save Predictions            \n                cur_img = imgs_orig[0, t + n_context].permute(1,2,0).numpy() * 255\n                _maps = []\n\n                coords, pred_sharp = process_pose(pred, lbl_map)\n                keypts.append(coords)\n                pose_map = utils.visualize.vis_pose(np.array(cur_img).copy(), coords.numpy() * args.mapScale[..., None])\n                _maps += [pose_map]\n                outpath = osp.join(args.save_path, str(vid_idx)+'_'+str(t))\n                heatmap, lblmap, heatmap_prob = dump_predictions(\n                    pred.cpu().numpy(),\n                    lbl_map, cur_img, outpath)\n\n\n                _maps += [heatmap, lblmap, heatmap_prob]\n                maps.append(_maps)\n\n            if len(keypts) > 0:\n                coordpath = os.path.join(args.save_path, str(vid_idx) + '.dat')\n                np.stack(keypts, axis=-1).dump(coordpath)\n            \n            torch.cuda.empty_cache()\n            print('******* Vid %s TOOK %s *******' % (vid_idx, time.time() - t_vid))\n\n\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    
parser.add_argument('--config', default='', required=True, type=str)\n    args = parser.parse_args()\n\n    with open(args.config) as f:\n        common_args = yaml.load(f, Loader=yaml.FullLoader)\n    for k, v in common_args['common'].items():\n        setattr(args, k, v)\n    for k, v in common_args['poseprop'].items():\n        setattr(args, k, v)\n\n    args.imgSize = args.cropSize\n    args.save_path = 'results/poseprop/{}'.format(args.exp_name)\n\n    main(args, None)\n"
  },
  {
    "path": "test/test_posetrack.py",
    "content": "import os\r\nimport pdb\r\nimport os.path as osp\r\nimport sys\r\nsys.path[0] = os.getcwd()\r\nimport cv2\r\nimport copy\r\nimport json\r\nimport yaml\r\nimport logging\r\nimport argparse\r\nfrom tqdm import tqdm\r\nfrom itertools import groupby\r\nimport pycocotools.mask as mask_utils\r\n\r\nimport numpy as np\r\nimport torch\r\nfrom torchvision.transforms import transforms as T\r\n\r\nfrom utils.log import logger\r\nfrom utils.meter import Timer\r\nfrom utils.mask import pts2array\r\nimport data.video as videodataset\r\nfrom utils import visualize as vis\r\nfrom utils.io import mkdir_if_missing\r\nfrom core.association import matching\r\n\r\nfrom tracker.mot.pose import PoseAssociationTracker\r\n\r\ndef identical(a, b):\r\n    if len(a) == len(b):\r\n        arra = pts2array(a)\r\n        arrb = pts2array(b)\r\n        if np.abs(arra-arrb).sum() < 1e-2:\r\n            return True\r\n    return False\r\n\r\ndef fuse_result(res, jpath):\r\n    with open(jpath, 'r') as f:\r\n        obsj = json.load(f)\r\n\r\n    obsj_fused = copy.deepcopy(obsj)\r\n    for t, inpj in enumerate(obsj['annolist']):\r\n        skltns, ids = res[t][2], res[t][3]\r\n        nobj_ori = len(obsj['annolist'][t]['annorect'])\r\n        for i in range(nobj_ori):\r\n            obsj_fused['annolist'][t]['annorect'][i]['track_id'] = [1000]\r\n            for j, skltn in enumerate(skltns):\r\n                match = identical(obsj['annolist'][t]['annorect'][i]['annopoints'][0]['point'], skltn)\r\n                if match:\r\n                    obsj_fused['annolist'][t]['annorect'][i]['track_id'] = [ids[j],]\r\n    return obsj_fused\r\n\r\n\r\ndef eval_seq(opt, dataloader, save_dir=None):\r\n    if save_dir:\r\n        mkdir_if_missing(save_dir)\r\n    tracker = PoseAssociationTracker(opt) \r\n    timer = Timer()\r\n    results = []\r\n    for frame_id, (img, obs, img0, _) in enumerate(dataloader):\r\n        # run tracking\r\n        timer.tic()\r\n        online_targets = 
tracker.update(img, img0, obs)\r\n        online_tlwhs = []\r\n        online_ids = []\r\n        online_poses = []\r\n        for t in online_targets:\r\n            tlwh = t.tlwh \r\n            tid = t.track_id\r\n            online_tlwhs.append(tlwh)\r\n            online_ids.append(tid)\r\n            online_poses.append(t.pose)\r\n        timer.toc()\r\n        # save results\r\n        results.append((frame_id + 1, online_tlwhs, online_poses, online_ids))\r\n        if  save_dir is not None:\r\n            online_im = vis.plot_tracking(img0, online_tlwhs, \r\n                    online_ids, frame_id=frame_id, fps=1. / timer.average_time)\r\n        if save_dir is not None:\r\n            cv2.imwrite(os.path.join(\r\n                save_dir, '{:05d}.jpg'.format(frame_id)), online_im)\r\n    return results, timer.average_time, timer.calls\r\n\r\n\r\ndef main(opt):\r\n    logger.setLevel(logging.INFO)\r\n    result_root = opt.out_root \r\n    result_json_root = osp.join(result_root, 'json')\r\n    mkdir_if_missing(result_json_root)\r\n    transforms= T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n\r\n    obs_root = osp.join(opt.data_root, 'obs', opt.split, opt.obid)\r\n    obs_jpaths = [osp.join(obs_root, o) for o in os.listdir(obs_root)]\r\n    obs_jpaths = sorted([o for o in obs_jpaths if o.endswith('.json')])\r\n\r\n    # run tracking\r\n    accs = []\r\n    timer_avgs, timer_calls = [], []\r\n    for i, obs_jpath in enumerate(obs_jpaths):\r\n        seqname = obs_jpath.split('/')[-1].split('.')[0]\r\n        output_dir = osp.join(result_root, 'frame', seqname)\r\n        dataloader = videodataset.LoadImagesAndPoseObs(obs_jpath, opt)\r\n        seq_res, ta, tc = eval_seq(opt, dataloader, save_dir=output_dir)\r\n        seq_json = fuse_result(seq_res, obs_jpath) \r\n        with open(osp.join(result_json_root, \"{}.json\".format(seqname)), 'w') as f:\r\n            json.dump(seq_json, f)\r\n        timer_avgs.append(ta)\r\n        
timer_calls.append(tc)\r\n\r\n        # eval\r\n        logger.info('Evaluate seq: {}'.format(seqname))\r\n        if opt.save_videos:\r\n            output_video_path = osp.join(output_dir, '{}.mp4'.format(seqname))\r\n            cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(output_dir, output_video_path)\r\n            os.system(cmd_str)\r\n    \r\n    timer_avgs = np.asarray(timer_avgs)\r\n    timer_calls = np.asarray(timer_calls)\r\n    all_time = np.dot(timer_avgs, timer_calls)\r\n    avg_time = all_time / np.sum(timer_calls)\r\n    logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(all_time, 1.0 / avg_time))\r\n\r\n    cmd_str = ('python ./eval/poseval/evaluate.py --groundTruth={}/posetrack_data/annotations/{} '\r\n               '--predictions={}/ --evalPoseTracking'.format(opt.data_root, opt.split, result_json_root))\r\n    os.system(cmd_str)\r\n\r\n\r\nif __name__ == '__main__':\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    opt = parser.parse_args()\r\n    with open(opt.config) as f:\r\n        common_args = yaml.load(f, Loader=yaml.FullLoader)\r\n    for k, v in common_args['common'].items():\r\n        setattr(opt, k, v)\r\n    for k, v in common_args['posetrack'].items():\r\n        setattr(opt, k, v)\r\n    \r\n    opt.out_root = osp.join('results/pose', opt.exp_name)\r\n    opt.out_file = osp.join('results/pose', opt.exp_name + '.json')\r\n    print(opt, end='\\n\\n')\r\n\r\n    main(opt)\r\n"
  },
  {
    "path": "test/test_sot_cfnet.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\r\n# ------------------------------------------------------------------------------\r\n\r\nimport os\r\nimport pdb\r\nimport sys\r\nsys.path[0] = os.getcwd()\r\nimport cv2\r\nimport yaml\r\nimport random\r\nimport argparse\r\nfrom os.path import exists, join\r\nfrom easydict import EasyDict as edict\r\n\r\nimport torch\r\nimport numpy as np\r\n\r\nimport tracker.sot.lib.models as models\r\nfrom tracker.sot.lib.utils.utils import  load_dataset, crop_chw, \\\r\n    gaussian_shaped_labels, cxy_wh_2_rect1, rect1_2_cxy_wh, cxy_wh_2_bbox\r\nfrom tracker.sot.lib.core.eval_otb import eval_auc_tune\r\n\r\nimport utils\r\nfrom model import AppearanceModel, partial_load \r\nfrom data.vos import color_normalize, load_image, im_to_numpy, im_to_torch\r\n\r\n\r\ndef sot_loadimg(path, im_mean, im_std, use_lab=False):\r\n    img = load_image(path)\r\n    if use_lab:\r\n        img = im_to_numpy(img)\r\n        img = (img*255).astype(np.uint8)[:,:,::-1]\r\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n        img = im_to_torch(img) / 255.\r\n    img = color_normalize(img, im_mean, im_std)\r\n    if use_lab:\r\n        img = torch.stack([img[0],]*3)\r\n    img = img.permute(1,2,0).numpy()  # H, W, C\r\n    return img \r\n\r\nclass TrackerConfig(object):\r\n    # These are the default hyper-params for DCFNet\r\n    # OTB2013 / AUC(0.665)\r\n    feature_path = 'param.pth'\r\n    crop_sz = 512 + 8 \r\n    downscale = 8\r\n    temp_sz = crop_sz // downscale\r\n\r\n    lambda0 = 1e-4\r\n    padding = 3.5 \r\n    output_sigma_factor = 0.1\r\n    interp_factor = 0.01\r\n    num_scale =  3\r\n    scale_step = 1.0275\r\n    scale_factor = scale_step ** (np.arange(num_scale) - num_scale // 2)\r\n    min_scale_factor = 0.2\r\n    max_scale_factor = 5 \r\n    
scale_penalty = 0.985  \r\n    scale_penalties = scale_penalty ** (np.abs((np.arange(num_scale) - num_scale // 2)))\r\n\r\n    net_input_size = [crop_sz, crop_sz]\r\n    net_output_size = [temp_sz, temp_sz]\r\n    output_sigma = temp_sz / (1 + padding) * output_sigma_factor\r\n    y = gaussian_shaped_labels(output_sigma, net_output_size)\r\n    yf = torch.rfft(torch.Tensor(y).view(1, 1, temp_sz, temp_sz).cuda(), signal_ndim=2)\r\n    cos_window = torch.Tensor(np.outer(np.hanning(temp_sz), np.hanning(temp_sz))).cuda()\r\n    inner_line = np.zeros(temp_sz)\r\n    inner_line[temp_sz//2 - int(temp_sz//(2*(padding+1))):temp_sz//2 + int(temp_sz//(2*(padding+1)))] = 1\r\n    inner_window = torch.Tensor(np.outer(inner_line, inner_line)).cuda()\r\n\r\n    rcos_window = torch.Tensor(np.outer(1-np.hanning(temp_sz), 1-np.hanning(temp_sz))).cuda()\r\n    srcos = 0.0\r\n\r\n\r\ndef track(net, video, args):\r\n    start_frame, toc = 0, 0\r\n    config = TrackerConfig()\r\n    # save result to evaluate\r\n    if args.exp_name:\r\n        tracker_path = join('results', 'sot', args.arch, args.exp_name)\r\n    else:\r\n        tracker_path = join('results', 'sot', args.arch, 'unknown')\r\n\r\n    if not os.path.exists(tracker_path):\r\n        os.makedirs(tracker_path)\r\n\r\n    if 'VOT' in args.dataset:\r\n        baseline_path = join(tracker_path, 'baseline')\r\n        video_path = join(baseline_path, video['name'])\r\n        if not os.path.exists(video_path):\r\n            os.makedirs(video_path)\r\n        result_path = os.path.join(video_path, video['name'] + '_001.txt')\r\n    else:\r\n        result_path = os.path.join(tracker_path, '{:s}.txt'.format(video['name']))\r\n\r\n    if os.path.exists(result_path):\r\n        return  # for mult-gputesting\r\n\r\n    regions = [] # FINAL RESULTS\r\n    image_files, gt = video['image_files'], video['gt']\r\n    for f, image_file in enumerate(image_files):\r\n        use_lab = getattr(args, 'use_lab', False)\r\n        im = 
sot_loadimg(image_file, args.im_mean, args.im_std, use_lab)\r\n        tic = cv2.getTickCount()\r\n\t### Init\r\n        if f == 0:\r\n            target_pos, target_sz = rect1_2_cxy_wh(gt[0])\r\n            min_sz = np.maximum(config.min_scale_factor * target_sz, 4)\r\n            max_sz = np.minimum(im.shape[:2], config.max_scale_factor * target_sz)\r\n\r\n            # crop template\r\n            window_sz = target_sz * (1 + config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            target = patch \r\n            net.update(torch.Tensor(np.expand_dims(target, axis=0)).cuda(), lr=1)\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))\r\n            patch_crop = np.zeros((config.num_scale, patch.shape[0], patch.shape[1], patch.shape[2]), np.float32)\r\n        ### Track\r\n        else:\r\n            for i in range(config.num_scale):  # crop multi-scale search region\r\n                window_sz = target_sz * (config.scale_factor[i] * (1 + config.padding))\r\n                bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n                patch_crop[i, :] = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            search = patch_crop \r\n            response = net(torch.Tensor(search).cuda())\r\n            peak, idx = torch.max(response.view(config.num_scale, -1), 1)\r\n            peak = peak.data.cpu().numpy() * config.scale_penalties\r\n            best_scale = np.argmax(peak)\r\n            r_max, c_max = np.unravel_index(idx[best_scale].cpu(), config.net_output_size)\r\n\r\n            #if f >20:\r\n            #    pdb.set_trace()\r\n\r\n            if r_max > config.net_output_size[0] / 2:\r\n                r_max = r_max - config.net_output_size[0]\r\n            if c_max > config.net_output_size[1] / 2:\r\n                c_max = c_max - config.net_output_size[1]\r\n            window_sz = target_sz * (config.scale_factor[best_scale] * (1 + 
config.padding))\r\n\r\n            #print(f, target_pos)\r\n            target_pos = target_pos + np.array([c_max, r_max]) * window_sz / config.net_output_size \r\n            target_sz = np.minimum(np.maximum(window_sz / (1 + config.padding), min_sz), max_sz)\r\n\r\n            #print(f, (r_max, c_max), target_pos, window_sz/config.net_output_size)\r\n            # model update\r\n            window_sz = target_sz * (1 + config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n            target = patch \r\n            net.update(torch.Tensor(np.expand_dims(target, axis=0)).cuda(), lr=config.interp_factor)\r\n\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))  # 1-index\r\n\r\n        toc += cv2.getTickCount() - tic\r\n\r\n    with open(result_path, \"w\") as fin:\r\n        if 'VOT' in args.dataset:\r\n            for x in regions:\r\n                if isinstance(x, int):\r\n                    fin.write(\"{:d}\\n\".format(x))\r\n                else:\r\n                    p_bbox = x.copy()\r\n                    fin.write(','.join([str(i) for i in p_bbox]) + '\\n')\r\n        else:\r\n            for x in regions:\r\n                p_bbox = x.copy()\r\n                fin.write(\r\n                    ','.join([str(i + 1) if idx == 0 or idx == 1 else str(i) for idx, i in enumerate(p_bbox)]) + '\\n')\r\n\r\n    toc /= cv2.getTickFrequency()\r\n    print('Video: {:12s} Time: {:2.1f}s Speed: {:3.1f}fps'.format(video['name'], toc, f / toc))\r\n\r\n\r\ndef main():\r\n\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    args = parser.parse_args()\r\n\r\n    with open(args.config) as f:\r\n        common_args = yaml.load(f) \r\n    for k, v in common_args['common'].items():\r\n        setattr(args, k, v)\r\n    for k, v in common_args['sot'].items():\r\n        setattr(args, k, v)\r\n   \r\n    
args.arch = 'CFNet'\r\n\r\n    # prepare model\r\n    base = AppearanceModel(args,).to(args.device) \r\n    print('Total params: %.2fM' % \r\n            (sum(p.numel() for p in base.parameters())/1e6))\r\n    print(base) \r\n    net = models.__dict__[args.arch](base=base, config=TrackerConfig())\r\n    net.eval()\r\n    net = net.cuda()\r\n\r\n    # prepare video\r\n    dataset = load_dataset(args.dataset, args.dataroot)\r\n    video_keys = list(dataset.keys()).copy()\r\n\r\n    # tracking all videos in benchmark\r\n    for video in video_keys:\r\n        track(net, dataset[video], args)\r\n\r\n    eval_cmd = ('python ./tracker/sot/lib/eval_toolkit/bin/eval.py --dataset_dir {} '\r\n                '--tracker_result_dir ./results/sot/{} --trackers {} --dataset OTB2015').format(\r\n                args.dataroot, args.arch, args.exp_name)\r\n    os.system(eval_cmd)\r\n\r\nif __name__ == '__main__':\r\n    main()\r\n\r\n"
  },
  {
    "path": "test/test_sot_siamfc.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\r\n# ------------------------------------------------------------------------------\r\n\r\nimport os\r\nimport pdb\r\nimport sys\r\nsys.path[0] = os.getcwd()\r\nimport cv2\r\nimport yaml\r\nimport random\r\nimport argparse\r\nfrom os.path import exists, join\r\nfrom easydict import EasyDict as edict\r\n\r\nimport torch\r\nimport numpy as np\r\n\r\nimport tracker.sot.lib.models as models\r\nfrom tracker.sot.lib.utils.utils import  load_dataset, crop_chw, \\\r\n    gaussian_shaped_labels, cxy_wh_2_rect1, rect1_2_cxy_wh, cxy_wh_2_bbox\r\nfrom tracker.sot.lib.core.eval_otb import eval_auc_tune\r\n\r\nimport utils\r\nfrom model import AppearanceModel, partial_load\r\nfrom data.vos import color_normalize, load_image, im_to_numpy, im_to_torch\r\n\r\ndef sot_loadimg(path, im_mean, im_std, use_lab=False):\r\n    img = load_image(path)\r\n    if use_lab:\r\n        img = im_to_numpy(img)\r\n        img = (img*255).astype(np.uint8)[:,:,::-1]\r\n        img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)\r\n        img = im_to_torch(img) / 255.\r\n    img = color_normalize(img, im_mean, im_std)\r\n    if use_lab:\r\n        img = torch.stack([img[0],]*3)\r\n    img = img.permute(1,2,0).numpy()  # H, W, C\r\n    return img \r\n\r\nclass TrackerConfig(object):\r\n    crop_sz = 512 + 8 \r\n    downscale = 8\r\n    temp_sz = crop_sz // downscale\r\n\r\n    lambda0 = 1e-4\r\n    padding = 3.5\r\n    interp_factor = 0.01\r\n    num_scale =  3\r\n    scale_step = 1.0275\r\n    scale_factor = scale_step ** (np.arange(num_scale) - num_scale // 2)\r\n    min_scale_factor = 0.2\r\n    max_scale_factor = 5 \r\n    scale_penalty = 0.985  \r\n    scale_penalties = scale_penalty ** (np.abs((np.arange(num_scale) - num_scale // 2)))\r\n\r\n    net_output_size = [temp_sz, temp_sz]\r\n   
 cos_window = torch.Tensor(np.outer(np.hanning(temp_sz), np.hanning(temp_sz))).cuda()\r\n\r\n\r\ndef track(net, video, args):\r\n    start_frame, toc = 0, 0\r\n    config = TrackerConfig()\r\n    # save result to evaluate\r\n    if args.exp_name:\r\n        tracker_path = join('results', 'sot', args.arch, args.exp_name)\r\n    else:\r\n        tracker_path = join('results', 'sot', args.arch, 'unknown')\r\n\r\n    if not os.path.exists(tracker_path):\r\n        os.makedirs(tracker_path)\r\n\r\n    if 'VOT' in args.dataset:\r\n        baseline_path = join(tracker_path, 'baseline')\r\n        video_path = join(baseline_path, video['name'])\r\n        if not os.path.exists(video_path):\r\n            os.makedirs(video_path)\r\n        result_path = os.path.join(video_path, video['name'] + '_001.txt')\r\n    else:\r\n        result_path = os.path.join(tracker_path, '{:s}.txt'.format(video['name']))\r\n\r\n    if os.path.exists(result_path):\r\n        return  # for mult-gputesting\r\n\r\n    regions = [] # FINAL RESULTS\r\n    image_files, gt = video['image_files'], video['gt']\r\n    for f, image_file in enumerate(image_files):\r\n        use_lab = getattr(args, 'use_lab', False)\r\n        im = sot_loadimg(image_file, args.im_mean, args.im_std, use_lab)\r\n        tic = cv2.getTickCount()\r\n\t### Init\r\n        if f == 0:\r\n            target_pos, target_sz = rect1_2_cxy_wh(gt[0])\r\n            min_sz = np.maximum(config.min_scale_factor * target_sz, 4)\r\n            max_sz = np.minimum(im.shape[:2], config.max_scale_factor * target_sz)\r\n\r\n            # crop template\r\n            window_sz = target_sz * (1 + config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            target = patch \r\n            net.update(torch.Tensor(np.expand_dims(target, axis=0)).cuda(), lr=1)\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))\r\n            patch_crop = 
np.zeros((config.num_scale, patch.shape[0], patch.shape[1], patch.shape[2]), np.float32)\r\n        ### Track\r\n        else:\r\n            for i in range(config.num_scale):  # crop multi-scale search region\r\n                window_sz = target_sz * (config.scale_factor[i] * (1 + config.padding))\r\n                bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n                patch_crop[i, :] = crop_chw(im, bbox, config.crop_sz)\r\n\r\n            search = patch_crop \r\n            response = net(torch.Tensor(search).cuda())\r\n            net_output_size = [response.shape[-2], response.shape[-1]]\r\n            peak, idx = torch.max(response.view(config.num_scale, -1), 1)\r\n            peak = peak.data.cpu().numpy() * config.scale_penalties\r\n            best_scale = np.argmax(peak)\r\n            r_max, c_max = np.unravel_index(idx[best_scale].cpu(), net_output_size)\r\n\r\n            r_max = r_max - net_output_size[0] * 0.5\r\n            c_max = c_max - net_output_size[1] * 0.5\r\n            window_sz = target_sz * (config.scale_factor[best_scale] * (1 + config.padding))\r\n\r\n            target_pos = target_pos + np.array([c_max, r_max]) * window_sz / net_output_size \r\n            target_sz = np.minimum(np.maximum(window_sz / (1 + config.padding), min_sz), max_sz)\r\n\r\n            # model update\r\n            window_sz = target_sz * (1 + config.padding)\r\n            bbox = cxy_wh_2_bbox(target_pos, window_sz)\r\n            patch = crop_chw(im, bbox, config.crop_sz)\r\n            target = patch \r\n            #net.update(torch.Tensor(np.expand_dims(target, axis=0)).cuda(), lr=config.interp_factor)\r\n\r\n            regions.append(cxy_wh_2_rect1(target_pos, target_sz))  # 1-index\r\n\r\n        toc += cv2.getTickCount() - tic\r\n\r\n    with open(result_path, \"w\") as fin:\r\n        if 'VOT' in args.dataset:\r\n            for x in regions:\r\n                if isinstance(x, int):\r\n                    
fin.write(\"{:d}\\n\".format(x))\r\n                else:\r\n                    p_bbox = x.copy()\r\n                    fin.write(','.join([str(i) for i in p_bbox]) + '\\n')\r\n        else:\r\n            for x in regions:\r\n                p_bbox = x.copy()\r\n                fin.write(\r\n                    ','.join([str(i + 1) if idx == 0 or idx == 1 else str(i) for idx, i in enumerate(p_bbox)]) + '\\n')\r\n\r\n    toc /= cv2.getTickFrequency()\r\n    print('Video: {:12s} Time: {:2.1f}s Speed: {:3.1f}fps'.format(video['name'], toc, f / toc))\r\n\r\n\r\ndef main():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    args = parser.parse_args()\r\n\r\n    with open(args.config) as f:\r\n        common_args = yaml.load(f, Loader=yaml.FullLoader)\r\n    for k, v in common_args['common'].items():\r\n        setattr(args, k, v)\r\n    for k, v in common_args['sot'].items():\r\n        setattr(args, k, v)\r\n    args.arch = 'SiamFC'\r\n\r\n    # prepare model\r\n    base = AppearanceModel(args).to(args.device) \r\n    print('Total params: %.2fM' % \r\n            (sum(p.numel() for p in base.parameters())/1e6))\r\n    print(base)\r\n    \r\n    net = models.__dict__[args.arch](base=base, config=TrackerConfig())\r\n    net.eval()\r\n    net = net.cuda()\r\n\r\n    # prepare video\r\n    dataset = load_dataset(args.dataset, args.dataroot)\r\n    video_keys = list(dataset.keys()).copy()\r\n\r\n    # tracking all videos in benchmark\r\n    for video in video_keys:\r\n        track(net, dataset[video], args)\r\n\r\n    eval_cmd = ('python ./tracker/sot/lib/eval_toolkit/bin/eval.py --dataset_dir {} '\r\n                '--tracker_result_dir ./results/sot/{} --trackers {} --dataset OTB2015').format(\r\n                args.dataroot, args.arch, args.exp_name)\r\n    os.system(eval_cmd)\r\n\r\nif __name__ == '__main__':\r\n    main()\r\n\r\n"
  },
  {
    "path": "test/test_vis.py",
    "content": "import os\r\nimport pdb\r\nimport sys\r\nimport cv2\r\nimport copy\r\nimport yaml\r\nimport json\r\nimport logging\r\nimport argparse\r\nfrom tqdm import tqdm\r\nimport os.path as osp\r\nfrom itertools import groupby\r\nimport pycocotools.mask as mask_utils\r\n\r\nimport numpy as np\r\nimport torch\r\nfrom torchvision.transforms import transforms as T\r\n\r\nsys.path[0] = os.getcwd()\r\nfrom utils.log import logger\r\nfrom utils.meter import Timer\r\nimport data.video as videodataset\r\nfrom utils import visualize as vis\r\nfrom utils.mask import temp_interp_mask, mask_seq_jac\r\nfrom utils.io import write_mot_results, mkdir_if_missing\r\nfrom core.association import matching\r\nfrom eval import trackeval\r\n\r\nfrom tracker.mot.mask import MaskAssociationTracker\r\n\r\ndef fuse_result(res, obs):\r\n    def blank_rle(size):\r\n        brle = np.asfortranarray(np.zeros(size).astype(np.uint8))\r\n        brle = mask_utils.encode(brle)\r\n        brle['counts'] = brle['counts'].decode('ascii')\r\n        return brle \r\n    size = [o for o in obs[0]['segmentations'] if o is not None][0]['size']\r\n    ret = copy.deepcopy(obs)\r\n    eles = [zip(r[-2], r[-1]) for r in res]\r\n    eles = [(z,t) for t,z in enumerate(eles)]\r\n    eles = [(mask, id_, t) for z,t in eles for (mask, id_) in z]\r\n    idvals = set(map(lambda x:x[1], eles))\r\n    elesbyid = [[(y[0], y[2]) for y in eles if y[1]==x] for x in idvals]\r\n    # mask_seqs: num_objs x seq_len\r\n    mask_seqs = [temp_interp_mask(seq, len(res)) for seq in elesbyid]\r\n    ob_mask_seqs = [o['segmentations'] for o in obs]\r\n    for i, oms in enumerate(ob_mask_seqs):\r\n        for j, it in enumerate(oms):\r\n            if it is None:\r\n                ob_mask_seqs[i][j] = blank_rle(size)\r\n    jac = mask_seq_jac(ob_mask_seqs, mask_seqs)\r\n    #assign_obid = jac.argmax(0)\r\n    matches, u_obs, u_trks = matching.linear_assignment(1-jac, thresh=0.1)\r\n    #pdb.set_trace()\r\n    for i,r in 
matches:\r\n        ret[i]['segmentations'] = mask_seqs[r]\r\n        #ret.append(ret[i])\r\n        #ret[-1]['segmentations'] = mask_seqs[r]\r\n\r\n    return ret\r\n\r\n\r\ndef obs_by_seq(obs):\r\n    ret = dict()\r\n    for j in obs:\r\n        if ret.get(j['video_id'], None):\r\n            ret[j['video_id']].append(j)\r\n        else:\r\n            ret[j['video_id']] = [j,]\r\n    return ret\r\n\r\ndef obs_by_ins(obs):\r\n    ret = list()\r\n    for x in obs:\r\n        ret.extend(obs[x])\r\n    return ret\r\n\r\ndef eval_seq(opt, dataloader, save_dir=None):\r\n    if save_dir:\r\n        mkdir_if_missing(save_dir)\r\n    tracker = MaskAssociationTracker(opt) \r\n    timer = Timer()\r\n    results = []\r\n    for frame_id, (img, obs, img0, _) in enumerate(dataloader):\r\n        # run tracking\r\n        timer.tic()\r\n        online_targets = tracker.update(img, img0, obs)\r\n        online_tlwhs = []\r\n        online_ids = []\r\n        online_masks = []\r\n        for t in online_targets:\r\n            tlwh = t.tlwh * opt.down_factor\r\n            tid = t.track_id\r\n            mask = t.mask.astype(np.uint8)\r\n            mask = mask_utils.encode(np.asfortranarray(mask))\r\n            mask['counts'] = mask['counts'].decode('ascii')\r\n            online_tlwhs.append(tlwh)\r\n            online_ids.append(tid)\r\n            online_masks.append(mask)\r\n        timer.toc()\r\n        # save results\r\n        results.append((frame_id + 1, online_tlwhs, online_masks, online_ids))\r\n        if  save_dir is not None:\r\n            online_im = vis.plot_tracking(img0, online_masks, \r\n                    online_ids, frame_id=frame_id, fps=1. 
/ timer.average_time)\r\n        if save_dir is not None:\r\n            cv2.imwrite(os.path.join(\r\n                save_dir, '{:05d}.jpg'.format(frame_id)), online_im)\r\n    return results, timer.average_time, timer.calls\r\n\r\n\r\ndef main(opt):\r\n    logger.setLevel(logging.INFO)\r\n    result_root = opt.out_root \r\n    mkdir_if_missing(result_root)\r\n    dataroot = osp.join(opt.data_root, opt.split, 'JPEGImages')\r\n    transforms= T.Compose([T.ToTensor(), T.Normalize(opt.im_mean, opt.im_std)])\r\n\r\n    obs_file = osp.join(opt.data_root, 'obs', opt.split, opt.obid+'.json')\r\n    meta_file = osp.join(opt.data_root, 'annotations', 'instances_{}_sub.json'.format(opt.split))\r\n    obs = json.load(open(obs_file))\r\n    meta = json.load(open(meta_file))['videos']\r\n    obs = obs_by_seq(obs)\r\n    resobs = dict()\r\n    assert len(obs) == len(meta)\r\n\r\n    # run tracking\r\n    accs = []\r\n    timer_avgs, timer_calls = [], []\r\n    for i, seqmeta in enumerate(meta):\r\n        seqobs_all = obs[seqmeta['id']]\r\n        seqobs = [s for s in seqobs_all if s['score']>opt.conf_thres]\r\n        if len(seqobs) < 2: \r\n            resobs[seqmeta['id']] = [s for s in seqobs_all]\r\n            continue\r\n        seqname = seqmeta['file_names'][0].split('/')[0]\r\n        output_dir = osp.join(result_root, 'frame', seqname)\r\n        dataloader = videodataset.LoadImagesAndMaskObsVIS(dataroot, seqmeta, seqobs, opt)\r\n        seq_res, ta, tc = eval_seq(opt, dataloader, save_dir=output_dir)\r\n        resobs[seqmeta['id']] = fuse_result(seq_res, seqobs) \r\n        timer_avgs.append(ta)\r\n        timer_calls.append(tc)\r\n\r\n        # eval\r\n        logger.info('Evaluate seq: {}'.format(seqname))\r\n        if opt.save_videos:\r\n            output_video_path = osp.join(output_dir, '{}.mp4'.format(seqname))\r\n            cmd_str = 'ffmpeg -f image2 -i {}/%05d.jpg -c:v copy {}'.format(output_dir, output_video_path)\r\n            
os.system(cmd_str)\r\n\r\n    refined_obs = obs_by_ins(resobs)\r\n    with open(opt.out_file,'w') as f:\r\n        json.dump(refined_obs, f)\r\n    \r\n    timer_avgs = np.asarray(timer_avgs)\r\n    timer_calls = np.asarray(timer_calls)\r\n    all_time = np.dot(timer_avgs, timer_calls)\r\n    avg_time = all_time / np.sum(timer_calls)\r\n    logger.info('Time elapsed: {:.2f} seconds, FPS: {:.2f}'.format(all_time, 1.0 / avg_time))\r\n    \r\n    \r\n    if not opt.split == 'tinytrain':\r\n        return\r\n    eval_config = trackeval.Evaluator.get_default_eval_config()\r\n    dataset_config = trackeval.datasets.YouTubeVIS.get_default_dataset_config()\r\n    metrics_config = {'METRICS':['TrackMAP','HOTA','Identity']}\r\n\r\n    eval_config['LOG_ON_ERROR'] = osp.join(result_root,'error.log')\r\n    eval_config['PRINT_ONLY_COMBINED'] = True\r\n    dataset_config['GT_FOLDER'] = osp.join(dataroot, '../../annotations/')\r\n    dataset_config['SPLIT_TO_EVAL'] = 'tinytrain'\r\n    dataset_config['TRACKERS_FOLDER'] = osp.join(result_root, '..') \r\n    dataset_config['TRACKER_SUB_FOLDER'] = '' \r\n    dataset_config['TRACKERS_TO_EVAL'] = [opt.exp_name, ]\r\n    dataset_config['BENCHMARK'] = 'MOTS20'\r\n    dataset_config['SKIP_SPLIT_FOL'] = True\r\n\r\n    evaluator = trackeval.Evaluator(eval_config)\r\n    dataset_list = [trackeval.datasets.YouTubeVIS(dataset_config)]\r\n    metrics_list = []\r\n    for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, \r\n            trackeval.metrics.Identity, trackeval.metrics.TrackMAP]:\r\n        if metric.get_name() in metrics_config['METRICS']:\r\n            if metric == trackeval.metrics.TrackMAP:\r\n                default_tmap_config = metric.get_default_metric_config()\r\n                default_tmap_config['USE_TIME_RANGES'] = False\r\n                default_tmap_config['AREA_RANGES'] = [[0 ** 2, 128 ** 2],\r\n                                                      [128 ** 2, 256 ** 2],\r\n                               
                       [256 ** 2, 1e5 ** 2]]\r\n                metrics_list.append(metric(default_tmap_config))\r\n            else:\r\n                metrics_list.append(metric())\r\n    if len(metrics_list) == 0:\r\n        raise Exception('No metrics selected for evaluation')\r\n    evaluator.evaluate(dataset_list, metrics_list)\r\n    \r\n\r\nif __name__ == '__main__':\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--config', default='', required=True, type=str)\r\n    opt = parser.parse_args()\r\n    with open(opt.config) as f:\r\n        # yaml.load without an explicit Loader is unsupported in PyYAML >= 6.0\r\n        common_args = yaml.safe_load(f)\r\n    for k, v in common_args['common'].items():\r\n        setattr(opt, k, v)\r\n    for k, v in common_args['vis'].items():\r\n        setattr(opt, k, v)\r\n    opt.out_root = osp.join('results/vis', opt.exp_name)\r\n    opt.out_file = osp.join('results/vis', opt.exp_name+'.json')\r\n    print(opt, end='\\n\\n')\r\n\r\n    main(opt)\r\n"
  },
  {
    "path": "test/test_vos.py",
    "content": "from __future__ import print_function\n\nimport os\nimport sys\nsys.path[0] = os.getcwd()\nimport time\nimport yaml\nimport imageio\nimport argparse\nimport os.path as osp\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.backends.cudnn as cudnn\n\nfrom data import vos, jhmdb\nfrom model import AppearanceModel, partial_load\nfrom model.functional import *\nimport utils\nfrom utils.visualize import dump_predictions\n\n\ndef main(args, vis):\n    model = AppearanceModel(args).to(args.device)\n    print('Total params: %.2fM' %\n          (sum(p.numel() for p in model.parameters())/1000000.0))\n    print(model)\n    args.mapScale = [args.down_factor, args.down_factor]\n\n    model.eval()\n    model = model.to(args.device)\n\n    dataset = vos.VOSDataset(args)\n    val_loader = torch.utils.data.DataLoader(\n        dataset, batch_size=1, shuffle=False,\n        num_workers=args.workers, pin_memory=True)\n\n    if not os.path.exists(args.save_path):\n        os.makedirs(args.save_path)\n\n    with torch.no_grad():\n        test(val_loader, model, args)\n\n    cvt_path = args.save_path.replace(args.exp_name, 'convert_'+args.exp_name)\n\n    # convert to DAVIS format\n    cvt_cmd_str = ('python ./eval/convert_davis.py --in_folder {} '\n                   '--out_folder {} --dataset {}').format(\n                       args.save_path, cvt_path, args.davisroot)\n    eval_cmd_str = ('python {}/evaluation_method.py --task semi-supervised '\n                    '--results_path {} --set val --davis_path {}'.format(\n                        args.evaltool_root, cvt_path, args.davisroot))\n    os.system(cvt_cmd_str)\n    os.system(eval_cmd_str)\n\n\ndef test(loader, model, args):\n    n_context = args.videoLen\n    D = None    # Radius mask\n\n    for vid_idx, (imgs, imgs_orig, lbls, lbls_orig, lbl_map, meta) in enumerate(loader):\n        t_vid = time.time()\n        imgs = imgs.to(args.device)\n        B, N = imgs.shape[:2]\n        assert(B 
== 1)\n\n        print('******* Vid %s (%s frames) *******' % (vid_idx, N))\n        with torch.no_grad():\n            t00 = time.time()\n\n            ##################################################################\n            # Compute image features (batched for memory efficiency)\n            ##################################################################\n            bsize = 5   # minibatch size for computing features\n            feats = []\n            for b in range(0, imgs.shape[1], bsize):\n                feat = model(imgs[:, b:b+bsize].transpose(1, 2).to(args.device))\n                feats.append(feat.cpu())\n            feats = torch.cat(feats, dim=2).squeeze(1)\n\n            if not args.no_l2:\n                feats = torch.nn.functional.normalize(feats, dim=1)\n\n            print('computed features', time.time()-t00)\n\n            ##################################################################\n            # Compute affinities\n            ##################################################################\n            torch.cuda.empty_cache()\n            t03 = time.time()\n\n            # Prepare source (keys) and target (query) frame features\n            key_indices = context_index_bank(n_context, args.long_mem, N - n_context)\n            key_indices = torch.cat(key_indices, dim=-1)           \n            keys, query = feats[:, :, key_indices], feats[:, :, n_context:]\n\n            # Make spatial radius mask TODO use torch.sparse\n            restrict = MaskedAttention(args.radius, flat=False)\n            D = restrict.mask(*feats.shape[-2:])[None]\n            D = D.flatten(-4, -3).flatten(-2)\n            D[D==0] = -1e10; D[D==1] = 0\n\n            # Flatten source frame features to make context feature set\n            keys, query = keys.flatten(-2), query.flatten(-2)\n\n            print('computing affinity')\n            Ws, Is = mem_efficient_batched_affinity(query, keys, D, \n                        args.temperature, 
args.topk, args.long_mem, args.device)\n\n            if torch.cuda.is_available():\n                print(time.time()-t03, 'affinity forward, max mem', torch.cuda.max_memory_allocated() / (1024**2))\n\n            ##################################################################\n            # Propagate Labels and Save Predictions\n            ###################################################################\n\n            maps, keypts = [], []\n            lbls[0, n_context:] *= 0 \n            lbl_map, lbls = lbl_map[0], lbls[0]\n\n            for t in range(key_indices.shape[0]):\n                # Soft labels of source nodes\n                ctx_lbls = lbls[key_indices[t]].to(args.device)\n                ctx_lbls = ctx_lbls.flatten(0, 2).transpose(0, 1)\n\n                # Weighted sum of top-k neighbours (Is is index, Ws is weight) \n                pred = (ctx_lbls[:, Is[t]] * Ws[t].to(args.device)[None]).sum(1)\n                pred = pred.view(-1, *feats.shape[-2:])\n                pred = pred.permute(1,2,0)\n                \n                if t > 0:\n                    lbls[t + n_context] = pred\n                else:\n                    pred = lbls[0]\n                    lbls[t + n_context] = pred\n\n                if args.norm_mask:\n                    pred[:, :, :] -= pred.min(-1)[0][:, :, None]\n                    pred[:, :, :] /= pred.max(-1)[0][:, :, None]\n\n                # Save Predictions            \n                cur_img = imgs_orig[0, t + n_context].permute(1,2,0).numpy() * 255\n                _maps = []\n\n                outpath = os.path.join(args.save_path, str(vid_idx) + '_' + str(t))\n\n                heatmap, lblmap, heatmap_prob = dump_predictions(\n                    pred.cpu().numpy(),\n                    lbl_map, cur_img, outpath)\n\n\n                _maps += [heatmap, lblmap, heatmap_prob]\n                maps.append(_maps)\n\n            if len(keypts) > 0:\n                coordpath = 
os.path.join(args.save_path, str(vid_idx) + '.dat')\n                np.stack(keypts, axis=-1).dump(coordpath)\n            \n            torch.cuda.empty_cache()\n            print('******* Vid %s TOOK %s *******' % (vid_idx, time.time() - t_vid))\n\n\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--config', default='', required=True, type=str)\n    args = parser.parse_args()\n\n    with open(args.config) as f:\n        # yaml.load without an explicit Loader is unsupported in PyYAML >= 6.0\n        common_args = yaml.safe_load(f)\n    for k, v in common_args['common'].items():\n        setattr(args, k, v)\n    for k, v in common_args['vos'].items():\n        setattr(args, k, v)\n\n    args.imgSize = args.cropSize\n    args.save_path = 'results/vos/{}'.format(args.exp_name)\n    args.evaltool_root = osp.join(args.davisroot, 'davis2017-evaluation')\n\n    main(args, None)\n"
  },
  {
    "path": "tools/gen_mot16_fairmot.py",
    "content": "import os.path as osp\nimport os\nimport numpy as np\nimport pdb\n\n# Modify here. \ndet = 'FairMOT'\nsplit = 'test'\nmot16_root = '/home/wangzd/datasets/MOT/MOT16'\ndet_lists_root = '/home/wangzd/datasets/MOT/MOT16/dets/fairmot_det/'\n\nseq_root = osp.join(mot16_root,'images', split)\nlabel_root = osp.join(mot16_root, 'obs', det, split)\nos.makedirs(label_root, exist_ok=True)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    \n    ob_txt = osp.join(det_lists_root, '{}.txt'.format(seq))\n    gt = np.loadtxt(ob_txt, dtype=np.float64, delimiter=',')\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    os.makedirs(seq_label_root, exist_ok=True)\n\n    for fid, x, y, w, h, conf,  in gt:\n        tid = -1 \n        fid = int(fid)\n        tid = int(tid)\n        if not tid == tid_last:\n            tid_curr += 1\n            tid_last = tid\n        x += w / 2\n        y += h / 2\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:.6f} {:.6f} {:.6f} {:.6f} {:.6f}\\n'.format(\n             x / seq_width, y / seq_height, w / seq_width, h / seq_height, conf)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tools/gen_mot16_gt.py",
    "content": "import os.path as osp\nimport os\nimport numpy as np\n\n# Modify here\nmot16_root = '/home/wangzd/datasets/MOT/MOT16'\nseq_root = osp.join(mot16_root,'images', 'train')\n\nlabel_root = osp.join(mot16_root, 'obs', 'gt', 'train')\nos.makedirs(label_root, exist_ok=True)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    gt_txt = osp.join(seq_root, seq, 'gt', 'gt.txt')\n    gt = np.loadtxt(gt_txt, dtype=np.float64, delimiter=',')\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    os.makedirs(seq_label_root, exist_ok=True)\n\n    for fid, tid, x, y, w, h, mark, label, _ in gt:\n        if mark == 0 or not label == 1:\n            continue\n        fid = int(fid)\n        tid = int(tid)\n        if not tid == tid_last:\n            tid_curr += 1\n            tid_last = tid\n        x += w / 2\n        y += h / 2\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:.6f} {:.6f} {:.6f} {:.6f} 1.0\\n'.format(\n             x / seq_width, y / seq_height, w / seq_width, h / seq_height)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tools/gen_mot16_label17.py",
    "content": "import os.path as osp\nimport os\nimport numpy as np\nimport pdb\n\n# Modify here. \n# Note: Since we borrow detection results from MOT17 dataset, \n# we need to place MOT17 dataset under the same folder of MOT16 dataset,\n# e.g. /home/wangzd/datasets/MOT/MOT17 \ndet = 'DPM' # 'DPM'/'FRCNN'/'SDP'\nsplit = 'train' # 'test'/'train'\nmot16_root = '/home/wangzd/datasets/MOT/MOT16'\n\nseq_root = osp.join(mot16_root,'images', split)\nlabel_root = osp.join(mot16_root, 'obs', det, split)\nos.makedirs(label_root, exist_ok=True)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    \n    ob_txt = osp.join(seq_root, seq+'-'+det, 'det', 'det.txt')\n    ob_txt = ob_txt.replace('MOT16', 'MOT17') \n    gt = np.loadtxt(ob_txt, dtype=np.float64, delimiter=',')\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    os.makedirs(seq_label_root, exist_ok=True)\n\n    for z in gt:\n        if det == 'DPM':\n            fid, tid, x, y, w, h, conf, mark, label, _ = z\n        else:\n            fid, tid, x, y, w, h, conf = z\n        fid = int(fid)\n        tid = int(tid)\n        if not tid == tid_last:\n            tid_curr += 1\n            tid_last = tid\n        x += w / 2\n        y += h / 2\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:.6f} {:.6f} {:.6f} {:.6f} {:.6f}\\n'.format(\n             x / seq_width, y / seq_height, w / seq_width, h / seq_height, conf)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tools/gen_mot19_det.py",
    "content": "import os.path as osp\nimport os\nimport numpy as np\nimport pdb\n\n\ndef mkdirs(d):\n    if not osp.exists(d):\n        os.makedirs(d)\n\ndet = 'DET'\nseq_root = '/home/wangzd/datasets/MOT/MOT19/images/train'\nlabel_root = '/home/wangzd/datasets/MOT/MOT19/obs/{}/train'.format(det)\nmkdirs(label_root)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    \n    ob_txt = osp.join(seq_root, seq, 'det', 'det.txt')\n    gt = np.loadtxt(ob_txt, dtype=np.float64, delimiter=',')\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    mkdirs(seq_label_root)\n\n    for fid, tid, x, y, w, h, conf,_,_,_  in gt:\n        fid = int(fid)\n        tid = int(tid)\n        if not tid == tid_last:\n            tid_curr += 1\n            tid_last = tid\n        x += w / 2\n        y += h / 2\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:.6f} {:.6f} {:.6f} {:.6f} {:.6f}\\n'.format(\n             x / seq_width, y / seq_height, w / seq_width, h / seq_height, conf)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tools/gen_mots_costa.py",
    "content": "import pdb\nimport os.path as osp\nimport os\nimport numpy as np\n\n\ndef mkdirs(d):\n    if not osp.exists(d):\n        os.makedirs(d)\n\n\nentry = 'COSTA'\nsplit = 'test'\nseq_root = '/home/wangzd/datasets/GOT/MOTS/images/{}'.format(split)\nlabel_root = '/home/wangzd/datasets/GOT/MOTS/obs/{}/{}'.format(entry, split)\nmkdirs(label_root)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    gt_txt = osp.join(seq_root, '../..', entry, '{}.txt'.format(seq))\n    gt = []\n    with open(gt_txt, 'r') as f:\n        for line in f:\n            gt.append(line.strip().split())\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    mkdirs(seq_label_root)\n    \n    for fid, tid, cid, h, w, m in gt:\n        fid = int(fid)\n        tid = int(tid)\n        cid = int(cid)\n        h, w = int(h), int(w)\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:d} {:d} {:d} {:d} {:d} {} \\n'.format(\n                fid, tid, cid, h, w, m)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tools/gen_mots_gt.py",
    "content": "import pdb\nimport os.path as osp\nimport os\nimport numpy as np\n\n\ndef mkdirs(d):\n    if not osp.exists(d):\n        os.makedirs(d)\n\n\nseq_root = '/home/wangzd/datasets/GOT/MOTS/images/train'\nlabel_root = '/home/wangzd/datasets/GOT/MOTS/obs/gt/train'\nmkdirs(label_root)\nseqs = [s for s in os.listdir(seq_root)]\n\ntid_curr = 0\ntid_last = -1\nfor seq in seqs:\n    seq_info = open(osp.join(seq_root, seq, 'seqinfo.ini')).read()\n    seq_width = int(seq_info[seq_info.find('imWidth=') + 8:seq_info.find('\\nimHeight')])\n    seq_height = int(seq_info[seq_info.find('imHeight=') + 9:seq_info.find('\\nimExt')])\n\n    gt_txt = osp.join(seq_root, seq, 'gt', 'gt.txt')\n    gt = []\n    with open(gt_txt, 'r') as f:\n        for line in f:\n            gt.append(line.strip().split())\n\n    seq_label_root = osp.join(label_root, seq, 'img1')\n    mkdirs(seq_label_root)\n    \n    for fid, tid, cid, h, w, m in gt:\n        fid = int(fid)\n        tid = int(tid)\n        cid = int(cid)\n        h, w = int(h), int(w)\n        label_fpath = osp.join(seq_label_root, '{:06d}.txt'.format(fid))\n        label_str = '{:d} {:d} {:d} {:d} {:d} {} \\n'.format(\n                fid, tid, cid, h, w, m)\n        with open(label_fpath, 'a') as f:\n            f.write(label_str)\n"
  },
  {
    "path": "tracker/mot/basetrack.py",
    "content": "import numpy as np\r\nfrom collections import OrderedDict,deque\r\nfrom core.motion.kalman_filter import KalmanFilter\r\nimport core.association.matching as matching\r\nfrom utils.box import *\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\n\r\nclass TrackState(object):\r\n    New = 0\r\n    Tracked = 1\r\n    Lost = 2\r\n    Removed = 3\r\n\r\n\r\nclass BaseTrack(object):\r\n    _count = 0\r\n\r\n    track_id = 0\r\n    is_activated = False\r\n    state = TrackState.New\r\n\r\n    history = OrderedDict()\r\n    features = []\r\n    curr_feature = None\r\n    score = 0\r\n    start_frame = 0\r\n    frame_id = 0\r\n    time_since_update = 0\r\n\r\n    # multi-camera\r\n    location = (np.inf, np.inf)\r\n\r\n    @property\r\n    def end_frame(self):\r\n        return self.frame_id\r\n\r\n    @staticmethod\r\n    def next_id():\r\n        BaseTrack._count += 1\r\n        return BaseTrack._count\r\n\r\n    def activate(self, *args):\r\n        raise NotImplementedError\r\n\r\n    def predict(self):\r\n        raise NotImplementedError\r\n\r\n    def update(self, *args, **kwargs):\r\n        raise NotImplementedError\r\n\r\n    def mark_lost(self):\r\n        self.state = TrackState.Lost\r\n\r\n    def mark_removed(self):\r\n        self.state = TrackState.Removed\r\n\r\nclass STrack(BaseTrack):\r\n    shared_kalman = KalmanFilter()\r\n\r\n    def __init__(self, tlwh, score, temp_feat, buffer_size=30, \r\n            mask=None, pose=None, ac=False, category=-1, use_kalman=True):\r\n\r\n        # wait activate; np.float was removed in NumPy 1.24, use builtin float\r\n        self._tlwh = np.asarray(tlwh, dtype=float)\r\n        self.kalman_filter = None\r\n        self.mean, self.covariance = None, None\r\n        self.use_kalman = use_kalman\r\n        if not use_kalman: ac=True\r\n        self.is_activated = ac \r\n\r\n        self.score = score\r\n        self.category = category \r\n        self.tracklet_len = 0\r\n\r\n        self.smooth_feat = None\r\n        
self.update_features(temp_feat)\r\n        self.features = deque([], maxlen=buffer_size)\r\n        self.alpha = 0.9\r\n        self.mask = mask\r\n        self.pose = pose\r\n    \r\n    def update_features(self, feat):\r\n        self.curr_feat = feat \r\n        if self.smooth_feat is None:\r\n            self.smooth_feat = feat\r\n        elif self.smooth_feat.shape == feat.shape:\r\n            self.smooth_feat = self.alpha *self.smooth_feat + (1-self.alpha) * feat\r\n        else:\r\n            pass\r\n\r\n\r\n    def predict(self):\r\n        mean_state = self.mean.copy()\r\n        if self.state != TrackState.Tracked:\r\n            mean_state[7] = 0\r\n        self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance)\r\n\r\n    @staticmethod\r\n    def multi_predict(stracks):\r\n        if len(stracks) > 0:\r\n            multi_mean = np.asarray([st.mean.copy() for st in stracks])\r\n            multi_covariance = np.asarray([st.covariance for st in stracks])\r\n            for i,st in enumerate(stracks):\r\n                if st.state != TrackState.Tracked:\r\n                    multi_mean[i][7] = 0\r\n            multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance)\r\n            for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):\r\n                stracks[i].mean = mean\r\n                stracks[i].covariance = cov\r\n\r\n\r\n    def activate(self, kalman_filter, frame_id):\r\n        \"\"\"Start a new tracklet\"\"\"\r\n        self.kalman_filter = kalman_filter\r\n        self.track_id = self.next_id()\r\n        self.mean, self.covariance = self.kalman_filter.initiate(tlwh_to_xyah(self._tlwh))\r\n\r\n        self.tracklet_len = 0\r\n        self.state = TrackState.Tracked\r\n        if frame_id == 1:\r\n            self.is_activated = True\r\n        #self.is_activated = True\r\n        self.frame_id = frame_id\r\n        self.start_frame = frame_id\r\n\r\n    
def re_activate(self, new_track, frame_id, new_id=False, update_feature=True):\r\n        if self.use_kalman:\r\n            self.mean, self.covariance = self.kalman_filter.update(\r\n                self.mean, self.covariance, tlwh_to_xyah(new_track.tlwh)\r\n            )\r\n        else:\r\n            self.mean, self.covariance = None, None\r\n            self._tlwh = np.asarray(new_track.tlwh, dtype=float)\r\n        if update_feature:\r\n            self.update_features(new_track.curr_feat)\r\n        self.tracklet_len = 0\r\n        self.state = TrackState.Tracked\r\n        self.is_activated = True\r\n        self.frame_id = frame_id\r\n        if new_id:\r\n            self.track_id = self.next_id()\r\n        if not new_track.mask is None:\r\n            self.mask = new_track.mask\r\n\r\n    def update(self, new_track, frame_id, update_feature=True):\r\n        \"\"\"\r\n        Update a matched track\r\n        :type new_track: STrack\r\n        :type frame_id: int\r\n        :type update_feature: bool\r\n        :return:\r\n        \"\"\"\r\n        self.frame_id = frame_id\r\n        self.tracklet_len += 1\r\n\r\n        new_tlwh = new_track.tlwh\r\n        if self.use_kalman:\r\n            self.mean, self.covariance = self.kalman_filter.update(\r\n                self.mean, self.covariance, tlwh_to_xyah(new_tlwh))\r\n        else:\r\n            self.mean, self.covariance = None, None\r\n            self._tlwh = np.asarray(new_tlwh, dtype=float)\r\n        self.state = TrackState.Tracked\r\n        self.is_activated = True\r\n\r\n        self.score = new_track.score\r\n        '''\r\n        For TAO dataset \r\n        '''\r\n        self.category = new_track.category\r\n        if update_feature:\r\n            self.update_features(new_track.curr_feat)\r\n        if not new_track.mask is None:\r\n            self.mask = new_track.mask\r\n        if not new_track.pose is None:\r\n            self.pose = new_track.pose\r\n\r\n    @property\r\n    
def tlwh(self):\r\n        \"\"\"Get current position in bounding box format `(top left x, top left y,\r\n                width, height)`.\r\n        \"\"\"\r\n        if self.mean is None:\r\n            return self._tlwh.copy()\r\n        ret = self.mean[:4].copy()\r\n        ret[2] *= ret[3]\r\n        ret[:2] -= ret[2:] / 2\r\n        return ret\r\n\r\n    @property\r\n    def tlbr(self):\r\n        \"\"\"Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,\r\n        `(top left, bottom right)`.\r\n        \"\"\"\r\n        ret = self.tlwh.copy()\r\n        ret[2:] += ret[:2]\r\n        return ret\r\n\r\n\r\n    def to_xyah(self):\r\n        return tlwh_to_xyah(self.tlwh)\r\n    \r\n\r\n    def __repr__(self):\r\n        return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame)\r\n\r\n\r\ndef joint_stracks(tlista, tlistb):\r\n    exists = {}\r\n    res = []\r\n    for t in tlista:\r\n        exists[t.track_id] = 1\r\n        res.append(t)\r\n    for t in tlistb:\r\n        tid = t.track_id\r\n        if not exists.get(tid, 0):\r\n            exists[tid] = 1\r\n            res.append(t)\r\n    return res\r\n\r\ndef sub_stracks(tlista, tlistb):\r\n    stracks = {}\r\n    for t in tlista:\r\n        stracks[t.track_id] = t\r\n    for t in tlistb:\r\n        tid = t.track_id\r\n        if stracks.get(tid, 0):\r\n            del stracks[tid]\r\n    return list(stracks.values())\r\n\r\ndef remove_duplicate_stracks(stracksa, stracksb, ioudist=0.15):\r\n    pdist = matching.iou_distance(stracksa, stracksb)\r\n    pairs = np.where(pdist<ioudist)\r\n    dupa, dupb = list(), list()\r\n    for p,q in zip(*pairs):\r\n        timep = stracksa[p].frame_id - stracksa[p].start_frame\r\n        timeq = stracksb[q].frame_id - stracksb[q].start_frame\r\n        if timep > timeq:\r\n            dupb.append(q)\r\n        else:\r\n            dupa.append(p)\r\n    resa = [t for i,t in enumerate(stracksa) if not i in dupa]\r\n    resb = [t for i,t 
in enumerate(stracksb) if not i in dupb]\r\n    return resa, resb\r\n            \r\n\r\n"
  },
  {
    "path": "tracker/mot/box.py",
    "content": "###################################################################\n# File Name: box.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Fri Jan 29 15:16:53 2021\n###################################################################\n\nimport torch\nfrom torchvision import ops\n\nfrom .basetrack import STrack\nfrom .multitracker import AssociationTracker\nfrom utils.box import scale_box, scale_box_input_size, xywh2xyxy, tlbr_to_tlwh\n\n\nclass BoxAssociationTracker(AssociationTracker):\n    def __init__(self, opt):\n        super(BoxAssociationTracker, self).__init__(opt)\n\n    def extract_emb(self, img, obs):\n        feat = self.app_model(img.unsqueeze(0).to(self.opt.device).float())\n        scale = [feat.shape[-1]/self.opt.img_size[0],\n                 feat.shape[-2]/self.opt.img_size[1]]\n        obs_feat = scale_box(scale, obs).to(self.opt.device)\n        obs_feat = [obs_feat[:, :4], ]\n        ret = ops.roi_align(feat, obs_feat, self.opt.feat_size).detach().cpu()\n        return ret\n\n    def prepare_obs(self, img, img0, obs):\n        if len(obs) > 0:\n            obs = torch.from_numpy(obs[obs[:, 4] > self.opt.conf_thres]).float()\n            obs = xywh2xyxy(obs)\n            obs = scale_box(self.opt.img_size, obs)\n            embs = self.extract_emb(img, obs)\n            obs = scale_box_input_size(self.opt.img_size, obs, img0.shape)\n\n            if obs.shape[1] == 5:\n                detections = [STrack(tlbr_to_tlwh(tlbrs[:4]), tlbrs[4], f,\n                              self.buffer_size, use_kalman=self.opt.use_kalman)\n                              for (tlbrs, f) in zip(obs, embs)]\n            elif obs.shape[1] == 6:\n                detections = [STrack(tlbr_to_tlwh(tlbrs[:4]), tlbrs[4], f,\n                              self.buffer_size, category=tlbrs[5],\n                              use_kalman=self.opt.use_kalman)\n                              for (tlbrs, f) in zip(obs, embs)]\n            
else:\n                raise ValueError(\n                        'Shape of observations should be [n, 5] or [n, 6].')\n        else:\n            detections = []\n        return detections\n"
  },
  {
    "path": "tracker/mot/mask.py",
    "content": "###################################################################\n# File Name: mask.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Fri Jan 29 15:16:53 2021\n###################################################################\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\nfrom utils.box import * \nfrom utils.mask import *\nfrom .basetrack import *\nfrom .multitracker import AssociationTracker\n\nclass MaskAssociationTracker(AssociationTracker):\n    def __init__(self, opt):\n        super(MaskAssociationTracker, self).__init__(opt)\n\n    def extract_emb(self, img, obs):\n        img = img.unsqueeze(0).to(self.opt.device).float()\n        with torch.no_grad():\n            feat = self.app_model(img)\n        _, d, h, w = feat.shape\n        obs = torch.from_numpy(obs).to(self.opt.device).float()\n        obs = F.interpolate(obs.unsqueeze(1), size=(h,w), mode='nearest')\n        template_scale = np.prod(self.opt.feat_size)\n        embs = []\n        for ob in obs:\n            obfeat = ob*feat\n            scale = ob.sum()\n            if scale > 0:\n                if scale > self.opt.max_mask_area:\n                    scale_factor = np.sqrt(self.opt.max_mask_area/scale.item())\n                else:\n                    scale_factor = 1\n                norm_obfeat = F.interpolate(obfeat, scale_factor=scale_factor, mode='bilinear')\n                norm_mask = F.interpolate(ob.unsqueeze(1), scale_factor=scale_factor, mode='nearest')\n                emb = norm_obfeat[:,:, norm_mask.squeeze(0).squeeze(0).ge(0.5)]\n                embs.append(emb.cpu())\n            else: \n                embs.append(torch.randn(d, template_scale))\n        return obs, embs\n\n    def prepare_obs(self, img, img0, obs):\n        ''' Step 1: Network forward, get detections & embeddings'''\n        if obs.shape[0] > 0:\n            masks, embs = self.extract_emb(img, obs)\n            boxes = 
mask2box(masks)\n            keep_idx = remove_duplicated_box(boxes, iou_th=0.7)\n            boxes, masks, obs = boxes[keep_idx], masks[keep_idx], obs[keep_idx]\n            embs = [embs[k] for k in keep_idx]\n            detections = [STrack(tlbr_to_tlwh(tlbrs), 1, f, self.buffer_size, mask, ac=True) \\\n                    for (tlbrs,mask,f) in zip(boxes, obs, embs)]\n        else:\n            detections = []\n        return detections\n\n"
  },
  {
    "path": "tracker/mot/multitracker.py",
    "content": "from collections import deque\r\n\r\nimport torch\r\nfrom torchvision import ops\r\n\r\nfrom model import AppearanceModel\r\nfrom utils.log import logger\r\nfrom core.association import matching\r\nfrom core.propagation import propagate\r\nfrom core.motion.kalman_filter import KalmanFilter\r\n\r\nfrom utils.box import *\r\nfrom utils.mask import *\r\nfrom .basetrack import sub_stracks, joint_stracks, remove_duplicate_stracks, \\\r\n                       STrack, TrackState\r\n\r\n\r\nclass AssociationTracker(object):\r\n    def __init__(self, opt):\r\n        self.opt = opt\r\n        self.tracked_stracks = []\r\n        self.lost_stracks = []\r\n        self.removed_stracks = []\r\n\r\n        self.frame_id = 0\r\n        self.det_thresh = opt.conf_thres\r\n        self.buffer_size = int(opt.frame_rate / 30.0 * opt.track_buffer)\r\n        self.max_time_lost = self.buffer_size\r\n\r\n        self.kalman_filter = KalmanFilter()\r\n\r\n        self.app_model = AppearanceModel(opt).to(opt.device)\r\n        self.app_model.eval()\r\n\r\n        if not self.opt.asso_with_motion:\r\n            self.opt.motion_lambda = 1\r\n            self.opt.motion_gated = False\r\n\r\n    def extract_emb(self, img, obs):\r\n        raise NotImplementedError\r\n\r\n    def prepare_obs(self, img, img0, obs):\r\n        raise NotImplementedError\r\n\r\n    def update(self, img, img0, obs):\r\n        torch.cuda.empty_cache()\r\n        self.frame_id += 1\r\n        activated_stracks = []\r\n        refind_stracks = []\r\n        lost_stracks = []\r\n        removed_stracks = []\r\n\r\n        detections = self.prepare_obs(img, img0, obs)\r\n\r\n        ''' Add newly detected tracklets to tracked_stracks'''\r\n        unconfirmed = []\r\n        tracked_stracks = []  \r\n        for track in self.tracked_stracks:\r\n            if not track.is_activated:\r\n                unconfirmed.append(track)\r\n            else:\r\n                
tracked_stracks.append(track)\r\n\r\n        ''' Step 2: First association, with embedding'''\r\n        tracks = joint_stracks(tracked_stracks, self.lost_stracks)\r\n        dists, recons_ftrk = matching.reconsdot_distance(tracks, detections)\r\n        if self.opt.use_kalman:\r\n            # Predict the current location with KF\r\n            STrack.multi_predict(tracks)\r\n            dists = matching.fuse_motion(self.kalman_filter, dists, tracks, detections,\r\n                                         lambda_=self.opt.motion_lambda,\r\n                                         gate=self.opt.motion_gated)\r\n        if hasattr(obs, 'shape') and len(obs.shape) > 1 and obs.shape[1] == 6:\r\n            dists = matching.category_gate(dists, tracks, detections)\r\n        matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.7)\r\n\r\n        for itracked, idet in matches:\r\n            track = tracks[itracked]\r\n            det = detections[idet]\r\n            if track.state == TrackState.Tracked:\r\n                track.update(detections[idet], self.frame_id)\r\n                activated_stracks.append(track)\r\n            else:\r\n                track.re_activate(det, self.frame_id, new_id=False)\r\n                refind_stracks.append(track)\r\n\r\n        if self.opt.use_kalman:\r\n            '''(optional) Step 3: Second association, with IOU'''\r\n            tracks = [tracks[i] for i in u_track if tracks[i].state == TrackState.Tracked]\r\n            detections = [detections[i] for i in u_detection]\r\n            dists = matching.iou_distance(tracks, detections)\r\n            matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.5)\r\n\r\n            for itracked, idet in matches:\r\n                track = tracks[itracked]\r\n                det = detections[idet]\r\n                if track.state == TrackState.Tracked:\r\n                    track.update(det, self.frame_id)\r\n                    
activated_stracks.append(track)\r\n                else:\r\n                    track.re_activate(det, self.frame_id, new_id=False)\r\n                    refind_stracks.append(track)\r\n\r\n            '''Deal with unconfirmed tracks, usually tracks with only one beginning frame'''\r\n            detections = [detections[i] for i in u_detection]\r\n            dists = matching.iou_distance(unconfirmed, detections)\r\n            matches, u_unconfirmed, u_detection = matching.linear_assignment(\r\n                    dists, thresh=self.opt.confirm_iou_thres)\r\n            for itracked, idet in matches:\r\n                unconfirmed[itracked].update(detections[idet], self.frame_id)\r\n                activated_stracks.append(unconfirmed[itracked])\r\n            for it in u_unconfirmed:\r\n                track = unconfirmed[it]\r\n                track.mark_removed()\r\n                removed_stracks.append(track)\r\n\r\n        for it in u_track:\r\n            track = tracks[it]\r\n            if not track.state == TrackState.Lost:\r\n                track.mark_lost()\r\n                lost_stracks.append(track)\r\n\r\n        \"\"\" Step 4: Init new stracks\"\"\"\r\n        for inew in u_detection:\r\n            track = detections[inew]\r\n            if track.score < self.det_thresh:\r\n                continue\r\n            track.activate(self.kalman_filter, self.frame_id)\r\n            activated_stracks.append(track)\r\n\r\n        \"\"\" Step 5: Update state\"\"\"\r\n        for track in self.lost_stracks:\r\n            if self.frame_id - track.end_frame > self.max_time_lost:\r\n                track.mark_removed()\r\n                removed_stracks.append(track)\r\n\r\n        self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked]\r\n        self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_stracks)\r\n        self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks)\r\n        
self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks)\r\n        self.lost_stracks.extend(lost_stracks)\r\n        self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks)\r\n        self.removed_stracks.extend(removed_stracks)\r\n        self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(\r\n                self.tracked_stracks, self.lost_stracks, ioudist=self.opt.dup_iou_thres)\r\n\r\n        # get scores of lost tracks\r\n        output_stracks = [track for track in self.tracked_stracks if track.is_activated]\r\n        return output_stracks\r\n"
  },
  {
    "path": "tracker/mot/pose.py",
    "content": "###################################################################\n# File Name: pose.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Fri Jan 29 15:16:53 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nimport os\nimport pdb\nimport cv2\nimport time\nimport itertools\nimport os.path as osp\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom torchvision import ops\n\nfrom utils.box import * \nfrom utils.mask import *\nfrom .basetrack import *\nfrom .multitracker import AssociationTracker\n\nclass PoseAssociationTracker(AssociationTracker):\n    def __init__(self, opt):\n        super(PoseAssociationTracker, self).__init__(opt)\n\n    def extract_emb(self, img, obs):\n        img = img.unsqueeze(0).to(self.opt.device).float()\n        with torch.no_grad():\n            feat = self.app_model(img)\n        _, d, h, w = feat.shape\n        obs = torch.from_numpy(obs).to(self.opt.device).float()\n        obs = F.interpolate(obs.unsqueeze(1), size=(h,w), mode='nearest')\n        template_scale = np.prod(self.opt.feat_size)\n        embs = []\n        for ob in obs:\n            obfeat = ob*feat\n            scale = ob.sum()\n            if scale > 0:\n                if scale > self.opt.max_mask_area:\n                    scale_factor = np.sqrt(self.opt.max_mask_area/scale.item())\n                else:\n                    scale_factor = 1\n                norm_obfeat = F.interpolate(obfeat, scale_factor=scale_factor, mode='bilinear')\n                norm_mask = F.interpolate(ob.unsqueeze(1), scale_factor=scale_factor, mode='nearest')\n                emb = norm_obfeat[:,:, norm_mask.squeeze(0).squeeze(0).ge(0.5)]\n                embs.append(emb.cpu())\n            else: \n                embs.append(torch.randn(1, d, template_scale).cpu())\n        return 
obs.cpu(), embs\n\n    def prepare_obs(self, img, img0, obs):\n        _, h, w = img.shape\n        ''' Step 1: Network forward, get detections & embeddings'''\n        if len(obs) > 0:\n            masks = list()\n            for ob in obs:\n                mask = skltn2mask(ob, (h,w))\n                masks.append(mask)\n            masks = np.stack(masks)\n            masks, embs = self.extract_emb(img, masks)\n            boxes = [skltn2box(ob) for ob in obs]\n            assert len(obs)==len(boxes)\n            detections = [STrack(tlbr_to_tlwh(tlbrs), 1, f, self.buffer_size, mask, pose, ac=True) \\\n                    for (tlbrs,mask,pose,f) in zip(boxes,masks,obs,embs)]\n        else:\n            detections = []\n        return detections\n\n"
  },
  {
    "path": "tracker/sot/lib/core/config.py",
    "content": "import os\nimport yaml\nfrom easydict import EasyDict as edict\n\nconfig = edict()\n\n# ------config for general parameters------\nconfig.GPUS = \"0,1,2,3\"\nconfig.WORKERS = 32\nconfig.PRINT_FREQ = 10\nconfig.OUTPUT_DIR = 'logs'\nconfig.CHECKPOINT_DIR = 'snapshot'\n\nconfig.OCEAN = edict()\nconfig.OCEAN.TRAIN = edict()\nconfig.OCEAN.TEST = edict()\nconfig.OCEAN.TUNE = edict()\nconfig.OCEAN.DATASET = edict()\nconfig.OCEAN.DATASET.VID = edict()\nconfig.OCEAN.DATASET.GOT10K = edict()\nconfig.OCEAN.DATASET.COCO = edict()\nconfig.OCEAN.DATASET.DET = edict()\nconfig.OCEAN.DATASET.LASOT = edict()\nconfig.OCEAN.DATASET.YTB = edict()\nconfig.OCEAN.DATASET.VISDRONE = edict()\n\n# augmentation\nconfig.OCEAN.DATASET.SHIFT = 4\nconfig.OCEAN.DATASET.SCALE = 0.05\nconfig.OCEAN.DATASET.COLOR = 1\nconfig.OCEAN.DATASET.FLIP = 0\nconfig.OCEAN.DATASET.BLUR = 0\nconfig.OCEAN.DATASET.GRAY = 0\nconfig.OCEAN.DATASET.MIXUP = 0\nconfig.OCEAN.DATASET.CUTOUT = 0\nconfig.OCEAN.DATASET.CHANNEL6 = 0\nconfig.OCEAN.DATASET.LABELSMOOTH = 0\nconfig.OCEAN.DATASET.ROTATION = 0\nconfig.OCEAN.DATASET.SHIFTs = 64\nconfig.OCEAN.DATASET.SCALEs = 0.18\n\n# vid\nconfig.OCEAN.DATASET.VID.PATH = '$data_path/vid/crop511'\nconfig.OCEAN.DATASET.VID.ANNOTATION = '$data_path/vid/train.json'\n\n# got10k\nconfig.OCEAN.DATASET.GOT10K.PATH = '$data_path/got10k/crop511'\nconfig.OCEAN.DATASET.GOT10K.ANNOTATION = '$data_path/got10k/train.json'\nconfig.OCEAN.DATASET.GOT10K.RANGE = 100\nconfig.OCEAN.DATASET.GOT10K.USE = 200000\n\n# visdrone\nconfig.OCEAN.DATASET.VISDRONE.ANNOTATION = '$data_path/visdrone/train.json'\nconfig.OCEAN.DATASET.VISDRONE.PATH = '$data_path/visdrone/crop271'\nconfig.OCEAN.DATASET.VISDRONE.RANGE = 100\nconfig.OCEAN.DATASET.VISDRONE.USE = 100000\n\n# train\nconfig.OCEAN.TRAIN.GROUP = \"resrchvc\"\nconfig.OCEAN.TRAIN.EXID = \"setting1\"\nconfig.OCEAN.TRAIN.MODEL = \"Ocean\"\nconfig.OCEAN.TRAIN.RESUME = False\nconfig.OCEAN.TRAIN.START_EPOCH = 0\nconfig.OCEAN.TRAIN.END_EPOCH = 
50\nconfig.OCEAN.TRAIN.TEMPLATE_SIZE = 127\nconfig.OCEAN.TRAIN.SEARCH_SIZE = 255\nconfig.OCEAN.TRAIN.STRIDE = 8\nconfig.OCEAN.TRAIN.BATCH = 32\nconfig.OCEAN.TRAIN.PRETRAIN = 'pretrain.model'\nconfig.OCEAN.TRAIN.LR_POLICY = 'log'\nconfig.OCEAN.TRAIN.LR = 0.001\nconfig.OCEAN.TRAIN.LR_END = 0.00001\nconfig.OCEAN.TRAIN.MOMENTUM = 0.9\nconfig.OCEAN.TRAIN.WEIGHT_DECAY = 0.0001\nconfig.OCEAN.TRAIN.WHICH_USE = ['GOT10K']  # VID or 'GOT10K'\n\n# test\nconfig.OCEAN.TEST.MODEL = config.OCEAN.TRAIN.MODEL\nconfig.OCEAN.TEST.DATA = 'VOT2019'\nconfig.OCEAN.TEST.START_EPOCH = 30\nconfig.OCEAN.TEST.END_EPOCH = 50\n\n# tune\nconfig.OCEAN.TUNE.MODEL = config.OCEAN.TRAIN.MODEL\nconfig.OCEAN.TUNE.DATA = 'VOT2019'\nconfig.OCEAN.TUNE.METHOD = 'TPE'  # 'GENE' or 'RAY'\n\n\n\ndef _update_dict(k, v, model_name):\n    if k in ['TRAIN', 'TEST', 'TUNE']:\n        for vk, vv in v.items():\n            config[model_name][k][vk] = vv\n    elif k == 'DATASET':\n        for vk, vv in v.items():\n            if vk not in ['VID', 'GOT10K', 'COCO', 'DET', 'YTB', 'LASOT']:\n                config[model_name][k][vk] = vv\n            else:\n                for vvk, vvv in vv.items():\n                    try:\n                        config[model_name][k][vk][vvk] = vvv\n                    except:\n                        config[model_name][k][vk] = edict()\n                        config[model_name][k][vk][vvk] = vvv\n\n    else:\n        config[k] = v   # gpu etc.\n\n\ndef update_config(config_file):\n    \"\"\"\n    ADD new keys to config\n    \"\"\"\n    exp_config = None\n    with open(config_file) as f:\n        exp_config = edict(yaml.safe_load(f))\n        model_name = list(exp_config.keys())[0]\n        if model_name not in ['OCEAN', 'SIAMRPN']:\n            raise ValueError('please edit config.py to support new model')\n\n        model_config = exp_config[model_name]  # siamfc or siamrpn\n        for k, v in model_config.items():\n            if k in config or k in config[model_name]:\n             
   _update_dict(k, v, model_name)   # k=OCEAN or SIAMRPN\n            else:\n                raise ValueError(\"{} not exist in config.py\".format(k))\n"
  },
  {
    "path": "tracker/sot/lib/core/config_ocean.py",
    "content": "import os\nimport yaml\nfrom easydict import EasyDict as edict\n\nconfig = edict()\n\n# ------config for general parameters------\nconfig.GPUS = \"0,1,2,3\"\nconfig.WORKERS = 32\nconfig.PRINT_FREQ = 10\nconfig.OUTPUT_DIR = 'logs'\nconfig.CHECKPOINT_DIR = 'snapshot'\n\nconfig.OCEAN = edict()\nconfig.OCEAN.TRAIN = edict()\nconfig.OCEAN.TEST = edict()\nconfig.OCEAN.TUNE = edict()\nconfig.OCEAN.DATASET = edict()\nconfig.OCEAN.DATASET.VID = edict()\nconfig.OCEAN.DATASET.GOT10K = edict()\nconfig.OCEAN.DATASET.COCO = edict()\nconfig.OCEAN.DATASET.DET = edict()\nconfig.OCEAN.DATASET.LASOT = edict()\nconfig.OCEAN.DATASET.YTB = edict()\nconfig.OCEAN.DATASET.VISDRONE = edict()\n\n# augmentation\nconfig.OCEAN.DATASET.SHIFT = 4\nconfig.OCEAN.DATASET.SCALE = 0.05\nconfig.OCEAN.DATASET.COLOR = 1\nconfig.OCEAN.DATASET.FLIP = 0\nconfig.OCEAN.DATASET.BLUR = 0\nconfig.OCEAN.DATASET.GRAY = 0\nconfig.OCEAN.DATASET.MIXUP = 0\nconfig.OCEAN.DATASET.CUTOUT = 0\nconfig.OCEAN.DATASET.CHANNEL6 = 0\nconfig.OCEAN.DATASET.LABELSMOOTH = 0\nconfig.OCEAN.DATASET.ROTATION = 0\nconfig.OCEAN.DATASET.SHIFTs = 64\nconfig.OCEAN.DATASET.SCALEs = 0.18\n\n# vid\nconfig.OCEAN.DATASET.VID.PATH = '$data_path/vid/crop511'\nconfig.OCEAN.DATASET.VID.ANNOTATION = '$data_path/vid/train.json'\n\n# got10k\nconfig.OCEAN.DATASET.GOT10K.PATH = '$data_path/got10k/crop511'\nconfig.OCEAN.DATASET.GOT10K.ANNOTATION = '$data_path/got10k/train.json'\nconfig.OCEAN.DATASET.GOT10K.RANGE = 100\nconfig.OCEAN.DATASET.GOT10K.USE = 200000\n\n# visdrone\nconfig.OCEAN.DATASET.VISDRONE.ANNOTATION = '$data_path/visdrone/train.json'\nconfig.OCEAN.DATASET.VISDRONE.PATH = '$data_path/visdrone/crop271'\nconfig.OCEAN.DATASET.VISDRONE.RANGE = 100\nconfig.OCEAN.DATASET.VISDRONE.USE = 100000\n\n# train\nconfig.OCEAN.TRAIN.GROUP = \"resrchvc\"\nconfig.OCEAN.TRAIN.EXID = \"setting1\"\nconfig.OCEAN.TRAIN.MODEL = \"Ocean\"\nconfig.OCEAN.TRAIN.RESUME = False\nconfig.OCEAN.TRAIN.START_EPOCH = 0\nconfig.OCEAN.TRAIN.END_EPOCH = 
50\nconfig.OCEAN.TRAIN.TEMPLATE_SIZE = 127\nconfig.OCEAN.TRAIN.SEARCH_SIZE = 255\nconfig.OCEAN.TRAIN.STRIDE = 8\nconfig.OCEAN.TRAIN.BATCH = 32\nconfig.OCEAN.TRAIN.PRETRAIN = 'pretrain.model'\nconfig.OCEAN.TRAIN.LR_POLICY = 'log'\nconfig.OCEAN.TRAIN.LR = 0.001\nconfig.OCEAN.TRAIN.LR_END = 0.00001\nconfig.OCEAN.TRAIN.MOMENTUM = 0.9\nconfig.OCEAN.TRAIN.WEIGHT_DECAY = 0.0001\nconfig.OCEAN.TRAIN.WHICH_USE = ['GOT10K']  # VID or 'GOT10K'\n\n# test\nconfig.OCEAN.TEST.MODEL = config.OCEAN.TRAIN.MODEL\nconfig.OCEAN.TEST.DATA = 'VOT2019'\nconfig.OCEAN.TEST.START_EPOCH = 30\nconfig.OCEAN.TEST.END_EPOCH = 50\n\n# tune\nconfig.OCEAN.TUNE.MODEL = config.OCEAN.TRAIN.MODEL\nconfig.OCEAN.TUNE.DATA = 'VOT2019'\nconfig.OCEAN.TUNE.METHOD = 'TPE'  # 'GENE' or 'RAY'\n\n\n\ndef _update_dict(k, v, model_name):\n    if k in ['TRAIN', 'TEST', 'TUNE']:\n        for vk, vv in v.items():\n            config[model_name][k][vk] = vv\n    elif k == 'DATASET':\n        for vk, vv in v.items():\n            if vk not in ['VID', 'GOT10K', 'COCO', 'DET', 'YTB', 'LASOT']:\n                config[model_name][k][vk] = vv\n            else:\n                for vvk, vvv in vv.items():\n                    try:\n                        config[model_name][k][vk][vvk] = vvv\n                    except:\n                        config[model_name][k][vk] = edict()\n                        config[model_name][k][vk][vvk] = vvv\n\n    else:\n        config[k] = v   # gpu etc.\n\n\ndef update_config(config_file):\n    \"\"\"\n    ADD new keys to config\n    \"\"\"\n    exp_config = None\n    with open(config_file) as f:\n        exp_config = edict(yaml.safe_load(f))\n        model_name = list(exp_config.keys())[0]\n        if model_name not in ['OCEAN', 'SIAMRPN']:\n            raise ValueError('please edit config.py to support new model')\n\n        model_config = exp_config[model_name]  # siamfc or siamrpn\n        for k, v in model_config.items():\n            if k in config or k in config[model_name]:\n             
   _update_dict(k, v, model_name)   # k=OCEAN or SIAMRPN\n            else:\n                raise ValueError(\"{} not exist in config.py\".format(k))\n"
  },
  {
    "path": "tracker/sot/lib/core/config_oceanplus.py",
    "content": "# ------------------------------------------------------------------------------\n# Copyright (c) Microsoft\n# Licensed under the MIT License.\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\n# Details: This script provides configs for siamfc and siamrpn\n# ------------------------------------------------------------------------------\n\nimport os\nimport yaml\nfrom easydict import EasyDict as edict\n\nconfig = edict()\n\n# ------config for general parameters------\nconfig.GPUS = \"0\"\nconfig.WORKERS = 32\nconfig.PRINT_FREQ = 10\nconfig.OUTPUT_DIR = 'logs'\nconfig.CHECKPOINT_DIR = 'snapshot'\n\n# #-----————- config for siamfc ------------\nconfig.ADAFREE = edict()\nconfig.ADAFREE.TRAIN = edict()\nconfig.ADAFREE.TEST = edict()\nconfig.ADAFREE.TUNE = edict()\nconfig.ADAFREE.DATASET = edict()\nconfig.ADAFREE.DATASET.VID = edict()       # paper utlized but not recommended\nconfig.ADAFREE.DATASET.GOT10K = edict()    # not utlized in paper but recommended, better performance and more stable\nconfig.ADAFREE.DATASET.COCO = edict()\nconfig.ADAFREE.DATASET.DET = edict()\nconfig.ADAFREE.DATASET.LASOT = edict()\nconfig.ADAFREE.DATASET.YTB = edict()\nconfig.ADAFREE.DATASET.VISDRONE = edict()\n\n# augmentation\nconfig.ADAFREE.DATASET.SHIFT = 4\nconfig.ADAFREE.DATASET.SCALE = 0.05\nconfig.ADAFREE.DATASET.COLOR = 1\nconfig.ADAFREE.DATASET.FLIP = 0\nconfig.ADAFREE.DATASET.BLUR = 0\nconfig.ADAFREE.DATASET.GRAY = 0\nconfig.ADAFREE.DATASET.MIXUP = 0\nconfig.ADAFREE.DATASET.CUTOUT = 0\nconfig.ADAFREE.DATASET.CHANNEL6 = 0\nconfig.ADAFREE.DATASET.LABELSMOOTH = 0\nconfig.ADAFREE.DATASET.ROTATION = 0\nconfig.ADAFREE.DATASET.SHIFTs = 64\nconfig.ADAFREE.DATASET.SCALEs = 0.18\n\n# vid\nconfig.ADAFREE.DATASET.VID.PATH = '/home/zhbli/Dataset/data2/vid/crop511'\nconfig.ADAFREE.DATASET.VID.ANNOTATION = '/home/zhbli/Dataset/data2/vid/train.json'\n\n# got10k\nconfig.ADAFREE.DATASET.GOT10K.PATH = 
'/data/share/LARGESIAM/got10k/crop511'\nconfig.ADAFREE.DATASET.GOT10K.ANNOTATION = '/data/share/LARGESIAM/got10k/train.json'\nconfig.ADAFREE.DATASET.GOT10K.RANGE = 100\nconfig.ADAFREE.DATASET.GOT10K.USE = 200000\n\n# visdrone\nconfig.ADAFREE.DATASET.VISDRONE.ANNOTATION = '/data2/SMALLSIAM/visdrone/train.json'\nconfig.ADAFREE.DATASET.VISDRONE.PATH = '/data2/SMALLSIAM/visdrone/crop271'\nconfig.ADAFREE.DATASET.VISDRONE.RANGE = 100\nconfig.ADAFREE.DATASET.VISDRONE.USE = 100000\n\n# train\nconfig.ADAFREE.TRAIN.GROUP = \"resrchvc\"\nconfig.ADAFREE.TRAIN.MODEL = \"SiamFCRes22W\"\nconfig.ADAFREE.TRAIN.RESUME = False\nconfig.ADAFREE.TRAIN.START_EPOCH = 0\nconfig.ADAFREE.TRAIN.END_EPOCH = 50\nconfig.ADAFREE.TRAIN.TEMPLATE_SIZE = 127\nconfig.ADAFREE.TRAIN.SEARCH_SIZE = 143\nconfig.ADAFREE.TRAIN.STRIDE = 8\nconfig.ADAFREE.TRAIN.BATCH = 32\nconfig.ADAFREE.TRAIN.PRETRAIN = 'resnet23_inlayer.model'\nconfig.ADAFREE.TRAIN.LR_POLICY = 'log'\nconfig.ADAFREE.TRAIN.LR = 0.001\nconfig.ADAFREE.TRAIN.LR_END = 0.0000001\nconfig.ADAFREE.TRAIN.MOMENTUM = 0.9\nconfig.ADAFREE.TRAIN.WEIGHT_DECAY = 0.0001\nconfig.ADAFREE.TRAIN.WHICH_USE = ['GOT10K']  # VID or 'GOT10K'\n\n# test\nconfig.ADAFREE.TEST.MODEL = config.ADAFREE.TRAIN.MODEL\nconfig.ADAFREE.TEST.DATA = 'VOT2015'\nconfig.ADAFREE.TEST.START_EPOCH = 30\nconfig.ADAFREE.TEST.END_EPOCH = 50\n\n# tune\nconfig.ADAFREE.TUNE.MODEL = config.ADAFREE.TRAIN.MODEL\nconfig.ADAFREE.TUNE.DATA = 'VOT2015'\nconfig.ADAFREE.TUNE.METHOD = 'TPE'  # 'GENE' or 'RAY'\n\n# #-----————- config for freemask ------------\nconfig.FREEMASK = edict()\nconfig.FREEMASK.TRAIN = edict()\nconfig.FREEMASK.TEST = edict()\nconfig.FREEMASK.TUNE = edict()\nconfig.FREEMASK.DATASET = edict()\nconfig.FREEMASK.DATASET.YTBVOS = edict()\nconfig.FREEMASK.DATASET.COCO = edict()\n\n\n# augmentation\nconfig.FREEMASK.DATASET.SHIFT = 4\nconfig.FREEMASK.DATASET.SCALE = 0.05\nconfig.FREEMASK.DATASET.COLOR = 1\nconfig.FREEMASK.DATASET.FLIP = 0\nconfig.FREEMASK.DATASET.BLUR = 
0.18\nconfig.FREEMASK.DATASET.GRAY = 0\nconfig.FREEMASK.DATASET.MIXUP = 0\nconfig.FREEMASK.DATASET.CUTOUT = 0\nconfig.FREEMASK.DATASET.CHANNEL6 = 0\nconfig.FREEMASK.DATASET.LABELSMOOTH = 0\nconfig.FREEMASK.DATASET.ROTATION = 0\nconfig.FREEMASK.DATASET.SHIFTs = 0\nconfig.FREEMASK.DATASET.SCALEs = 0\nconfig.FREEMASK.DATASET.TEMPLATE_SMALL = False\n\n# got10k\nconfig.FREEMASK.DATASET.YTBVOS.PATH = '/data/home/v-zhipeng/data/segmentation/Crop/YTBVOS/crop511'\nconfig.FREEMASK.DATASET.YTBVOS.ANNOTATION = '/data/home/v-zhipeng/data/segmentation/Crop/YTBVOS/train.json'\nconfig.FREEMASK.DATASET.YTBVOS.RANGE = 20\nconfig.FREEMASK.DATASET.YTBVOS.USE = 100000\n\n# visdrone\nconfig.FREEMASK.DATASET.COCO.ANNOTATION = '/data/home/v-zhipeng/data/segmentation/Crop/coco/train2017.json'\nconfig.FREEMASK.DATASET.COCO.PATH = '/data/home/v-zhipeng/data/segmentation/Crop/coco/crop511'\nconfig.FREEMASK.DATASET.COCO.RANGE = 1\nconfig.FREEMASK.DATASET.COCO.USE = 100000\n\n# train\nconfig.FREEMASK.TRAIN.MODEL = \"FreeDepthMASK\"\nconfig.FREEMASK.TRAIN.RESUME = False\nconfig.FREEMASK.TRAIN.START_EPOCH = 0\nconfig.FREEMASK.TRAIN.END_EPOCH = 50\nconfig.FREEMASK.TRAIN.BASE_SIZE = 0\nconfig.FREEMASK.TRAIN.CROP_SIZE = 0\nconfig.FREEMASK.TRAIN.ORIGIN_SIZE = 127\nconfig.FREEMASK.TRAIN.TEMPLATE_SIZE = 127\nconfig.FREEMASK.TRAIN.SEARCH_SIZE = 255\nconfig.FREEMASK.TRAIN.STRIDE = 8\nconfig.FREEMASK.TRAIN.BATCH = 32\nconfig.FREEMASK.TRAIN.PRETRAIN = ''\nconfig.FREEMASK.TRAIN.LR_POLICY = 'log'\nconfig.FREEMASK.TRAIN.LR = 0.001\nconfig.FREEMASK.TRAIN.LR_END = 0.0000001\nconfig.FREEMASK.TRAIN.MOMENTUM = 0.9\nconfig.FREEMASK.TRAIN.WEIGHT_DECAY = 0.0001\nconfig.FREEMASK.TRAIN.WHICH_USE = ['COCO', 'YTBVOS']  # VID or 'GOT10K'\n\n# test\nconfig.FREEMASK.TEST.MODEL = config.FREEMASK.TRAIN.MODEL\nconfig.FREEMASK.TEST.DATA = 'VOT2015'\nconfig.FREEMASK.TEST.START_EPOCH = 30\nconfig.FREEMASK.TEST.END_EPOCH = 50\n\n# tune\nconfig.FREEMASK.TUNE.MODEL = config.FREEMASK.TRAIN.MODEL\nconfig.FREEMASK.TUNE.DATA = 
'VOT2015'\nconfig.FREEMASK.TUNE.METHOD = 'TPE'  # 'GENE' or 'RAY'\n\ndef _update_dict(k, v, model_name):\n    if k in ['TRAIN', 'TEST', 'TUNE']:\n        for vk, vv in v.items():\n            config[model_name][k][vk] = vv\n    elif k == 'DATASET':\n        for vk, vv in v.items():\n            if vk not in ['VID', 'GOT10K', 'COCO', 'DET', 'YTB', 'LASOT']:\n                config[model_name][k][vk] = vv\n            else:\n                for vvk, vvv in vv.items():\n                    try:\n                        config[model_name][k][vk][vvk] = vvv\n                    except:\n                        config[model_name][k][vk] = edict()\n                        config[model_name][k][vk][vvk] = vvv\n\n    else:\n        config[k] = v   # gpu etc.\n\n\ndef update_config(config_file):\n    \"\"\"\n    ADD new keys to config\n    \"\"\"\n    exp_config = None\n    with open(config_file) as f:\n        exp_config = edict(yaml.safe_load(f))\n        model_name = list(exp_config.keys())[0]\n        if model_name not in ['ADAFREE', 'FREEMASK']:\n            raise ValueError('please edit config.py to support new model')\n\n        model_config = exp_config[model_name]  # siamfc or siamrpn\n        for k, v in model_config.items():\n            if k in config or k in config[model_name]:\n                _update_dict(k, v, model_name)   # k=ADAFREE or SIAMRPN\n            else:\n                raise ValueError(\"{} not exist in config.py\".format(k))\n"
  },
  {
    "path": "tracker/sot/lib/core/config_siamdw.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Houwen Peng and Zhipeng Zhang\r\n# Email:  houwen.peng@microsoft.com\r\n# Details: This script provides configs for siamfc and siamrpn\r\n# ------------------------------------------------------------------------------\r\n\r\nimport os\r\nimport yaml\r\nfrom easydict import EasyDict as edict\r\n\r\nconfig = edict()\r\n\r\n# ------config for general parameters------\r\nconfig.GPUS = \"0,1,2,3\"\r\nconfig.WORKERS = 32\r\nconfig.PRINT_FREQ = 10\r\nconfig.OUTPUT_DIR = 'logs'\r\nconfig.CHECKPOINT_DIR = 'snapshot'\r\n\r\n# #-----————- config for siamfc ------------\r\nconfig.SIAMFC = edict()\r\nconfig.SIAMFC.TRAIN = edict()\r\nconfig.SIAMFC.TEST = edict()\r\nconfig.SIAMFC.TUNE = edict()\r\nconfig.SIAMFC.DATASET = edict()\r\nconfig.SIAMFC.DATASET.VID = edict()       # paper utlized but not recommended\r\nconfig.SIAMFC.DATASET.GOT10K = edict()    # not utlized in paper but recommended, better performance and more stable\r\n\r\n# augmentation\r\nconfig.SIAMFC.DATASET.SHIFT = 4\r\nconfig.SIAMFC.DATASET.SCALE = 0.05\r\nconfig.SIAMFC.DATASET.COLOR = 1\r\nconfig.SIAMFC.DATASET.FLIP = 0\r\nconfig.SIAMFC.DATASET.BLUR = 0\r\nconfig.SIAMFC.DATASET.ROTATION = 0\r\n\r\n# vid\r\nconfig.SIAMFC.DATASET.VID.PATH = '/home/zhbli/Dataset/data2/vid/crop511'\r\nconfig.SIAMFC.DATASET.VID.ANNOTATION = '/home/zhbli/Dataset/data2/vid/train.json'\r\n\r\n# got10k\r\nconfig.SIAMFC.DATASET.GOT10K.PATH = '/home/zhbli/Dataset/data3/got10k/crop511'\r\nconfig.SIAMFC.DATASET.GOT10K.ANNOTATION = '/home/zhbli/Dataset/data3/got10k/train.json'\r\n\r\n# train\r\nconfig.SIAMFC.TRAIN.MODEL = \"SiamFCIncep22\"\r\nconfig.SIAMFC.TRAIN.RESUME = False\r\nconfig.SIAMFC.TRAIN.START_EPOCH = 0\r\nconfig.SIAMFC.TRAIN.END_EPOCH = 50\r\nconfig.SIAMFC.TRAIN.TEMPLATE_SIZE = 127\r\nconfig.SIAMFC.TRAIN.SEARCH_SIZE = 255\r\nconfig.SIAMFC.TRAIN.STRIDE = 
8\r\nconfig.SIAMFC.TRAIN.BATCH = 32\r\nconfig.SIAMFC.TRAIN.PAIRS = 200000\r\nconfig.SIAMFC.TRAIN.PRETRAIN = 'resnet23_inlayer.model'\r\nconfig.SIAMFC.TRAIN.LR_POLICY = 'log'\r\nconfig.SIAMFC.TRAIN.LR = 0.001\r\nconfig.SIAMFC.TRAIN.LR_END = 0.0000001\r\nconfig.SIAMFC.TRAIN.MOMENTUM = 0.9\r\nconfig.SIAMFC.TRAIN.WEIGHT_DECAY = 0.0001\r\nconfig.SIAMFC.TRAIN.WHICH_USE = 'GOT10K'  # VID or 'GOT10K'\r\n\r\n# test\r\nconfig.SIAMFC.TEST.MODEL = config.SIAMFC.TRAIN.MODEL\r\nconfig.SIAMFC.TEST.DATA = 'VOT2015'\r\nconfig.SIAMFC.TEST.START_EPOCH = 30\r\nconfig.SIAMFC.TEST.END_EPOCH = 50\r\n\r\n# tune\r\nconfig.SIAMFC.TUNE.MODEL = config.SIAMFC.TRAIN.MODEL\r\nconfig.SIAMFC.TUNE.DATA = 'VOT2015'\r\nconfig.SIAMFC.TUNE.METHOD = 'GENE'  # 'GENE' or 'RAY'\r\n\r\n# #---------- config for siamrpn ------------\r\nconfig.SIAMRPN = edict()\r\nconfig.SIAMRPN.DATASET = edict()\r\nconfig.SIAMRPN.DATASET.VID = edict()\r\nconfig.SIAMRPN.DATASET.YTB = edict()\r\nconfig.SIAMRPN.DATASET.COCO = edict()\r\nconfig.SIAMRPN.DATASET.DET = edict()\r\nconfig.SIAMRPN.DATASET.GOT10K = edict()\r\nconfig.SIAMRPN.DATASET.LASOT = edict()\r\nconfig.SIAMRPN.TRAIN = edict()\r\nconfig.SIAMRPN.TEST = edict()\r\nconfig.SIAMRPN.TUNE = edict()\r\n\r\n# augmentation\r\nconfig.SIAMRPN.DATASET.SHIFT = 4\r\nconfig.SIAMRPN.DATASET.SCALE = 0.05\r\nconfig.SIAMRPN.DATASET.COLOR = 1\r\nconfig.SIAMRPN.DATASET.FLIP = 0\r\nconfig.SIAMRPN.DATASET.BLUR = 0\r\nconfig.SIAMRPN.DATASET.ROTATION = 0\r\n\r\n\r\n# vid\r\nconfig.SIAMRPN.DATASET.VID.PATH = '/data2/vid/crop271'\r\nconfig.SIAMRPN.DATASET.VID.ANNOTATION = '/data2/vid/train.json'\r\n\r\n# YTB\r\nconfig.SIAMRPN.DATASET.YTB.PATH = '/data2/yt_bb/crop271'\r\nconfig.SIAMRPN.DATASET.YTB.ANNOTATION = '/data2/yt_bb/train.json'\r\n\r\n# DET\r\nconfig.SIAMRPN.DATASET.DET.PATH = '/data2/det/crop271'\r\nconfig.SIAMRPN.DATASET.DET.ANNOTATION = '/data2/det/train.json'\r\n\r\n# COCO\r\nconfig.SIAMRPN.DATASET.COCO.PATH = '/data2/coco/crop271'\r\nconfig.SIAMRPN.DATASET.COCO.ANNOTATION = '/data2/coco/train.json'\r\n\r\n# GOT10K\r\nconfig.SIAMRPN.DATASET.GOT10K.PATH = '/data2/got10k/crop271'\r\nconfig.SIAMRPN.DATASET.GOT10K.ANNOTATION = '/data2/got10k/train.json'\r\n\r\n# LASOT\r\nconfig.SIAMRPN.DATASET.LASOT.PATH = '/data2/lasot/crop271'\r\nconfig.SIAMRPN.DATASET.LASOT.ANNOTATION = '/data2/lasot/train.json'\r\n\r\n\r\n# train\r\nconfig.SIAMRPN.TRAIN.MODEL = \"SiamRPNRes22\"\r\nconfig.SIAMRPN.TRAIN.RESUME = False\r\nconfig.SIAMRPN.TRAIN.START_EPOCH = 0\r\nconfig.SIAMRPN.TRAIN.END_EPOCH = 50\r\nconfig.SIAMRPN.TRAIN.TEMPLATE_SIZE = 127\r\nconfig.SIAMRPN.TRAIN.SEARCH_SIZE = 255\r\nconfig.SIAMRPN.TRAIN.STRIDE = 8\r\nconfig.SIAMRPN.TRAIN.BATCH = 32\r\nconfig.SIAMRPN.TRAIN.PRETRAIN = 'resnet.model'\r\nconfig.SIAMRPN.TRAIN.LR_POLICY = 'log'\r\nconfig.SIAMRPN.TRAIN.LR = 0.01\r\nconfig.SIAMRPN.TRAIN.LR_END = 0.00001\r\nconfig.SIAMRPN.TRAIN.MOMENTUM = 0.9\r\nconfig.SIAMRPN.TRAIN.WEIGHT_DECAY = 0.0005\r\nconfig.SIAMRPN.TRAIN.CLS_WEIGHT = 1\r\nconfig.SIAMRPN.TRAIN.REG_WEIGHT = 1\r\nconfig.SIAMRPN.TRAIN.WHICH_USE = ['VID', 'YTB']  # VID or 'GOT10K' \r\nconfig.SIAMRPN.TRAIN.ANCHORS_RATIOS = [0.33, 0.5, 1, 2, 3]  \r\nconfig.SIAMRPN.TRAIN.ANCHORS_SCALES = [8]  \r\nconfig.SIAMRPN.TRAIN.ANCHORS_THR_HIGH = 0.6\r\nconfig.SIAMRPN.TRAIN.ANCHORS_THR_LOW = 0.3\r\nconfig.SIAMRPN.TRAIN.ANCHORS_POS_KEEP = 16    # positive anchors to calc loss\r\nconfig.SIAMRPN.TRAIN.ANCHORS_ALL_KEEP = 64    # positive + neg anchors to calc loss\r\n\r\n\r\n# test\r\nconfig.SIAMRPN.TEST.MODEL = config.SIAMRPN.TRAIN.MODEL\r\nconfig.SIAMRPN.TEST.DATA = 'VOT2017'\r\nconfig.SIAMRPN.TEST.START_EPOCH = 30\r\nconfig.SIAMRPN.TEST.END_EPOCH = 50\r\n\r\n# tune\r\nconfig.SIAMRPN.TUNE.MODEL = config.SIAMRPN.TRAIN.MODEL\r\nconfig.SIAMRPN.TUNE.DATA = 'VOT2017'\r\nconfig.SIAMRPN.TUNE.METHOD = 'TPE' \r\n\r\n\r\n\r\ndef _update_dict(k, v, model_name):\r\n    if k in ['TRAIN', 'TEST', 'TUNE']:\r\n        for vk, vv in v.items():\r\n            config[model_name][k][vk] = vv\r\n    elif k == 'DATASET':\r\n        for vk, vv 
in v.items():\r\n            if vk not in ['VID', 'GOT10K', 'COCO', 'DET', 'YTB']:\r\n                config[model_name][k][vk] = vv\r\n            else:\r\n                for vvk, vvv in vv.items():\r\n                    config[model_name][k][vk][vvk] = vvv\r\n    else:\r\n        config[k] = v   # gpu etc.\r\n\r\n\r\ndef update_config(config_file):\r\n    \"\"\"\r\n    ADD new keys to config\r\n    \"\"\"\r\n    exp_config = None\r\n    with open(config_file) as f:\r\n        exp_config = edict(yaml.safe_load(f))\r\n        model_name = list(exp_config.keys())[0]\r\n        if model_name not in ['SIAMFC', 'SIAMRPN']:\r\n            raise ValueError('please edit config.py to support new model')\r\n\r\n        model_config = exp_config[model_name]  # siamfc or siamrpn\r\n        for k, v in model_config.items():\r\n            if k in config or k in config[model_name]:\r\n                _update_dict(k, v, model_name)   # k=SIAMFC or SIAMRPN\r\n            else:\r\n                raise ValueError(\"{} not exist in config.py\".format(k))\r\n"
  },
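The `_update_dict`/`update_config` pair above implements a guarded two-level merge: YAML values overwrite the defaults key by key, whitelisted DATASET entries merge field-by-field rather than being replaced, and unknown top-level keys raise. A minimal, self-contained sketch of that merge on plain dicts (the `edict` wrapper is dropped, and all names and values below are made up for illustration):

```python
# Sketch of the nested-merge logic behind update_config(): experiment values
# overwrite defaults level by level; unknown top-level keys are rejected.

DATASET_KEYS = ['VID', 'GOT10K', 'COCO', 'DET', 'YTB']

def merge_experiment(config, model_name, model_config):
    for k, v in model_config.items():
        if k not in config.get(model_name, {}) and k not in config:
            raise ValueError('{} does not exist in config'.format(k))
        if k in ['TRAIN', 'TEST', 'TUNE']:
            config[model_name][k].update(v)          # shallow merge of train/test/tune options
        elif k == 'DATASET':
            for vk, vv in v.items():
                if vk in DATASET_KEYS:
                    config[model_name][k][vk].update(vv)  # merge known dataset entries
                else:
                    config[model_name][k][vk] = vv        # replace anything else wholesale
        else:
            config[k] = v                            # top-level options such as GPUS

# made-up defaults and experiment overrides
defaults = {'GPUS': '0',
            'SIAMRPN': {'TRAIN': {'LR': 0.01, 'BATCH': 32},
                        'DATASET': {'VID': {'PATH': '/old', 'ANNOTATION': '/old.json'}}}}
exp = {'TRAIN': {'LR': 0.005}, 'DATASET': {'VID': {'PATH': '/new'}}, 'GPUS': '0,1'}
merge_experiment(defaults, 'SIAMRPN', exp)
print(defaults['SIAMRPN']['TRAIN'])
```

The design point mirrored here is that whitelisted DATASET sub-keys merge rather than replace, so a YAML file can override just `PATH` without wiping `ANNOTATION`.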
  {
    "path": "tracker/sot/lib/core/eval_davis.py",
    "content": "# ------------------------------------------------------------------------------\n# Copyright (c) Microsoft\n# Licensed under the MIT License.\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\n# multi-gpu test for epochs\n# ------------------------------------------------------------------------------\n\nimport os\nimport time\nimport argparse\nimport numpy as np\nfrom os import listdir\nfrom os.path import join, exists\nfrom concurrent import futures\n\nparser = argparse.ArgumentParser(description='multi-gpu test all epochs')\nparser.add_argument('--dataset', default='DAVIS2016', type=str, help='benchmarks')\nparser.add_argument('--num_threads', default=16, type=int, help='number of threads')\nparser.add_argument('--datapath', default='dataset/DAVIS', type=str, help='benchmarks')\nargs = parser.parse_args()\n\n\ndef eval_davis(epoch):\n    year = args.dataset[5:]\n    full_path = join('result', args.dataset, epoch)\n    os.system('python lib/eval_toolkit/davis/davis2017-evaluation/evaluation_method.py --task semi-supervised --results_path {0} --davis_path {1} --year {2}'.format(full_path, args.datapath, year))\n\n\ndef extract_davis(epochs):\n    # J&F-Mean,J-Mean,J-Recall,J-Decay,F-Mean,F-Recall,F-Decay\n    results = dict()\n    print('\\t \\tJ&F-Mean,J-Mean,J-Recall,J-Decay,F-Mean,F-Recall,F-Decay')\n\n    JFm = []\n    Jm = []\n    Jr = []\n    Jd = []\n    Fm = []\n    Fr = []\n    Fd = []\n    \n    for e in epochs:\n        results[e] = dict()\n        full_path = join('result', args.dataset, e, 'global_results-val.csv')\n        record = open(full_path, 'r').readlines()\n        record = eval(record[1])\n        print('{} {} {} {} {} {} {} {}'.format(e, record[0], record[1], record[2], record[3], record[4], record[5], record[6]))\n\n        JFm.append(record[0])\n        Jm.append(record[1])\n        Jr.append(record[2])\n        Jd.append(record[3])\n        Fm.append(record[4])\n        Fr.append(record[5])\n        
Fd.append(record[6])\n    print('=========> sort with J&F: <===========')\n    argidx = np.argmax(np.array(JFm))\n    print('{} {} {} {} {} {} {} {}'.format(epochs[argidx], JFm[argidx], Jm[argidx], Jr[argidx], Jd[argidx], Fm[argidx], Fr[argidx], Fd[argidx]))\n    print('=========> sort with Jm: <===========')\n    argidx = np.argmax(np.array(Jm))\n    print('{} {} {} {} {} {} {} {}'.format(epochs[argidx], JFm[argidx], Jm[argidx], Jr[argidx], Jd[argidx], Fm[argidx], Fr[argidx], Fd[argidx]))\n\n\nbase_path = join('result', args.dataset)\nepochs = listdir(base_path)\nprint('total {} epochs'.format(len(epochs)))\n\n# multi-process evaluation\nif args.dataset in ['DAVIS2016', 'DAVIS2017']:\n    with futures.ProcessPoolExecutor(max_workers=args.num_threads) as executor:\n        fs = [executor.submit(eval_davis, e) for e in epochs]\n    print('done')\n    extract_davis(epochs)\nelse:\n    raise ValueError('not supported data')\n"
  },
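`extract_davis` above parses one metric row per epoch (J&F-Mean first) and reports the epoch with the highest J&F-Mean. The selection step can be sketched in a few lines; the epoch names and numbers below are made up:

```python
# Best-epoch selection as in extract_davis(): each epoch maps to its metric row
# (J&F-Mean, J-Mean, J-Recall, J-Decay, F-Mean, F-Recall, F-Decay); pick the
# epoch with the highest J&F-Mean. All values here are invented for illustration.

rows = {
    'checkpoint_e30': (0.665, 0.651, 0.741, 0.081, 0.679, 0.768, 0.092),
    'checkpoint_e40': (0.684, 0.669, 0.760, 0.074, 0.699, 0.781, 0.088),
    'checkpoint_e50': (0.672, 0.658, 0.749, 0.079, 0.686, 0.772, 0.090),
}

best = max(rows, key=lambda e: rows[e][0])  # index 0 is J&F-Mean
print(best, rows[best][0])
```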
  {
    "path": "tracker/sot/lib/core/eval_got10k.py",
    "content": "import sys\nimport json\nimport os\nimport glob\nfrom os.path import join, realpath, dirname\nimport numpy as np\n\n# eval script for GOT10K validation dataset (official GOT10k only ranking according to tetsing dataset)\n# use AUC not AO here\n\n\ndef overlap_ratio(rect1, rect2):\n    '''\n    Compute overlap ratio between two rects\n    - rect: 1d array of [x,y,w,h] or\n            2d array of N x [x,y,w,h]\n    '''\n\n    if rect1.ndim == 1:\n        rect1 = rect1[None, :]\n    if rect2.ndim == 1:\n        rect2 = rect2[None, :]\n\n    left = np.maximum(rect1[:, 0], rect2[:, 0])\n    right = np.minimum(rect1[:, 0] + rect1[:, 2], rect2[:, 0] + rect2[:, 2])\n    top = np.maximum(rect1[:, 1], rect2[:, 1])\n    bottom = np.minimum(rect1[:, 1] + rect1[:, 3], rect2[:, 1] + rect2[:, 3])\n\n    intersect = np.maximum(0, right - left) * np.maximum(0, bottom - top)\n    union = rect1[:, 2] * rect1[:, 3] + rect2[:, 2] * rect2[:, 3] - intersect\n    iou = np.clip(intersect / union, 0, 1)\n    return iou\n\n\ndef compute_success_overlap(gt_bb, result_bb):\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    n_frame = len(gt_bb)\n    success = np.zeros(len(thresholds_overlap))\n    iou = overlap_ratio(gt_bb, result_bb)\n    for i in range(len(thresholds_overlap)):\n        success[i] = sum(iou > thresholds_overlap[i]) / float(n_frame)\n    return success\n\n\ndef compute_success_error(gt_center, result_center):\n    thresholds_error = np.arange(0, 51, 1)\n    n_frame = len(gt_center)\n    success = np.zeros(len(thresholds_error))\n    dist = np.sqrt(np.sum(np.power(gt_center - result_center, 2), axis=1))\n    for i in range(len(thresholds_error)):\n        success[i] = sum(dist <= thresholds_error[i]) / float(n_frame)\n    return success\n\n\ndef get_result_bb(arch, seq):\n    result_path = join(arch, seq, seq + '.txt')\n    temp = np.loadtxt(result_path, delimiter=',').astype(np.float)\n    return np.array(temp)\n\n\ndef convert_bb_to_center(bboxes):\n    
return np.array([(bboxes[:, 0] + (bboxes[:, 2] - 1) / 2),\n                     (bboxes[:, 1] + (bboxes[:, 3] - 1) / 2)]).T\n\n\ndef eval_auc(dataset='GOT10KVAL', result_path='./test/', tracker_reg='S*', start=0, end=1e6):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n\n    trackers = glob.glob(join(result_path, dataset, tracker_reg))\n    trackers = trackers[start:min(end, len(trackers))]\n\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    # thresholds_error = np.arange(0, 51, 1)\n\n    success_overlap = np.zeros((n_seq, len(trackers), len(thresholds_overlap)))\n    # success_error = np.zeros((n_seq, len(trackers), len(thresholds_error)))\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        for j in range(len(trackers)):\n            tracker = trackers[j]\n            print('{:d} processing:{} tracker: {}'.format(i, seq, tracker))\n            bb = get_result_bb(tracker, seq)\n            center = convert_bb_to_center(bb)\n            success_overlap[i][j] = compute_success_overlap(gt_rect, bb)\n            # success_error[i][j] = compute_success_error(gt_center, center)\n\n    print('Success Overlap')\n\n    max_auc = 0.\n    max_name = ''\n    for i in range(len(trackers)):\n        auc = success_overlap[:, i, :].mean()\n        if auc > max_auc:\n            max_auc = auc\n            max_name = trackers[i]\n        print('%s(%.4f)' % (trackers[i], auc))\n\n    print('\\n%s Best: %s(%.4f)' % (dataset, max_name, max_auc))\n\n\ndef eval_got10k_tune(result_path, dataset='GOT10KVAL'):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # 
dict to list for py3\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    success_overlap = np.zeros((n_seq, 1, len(thresholds_overlap)))\n\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        bb = get_result_bb(result_path, seq)\n        center = convert_bb_to_center(bb)\n        success_overlap[i][0] = compute_success_overlap(gt_rect, bb)\n\n    auc = success_overlap[:, 0, :].mean()\n    return auc\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 5:\n        print('python ./lib/core/eval_got10k.py GOT10KVAL ./result SiamFC* 0 1')\n        exit()\n    dataset = sys.argv[1]\n    result_path = sys.argv[2]\n    tracker_reg = sys.argv[3]\n    start = int(sys.argv[4])\n    end = int(sys.argv[5])\n    eval_auc(dataset, result_path, tracker_reg, start, end)\n"
  },
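The `overlap_ratio` helper above is a vectorized IoU over `[x, y, w, h]` boxes. A worked single-pair version makes the arithmetic easy to check by hand (boxes below are made up):

```python
# IoU for one [x, y, w, h] box pair, mirroring the vectorized overlap_ratio above.

def iou_xywh(a, b):
    # intersection rectangle
    left = max(a[0], b[0])
    top = max(a[1], b[1])
    right = min(a[0] + a[2], b[0] + b[2])
    bottom = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, right - left) * max(0, bottom - top)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union

# two 10x10 boxes offset by 5 px in x: intersection 50, union 150, IoU 1/3
print(iou_xywh([0, 0, 10, 10], [5, 0, 10, 10]))
```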
  {
    "path": "tracker/sot/lib/core/eval_lasot.py",
    "content": "import sys\nimport json\nimport os\nimport glob\nfrom os.path import join, realpath, dirname\nimport numpy as np\n\n# eval script for GOT10K validation dataset (official GOT10k only ranking according to tetsing dataset)\n# use AUC not AO here\n\n\ndef overlap_ratio(rect1, rect2):\n    '''\n    Compute overlap ratio between two rects\n    - rect: 1d array of [x,y,w,h] or\n            2d array of N x [x,y,w,h]\n    '''\n\n    if rect1.ndim == 1:\n        rect1 = rect1[None, :]\n    if rect2.ndim == 1:\n        rect2 = rect2[None, :]\n\n    left = np.maximum(rect1[:, 0], rect2[:, 0])\n    right = np.minimum(rect1[:, 0] + rect1[:, 2], rect2[:, 0] + rect2[:, 2])\n    top = np.maximum(rect1[:, 1], rect2[:, 1])\n    bottom = np.minimum(rect1[:, 1] + rect1[:, 3], rect2[:, 1] + rect2[:, 3])\n\n    intersect = np.maximum(0, right - left) * np.maximum(0, bottom - top)\n    union = rect1[:, 2] * rect1[:, 3] + rect2[:, 2] * rect2[:, 3] - intersect\n    iou = np.clip(intersect / union, 0, 1)\n    return iou\n\n\ndef compute_success_overlap(gt_bb, result_bb):\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    n_frame = len(gt_bb)\n    success = np.zeros(len(thresholds_overlap))\n    iou = overlap_ratio(gt_bb, result_bb)\n    for i in range(len(thresholds_overlap)):\n        success[i] = sum(iou > thresholds_overlap[i]) / float(n_frame)\n    return success\n\n\ndef compute_success_error(gt_center, result_center):\n    thresholds_error = np.arange(0, 51, 1)\n    n_frame = len(gt_center)\n    success = np.zeros(len(thresholds_error))\n    dist = np.sqrt(np.sum(np.power(gt_center - result_center, 2), axis=1))\n    for i in range(len(thresholds_error)):\n        success[i] = sum(dist <= thresholds_error[i]) / float(n_frame)\n    return success\n\n\ndef get_result_bb(arch, seq):\n    result_path = join(arch, seq + '.txt')\n    temp = np.loadtxt(result_path, delimiter=',').astype(np.float)\n    return np.array(temp)\n\n\ndef convert_bb_to_center(bboxes):\n    return 
np.array([(bboxes[:, 0] + (bboxes[:, 2] - 1) / 2),\n                     (bboxes[:, 1] + (bboxes[:, 3] - 1) / 2)]).T\n\n\ndef eval_auc(dataset='LASOTTEST', result_path='./test/', tracker_reg='S*', start=0, end=1e6):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n\n    trackers = glob.glob(join(result_path, dataset, tracker_reg))\n    trackers = trackers[start:min(end, len(trackers))]\n\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    # thresholds_error = np.arange(0, 51, 1)\n\n    success_overlap = np.zeros((n_seq, len(trackers), len(thresholds_overlap)))\n    # success_error = np.zeros((n_seq, len(trackers), len(thresholds_error)))\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        for j in range(len(trackers)):\n            tracker = trackers[j]\n            print('{:d} processing:{} tracker: {}'.format(i, seq, tracker))\n            bb = get_result_bb(tracker, seq)\n            center = convert_bb_to_center(bb)\n            success_overlap[i][j] = compute_success_overlap(gt_rect, bb)\n            # success_error[i][j] = compute_success_error(gt_center, center)\n\n    print('Success Overlap')\n\n    max_auc = 0.\n    max_name = ''\n    for i in range(len(trackers)):\n        auc = success_overlap[:, i, :].mean()\n        if auc > max_auc:\n            max_auc = auc\n            max_name = trackers[i]\n        print('%s(%.4f)' % (trackers[i], auc))\n\n    print('\\n%s Best: %s(%.4f)' % (dataset, max_name, max_auc))\n\n\ndef eval_lasot_tune(result_path, dataset='LASOTTEST'):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to 
list for py3\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    success_overlap = np.zeros((n_seq, 1, len(thresholds_overlap)))\n\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        bb = get_result_bb(result_path, seq)\n        center = convert_bb_to_center(bb)\n        success_overlap[i][0] = compute_success_overlap(gt_rect, bb)\n\n    auc = success_overlap[:, 0, :].mean()\n    return auc\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 5:\n        print('python ./lib/core/eval_lasot.py LASOTTEST ./result SiamFC* 0 1')\n        exit()\n    dataset = sys.argv[1]\n    result_path = sys.argv[2]\n    tracker_reg = sys.argv[3]\n    start = int(sys.argv[4])\n    end = int(sys.argv[5])\n    eval_auc(dataset, result_path, tracker_reg, start, end)\n    # eval_auc('LASOTTEST', './result', 'DSiam', 0, 1)\n"
  },
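`compute_success_overlap` in the scripts above reduces per-frame IoUs to a 21-point success curve (fraction of frames whose IoU exceeds each threshold from 0 to 1 in steps of 0.05), and `eval_auc` then averages that curve into a single AUC number. A tiny pure-Python walk-through with made-up per-frame IoUs:

```python
# Success curve and AUC as computed by compute_success_overlap / eval_auc,
# on invented per-frame overlaps.

ious = [0.9, 0.7, 0.55, 0.3, 0.0]               # made-up per-frame overlaps
thresholds = [i * 0.05 for i in range(21)]      # 0.0, 0.05, ..., 1.0

# success[t] = fraction of frames with IoU strictly above threshold t
success = [sum(iou > t for iou in ious) / len(ious) for t in thresholds]
auc = sum(success) / len(success)               # mean of the curve = AUC
print(round(auc, 4))
```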
  {
    "path": "tracker/sot/lib/core/eval_otb.py",
    "content": "import sys\nimport json\nimport os\nimport glob\nfrom os.path import join, realpath, dirname\nimport numpy as np\n\nOTB2013 = ['carDark', 'car4', 'david', 'david2', 'sylvester', 'trellis', 'fish', 'mhyang', 'soccer', 'matrix',\n           'ironman', 'deer', 'skating1', 'shaking', 'singer1', 'singer2', 'coke', 'bolt', 'boy', 'dudek',\n           'crossing', 'couple', 'football1', 'jogging_1', 'jogging_2', 'doll', 'girl', 'walking2', 'walking',\n           'fleetface', 'freeman1', 'freeman3', 'freeman4', 'david3', 'jumping', 'carScale', 'skiing', 'dog1',\n           'suv', 'motorRolling', 'mountainBike', 'lemming', 'liquor', 'woman', 'faceocc1', 'faceocc2',\n           'basketball', 'football', 'subway', 'tiger1', 'tiger2']\n\nOTB2015 = ['carDark', 'car4', 'david', 'david2', 'sylvester', 'trellis', 'fish', 'mhyang', 'soccer', 'matrix',\n           'ironman', 'deer', 'skating1', 'shaking', 'singer1', 'singer2', 'coke', 'bolt', 'boy', 'dudek',\n           'crossing', 'couple', 'football1', 'jogging_1', 'jogging_2', 'doll', 'girl', 'walking2', 'walking',\n           'fleetface', 'freeman1', 'freeman3', 'freeman4', 'david3', 'jumping', 'carScale', 'skiing', 'dog1',\n           'suv', 'motorRolling', 'mountainBike', 'lemming', 'liquor', 'woman', 'faceocc1', 'faceocc2',\n           'basketball', 'football', 'subway', 'tiger1', 'tiger2', 'clifBar', 'biker', 'bird1', 'blurBody',\n           'blurCar2', 'blurFace', 'blurOwl', 'box', 'car1', 'crowds', 'diving', 'dragonBaby', 'human3', 'human4_2',\n           'human6', 'human9', 'jump', 'panda', 'redTeam', 'skating2_1', 'skating2_2', 'surfer', 'bird2',\n           'blurCar1', 'blurCar3', 'blurCar4', 'board', 'bolt2', 'car2', 'car24', 'coupon', 'dancer', 'dancer2',\n           'dog', 'girl2', 'gym', 'human2', 'human5', 'human7', 'human8', 'kiteSurf', 'man', 'rubik', 'skater',\n           'skater2', 'toy', 'trans', 'twinnings', 'vase']\n\n\ndef overlap_ratio(rect1, rect2):\n    '''\n    Compute overlap ratio 
between two rects\n    - rect: 1d array of [x,y,w,h] or\n            2d array of N x [x,y,w,h]\n    '''\n\n    if rect1.ndim == 1:\n        rect1 = rect1[None, :]\n    if rect2.ndim == 1:\n        rect2 = rect2[None, :]\n\n    left = np.maximum(rect1[:, 0], rect2[:, 0])\n    right = np.minimum(rect1[:, 0] + rect1[:, 2], rect2[:, 0] + rect2[:, 2])\n    top = np.maximum(rect1[:, 1], rect2[:, 1])\n    bottom = np.minimum(rect1[:, 1] + rect1[:, 3], rect2[:, 1] + rect2[:, 3])\n\n    intersect = np.maximum(0, right - left) * np.maximum(0, bottom - top)\n    union = rect1[:, 2] * rect1[:, 3] + rect2[:, 2] * rect2[:, 3] - intersect\n    iou = np.clip(intersect / union, 0, 1)\n    return iou\n\n\ndef compute_success_overlap(gt_bb, result_bb):\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    n_frame = len(gt_bb)\n    success = np.zeros(len(thresholds_overlap))\n    iou = overlap_ratio(gt_bb, result_bb)\n    for i in range(len(thresholds_overlap)):\n        success[i] = sum(iou > thresholds_overlap[i]) / float(n_frame)\n    return success\n\n\ndef compute_success_error(gt_center, result_center):\n    thresholds_error = np.arange(0, 51, 1)\n    n_frame = len(gt_center)\n    success = np.zeros(len(thresholds_error))\n    dist = np.sqrt(np.sum(np.power(gt_center - result_center, 2), axis=1))\n    for i in range(len(thresholds_error)):\n        success[i] = sum(dist <= thresholds_error[i]) / float(n_frame)\n    return success\n\n\ndef get_result_bb(arch, seq):\n    result_path = join(arch, seq + '.txt')\n    temp = np.loadtxt(result_path, delimiter=',').astype(float)\n    return np.array(temp)\n\n\ndef convert_bb_to_center(bboxes):\n    return np.array([(bboxes[:, 0] + (bboxes[:, 2] - 1) / 2),\n                     (bboxes[:, 1] + (bboxes[:, 3] - 1) / 2)]).T\n\n\ndef eval_auc(dataset='OTB2015', result_path='./test/', tracker_reg='S*', start=0, end=1e6):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n\n    trackers = glob.glob(join(result_path, dataset, tracker_reg))\n    trackers = trackers[start:min(end, len(trackers))]\n\n    print(trackers)\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    # thresholds_error = np.arange(0, 51, 1)\n\n    success_overlap = np.zeros((n_seq, len(trackers), len(thresholds_overlap)))\n    # success_error = np.zeros((n_seq, len(trackers), len(thresholds_error)))\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(float)\n        gt_center = convert_bb_to_center(gt_rect)\n        for j in range(len(trackers)):\n            tracker = trackers[j]\n            print('{:d} processing:{} tracker: {}'.format(i, seq, tracker))\n            bb = get_result_bb(tracker, seq)\n            center = convert_bb_to_center(bb)\n            success_overlap[i][j] = compute_success_overlap(gt_rect, bb)\n            # success_error[i][j] = compute_success_error(gt_center, center)\n\n    print('Success Overlap')\n\n    if 'OTB2015' == dataset:\n        OTB2013_id = []\n        for i in range(n_seq):\n            if seqs[i] in OTB2013:\n                OTB2013_id.append(i)\n        max_auc_OTB2013 = 0.\n        max_name_OTB2013 = ''\n        for i in range(len(trackers)):\n            auc = success_overlap[OTB2013_id, i, :].mean()\n            if auc > max_auc_OTB2013:\n                max_auc_OTB2013 = auc\n                max_name_OTB2013 = trackers[i]\n            # print('%s(%.4f)' % (trackers[i], auc))\n        print('OTB2013 Best: %s(%.4f)' % (max_name_OTB2013, max_auc_OTB2013))\n\n        max_auc = 0.\n        max_name = ''\n        for i in range(len(trackers)):\n            auc = success_overlap[:, i, :].mean()\n            if auc > max_auc:\n                max_auc = auc\n                max_name = trackers[i]\n            print('%s(%.4f)' % (trackers[i], auc))\n        print('\\nOTB2015 Best: %s(%.4f)' % (max_name, max_auc))\n    else:\n        max_auc = 0.\n        max_name = ''\n        for i in range(len(trackers)):\n            auc = success_overlap[:, i, :].mean()\n            if auc > max_auc:\n                max_auc = auc\n                max_name = trackers[i]\n            print('%s(%.4f)' % (trackers[i], auc))\n\n        print('\\n%s Best: %s(%.4f)' % (dataset, max_name, max_auc))\n\n\ndef eval_auc_tune(result_path, dataset='OTB2015'):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    success_overlap = np.zeros((n_seq, 1, len(thresholds_overlap)))\n\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(float)\n        gt_center = convert_bb_to_center(gt_rect)\n        bb = get_result_bb(result_path, seq)\n        center = convert_bb_to_center(bb)\n        success_overlap[i][0] = compute_success_overlap(gt_rect, bb)\n\n    auc = success_overlap[:, 0, :].mean()\n    return auc\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 6:  # script name plus five arguments\n        print('python ./lib/core/eval_otb.py OTB2015 ./result Ocean* 0 1')\n        exit()\n    dataset = sys.argv[1]\n    result_path = sys.argv[2]\n    tracker_reg = sys.argv[3]\n    start = int(sys.argv[4])\n    end = int(sys.argv[5])\n    eval_auc(dataset, result_path, tracker_reg, start, end)\n"
  },
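When the dataset is OTB2015, `eval_auc` above also derives OTB2013 numbers from the same run by averaging only over the sequences listed in `OTB2013`. A toy version of that subset selection (sequence names real, AUC values made up):

```python
# OTB2013 scores from an OTB2015 run: keep only rows whose sequence name is in
# the OTB2013 list, then average. The per-sequence AUCs below are invented.

OTB2013 = {'david', 'deer', 'bolt'}                              # tiny stand-in list
per_seq_auc = {'david': 0.62, 'deer': 0.71, 'bolt': 0.55, 'biker': 0.40}

subset = [auc for seq, auc in per_seq_auc.items() if seq in OTB2013]
mean_2013 = sum(subset) / len(subset)
print(round(mean_2013, 4))
```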
  {
    "path": "tracker/sot/lib/core/eval_visdrone.py",
    "content": "import sys\nimport json\nimport os\nimport glob\nfrom os.path import join, realpath, dirname\nimport numpy as np\n\n\ndef overlap_ratio(rect1, rect2):\n    '''\n    Compute overlap ratio between two rects\n    - rect: 1d array of [x,y,w,h] or\n            2d array of N x [x,y,w,h]\n    '''\n\n    if rect1.ndim==1:\n        rect1 = rect1[None,:]\n    if rect2.ndim==1:\n        rect2 = rect2[None,:]\n\n    left = np.maximum(rect1[:,0], rect2[:,0])\n    right = np.minimum(rect1[:,0]+rect1[:,2], rect2[:,0]+rect2[:,2])\n    top = np.maximum(rect1[:,1], rect2[:,1])\n    bottom = np.minimum(rect1[:,1]+rect1[:,3], rect2[:,1]+rect2[:,3])\n\n    intersect = np.maximum(0,right - left) * np.maximum(0,bottom - top)\n    union = rect1[:,2]*rect1[:,3] + rect2[:,2]*rect2[:,3] - intersect\n    iou = np.clip(intersect / union, 0, 1)\n    return iou\n\n\ndef compute_success_overlap(gt_bb, result_bb):\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    n_frame = len(gt_bb)\n    success = np.zeros(len(thresholds_overlap))\n    iou = overlap_ratio(gt_bb, result_bb)\n    for i in range(len(thresholds_overlap)):\n        success[i] = sum(iou > thresholds_overlap[i]) / float(n_frame)\n    return success\n\n\ndef compute_success_error(gt_center, result_center):\n    thresholds_error = np.arange(0, 51, 1)\n    n_frame = len(gt_center)\n    success = np.zeros(len(thresholds_error))\n    dist = np.sqrt(np.sum(np.power(gt_center - result_center, 2), axis=1))\n    for i in range(len(thresholds_error)):\n        success[i] = sum(dist <= thresholds_error[i]) / float(n_frame)\n    return success\n\n\ndef get_result_bb(arch, seq):\n    result_path = join(arch, seq + '.txt')\n    temp = np.loadtxt(result_path, delimiter=',').astype(np.float)\n    return np.array(temp)\n\n\ndef convert_bb_to_center(bboxes):\n    return np.array([(bboxes[:, 0] + (bboxes[:, 2] - 1) / 2),\n                     (bboxes[:, 1] + (bboxes[:, 3] - 1) / 2)]).T\n\n\ndef eval_auc(dataset='VISDRONVAL', 
result_path='./result/', tracker_reg='S*', start=0, end=1e6):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n\n    trackers = glob.glob(join(result_path, dataset, tracker_reg))\n    trackers = trackers[start:min(end, len(trackers))]\n\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    # thresholds_error = np.arange(0, 51, 1)\n\n    success_overlap = np.zeros((n_seq, len(trackers), len(thresholds_overlap)))\n    # success_error = np.zeros((n_seq, len(trackers), len(thresholds_error)))\n    for i in range(n_seq):\n\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        for j in range(len(trackers)):\n            tracker = trackers[j]\n            print('{:d} processing:{} tracker: {}'.format(i, seq, tracker))\n            bb = get_result_bb(tracker, seq)\n            center = convert_bb_to_center(bb)\n\n            # fix bugs\n            # exit()\n            success_overlap[i][j] = compute_success_overlap(gt_rect, bb)\n            # success_error[i][j] = compute_success_error(gt_center, center)\n\n    print('Success Overlap')\n\n    max_auc = 0.\n    max_name = ''\n    for i in range(len(trackers)):\n        auc = success_overlap[:, i, :].mean()\n        if auc > max_auc:\n            max_auc = auc\n            max_name = trackers[i]\n        print('%s(%.4f)' % (trackers[i], auc))\n\n    print('\\n%s Best: %s(%.4f)' % (dataset, max_name, max_auc))\n\n\ndef eval_vis_tune(result_path, dataset='VISDRONEVAL'):\n    list_path = os.path.join(realpath(dirname(__file__)), '../../', 'dataset', dataset + '.json')\n    annos = json.load(open(list_path, 'r'))\n    seqs = list(annos.keys())  # dict to list for py3\n    n_seq = len(seqs)\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    
success_overlap = np.zeros((n_seq, 1, len(thresholds_overlap)))\n\n    for i in range(n_seq):\n        seq = seqs[i]\n        gt_rect = np.array(annos[seq]['gt_rect']).astype(np.float)\n        gt_center = convert_bb_to_center(gt_rect)\n        bb = get_result_bb(result_path, seq)\n        center = convert_bb_to_center(bb)\n        success_overlap[i][0] = compute_success_overlap(gt_rect, bb)\n        \n    auc = success_overlap[:, 0, :].mean()\n    return auc\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 5:\n        print('python ./lib/core/eval_visdrone.py VISDRONEVAL ./result SiamFC* 0 1')\n        exit()\n    dataset = sys.argv[1]\n    result_path = sys.argv[2]\n    tracker_reg = sys.argv[3]\n    start = int(sys.argv[4])\n    end = int(sys.argv[5])\n    eval_auc(dataset, result_path, tracker_reg, start, end)\n"
  },
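`compute_success_error`, defined in the scripts above though its call sites are commented out, is the standard center-error precision curve: for each threshold t from 0 to 50 pixels, the fraction of frames whose predicted center lies within t pixels of the ground truth. A small pure-Python sketch with made-up centers:

```python
# Center-error precision curve, mirroring compute_success_error above.
import math

def precision_curve(gt_centers, pred_centers):
    # Euclidean distance between each ground-truth / predicted center pair
    dists = [math.dist(g, p) for g, p in zip(gt_centers, pred_centers)]
    # fraction of frames within t pixels, for t = 0..50
    return [sum(d <= t for d in dists) / len(dists) for t in range(51)]

# made-up centers: errors are 5 px (a 3-4-5 triangle) and 50 px (30-40-50)
curve = precision_curve([(10, 10), (20, 20)], [(13, 14), (50, 60)])
print(curve[5], curve[50])
```

Trackers are conventionally compared at the 20-pixel point of this curve.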
  {
    "path": "tracker/sot/lib/core/extract_tune_logs.py",
    "content": "import shutil\nimport argparse\nimport numpy as np\n\n\nparser = argparse.ArgumentParser(description='Analysis siamfc tune results')\nparser.add_argument('--path', default='logs/tpe_tune_rpn.log', help='tune result path')\nparser.add_argument('--dataset', default='VOT2018', help='test dataset')\nparser.add_argument('--save_path', default='logs', help='log file save path')\n\n\ndef collect_results(args):\n    if not args.path.endswith('txt'):\n        name = args.path.split('.')[0]\n        name = name + '.txt'\n        shutil.copy(args.path, name)\n        args.path = name\n    fin = open(args.path, 'r')\n    lines = fin.readlines()\n    penalty_k = []\n    scale_lr = []\n    wi = []\n    sz = []\n    bz = []\n    eao = []\n    count = 0 # total numbers\n\n    for line in lines:\n        if not line.startswith('penalty_k'):\n            pass\n        else:\n     #       print(line)\n            count += 1\n            temp0, temp1, temp2, temp3, temp4, temp5 = line.split(',')\n            penalty_k.append(float(temp0.split(': ')[-1]))\n            scale_lr.append(float(temp1.split(': ')[-1]))\n            wi.append(float(temp2.split(': ')[-1]))\n            sz.append(float(temp3.split(': ')[-1]))\n            bz.append(float(temp4.split(': ')[-1]))\n            eao.append(float(temp5.split(': ')[-1]))\n\n    # find max\n    eao = np.array(eao)\n    max_idx = np.argmax(eao)\n    max_eao = eao[max_idx]\n    print('{} params group  have been tested'.format(count))\n    print('penalty_k: {:.4f}, scale_lr: {:.4f}, wi: {:.4f}, small_sz: {}, big_sz: {}, auc: {}'.format(penalty_k[max_idx], scale_lr[max_idx], wi[max_idx], sz[max_idx], bz[max_idx], max_eao))\n\n\nif __name__ == '__main__':\n    args = parser.parse_args()\n    collect_results(args)\n"
  },
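`collect_results` above recovers each hyperparameter by splitting a comma-separated log line into `name: value` pairs and taking the text after `': '`. The literal line below is invented, but it has the shape the parser expects:

```python
# Per-line parsing as in collect_results(): split on commas, then take the
# value after ': ' in each "name: value" pair. The line itself is made up.

line = 'penalty_k: 0.12, scale_lr: 0.35, wi: 0.45, small_sz: 255, big_sz: 271, eao: 0.41'
values = [float(part.split(': ')[-1]) for part in line.split(',')]
print(values)
```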
  {
    "path": "tracker/sot/lib/core/function.py",
    "content": "import math\nimport time\nimport torch\nfrom utils.utils import print_speed\n\n# -----------------------------\n# Main training code for Ocean\n# -----------------------------\ndef ocean_train(train_loader, model,  optimizer, epoch, cur_lr, cfg, writer_dict, logger, device):\n    # unfix for FREEZE-OUT method\n    # model, optimizer = unfix_more(model, optimizer, epoch, cfg, cur_lr, logger)\n\n    # prepare\n    batch_time = AverageMeter()\n    data_time = AverageMeter()\n    losses = AverageMeter()\n    cls_losses_align = AverageMeter()\n    cls_losses_ori = AverageMeter()\n    reg_losses = AverageMeter()\n    end = time.time()\n\n    # switch to train mode\n    model.train()\n    model = model.to(device)\n\n    for iter, input in enumerate(train_loader):\n        # measure data loading time\n        data_time.update(time.time() - end)\n\n        # input and output/loss\n        label_cls = input[2].type(torch.FloatTensor)  # BCE need float\n        template = input[0].to(device)\n        search = input[1].to(device)\n        label_cls = label_cls.to(device)\n        reg_label = input[3].float().to(device)\n        reg_weight = input[4].float().to(device)\n\n        cls_loss_ori, cls_loss_align, reg_loss = model(template, search, label_cls, reg_target=reg_label, reg_weight=reg_weight)\n\n        cls_loss_ori = torch.mean(cls_loss_ori)\n        reg_loss = torch.mean(reg_loss)\n\n        if cls_loss_align is not None:\n            cls_loss_align = torch.mean(cls_loss_align)\n            loss = cls_loss_ori + cls_loss_align + reg_loss   # smaller reg loss is better for stable training (compared to 1.2 in SiamRPN seriese)\n        else:                                                 # I would suggest the readers to perform ablation on the loss trade-off weights when building a new module\n            cls_loss_align = 0\n            loss = cls_loss_ori + reg_loss\n\n        loss = torch.mean(loss)\n\n        # compute gradient and do update step\n      
  optimizer.zero_grad()\n        loss.backward()\n        # torch.nn.utils.clip_grad_norm(model.parameters(), 10)  # gradient clip\n\n        if is_valid_number(loss.item()):\n            optimizer.step()\n\n        # record loss\n        loss = loss.item()\n        losses.update(loss, template.size(0))\n\n        cls_loss_ori = cls_loss_ori.item()\n        cls_losses_ori.update(cls_loss_ori, template.size(0))\n\n        try:\n            cls_loss_align = cls_loss_align.item()\n        except:\n            cls_loss_align = 0\n\n        cls_losses_align.update(cls_loss_align, template.size(0))\n\n        reg_loss = reg_loss.item()\n        reg_losses.update(reg_loss, template.size(0))\n\n        batch_time.update(time.time() - end)\n        end = time.time()\n\n        if (iter + 1) % cfg.PRINT_FREQ == 0:\n            logger.info(\n                'Epoch: [{0}][{1}/{2}] lr: {lr:.7f}\\t Batch Time: {batch_time.avg:.3f}s \\t Data Time:{data_time.avg:.3f}s \\t CLS_ORI Loss:{cls_loss_ori.avg:.5f} \\t CLS_ALIGN Loss:{cls_loss_align.avg:.5f} \\t REG Loss:{reg_loss.avg:.5f} \\t Loss:{loss.avg:.5f}'.format(\n                    epoch, iter + 1, len(train_loader), lr=cur_lr, batch_time=batch_time, data_time=data_time,\n                    loss=losses, cls_loss_ori=cls_losses_ori, cls_loss_align=cls_losses_align, reg_loss=reg_losses))\n\n            print_speed((epoch - 1) * len(train_loader) + iter + 1, batch_time.avg,\n                        cfg.OCEAN.TRAIN.END_EPOCH * len(train_loader), logger)\n\n        # write to tensorboard\n        writer = writer_dict['writer']\n        global_steps = writer_dict['train_global_steps']\n        writer.add_scalar('train_loss', loss, global_steps)\n        writer_dict['train_global_steps'] = global_steps + 1\n\n    return model, writer_dict\n\n# ------------------------------------------\n# Main code for Ocean Plus training\n# ------------------------------------------\ndef BNtoFixed(m):\n    class_name = m.__class__.__name__\n    if 
class_name.find('BatchNorm') != -1:\n        m.eval()\n\ndef oceanplus_train(train_loader, model, optimizer, epoch, cur_lr, cfg, writer_dict, logger, device):\n    # prepare\n    batch_time = AverageMeter()\n    data_time = AverageMeter()\n    losses = AverageMeter()\n    end = time.time()\n\n    # switch to train mode\n    print('====> fix again <=====')\n    model.train()\n\n    try:\n        model.module.features.features.eval()\n    except:\n        model.module.features.eval()\n\n    try:\n        model.module.neck.eval()\n        model.module.neck.apply(BNtoFixed)\n    except:\n        pass\n\n    try:\n        model.module.connect_model.eval()\n        model.module.connect_model.apply(BNtoFixed)\n    except:\n        pass\n\n    try:\n        model.module.bbox_tower.eval()\n        model.module.bbox_tower.apply(BNtoFixed)\n    except:\n        pass\n\n    try:\n        model.module.features.features.apply(BNtoFixed)\n    except:\n        model.module.features.apply(BNtoFixed)\n\n    model.module.mask_model.train()\n    model = model.to(device)\n\n    for iter, input in enumerate(train_loader):\n        # measure data loading time\n        data_time.update(time.time() - end)\n\n        # input and output/loss\n        label_cls = input[2].type(torch.FloatTensor)  # BCE need float\n        template = input[0].to(device)\n        search = input[1].to(device)\n        label_cls = label_cls.to(device)\n        reg_label = input[3].float().to(device)\n        reg_weight = input[4].float().to(device)\n\n        mask = input[6].float().to(device)\n        template_mask = input[-1].float().to(device)\n        mask_weight = input[7].float().to(device)\n\n        _, _, loss = model(template, search, label_cls, reg_target=reg_label, reg_weight=reg_weight,\n                                              mask=mask, mask_weight=mask_weight, template_mask=template_mask)\n\n        loss = torch.mean(loss)\n\n        # compute gradient and do update step\n        
optimizer.zero_grad()\n        loss.backward()\n        torch.nn.utils.clip_grad_norm_(model.parameters(), 10)  # gradient clip (must run before optimizer.step())\n\n        if is_valid_number(loss.item()):\n            optimizer.step()\n\n        # record loss\n        loss = loss.item()\n        losses.update(loss, template.size(0))\n\n        batch_time.update(time.time() - end)\n        end = time.time()\n\n        if (iter + 1) % cfg.PRINT_FREQ == 0:\n            logger.info(\n                'Epoch: [{0}][{1}/{2}] lr: {lr:.7f}\\t Batch Time: {batch_time.avg:.3f}s \\t Data Time:{data_time.avg:.3f}s \\t MASK Loss:{mask_loss.avg:.5f}'.format(\n                    epoch, iter + 1, len(train_loader), lr=cur_lr, batch_time=batch_time, data_time=data_time, mask_loss=losses))\n\n            print_speed((epoch - 1) * len(train_loader) + iter + 1, batch_time.avg,\n                        cfg.FREEMASK.TRAIN.END_EPOCH * len(train_loader), logger)\n\n        # write to tensorboard\n        writer = writer_dict['writer']\n        global_steps = writer_dict['train_global_steps']\n        writer.add_scalar('train_loss', loss, global_steps)\n        writer_dict['train_global_steps'] = global_steps + 1\n\n    return model, writer_dict\n\n# ===========================================================\n# Main code for SiamDW train\n# ===========================================================\ndef siamdw_train(train_loader, model, optimizer, epoch, cur_lr, cfg, writer_dict, logger):\n    # unfix for FREEZE-OUT method\n    # model, optimizer = unfix_more(model, optimizer, epoch, cfg, cur_lr, logger)  # you may try freeze-out\n\n    # prepare\n    batch_time = AverageMeter()\n    data_time = AverageMeter()\n    losses = AverageMeter()\n    end = time.time()\n\n    # switch to train mode\n    model.train()\n    model = model.cuda()\n\n    for iter, input in enumerate(train_loader):\n        # measure data loading time\n        
data_time.update(time.time() - end)\n\n        # input and output/loss\n        label_cls = input[2].type(torch.FloatTensor)  # BCE need float\n        template = input[0].cuda()\n        search = input[1].cuda()\n        label_cls = label_cls.cuda()\n\n        loss = model(template, search, label_cls)\n        loss = torch.mean(loss)\n\n        # compute gradient and do update step\n        optimizer.zero_grad()\n        loss.backward()\n        torch.nn.utils.clip_grad_norm_(model.parameters(), 10)  # gradient clip\n\n        if is_valid_number(loss.item()):\n            optimizer.step()\n\n        # record loss\n        loss = loss.item()\n        losses.update(loss, template.size(0))\n        batch_time.update(time.time() - end)\n        end = time.time()\n\n        if (iter + 1) % cfg.PRINT_FREQ == 0:\n            logger.info('Epoch: [{0}][{1}/{2}] lr: {lr:.7f}\\t Batch Time: {batch_time.avg:.3f}s \\t Data Time:{data_time.avg:.3f}s \\t Loss:{loss.avg:.5f}'.format(\n                epoch, iter + 1, len(train_loader), lr=cur_lr, batch_time=batch_time, data_time=data_time, loss=losses))\n\n            print_speed((epoch - 1) * len(train_loader) + iter + 1, batch_time.avg, cfg.SIAMFC.TRAIN.END_EPOCH * len(train_loader), logger)\n\n        # write to tensorboard\n        writer = writer_dict['writer']\n        global_steps = writer_dict['train_global_steps']\n        writer.add_scalar('train_loss', loss, global_steps)\n        writer_dict['train_global_steps'] = global_steps + 1\n\n    return model, writer_dict\n\n\ndef is_valid_number(x):\n    return not (math.isnan(x) or math.isinf(x) or x > 1e4)\n\n\nclass AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\"\"\"\n    def __init__(self):\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        self.val = val\n        self.sum += val * n\n        self.count += 
n\n        self.avg = self.sum / self.count if self.count != 0 else 0\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/DAVIS/gen_json.py",
    "content": "# --------------------------------------------------------\n# processing DAVIS train\n# --------------------------------------------------------\nfrom os.path import join\nimport json\nimport os\nimport cv2\nimport numpy as np\nfrom PIL import Image\n\ndata_dir = '/home/zpzhang/data/testing/DAVIS-trainval'\nsaveDir = '/home/zpzhang/data/training/DAVIS'\n\ndataset = dict()\ntrain_txt = join(data_dir, 'ImageSets/2017', 'train.txt')\nvideos = open(train_txt, 'r').readlines()\nn_videos = len(videos)\n\nfor iidx, video_name in enumerate(videos):\n    video_name = video_name[:-1]\n\n    print('video id: {:04d} / {:04d}'.format(iidx, n_videos))\n    try:\n        imgs = sorted(os.listdir(join(data_dir, 'JPEGImages/480p', video_name)))\n    except OSError:  # skip entries without an image folder\n        continue\n    dataset[video_name] = dict()\n\n    for idx, im_name in enumerate(imgs):\n        mask_path = join(data_dir, 'Annotations/480p', video_name, im_name.replace('.jpg', '.png'))\n        mask = np.array(Image.open(mask_path)).astype(np.uint8)\n        objects = np.unique(mask)\n\n        for track_id in range(1, len(objects)):\n            color = objects[track_id]\n            mask_temp = (mask == color).astype(np.uint8) * 255\n            x, y, w, h = cv2.boundingRect(mask_temp)\n            bbox = [x, y, x + w - 1, y + h - 1] # [x1,y1,x2,y2]\n            if w <= 0 or h <= 0:  # zero-size boxes lead to nan in the cls loss\n                continue\n\n            if '{:02d}'.format(track_id - 1) not in dataset[video_name].keys():\n                dataset[video_name]['{:02d}'.format(track_id - 1)] = dict()\n            dataset[video_name]['{:02d}'.format(track_id-1)]['{:06d}'.format(int(im_name.split('.')[0]))] = bbox\nprint('save json (dataset), please wait 20 seconds~')\nsave_path = join(saveDir, 'davis.json')\njson.dump(dataset, open(save_path, 'w'), indent=4, sort_keys=True)\nprint('done!')\n\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/DAVIS/par_crop.py",
    "content": "# --------------------------------------------------------\n# process DAVIS\n# --------------------------------------------------------\nimport os\nimport cv2\nimport numpy as np\nfrom os.path import join, isdir\nfrom os import mkdir, makedirs\nfrom concurrent import futures\nimport sys\nimport time\nimport argparse\nfrom PIL import Image\n\nparser = argparse.ArgumentParser(description='DAVIS Parallel Preprocessing for SiamMask')\nparser.add_argument('--exemplar_size', type=int, default=127, help='size of exemplar')\nparser.add_argument('--context_amount', type=float, default=0.5, help='context amount')\nparser.add_argument('--search_size', type=int, default=511, help='size of cropped search region')\nparser.add_argument('--enable_mask', action='store_true', help='whether crop mask')\nparser.add_argument('--num_threads', type=int, default=24, help='number of threads')\nargs = parser.parse_args()\n\n\n# Print iterations progress (thanks StackOverflow)\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\n    \"\"\"\n    Call in a loop to create terminal progress bar\n    @params:\n        iteration   - Required  : current iteration (Int)\n        total       - Required  : total iterations (Int)\n        prefix      - Optional  : prefix string (Str)\n        suffix      - Optional  : suffix string (Str)\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\n        barLength   - Optional  : character length of bar (Int)\n    \"\"\"\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\n    percents        = formatStr.format(100 * (iteration / float(total)))\n    filledLength    = int(round(barLength * iteration / float(total)))\n    bar             = '█' * filledLength + '-' * (barLength - filledLength)\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\n    if iteration == total:\n        sys.stdout.write('\\x1b[2K\\r')\n    
sys.stdout.flush()\n\n\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\n    a = (out_sz-1) / (bbox[2]-bbox[0])\n    b = (out_sz-1) / (bbox[3]-bbox[1])\n    c = -a * bbox[0]\n    d = -b * bbox[1]\n    mapping = np.array([[a, 0, c],\n                        [0, b, d]]).astype(float)  # np.float alias was removed from NumPy\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz),\n                          borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\n    return crop\n\n\ndef pos_s_2_bbox(pos, s):\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\n\n\ndef crop_like_SiamFCx(image, bbox, exemplar_size=127, context_amount=0.5, search_size=255, padding=(0, 0, 0)):\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\n    target_size = [bbox[2]-bbox[0]+1, bbox[3]-bbox[1]+1]\n    wc_z = target_size[1] + context_amount * sum(target_size)\n    hc_z = target_size[0] + context_amount * sum(target_size)\n    s_z = np.sqrt(wc_z * hc_z)\n    scale_z = exemplar_size / s_z\n    d_search = (search_size - exemplar_size) / 2\n    pad = d_search / scale_z\n    s_x = s_z + 2 * pad\n\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), search_size, padding)\n    return x\n\n\ndef crop_img(video_name, set_crop_base_path, set_img_base_path,\n             exemplar_size=127, context_amount=0.5, search_size=511, enable_mask=True):\n    video_name = video_name[:-1]\n    imgs = sorted(os.listdir(join(set_img_base_path, 'JPEGImages/480p', video_name)))\n\n    for im_id, im_name in enumerate(imgs):\n        im_path = join(set_img_base_path, 'JPEGImages/480p', video_name, im_name)\n        mask_path = join(set_img_base_path, 'Annotations/480p', video_name, im_name.replace('.jpg', '.png'))\n        im = cv2.imread(im_path)\n        avg_chans = np.mean(im, axis=(0, 1))\n        mask = np.array(Image.open(mask_path)).astype(np.uint8)\n\n        objects = np.unique(mask)\n        frame_crop_base_path = join(set_crop_base_path, video_name)  # video path\n        if not isdir(frame_crop_base_path): 
makedirs(frame_crop_base_path)\n\n        for track_id in range(1, len(objects)):\n            color = objects[track_id]\n            mask_temp = (mask == color).astype(np.uint8) * 255\n            x, y, w, h = cv2.boundingRect(mask_temp)\n            bbox = [x, y, x + w - 1, y + h - 1] # [x1,y1,x2,y2]\n\n            x = crop_like_SiamFCx(im, bbox, exemplar_size=exemplar_size, context_amount=context_amount,\n                                  search_size=search_size, padding=avg_chans)\n            cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(im_id, track_id-1)), x)\n\n            x = crop_like_SiamFCx(mask_temp, bbox, exemplar_size=exemplar_size, context_amount=context_amount, search_size=search_size)\n            x = (x > 0.5 * 255).astype(np.uint8) * 255\n            cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.m.png'.format(im_id, track_id-1)), x)\n\n\ndef main(exemplar_size=127, context_amount=0.5, search_size=511, enable_mask=True, num_threads=24):\n    data_dir = '/home/zpzhang/data/testing/DAVIS-trainval'\n    crop_path = '/home/zpzhang/data/training/DAVIS/crop{:d}'.format(search_size)\n    if not isdir(crop_path): makedirs(crop_path)\n\n    train_txt = join(data_dir, 'ImageSets/2017', 'train.txt')\n    videos = open(train_txt, 'r').readlines()\n    n_videos = len(videos)\n\n    # debug\n    # for video in videos:\n    #     if not video == 'cat-girl\\n':\n    #         continue\n    #     crop_img(video, crop_path, data_dir, exemplar_size, context_amount, search_size, enable_mask)\n\n    with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\n        fs = [executor.submit(crop_img, video,\n                                crop_path, data_dir,\n                                exemplar_size, context_amount, search_size,\n                                enable_mask) for video in videos]\n        for i, f in 
enumerate(futures.as_completed(fs)):\n            printProgress(i, n_videos, prefix='DAVIS', suffix='Done ', barLength=40)\n    print('done')\n\n\nif __name__ == '__main__':\n    since = time.time()\n    main(args.exemplar_size, args.context_amount, args.search_size, args.enable_mask, args.num_threads)\n    time_elapsed = time.time() - since\n    print('Total complete in {:.0f}m {:.0f}s'.format(\n        time_elapsed // 60, time_elapsed % 60))\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/DAVIS/readme.md",
    "content": "# Preprocessing DAVIS\n\n````shell\npython par_crop.py --enable_mask --num_threads 24\npython gen_json.py\n````\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT210/RGBT210_genjson.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n# __author__ = 'zzp'\r\n\r\nimport json\r\nimport numpy as np\r\nfrom os import listdir\r\nfrom os.path import join\r\n\r\nbasepath = '/data/share/RGBT210/'\r\nsave = dict()\r\n\r\n\r\ndef genjson():\r\n    videos = listdir(basepath)\r\n\r\n    for v in videos:\r\n        save[v] = dict()\r\n        save[v]['name'] = v  # video name\r\n\r\n        # save img names\r\n        v_in_path = join(basepath, v, 'infrared')\r\n        v_rgb_path = join(basepath, v, 'visible')\r\n        temp1 = listdir(v_in_path)\r\n        temp2 = listdir(v_rgb_path)\r\n        temp1.sort()\r\n        temp2.sort()\r\n        save[v]['infrared_imgs'] = temp1   # infrared file names\r\n        save[v]['visible_imgs'] = temp2    # visible file names\r\n\r\n        # read gt (both modalities read init.txt here)\r\n        v_in_gt_path = join(basepath, v, 'init.txt')\r\n        v_rgb_gt_path = join(basepath, v, 'init.txt')\r\n        v_in_gts = np.loadtxt(v_in_gt_path, delimiter=',')\r\n        v_rgb_gts = np.loadtxt(v_rgb_gt_path, delimiter=',')\r\n\r\n        v_in_gts[:, 0:2] = v_in_gts[:, 0:2] - 1    # to python 0 index\r\n        v_rgb_gts[:, 0:2] = v_rgb_gts[:, 0:2] - 1  # to python 0 index\r\n\r\n        v_in_init = v_in_gts[0]\r\n        v_rgb_init = v_rgb_gts[0]\r\n\r\n        # save init and gt\r\n        save[v]['infrared_init'] = v_in_init.tolist()\r\n        save[v]['visible_init'] = v_rgb_init.tolist()\r\n        save[v]['infrared_gt'] = v_in_gts.tolist()\r\n        save[v]['visible_gt'] = v_rgb_gts.tolist()\r\n\r\n    json.dump(save, open('/data/zpzhang/datasets/dataset/RGBT210.json', 'w'), indent=4, sort_keys=True)\r\n\r\n\r\nif __name__ == '__main__':\r\n    genjson()\r\n\r\n\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT210/gen_json.py",
    "content": "from os.path import join\nfrom os import listdir\nimport json\nimport cv2\nimport numpy as np\nfrom pprint import pprint\n\nprint('loading json (raw RGBT210 info), please wait 20 seconds~')\nRGBT210 = json.load(open('/data/zpzhang/datasets/dataset/RGBT210.json', 'r'))\nRGBT210_base_path = '/data/share/RGBT210'\n\ndef check_size(frame_sz, bbox):\n    min_ratio = 0.1\n    max_ratio = 0.75\n    # only accept objects >10% and <75% of the total frame\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\n    return ok\n\n\ndef check_borders(frame_sz, bbox):\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\n    return ok\n\n\nsnippets = dict()\n\nn_videos = 0\n\n\nfor v_name in list(RGBT210.keys()):\n    video = RGBT210[v_name]\n    n_videos += 1\n    in_frames = video['infrared_imgs']\n    rgb_frames = video['visible_imgs']\n    snippet = dict()\n    snippets[video['name']] = dict()\n\n    # read an image to get the frame size\n    im_temp_path = join(RGBT210_base_path, video['name'], 'visible', rgb_frames[0])\n    im_temp = cv2.imread(im_temp_path)\n    frame_sz = [im_temp.shape[1], im_temp.shape[0]]\n\n    in_gts = video['infrared_gt']\n    rgb_gts = video['visible_gt']\n\n    for f, in_frame in enumerate(in_frames):\n        in_bbox = in_gts[f]  # (x,y,w,h)\n        rgb_bbox = rgb_gts[f]  # (x,y,w,h)\n\n        bboxs = [[in_bbox[0], in_bbox[1], in_bbox[0]+in_bbox[2], in_bbox[1]+in_bbox[3]],\n                 [rgb_bbox[0], rgb_bbox[1], rgb_bbox[0]+rgb_bbox[2], rgb_bbox[1]+rgb_bbox[3]]]  #(xmin, ymin, xmax, ymax)\n\n        imgs = [in_frames[f], rgb_frames[f]] # image names may differ between the infrared and visible streams\n\n        
snippet['{:06d}'.format(f)] = [imgs, bboxs]\n\n    snippets[video['name']]['{:02d}'.format(0)] = snippet.copy()\n\njson.dump(snippets, open('/data/share/SMALLSIAM/RGBT210/all.json', 'w'), indent=4, sort_keys=True)\nprint('done!')\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT210/par_crop.py",
    "content": "from os.path import join, isdir, exists\nfrom os import listdir, mkdir, makedirs\nimport cv2\nimport numpy as np\nimport glob\nfrom concurrent import futures\nimport sys\nimport time\n\n\nRGBT234_base_path = '/data/share/RGBT210'  # note: variable name kept from the RGBT234 script; it points at RGBT210\n\n# Print iterations progress (thanks StackOverflow)\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\n    \"\"\"\n    Call in a loop to create terminal progress bar\n    @params:\n        iteration   - Required  : current iteration (Int)\n        total       - Required  : total iterations (Int)\n        prefix      - Optional  : prefix string (Str)\n        suffix      - Optional  : suffix string (Str)\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\n        barLength   - Optional  : character length of bar (Int)\n    \"\"\"\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\n    percents        = formatStr.format(100 * (iteration / float(total)))\n    filledLength    = int(round(barLength * iteration / float(total)))\n    bar             = '█' * filledLength + '-' * (barLength - filledLength)\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\n    if iteration == total:\n        sys.stdout.write('\\x1b[2K\\r')\n    sys.stdout.flush()\n\n\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\n    a = (out_sz-1) / (bbox[2]-bbox[0])\n    b = (out_sz-1) / (bbox[3]-bbox[1])\n    c = -a * bbox[0]\n    d = -b * bbox[1]\n    mapping = np.array([[a, 0, c],\n                        [0, b, d]]).astype(float)  # np.float alias was removed from NumPy\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\n    return crop\n\n\ndef pos_s_2_bbox(pos, s):\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\n\n\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\n    
target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\n    wc_z = target_size[1] + context_amount * sum(target_size)\n    hc_z = target_size[0] + context_amount * sum(target_size)\n    s_z = np.sqrt(wc_z * hc_z)\n    scale_z = exemplar_size / s_z\n    d_search = (instanc_size - exemplar_size) / 2\n    pad = d_search / scale_z\n    s_x = s_z + 2 * pad\n\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\n    return z, x\n\n\ndef crop_img(im, bbox, instanc_size):\n    avg_chans = np.mean(im, axis=(0, 1))\n    z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\n    return z, x\n\n\neps = 1e-5\ndef crop_video(video, crop_path, instanc_size):\n    video_crop_base_path = join(crop_path, video)\n    if not exists(video_crop_base_path): makedirs(video_crop_base_path)\n\n    video_base_path = join(RGBT234_base_path, video)\n\n    # infrared gt\n    in_gts_path = join(video_base_path, 'init.txt')\n    try:\n        in_gts = np.loadtxt(open(in_gts_path, \"rb\"), delimiter=',')\n    except:\n        in_gts = np.loadtxt(open(in_gts_path, \"rb\"), delimiter=' ')\n\n    # rgb gt\n    rgb_gts_path = join(video_base_path, 'init.txt')\n    try:\n        rgb_gts = np.loadtxt(open(rgb_gts_path, \"rb\"), delimiter=',')\n    except:\n        rgb_gts = np.loadtxt(open(rgb_gts_path, \"rb\"), delimiter=' ')\n\n    in_jpgs = sorted(glob.glob(join(video_base_path, 'infrared', '*.jpg')))\n    rgb_jpgs = sorted(glob.glob(join(video_base_path, 'visible', '*.jpg')))\n\n\n    for idx, img_path in enumerate(in_jpgs):\n        in_im = cv2.imread(img_path)\n        rgb_im = cv2.imread(rgb_jpgs[idx])\n        in_gt = in_gts[idx]\n        rgb_gt = rgb_gts[idx]\n        in_bbox = [int(g) for g in in_gt]  # (x,y,w,h)\n\n        if abs(in_bbox[2]) < eps or abs(in_bbox[3]) < eps:\n            continue\n\n        in_bbox = [in_bbox[0], in_bbox[1], 
in_bbox[0]+in_bbox[2], in_bbox[1]+in_bbox[3]]   # (xmin, ymin, xmax, ymax)\n        rgb_bbox = [int(g) for g in rgb_gt]  # (x,y,w,h)\n        rgb_bbox = [rgb_bbox[0], rgb_bbox[1], rgb_bbox[0] + rgb_bbox[2], rgb_bbox[1] + rgb_bbox[3]]  # (xmin, ymin, xmax, ymax)\n\n        in_z, in_x = crop_img(in_im, in_bbox, instanc_size)\n        rgb_z, rgb_x = crop_img(rgb_im, rgb_bbox, instanc_size)\n\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.in.z.jpg'.format(int(idx), 0)), in_z)\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.in.x.jpg'.format(int(idx), 0)), in_x)\n\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.rgb.z.jpg'.format(int(idx), 0)), rgb_z)\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.rgb.x.jpg'.format(int(idx), 0)), rgb_x)\n\n\ndef main(instanc_size=271, num_threads=24):\n    crop_path = '/data/share/SMALLSIAM/RGBT210/crop{:d}'.format(instanc_size)\n    if not exists(crop_path): makedirs(crop_path)\n\n    videos = sorted(listdir(RGBT234_base_path))\n    n_videos = len(videos)\n\n    with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\n        fs = [executor.submit(crop_video, video, crop_path, instanc_size) for video in videos]\n        for i, f in enumerate(futures.as_completed(fs)):\n            # Write progress to error so that it can be seen\n            printProgress(i, n_videos, prefix='RGBT210', suffix='Done ', barLength=40)\n\n\nif __name__ == '__main__':\n    since = time.time()\n    main(int(sys.argv[1]), int(sys.argv[2]))\n    time_elapsed = time.time() - since\n    print('Total complete in {:.0f}m {:.0f}s'.format(\n        time_elapsed // 60, time_elapsed % 60))\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT210/readme.md",
    "content": "# Preprocessing RGBT210 (train and val)\n\n\n### Crop & Generate data info (20 min)\n\n````sh\npython RGBT210_genjson.py\npython par_crop.py 511 24\npython gen_json.py\n````\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT234/RGBT234_genjson.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n# __author__ = 'zzp'\r\n\r\nimport json\r\nimport numpy as np\r\nfrom os import listdir\r\nfrom os.path import join\r\n\r\nbasepath = '/data/zpzhang/datasets/dataset/RGBT234/'\r\nsave = dict()\r\n\r\n\r\ndef genjson():\r\n    videos = listdir(basepath)\r\n\r\n    for v in videos:\r\n        save[v] = dict()\r\n        save[v]['name'] = v  # video name\r\n\r\n        # save img names\r\n        v_in_path = join(basepath, v, 'infrared')\r\n        v_rgb_path = join(basepath, v, 'visible')\r\n        temp1 = listdir(v_in_path)\r\n        temp2 = listdir(v_rgb_path)\r\n        temp1.sort()\r\n        temp2.sort()\r\n        save[v]['infrared_imgs'] = temp1   # infrared file names\r\n        save[v]['visible_imgs'] = temp2    # visible file names\r\n\r\n        # read gt\r\n        v_in_gt_path = join(basepath, v, 'infrared.txt')\r\n        v_rgb_gt_path = join(basepath, v, 'visible.txt')\r\n        v_in_gts = np.loadtxt(v_in_gt_path, delimiter=',')\r\n        v_rgb_gts = np.loadtxt(v_rgb_gt_path, delimiter=',')\r\n\r\n        v_in_gts[:, 0:2] = v_in_gts[:, 0:2] - 1    # to python 0 index\r\n        v_rgb_gts[:, 0:2] = v_rgb_gts[:, 0:2] - 1  # to python 0 index\r\n\r\n        v_in_init = v_in_gts[0]\r\n        v_rgb_init = v_rgb_gts[0]\r\n\r\n        # save init and gt\r\n        save[v]['infrared_init'] = v_in_init.tolist()\r\n        save[v]['visible_init'] = v_rgb_init.tolist()\r\n        save[v]['infrared_gt'] = v_in_gts.tolist()\r\n        save[v]['visible_gt'] = v_rgb_gts.tolist()\r\n\r\n    json.dump(save, open('/data/zpzhang/datasets/dataset/RGBT234.json', 'w'), indent=4, sort_keys=True)\r\n\r\n\r\nif __name__ == '__main__':\r\n    genjson()\r\n\r\n\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT234/gen_json.py",
    "content": "from os.path import join\nfrom os import listdir\nimport json\nimport cv2\nimport numpy as np\nfrom pprint import pprint\n\nprint('loading json (raw RGBT234 info), please wait 20 seconds~')\nRGBT234 = json.load(open('/data/zpzhang/datasets/dataset/RGBT234.json', 'r'))\nRGBT234_base_path = '/data/zpzhang/datasets/dataset/RGBT234'\n\ndef check_size(frame_sz, bbox):\n    min_ratio = 0.1\n    max_ratio = 0.75\n    # only accept objects >10% and <75% of the total frame\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\n    return ok\n\n\ndef check_borders(frame_sz, bbox):\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\n    return ok\n\n\nsnippets = dict()\n\nn_videos = 0\n\n\nfor v_name in list(RGBT234.keys()):\n    video = RGBT234[v_name]\n    n_videos += 1\n    in_frames = video['infrared_imgs']\n    rgb_frames = video['visible_imgs']\n    snippet = dict()\n    snippets[video['name']] = dict()\n\n    # read an image to get the frame size\n    im_temp_path = join(RGBT234_base_path, video['name'], 'visible', rgb_frames[0])\n    im_temp = cv2.imread(im_temp_path)\n    frame_sz = [im_temp.shape[1], im_temp.shape[0]]\n\n    in_gts = video['infrared_gt']\n    rgb_gts = video['visible_gt']\n\n    for f, in_frame in enumerate(in_frames):\n        in_bbox = in_gts[f]  # (x,y,w,h)\n        rgb_bbox = rgb_gts[f]  # (x,y,w,h)\n\n        bboxs = [[in_bbox[0], in_bbox[1], in_bbox[0]+in_bbox[2], in_bbox[1]+in_bbox[3]],\n                 [rgb_bbox[0], rgb_bbox[1], rgb_bbox[0]+rgb_bbox[2], rgb_bbox[1]+rgb_bbox[3]]]  #(xmin, ymin, xmax, ymax)\n\n        imgs = [in_frames[f], rgb_frames[f]] # image names may differ between the infrared and visible streams\n\n        snippet['{:06d}'.format(f)] = [imgs, 
bboxs]\n\n    snippets[video['name']]['{:02d}'.format(0)] = snippet.copy()\n\njson.dump(snippets, open('/data/share/SMALLSIAM/RGBT234/all.json', 'w'), indent=4, sort_keys=True)\nprint('done!')\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT234/par_crop.py",
    "content": "from os.path import join, isdir, exists\nfrom os import listdir, mkdir, makedirs\nimport cv2\nimport numpy as np\nimport glob\nfrom concurrent import futures\nimport sys\nimport time\n\n\nRGBT234_base_path = '/data/zpzhang/datasets/dataset/RGBT234'\n\n# Print iterations progress (thanks StackOverflow)\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\n    \"\"\"\n    Call in a loop to create terminal progress bar\n    @params:\n        iteration   - Required  : current iteration (Int)\n        total       - Required  : total iterations (Int)\n        prefix      - Optional  : prefix string (Str)\n        suffix      - Optional  : suffix string (Str)\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\n        barLength   - Optional  : character length of bar (Int)\n    \"\"\"\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\n    percents        = formatStr.format(100 * (iteration / float(total)))\n    filledLength    = int(round(barLength * iteration / float(total)))\n    bar             = '█' * filledLength + '-' * (barLength - filledLength)\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\n    if iteration == total:\n        sys.stdout.write('\\x1b[2K\\r')\n    sys.stdout.flush()\n\n\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\n    a = (out_sz-1) / (bbox[2]-bbox[0])\n    b = (out_sz-1) / (bbox[3]-bbox[1])\n    c = -a * bbox[0]\n    d = -b * bbox[1]\n    mapping = np.array([[a, 0, c],\n                        [0, b, d]]).astype(float)  # np.float alias was removed from NumPy\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\n    return crop\n\n\ndef pos_s_2_bbox(pos, s):\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\n\n\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\n    target_pos = [(bbox[2]+bbox[0])/2., 
(bbox[3]+bbox[1])/2.]\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\n    wc_z = target_size[1] + context_amount * sum(target_size)\n    hc_z = target_size[0] + context_amount * sum(target_size)\n    s_z = np.sqrt(wc_z * hc_z)\n    scale_z = exemplar_size / s_z\n    d_search = (instanc_size - exemplar_size) / 2\n    pad = d_search / scale_z\n    s_x = s_z + 2 * pad\n\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\n    return z, x\n\n\ndef crop_img(im, bbox, instanc_size):\n    avg_chans = np.mean(im, axis=(0, 1))\n    z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\n    return z, x\n\n\ndef crop_video(video, crop_path, instanc_size):\n    video_crop_base_path = join(crop_path, video)\n    if not exists(video_crop_base_path): makedirs(video_crop_base_path)\n\n    video_base_path = join(RGBT234_base_path, video)\n\n    # infrared gt\n    in_gts_path = join(video_base_path, 'infrared.txt')\n    try:\n        in_gts = np.loadtxt(open(in_gts_path, \"rb\"), delimiter=',')\n    except:\n        in_gts = np.loadtxt(open(in_gts_path, \"rb\"), delimiter=' ')\n\n    # rgb gt\n    rgb_gts_path = join(video_base_path, 'visible.txt')\n    try:\n        rgb_gts = np.loadtxt(open(rgb_gts_path, \"rb\"), delimiter=',')\n    except:\n        rgb_gts = np.loadtxt(open(rgb_gts_path, \"rb\"), delimiter=' ')\n\n    in_jpgs = sorted(glob.glob(join(video_base_path, 'infrared', '*.jpg')))\n    rgb_jpgs = sorted(glob.glob(join(video_base_path, 'visible', '*.jpg')))\n\n    for idx, img_path in enumerate(in_jpgs):\n        in_im = cv2.imread(img_path)\n        rgb_im = cv2.imread(rgb_jpgs[idx])\n        in_gt = in_gts[idx]\n        rgb_gt = rgb_gts[idx]\n        in_bbox = [int(g) for g in in_gt]  # (x,y,w,h)\n        in_bbox = [in_bbox[0], in_bbox[1], in_bbox[0]+in_bbox[2], in_bbox[1]+in_bbox[3]]   # (xmin, ymin, xmax, ymax)\n  
      rgb_bbox = [int(g) for g in rgb_gt]  # (x,y,w,h)\n        rgb_bbox = [rgb_bbox[0], rgb_bbox[1], rgb_bbox[0] + rgb_bbox[2], rgb_bbox[1] + rgb_bbox[3]]  # (xmin, ymin, xmax, ymax)\n\n        in_z, in_x = crop_img(in_im, in_bbox, instanc_size)\n        rgb_z, rgb_x = crop_img(rgb_im, rgb_bbox, instanc_size)\n\n\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.in.z.jpg'.format(int(idx), 0)), in_z)\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.in.x.jpg'.format(int(idx), 0)), in_x)\n\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.rgb.z.jpg'.format(int(idx), 0)), rgb_z)\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.rgb.x.jpg'.format(int(idx), 0)), rgb_x)\n\n\ndef main(instanc_size=271, num_threads=24):\n    crop_path = '/data/share/SMALLSIAM/RGBT234/crop{:d}'.format(instanc_size)\n    if not exists(crop_path): makedirs(crop_path)\n\n    videos = sorted(listdir(RGBT234_base_path))\n    n_videos = len(videos)\n\n    with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\n        fs = [executor.submit(crop_video, video, crop_path, instanc_size) for video in videos]\n        for i, f in enumerate(futures.as_completed(fs)):\n            # Write progress to error so that it can be seen\n            printProgress(i, n_videos, prefix='RGBT234', suffix='Done ', barLength=40)\n\n\nif __name__ == '__main__':\n    since = time.time()\n    main(int(sys.argv[1]), int(sys.argv[2]))\n    time_elapsed = time.time() - since\n    print('Total complete in {:.0f}m {:.0f}s'.format(\n        time_elapsed // 60, time_elapsed % 60))\n"
  },
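The exemplar/search-region arithmetic in `crop_like_SiamFC` above can be checked in isolation. A minimal sketch (the function name `siamfc_crop_sizes` is my own): each side of the target is grown by `context_amount * (w + h)`, the exemplar side `s_z` is the geometric mean of the two padded sides, and the search side `s_x` always comes out to `s_z * instanc_size / exemplar_size`.

```python
import numpy as np

def siamfc_crop_sizes(bbox, context_amount=0.5, exemplar_size=127, instanc_size=255):
    # bbox is (xmin, ymin, xmax, ymax); mirrors the arithmetic in crop_like_SiamFC.
    w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
    wc_z = h + context_amount * (w + h)
    hc_z = w + context_amount * (w + h)
    s_z = np.sqrt(wc_z * hc_z)          # side of the exemplar crop, in image pixels
    scale_z = exemplar_size / s_z
    pad = (instanc_size - exemplar_size) / 2 / scale_z
    s_x = s_z + 2 * pad                 # side of the search crop, in image pixels
    return s_z, s_x

s_z, s_x = siamfc_crop_sizes([50, 50, 150, 150])
print(round(s_z, 2), round(s_x, 2))  # 200.0 401.57
```

For a 100x100 box the padded sides are both 200, so `s_z = 200` and `s_x = 200 * 255 / 127`; the swap of width and height between `wc_z` and `hc_z` in the original code is harmless because only their product enters `s_z`.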
  {
    "path": "tracker/sot/lib/dataset/crop/RGBT234/readme.md",
    "content": "# Preprocessing RGBT234 (train and val)\n\n\n### Crop & Generate data info (20 min)\n\n````sh\npython RGBT234_genjson.py\npython par_crop.py 511 24\npython gen_json.py\n````\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/coco/gen_json.py",
    "content": "from pycocotools.coco import COCO\r\nfrom os.path import join\r\nimport json\r\nimport os\r\n\r\n\r\ndataDir = '/data/home/hopeng/msralab_IMG/Users/hopeng/data_official/coco'\r\n#'/data/share/coco'\r\nfor dataType in ['val2017', 'train2017']:\r\n    dataset = dict()\r\n    annFile = '{}/annotations/instances_{}.json'.format(dataDir,dataType)\r\n    coco = COCO(annFile)\r\n    n_imgs = len(coco.imgs)\r\n    for n, img_id in enumerate(coco.imgs):\r\n        print('subset: {} image id: {:04d} / {:04d}'.format(dataType, n, n_imgs))\r\n        img = coco.loadImgs(img_id)[0]\r\n        annIds = coco.getAnnIds(imgIds=img['id'], iscrowd=None)\r\n        anns = coco.loadAnns(annIds)\r\n        video_crop_base_path = join(dataType, img['file_name'].split('/')[-1].split('.')[0])\r\n        \r\n        if len(anns) > 0:\r\n            dataset[video_crop_base_path] = dict()        \r\n\r\n        for trackid, ann in enumerate(anns):\r\n            rect = ann['bbox']\r\n            c = ann['category_id']\r\n            bbox = [rect[0], rect[1], rect[0]+rect[2], rect[1]+rect[3]]\r\n            if rect[2] <= 0 or rect[3] <= 0:  # lead nan error in cls.\r\n                continue\r\n            dataset[video_crop_base_path]['{:02d}'.format(trackid)] = {'000000': bbox}\r\n\r\n    print('save json (dataset), please wait 20 seconds~')\r\n    #json.dump(dataset, open('{}.json'.format(dataType), 'w'), indent=4, sort_keys=True)\r\n    json.dump(dataset, open('{}.json'.format(os.path.join(dataDir, dataType)), 'w'), indent=4, sort_keys=True)\r\n    print('done!')\r\n\r\n"
  },
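The index that `gen_json.py` builds maps `image -> trackid -> frame -> (xmin, ymin, xmax, ymax)`. A small sketch with two hypothetical COCO-style annotations (bbox in x, y, w, h form); the second box is degenerate and is skipped exactly as the script's `rect[2] <= 0 or rect[3] <= 0` guard does, since zero-area boxes later produce NaNs in the classification target:

```python
# Hypothetical annotations; the second has zero width and must be dropped.
anns = [{"bbox": [10, 20, 30, 40]}, {"bbox": [5, 5, 0, 7]}]

entry = {}
for trackid, ann in enumerate(anns):
    x, y, w, h = ann["bbox"]
    if w <= 0 or h <= 0:
        continue
    # Frame key "000000": a still image is treated as a one-frame video.
    entry["{:02d}".format(trackid)] = {"000000": [x, y, x + w, y + h]}

print(entry)  # {'00': {'000000': [10, 20, 40, 60]}}
```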
  {
    "path": "tracker/sot/lib/dataset/crop/coco/par_crop.py",
    "content": "from pycocotools.coco import COCO\r\nimport cv2\r\nimport numpy as np\r\nfrom os.path import join, isdir\r\nfrom os import mkdir, makedirs\r\nfrom concurrent import futures\r\nimport sys\r\nimport time\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz-1) / (bbox[2]-bbox[0])\r\n    b = (out_sz-1) / (bbox[3]-bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(np.float)\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\r\n\r\n\r\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\r\n    target_pos = 
[(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\r\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\n\r\ndef crop_img(img, anns, set_crop_base_path, set_img_base_path, instanc_size=511):\r\n    frame_crop_base_path = join(set_crop_base_path, img['file_name'].split('/')[-1].split('.')[0])\r\n    if not isdir(frame_crop_base_path): makedirs(frame_crop_base_path)\r\n\r\n    im = cv2.imread('{}/{}'.format(set_img_base_path, img['file_name']))\r\n    avg_chans = np.mean(im, axis=(0, 1))\r\n    for trackid, ann in enumerate(anns):\r\n        rect = ann['bbox']\r\n        bbox = [rect[0], rect[1], rect[0] + rect[2], rect[1] + rect[3]]\r\n        if rect[2] <= 0 or rect[3] <=0:\r\n            continue\r\n        z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\r\n        cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(0, trackid)), z)\r\n        cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(0, trackid)), x)\r\n\r\n\r\ndef main(dataDir='.', instanc_size=511, num_threads=12):\r\n    #crop_path = './crop{:d}'.format(instanc_size)\r\n    crop_path = '/data/home/hopeng/msralab_IMG/Users/hopeng/data_official/coco/crop{:d}'.format(instanc_size)\r\n    if not isdir(crop_path): mkdir(crop_path)\r\n\r\n    for dataType in ['val2017', 'train2017']:\r\n    #for dataType in ['val2014', 'train2014', 'test2015', 'test2017']:\r\n        set_crop_base_path = join(crop_path, dataType)\r\n        set_img_base_path = 
join(dataDir, dataType)\r\n\r\n        annFile = '{}/annotations/instances_{}.json'.format(dataDir,dataType)\r\n        coco = COCO(annFile)\r\n        n_imgs = len(coco.imgs)\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_img, coco.loadImgs(id)[0],\r\n                                  coco.loadAnns(coco.getAnnIds(imgIds=id, iscrowd=None)),\r\n                                  set_crop_base_path, set_img_base_path, instanc_size) for id in coco.imgs]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                # Write progress to error so that it can be seen\r\n                printProgress(i, n_imgs, prefix=dataType, suffix='Done ', barLength=40)\r\n    print('done')\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
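`crop_hwc` builds a 2x3 affine that sends the (context-padded) box corners to the crop corners before handing it to `cv2.warpAffine`. The sketch below reproduces just the matrix (helper name `bbox_to_affine` is my own); note that the repo's `.astype(np.float)` relies on an alias that NumPy deprecated in 1.20 and removed in 1.24, so `np.float64` (or plain `float`) is the safe spelling.

```python
import numpy as np

def bbox_to_affine(bbox, out_sz):
    # Same 2x3 matrix that crop_hwc passes to cv2.warpAffine.
    a = (out_sz - 1) / (bbox[2] - bbox[0])
    b = (out_sz - 1) / (bbox[3] - bbox[1])
    return np.array([[a, 0, -a * bbox[0]],
                     [0, b, -b * bbox[1]]], dtype=np.float64)

M = bbox_to_affine([10.0, 20.0, 110.0, 220.0], 127)
top_left = M @ [10.0, 20.0, 1.0]        # box corner -> crop corner (0, 0)
bottom_right = M @ [110.0, 220.0, 1.0]  # box corner -> crop corner (126, 126)
```

The `out_sz - 1` numerator is what makes the far corner land exactly on the last pixel index rather than one past it.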
  {
    "path": "tracker/sot/lib/dataset/crop/coco/readme.md",
    "content": "# Preprocessing COCO\r\n\r\n### Download raw images and annotations\r\n\r\n````shell\r\nwget http://images.cocodataset.org/zips/train2017.zip\r\nwget http://images.cocodataset.org/zips/val2017.zip\r\nwget http://images.cocodataset.org/annotations/annotations_trainval2017.zip\r\n\r\nunzip ./train2017.zip\r\nunzip ./val2017.zip\r\nunzip ./annotations_trainval2017.zip\r\ncd pycocotools && make && cd ..\r\n````\r\n\r\n### Crop & Generate data info (10 min)\r\n\r\n````shell\r\n#python par_crop.py [data_path] [crop_size] [num_threads]\r\npython par_crop.py /data/share/coco  511 12   \r\npython gen_json.py\r\n````\r\n\r\nCode are modified from SiamMask.\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/det/gen_json.py",
    "content": "from os.path import join, isdir\r\nfrom os import mkdir\r\nimport glob\r\nimport xml.etree.ElementTree as ET\r\nimport json\r\n\r\njs = {}\r\n#VID_base_path = '/data/share/ILSVRC'\r\nVID_base_path = '/data/home/hopeng/data_local/ILSVRC2015'\r\nann_base_path = join(VID_base_path, 'Annotations/DET/train/')\r\nsub_sets = ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i')\r\nfor sub_set in sub_sets:\r\n    sub_set_base_path = join(ann_base_path, sub_set)\r\n\r\n    if 'a' == sub_set:\r\n        xmls = sorted(glob.glob(join(sub_set_base_path, '*', '*.xml')))\r\n    else:\r\n        xmls = sorted(glob.glob(join(sub_set_base_path, '*.xml')))\r\n    n_imgs = len(xmls)\r\n    for f, xml in enumerate(xmls):\r\n        print('subset: {} frame id: {:08d} / {:08d}'.format(sub_set, f, n_imgs))\r\n        xmltree = ET.parse(xml)\r\n        objects = xmltree.findall('object')\r\n\r\n        video = join(sub_set, xml.split('/')[-1].split('.')[0])\r\n\r\n        for id, object_iter in enumerate(objects):\r\n            bndbox = object_iter.find('bndbox')\r\n            bbox = [int(bndbox.find('xmin').text), int(bndbox.find('ymin').text),\r\n                    int(bndbox.find('xmax').text), int(bndbox.find('ymax').text)]\r\n            frame = '%06d' % (0)\r\n            obj = '%02d' % (id)\r\n            if video not in js:\r\n                js[video] = {}\r\n            if obj not in js[video]:\r\n                js[video][obj] = {}\r\n            js[video][obj][frame] = bbox\r\n\r\ntrain = {k:v for (k,v) in js.items() if 'i/' not in k}\r\nval = {k:v for (k,v) in js.items() if 'i/' in k}\r\n\r\n#json.dump(train, open('train.json', 'w'), indent=4, sort_keys=True)\r\n#json.dump(val, open('val.json', 'w'), indent=4, sort_keys=True)\r\njson.dump(train, open('/data/home/hopeng/data_local/ILSVRC2015/DET/train.json', 'w'), indent=4, sort_keys=True)\r\njson.dump(val, open('/data/home/hopeng/data_local/ILSVRC2015/DET/val.json', 'w'), indent=4, sort_keys=True)\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/det/par_crop.py",
    "content": "from os.path import join, isdir\r\nfrom os import mkdir, makedirs\r\nimport cv2\r\nimport numpy as np\r\nimport glob\r\nimport xml.etree.ElementTree as ET\r\nfrom concurrent import futures\r\nimport time\r\nimport sys\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz - 1) / (bbox[2] - bbox[0])\r\n    b = (out_sz - 1) / (bbox[3] - bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(np.float)\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0] - s / 2, pos[1] - s / 2, pos[0] + s / 2, pos[1] + s / 2]\r\n\r\n\r\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, 
padding=(0, 0, 0)):\r\n    target_pos = [(bbox[2] + bbox[0]) / 2., (bbox[3] + bbox[1]) / 2.]\r\n    target_size = [bbox[2] - bbox[0], bbox[3] - bbox[1]]\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\n\r\ndef crop_xml(xml, sub_set_crop_path, instanc_size=511):\r\n    xmltree = ET.parse(xml)\r\n    objects = xmltree.findall('object')\r\n\r\n    frame_crop_base_path = join(sub_set_crop_path, xml.split('/')[-1].split('.')[0])\r\n    if not isdir(frame_crop_base_path): makedirs(frame_crop_base_path)\r\n\r\n    img_path = xml.replace('xml', 'JPEG').replace('Annotations', 'Data')\r\n\r\n    im = cv2.imread(img_path)\r\n    avg_chans = np.mean(im, axis=(0, 1))\r\n\r\n    for id, object_iter in enumerate(objects):\r\n        bndbox = object_iter.find('bndbox')\r\n        bbox = [int(bndbox.find('xmin').text), int(bndbox.find('ymin').text),\r\n                int(bndbox.find('xmax').text), int(bndbox.find('ymax').text)]\r\n\r\n        z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\r\n        cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(0, id)), z)\r\n        cv2.imwrite(join(frame_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(0, id)), x)\r\n\r\n\r\ndef main(VID_base_path='/data/share/ILSVRC', instanc_size=511, num_threads=24):\r\n    #crop_path = './crop{:d}'.format(instanc_size)\r\n    crop_path = '/data/home/hopeng/msralab_IMG/Users/hopeng/data_official/ILSVRC2015/DET/crop{:d}'.format(instanc_size)\r\n    if not isdir(crop_path): mkdir(crop_path)\r\n    ann_base_path = 
join(VID_base_path, 'Annotations/DET/train/')\r\n    sub_sets = ('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i')\r\n    for sub_set in sub_sets:\r\n        sub_set_base_path = join(ann_base_path, sub_set)\r\n        if 'a' == sub_set:\r\n            xmls = sorted(glob.glob(join(sub_set_base_path, '*', '*.xml')))\r\n        else:\r\n            xmls = sorted(glob.glob(join(sub_set_base_path, '*.xml')))\r\n\r\n        n_imgs = len(xmls)\r\n        sub_set_crop_path = join(crop_path, sub_set)\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_xml, xml, sub_set_crop_path, instanc_size) for xml in xmls]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                printProgress(i, n_imgs, prefix=sub_set, suffix='Done ', barLength=80)\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/det/readme.md",
    "content": "# Preprocessing DET(Object detection)\r\nLarge Scale Visual Recognition Challenge 2015 (ILSVRC2015)\r\n\r\n### Download dataset (49GB)\r\n\r\n````shell\r\nwget http://image-net.org/image/ILSVRC2015/ILSVRC2015_DET.tar.gz\r\ntar -xzvf ./ILSVRC2015_DET.tar.gz\r\n\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2013_train ILSVRC/Annotations/DET/train/a\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0000 ILSVRC/Annotations/DET/train/b\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0001 ILSVRC/Annotations/DET/train/c\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0002 ILSVRC/Annotations/DET/train/d\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0003 ILSVRC/Annotations/DET/train/e\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0004 ILSVRC/Annotations/DET/train/f\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0005 ILSVRC/Annotations/DET/train/g\r\nln -sfb $PWD/ILSVRC/Annotations/DET/train/ILSVRC2014_train_0006 ILSVRC/Annotations/DET/train/h\r\nln -sfb $PWD/ILSVRC/Annotations/DET/val ILSVRC/Annotations/DET/train/i\r\n\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2013_train ILSVRC/Data/DET/train/a\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0000 ILSVRC/Data/DET/train/b\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0001 ILSVRC/Data/DET/train/c\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0002 ILSVRC/Data/DET/train/d\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0003 ILSVRC/Data/DET/train/e\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0004 ILSVRC/Data/DET/train/f\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0005 ILSVRC/Data/DET/train/g\r\nln -sfb $PWD/ILSVRC/Data/DET/train/ILSVRC2014_train_0006 ILSVRC/Data/DET/train/h\r\nln -sfb $PWD/ILSVRC/Data/DET/val ILSVRC/Data/DET/train/i\r\n````\r\n\r\n### Crop & Generate data info (20 min)\r\n\r\n````shell\r\n#python par_crop.py [crop_size] [num_threads]\r\npython 
par_crop.py /data/share/ILSVRC 511 12 \r\npython gen_json.py\r\n````\r\n\r\nCodes are modified from SiamMask.\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/got10k/gen_json.py",
    "content": "from os.path import join\r\nfrom os import listdir\r\nimport json\r\nimport numpy as np\r\n\r\nprint('loading json (raw got10k info), please wait 20 seconds~')\r\ngot10k = json.load(open('got10k.json', 'r'))\r\n\r\n\r\ndef check_size(frame_sz, bbox):\r\n    min_ratio = 0.1\r\n    max_ratio = 0.75\r\n    # only accept objects >10% and <75% of the total frame\r\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\r\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\r\n    return ok\r\n\r\n\r\ndef check_borders(frame_sz, bbox):\r\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\r\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\r\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\r\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\r\n    return ok\r\n\r\n\r\nsnippets = dict()\r\n\r\nn_videos = 0\r\nfor subset in got10k:\r\n    for video in subset:\r\n        n_videos += 1\r\n        frames = video['frame']\r\n        snippet = dict()\r\n        snippets[video['base_path']] = dict()\r\n        for f, frame in enumerate(frames):\r\n            frame_sz = frame['frame_sz']\r\n            bbox = frame['bbox']  # (x,y,w,h)\r\n\r\n            snippet['{:06d}'.format(f)] = [bbox[0], bbox[1], bbox[0]+bbox[2], bbox[1]+bbox[3]]   #(xmin, ymin, xmax, ymax)\r\n\r\n        snippets[video['base_path']]['{:02d}'.format(0)] = snippet.copy()\r\n        \r\ntrain = {k:v for (k,v) in snippets.items() if 'train' in k}\r\nval = {k:v for (k,v) in snippets.items() if 'val' in k}\r\n\r\n# json.dump(train, open('/data2/got10k/train.json', 'w'), indent=4, sort_keys=True)\r\njson.dump(val, open('/data2/got10k/val.json', 'w'), indent=4, sort_keys=True)\r\nprint('done!')\r\n"
  },
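`gen_json.py` defines `check_size` and `check_borders` but never calls them in the snippet loop, so no frames are actually filtered. For reference, a self-contained sketch of what the size filter computes: the square root of the box-to-frame area ratio, accepted only between 10% and 75%.

```python
import numpy as np

def check_size(frame_sz, bbox, min_ratio=0.1, max_ratio=0.75):
    # frame_sz is (width, height); bbox is (xmin, ymin, xmax, ymax).
    area_ratio = np.sqrt((bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
                         / float(np.prod(frame_sz)))
    return min_ratio < area_ratio < max_ratio

# 100x100 object in a 640x480 frame: sqrt(10000 / 307200) ~ 0.18 -> kept.
print(check_size((640, 480), (0, 0, 100, 100)))
```

A 10x10 object in the same frame gives a ratio of about 0.018 and would be rejected if the filter were wired into the loop.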
  {
    "path": "tracker/sot/lib/dataset/crop/got10k/par_crop.py",
    "content": "from os.path import join, isdir, exists\r\nfrom os import listdir, mkdir, makedirs\r\nimport cv2\r\nimport numpy as np\r\nimport glob\r\nfrom concurrent import futures\r\nimport sys\r\nimport time\r\n\r\n\r\ngot10k_base_path = '/data/share/GOT10K'\r\nsub_sets = sorted({'train', 'val'})\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz-1) / (bbox[2]-bbox[0])\r\n    b = (out_sz-1) / (bbox[3]-bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(np.float)\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\r\n\r\n\r\ndef crop_like_SiamFC(image, bbox, 
context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\r\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\r\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\n\r\ndef crop_video(sub_set, video, crop_path, instanc_size):\r\n    video_crop_base_path = join(crop_path, sub_set, video)\r\n    if not exists(video_crop_base_path): makedirs(video_crop_base_path)\r\n\r\n    sub_set_base_path = join(got10k_base_path, sub_set)\r\n    video_base_path = join(sub_set_base_path, video)\r\n    gts_path = join(video_base_path, 'groundtruth.txt')\r\n    gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n    jpgs = sorted(glob.glob(join(video_base_path, '*.jpg')))\r\n\r\n    if not jpgs:\r\n        print('no jpg files, try png files')\r\n        jpgs = sorted(glob.glob(join(video_base_path, '*.png')))\r\n        if not jpgs:\r\n            print('no jpg and png files, check data please')\r\n\r\n    for idx, img_path in enumerate(jpgs):\r\n        im = cv2.imread(img_path)\r\n        avg_chans = np.mean(im, axis=(0, 1))\r\n        gt = gts[idx]\r\n        bbox = [int(g) for g in gt]  # (x,y,w,h)\r\n        bbox = [bbox[0], bbox[1], bbox[0]+bbox[2], bbox[1]+bbox[3]]   # (xmin, ymin, xmax, ymax)\r\n\r\n        z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\r\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(int(idx), 0)), z)\r\n        
cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(int(idx), 0)), x)\r\n\r\n\r\ndef main(instanc_size=511, num_threads=24):\r\n    crop_path = '/data2/got10k/crop{:d}'.format(instanc_size)\r\n    if not exists(crop_path): makedirs(crop_path)\r\n\r\n    for sub_set in sub_sets:\r\n        sub_set_base_path = join(got10k_base_path, sub_set)\r\n        videos = sorted(listdir(sub_set_base_path))\r\n        n_videos = len(videos)\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_video, sub_set, video, crop_path, instanc_size) for video in videos]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                # Write progress to error so that it can be seen\r\n                printProgress(i, n_videos, prefix=sub_set, suffix='Done ', barLength=40)\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(int(sys.argv[1]), int(sys.argv[2]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
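All the `par_crop.py` scripts share the same fan-out pattern: submit one future per video, then iterate `futures.as_completed` purely to drive `printProgress`. A portable sketch (ThreadPoolExecutor is used here only so it runs anywhere without pickling; the scripts use ProcessPoolExecutor with the identical API):

```python
from concurrent import futures

def work(v):
    return v * v

results = []
with futures.ThreadPoolExecutor(max_workers=4) as executor:
    fs = [executor.submit(work, v) for v in range(8)]
    for i, f in enumerate(futures.as_completed(fs)):
        # f.result() re-raises any exception from the worker. The crop
        # scripts only count completions for printProgress and never call
        # result(), so a failing crop_video passes silently.
        results.append(f.result())

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because `as_completed` yields in completion order, not submission order, results must be sorted or keyed if order matters.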
  {
    "path": "tracker/sot/lib/dataset/crop/got10k/parser_got10k.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n# __author__ = 'zzp'\r\n\r\nimport cv2\r\nimport json\r\nimport glob\r\nimport numpy as np\r\nfrom os.path import join\r\nfrom os import listdir\r\n\r\nimport argparse\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument('--dir',type=str, default='/data/share/GOT10K', help='your vid data dir')\r\nargs = parser.parse_args()\r\n\r\ngot10k_base_path = args.dir\r\nsub_sets = sorted({'train', 'val'})\r\n\r\ngot10k = []\r\nfor sub_set in sub_sets:\r\n    sub_set_base_path = join(got10k_base_path, sub_set)\r\n    videos = sorted(listdir(sub_set_base_path))\r\n    s = []\r\n    for vi, video in enumerate(videos):\r\n        print('subset: {} video id: {:04d} / {:04d}'.format(sub_set, vi, len(videos)))\r\n        v = dict()\r\n        v['base_path'] = join(sub_set, video)\r\n        v['frame'] = []\r\n        video_base_path = join(sub_set_base_path, video)\r\n        gts_path = join(video_base_path, 'groundtruth.txt')\r\n        # gts_file = open(gts_path, 'r')\r\n        # gts = gts_file.readlines()\r\n        gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n\r\n        # get image size\r\n        im_path = join(video_base_path, '00000001.jpg')\r\n        im = cv2.imread(im_path)\r\n        size = im.shape  # height, width\r\n        frame_sz = [size[1], size[0]]  # width,height\r\n\r\n        # get all im name\r\n        jpgs = sorted(glob.glob(join(video_base_path, '*.jpg')))\r\n\r\n        f = dict()\r\n        for idx, img_path in enumerate(jpgs):\r\n            f['frame_sz'] = frame_sz\r\n            f['img_path'] = img_path.split('/')[-1]\r\n\r\n            gt = gts[idx]\r\n            bbox = [int(g) for g in gt]   # (x,y,w,h)\r\n            f['bbox'] = bbox\r\n            v['frame'].append(f.copy())\r\n        s.append(v)\r\n    got10k.append(s)\r\nprint('save json (raw got10k info), please wait 1 min~')\r\njson.dump(got10k, open('got10k.json', 'w'), indent=4, 
sort_keys=True)\r\nprint('got10k.json has been saved in ./')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/got10k/readme.md",
    "content": "# Preprocessing GOT10K (train and val)\r\n\r\n\r\n### Crop & Generate data info (20 min)\r\n\r\n````shell\r\nrm ./train/list.txt\r\nrm ./val/list.txt\r\n\r\npython parse_got10k.py\r\npython par_crop.py 511 16\r\npython gen_json.py\r\n````\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/lasot/gen_json.py",
    "content": "from os.path import join\r\nfrom os import listdir\r\nimport json\r\nimport numpy as np\r\n\r\nprint('loading json (raw lasot info), please wait 20 seconds~')\r\nlasot = json.load(open('lasot.json', 'r'))\r\n\r\n\r\ndef check_size(frame_sz, bbox):\r\n    min_ratio = 0.1\r\n    max_ratio = 0.75\r\n    # only accept objects >10% and <75% of the total frame\r\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\r\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\r\n    return ok\r\n\r\n\r\ndef check_borders(frame_sz, bbox):\r\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\r\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\r\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\r\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\r\n    return ok\r\n\r\n\r\nsnippets = dict()\r\n\r\nn_videos = 0\r\nfor subset in lasot:\r\n    for video in subset:\r\n        n_videos += 1\r\n        frames = video['frame']\r\n        snippet = dict()\r\n\r\n        snippets[video['base_path'].split('/')[-1]] = dict()\r\n        for f, frame in enumerate(frames):\r\n            frame_sz = frame['frame_sz']\r\n            bbox = frame['bbox']  # (x,y,w,h)\r\n\r\n            snippet['{:06d}'.format(f)] = [bbox[0], bbox[1], bbox[0]+bbox[2], bbox[1]+bbox[3]]   #(xmin, ymin, xmax, ymax)\r\n\r\n        snippets[video['base_path'].split('/')[-1]]['{:02d}'.format(0)] = snippet.copy()\r\n\r\njson.dump(snippets, open('/data/share/LASOT/train.json', 'w'), indent=4, sort_keys=True)\r\nprint('done!')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/lasot/par_crop.py",
    "content": "from os.path import join, isdir\r\nfrom os import listdir, mkdir, makedirs\r\nimport cv2\r\nimport numpy as np\r\nimport glob\r\nimport xml.etree.ElementTree as ET\r\nfrom concurrent import futures\r\nimport sys\r\nimport time\r\nimport argparse\r\n\r\nlasot_base_path = '/data/share/LaSOTBenchmark'\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix)),\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz-1) / (bbox[2]-bbox[0])\r\n    b = (out_sz-1) / (bbox[3]-bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(np.float)\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\r\n\r\n\r\ndef crop_like_SiamFC(image, 
bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\r\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\r\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\neps = 1e-5\r\ndef crop_video(video_f, video, crop_path, instanc_size):\r\n    video_crop_base_path = join(crop_path, video)\r\n    if not isdir(video_crop_base_path): makedirs(video_crop_base_path)\r\n\r\n    sub_set_base_path = join(lasot_base_path, video_f)\r\n    video_base_path = join(sub_set_base_path, video)\r\n    gts_path = join(video_base_path, 'groundtruth.txt')\r\n    gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n    jpgs = sorted(glob.glob(join(video_base_path, 'img', '*.jpg')))\r\n\r\n\r\n    if not jpgs:\r\n        print('no jpg files, try png files')\r\n        jpgs = sorted(glob.glob(join(video_base_path, '*.png')))\r\n        if not jpgs:\r\n            print('no jpg and png files, check data please')\r\n\r\n    for idx, img_path in enumerate(jpgs):\r\n        # skip gt == 0\r\n        gt = gts[idx]\r\n        if abs(gt[2] - 0) < 1e-5 or abs(gt[3] - 0) < 1e-5:\r\n            continue\r\n\r\n        im = cv2.imread(img_path)\r\n        avg_chans = np.mean(im, axis=(0, 1))\r\n\r\n        bbox = [int(g) for g in gt]  # (x,y,w,h)\r\n        bbox = [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]  # (xmin, ymin, xmax, ymax)\r\n\r\n        z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, 
padding=avg_chans)\r\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(int(idx), 0)), z)\r\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(int(idx), 0)), x)\r\n\r\n\r\ndef main(instanc_size=511, num_threads=24):\r\n    #crop_path = './crop{:d}'.format(instanc_size)\r\n    crop_path = '/data/share/LASOT/crop{:d}'.format(instanc_size)\r\n    if not isdir(crop_path): makedirs(crop_path)\r\n\r\n    videos_fathers = sorted(listdir(lasot_base_path))\r\n\r\n    for video_f in videos_fathers:\r\n        videos_sons = sorted(listdir(join(lasot_base_path, video_f)))\r\n        n_videos = len(videos_sons)\r\n\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_video, video_f, video, crop_path, instanc_size) for video in videos_sons]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                # Write progress to error so that it can be seen\r\n                printProgress(i, n_videos, prefix=video_f, suffix='Done ', barLength=40)\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(int(sys.argv[1]), int(sys.argv[2]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/lasot/parser_lasot.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n# __author__ = 'zzp'\r\n\r\nimport cv2\r\nimport json\r\nimport glob\r\nimport numpy as np\r\nfrom os.path import join\r\nfrom os import listdir\r\n\r\nimport argparse\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument('--dir',type=str, default='/data/share/LaSOTBenchmark', help='your vid data dir')\r\nargs = parser.parse_args()\r\n\r\nlasot_base_path = args.dir\r\n# sub_sets = sorted({'train', 'val'})\r\n\r\nlasot = []\r\n\r\nvideos_fathers = sorted(listdir(lasot_base_path))\r\ns = []\r\nfor _, video_f in enumerate(videos_fathers):\r\n    videos_sons = sorted(listdir(join(lasot_base_path, video_f)))\r\n\r\n    for vi, video in enumerate(videos_sons):\r\n\r\n        print('father class: {} video id: {:04d} / {:04d}'.format(video_f, vi, len(videos_sons)))\r\n        v = dict()\r\n        v['base_path'] = join(video_f, video)\r\n        v['frame'] = []\r\n        video_base_path = join(lasot_base_path, video_f, video)\r\n        gts_path = join(video_base_path, 'groundtruth.txt')\r\n        # gts_file = open(gts_path, 'r')\r\n        # gts = gts_file.readlines()\r\n        gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n\r\n        # get image size\r\n        im_path = join(video_base_path, 'img', '00000001.jpg')\r\n        im = cv2.imread(im_path)\r\n        size = im.shape  # height, width\r\n        frame_sz = [size[1], size[0]]  # width,height\r\n\r\n        # get all im name\r\n        jpgs = sorted(glob.glob(join(video_base_path, 'img', '*.jpg')))\r\n\r\n        f = dict()\r\n        for idx, img_path in enumerate(jpgs):\r\n            f['frame_sz'] = frame_sz\r\n            f['img_path'] = img_path.split('/')[-1]\r\n\r\n            gt = gts[idx]\r\n            bbox = [int(g) for g in gt]   # (x,y,w,h)\r\n            f['bbox'] = bbox\r\n            v['frame'].append(f.copy())\r\n        s.append(v)\r\nlasot.append(s)\r\n\r\nprint('save json (raw lasot info), please 
wait 1 min~')\r\njson.dump(lasot, open('lasot.json', 'w'), indent=4, sort_keys=True)\r\nprint('lasot.json has been saved in ./')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/lasot/readme.md",
    "content": "# Preprocessing LaSOT (train and val)\r\n\r\n### Crop & Generate data info (20 min)\r\n\r\n````shell\r\nrm ./train/list.txt\r\nrm ./val/list.txt\r\n\r\npython parser_lasot.py\r\npython par_crop.py 511 16\r\npython gen_json.py\r\n````\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/vid/gen_json.py",
    "content": "from os.path import join\r\nfrom os import listdir\r\nimport json\r\nimport numpy as np\r\n\r\nprint('loading json (raw vid info), please wait 20 seconds~')\r\nvid = json.load(open('vid.json', 'r'))\r\n\r\n\r\ndef check_size(frame_sz, bbox):\r\n    min_ratio = 0.1\r\n    max_ratio = 0.75\r\n    # only accept objects >10% and <75% of the total frame\r\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\r\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\r\n    return ok\r\n\r\n\r\ndef check_borders(frame_sz, bbox):\r\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\r\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\r\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\r\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\r\n    return ok\r\n\r\n\r\nsnippets = dict()\r\nn_snippets = 0\r\nn_videos = 0\r\nfor subset in vid:\r\n    for video in subset:\r\n        n_videos += 1\r\n        frames = video['frame']\r\n        id_set = []\r\n        id_frames = [[]] * 60  # at most 60 objects\r\n        for f, frame in enumerate(frames):\r\n            objs = frame['objs']\r\n            frame_sz = frame['frame_sz']\r\n            for obj in objs:\r\n                trackid = obj['trackid']\r\n                occluded = obj['occ']\r\n                bbox = obj['bbox']\r\n                # if occluded:\r\n                #     continue\r\n                #\r\n                # if not(check_size(frame_sz, bbox) and check_borders(frame_sz, bbox)):\r\n                #     continue\r\n                #\r\n                # if obj['c'] in ['n01674464', 'n01726692', 'n04468005', 'n02062744']:\r\n                #     continue\r\n\r\n                if trackid not in id_set:\r\n                    id_set.append(trackid)\r\n                    id_frames[trackid] = []\r\n                id_frames[trackid].append(f)\r\n        if len(id_set) > 0:\r\n     
       snippets[video['base_path']] = dict()\r\n        for selected in id_set:\r\n            frame_ids = sorted(id_frames[selected])\r\n            sequences = np.split(frame_ids, np.array(np.where(np.diff(frame_ids) > 1)[0]) + 1)\r\n            sequences = [s for s in sequences if len(s) > 1]  # remove isolated frame.\r\n            for seq in sequences:\r\n                snippet = dict()\r\n                for frame_id in seq:\r\n                    frame = frames[frame_id]\r\n                    for obj in frame['objs']:\r\n                        if obj['trackid'] == selected:\r\n                            o = obj\r\n                            continue\r\n                    snippet[frame['img_path'].split('.')[0]] = o['bbox']\r\n                snippets[video['base_path']]['{:02d}'.format(selected)] = snippet\r\n                n_snippets += 1\r\n        print('video: {:d} snippets_num: {:d}'.format(n_videos, n_snippets))\r\n        \r\ntrain = {k:v for (k,v) in snippets.items() if 'train' in k}\r\nval = {k:v for (k,v) in snippets.items() if 'val' in k}\r\n\r\njson.dump(train, open('/data/home/hopeng/data_local/ILSVRC2015/VID/train.json', 'w'), indent=4, sort_keys=True)\r\njson.dump(val, open('/data/home/hopeng/data_local/ILSVRC2015/VID/val.json', 'w'), indent=4, sort_keys=True)\r\nprint('done!')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/vid/par_crop.py",
    "content": "from os.path import join, isdir\r\nfrom os import listdir, mkdir, makedirs\r\nimport cv2\r\nimport numpy as np\r\nimport glob\r\nimport xml.etree.ElementTree as ET\r\nfrom concurrent import futures\r\nimport sys\r\nimport time\r\nimport argparse\r\n\r\n#parser = argparse.ArgumentParser()\r\n#parser.add_argument('--dir',type=str, default='/data/share/ILSVRC2015', help='your vid data dir')\r\n#args = parser.parse_args()\r\n\r\n#VID_base_path = '/data/share/ILSVRC2015'\r\nVID_base_path = '/data/home/hopeng/data_local/ILSVRC2015'\r\nann_base_path = join(VID_base_path, 'Annotations/VID/train/')\r\nsub_sets = sorted({'a', 'b', 'c', 'd', 'e'})\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '█' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix))\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz-1) / (bbox[2]-bbox[0])\r\n    b = (out_sz-1) / (bbox[3]-bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = 
np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(float)  # np.float was removed in NumPy 1.24; use the builtin\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, pos[1]+s/2]\r\n\r\n\r\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\r\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\r\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\n\r\ndef crop_video(sub_set, video, crop_path, instanc_size):\r\n    video_crop_base_path = join(crop_path, sub_set, video)\r\n    if not isdir(video_crop_base_path): makedirs(video_crop_base_path)\r\n\r\n    sub_set_base_path = join(ann_base_path, sub_set)\r\n    xmls = sorted(glob.glob(join(sub_set_base_path, video, '*.xml')))\r\n    for xml in xmls:\r\n        xmltree = ET.parse(xml)\r\n        # size = xmltree.findall('size')[0]\r\n        # frame_sz = [int(it.text) for it in size]\r\n        objects = xmltree.findall('object')\r\n        objs = []\r\n        filename = xmltree.findall('filename')[0].text\r\n\r\n        im = cv2.imread(xml.replace('xml', 'JPEG').replace('Annotations', 'Data'))\r\n        avg_chans = np.mean(im, axis=(0, 1))\r\n        for object_iter in objects:\r\n            trackid = int(object_iter.find('trackid').text)\r\n            # name = (object_iter.find('name')).text\r\n      
      bndbox = object_iter.find('bndbox')\r\n            # occluded = int(object_iter.find('occluded').text)\r\n\r\n            bbox = [int(bndbox.find('xmin').text), int(bndbox.find('ymin').text),\r\n                    int(bndbox.find('xmax').text), int(bndbox.find('ymax').text)]\r\n            z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\r\n            cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(int(filename), trackid)), z)\r\n            cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(int(filename), trackid)), x)\r\n\r\n\r\ndef main(instanc_size=511, num_threads=24):\r\n    #crop_path = './crop{:d}'.format(instanc_size)\r\n    crop_path = '/data/home/hopeng/data_local/ILSVRC2015/VID/crop{:d}'.format(instanc_size)\r\n    if not isdir(crop_path): mkdir(crop_path)\r\n\r\n    for sub_set in sub_sets:\r\n        sub_set_base_path = join(ann_base_path, sub_set)\r\n        videos = sorted(listdir(sub_set_base_path))\r\n        n_videos = len(videos)\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_video, sub_set, video, crop_path, instanc_size) for video in videos]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                # Write progress to error so that it can be seen\r\n                printProgress(i, n_videos, prefix=sub_set, suffix='Done ', barLength=40)\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(int(sys.argv[1]), int(sys.argv[2]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/vid/parse_vid.py",
    "content": "from os.path import join\r\nfrom os import listdir\r\nimport json\r\nimport glob\r\nimport argparse\r\nimport xml.etree.ElementTree as ET\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument('--dir',type=str, default='/data/share/ILSVRC2015', help='your vid data dir' )\r\nargs = parser.parse_args()\r\n\r\nVID_base_path = args.dir\r\nann_base_path = join(VID_base_path, 'Annotations/VID/train/')\r\nimg_base_path = join(VID_base_path, 'Data/VID/train/')\r\nsub_sets = sorted({'a', 'b', 'c', 'd', 'e'})\r\n\r\nvid = []\r\nfor sub_set in sub_sets:\r\n    sub_set_base_path = join(ann_base_path, sub_set)\r\n    videos = sorted(listdir(sub_set_base_path))\r\n    s = []\r\n    for vi, video in enumerate(videos):\r\n        print('subset: {} video id: {:04d} / {:04d}'.format(sub_set, vi, len(videos)))\r\n        v = dict()\r\n        v['base_path'] = join(sub_set, video)\r\n        v['frame'] = []\r\n        video_base_path = join(sub_set_base_path, video)\r\n        xmls = sorted(glob.glob(join(video_base_path, '*.xml')))\r\n        for xml in xmls:\r\n            f = dict()\r\n            xmltree = ET.parse(xml)\r\n            size = xmltree.findall('size')[0]\r\n            frame_sz = [int(it.text) for it in size]  # width,height\r\n            objects = xmltree.findall('object')\r\n            objs = []\r\n            for object_iter in objects:\r\n                trackid = int(object_iter.find('trackid').text)\r\n                name = (object_iter.find('name')).text\r\n                bndbox = object_iter.find('bndbox')\r\n                occluded = int(object_iter.find('occluded').text)\r\n                o = dict()\r\n                o['c'] = name\r\n                o['bbox'] = [int(bndbox.find('xmin').text), int(bndbox.find('ymin').text),\r\n                             int(bndbox.find('xmax').text), int(bndbox.find('ymax').text)]\r\n                o['trackid'] = trackid\r\n                o['occ'] = occluded\r\n                
objs.append(o)\r\n            f['frame_sz'] = frame_sz\r\n            f['img_path'] = xml.split('/')[-1].replace('xml', 'JPEG')\r\n            f['objs'] = objs\r\n            v['frame'].append(f)\r\n        s.append(v)\r\n    vid.append(s)\r\nprint('save json (raw vid info), please wait 1 min~')\r\njson.dump(vid, open('vid.json', 'w'), indent=4, sort_keys=True)\r\nprint('vid.json has been saved in ./')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/vid/readme.md",
    "content": "# Preprocessing VID (Object detection from video)\r\nLarge Scale Visual Recognition Challenge 2015 (ILSVRC2015)\r\n\r\n### Download dataset (86GB)\r\n\r\n````shell\r\nwget http://bvisionweb1.cs.unc.edu/ilsvrc2015/ILSVRC2015_VID.tar.gz\r\ntar -xzvf ./ILSVRC2015_VID.tar.gz\r\nln -sfb $PWD/ILSVRC2015/Annotations/VID/train/ILSVRC2015_VID_train_0000 ILSVRC2015/Annotations/VID/train/a\r\nln -sfb $PWD/ILSVRC2015/Annotations/VID/train/ILSVRC2015_VID_train_0001 ILSVRC2015/Annotations/VID/train/b\r\nln -sfb $PWD/ILSVRC2015/Annotations/VID/train/ILSVRC2015_VID_train_0002 ILSVRC2015/Annotations/VID/train/c\r\nln -sfb $PWD/ILSVRC2015/Annotations/VID/train/ILSVRC2015_VID_train_0003 ILSVRC2015/Annotations/VID/train/d\r\nln -sfb $PWD/ILSVRC2015/Annotations/VID/val ILSVRC2015/Annotations/VID/train/e\r\n\r\nln -sfb $PWD/ILSVRC2015/Data/VID/train/ILSVRC2015_VID_train_0000 ILSVRC2015/Data/VID/train/a\r\nln -sfb $PWD/ILSVRC2015/Data/VID/train/ILSVRC2015_VID_train_0001 ILSVRC2015/Data/VID/train/b\r\nln -sfb $PWD/ILSVRC2015/Data/VID/train/ILSVRC2015_VID_train_0002 ILSVRC2015/Data/VID/train/c\r\nln -sfb $PWD/ILSVRC2015/Data/VID/train/ILSVRC2015_VID_train_0003 ILSVRC2015/Data/VID/train/d\r\nln -sfb $PWD/ILSVRC2015/Data/VID/val ILSVRC2015/Data/VID/train/e\r\n````\r\n\r\n### Crop & Generate data info (20 min)\r\n\r\n````shell\r\npython parse_vid.py\r\n\r\n#python par_crop.py [crop_size] [num_threads]\r\npython par_crop.py 511 12\r\npython gen_json.py\r\n````\r\nThe code is adapted from SiamMask."
  },
  {
    "path": "tracker/sot/lib/dataset/crop/visdrone/gen_json.py",
    "content": "from os.path import join\r\nfrom os import listdir\r\nimport json\r\nimport numpy as np\r\n\r\nprint('loading json (raw visdrone info), please wait 20 seconds~')\r\nvisdrone = json.load(open('visdrone.json', 'r'))\r\n\r\n\r\ndef check_size(frame_sz, bbox):\r\n    min_ratio = 0.1\r\n    max_ratio = 0.75\r\n    # only accept objects >10% and <75% of the total frame\r\n    area_ratio = np.sqrt((bbox[2]-bbox[0])*(bbox[3]-bbox[1])/float(np.prod(frame_sz)))\r\n    ok = (area_ratio > min_ratio) and (area_ratio < max_ratio)\r\n    return ok\r\n\r\n\r\ndef check_borders(frame_sz, bbox):\r\n    dist_from_border = 0.05 * (bbox[2] - bbox[0] + bbox[3] - bbox[1])/2\r\n    ok = (bbox[0] > dist_from_border) and (bbox[1] > dist_from_border) and \\\r\n         ((frame_sz[0] - bbox[2]) > dist_from_border) and \\\r\n         ((frame_sz[1] - bbox[3]) > dist_from_border)\r\n    return ok\r\n\r\n\r\nsnippets = dict()\r\n\r\nn_videos = 0\r\nfor subset in visdrone:\r\n    for video in subset:\r\n        n_videos += 1\r\n        frames = video['frame']\r\n        snippet = dict()\r\n        bp = video['base_path']\r\n        bp = bp.split('/')\r\n        bp = join(bp[0], bp[-1])\r\n\r\n        snippets[bp] = dict()\r\n        for f, frame in enumerate(frames):\r\n            frame_sz = frame['frame_sz']\r\n            bbox = frame['bbox']  # (x,y,w,h)\r\n\r\n            snippet['{:06d}'.format(f)] = [bbox[0], bbox[1], bbox[0]+bbox[2], bbox[1]+bbox[3]]   #(xmin, ymin, xmax, ymax)\r\n\r\n        snippets[bp]['{:02d}'.format(0)] = snippet.copy()\r\n        \r\n# train = {k:v for (k,v) in snippets.items() if 'train' in k}\r\n# val = {k:v for (k,v) in snippets.items() if 'val' in k}\r\n\r\ntrain = {k:v for (k,v) in snippets.items()}\r\n\r\n# json.dump(train, open('/data2/visdrone/train.json', 'w'), indent=4, sort_keys=True)\r\njson.dump(train, open('/data/home/v-zhipeng/dataset/training/VISDRONE/train.json', 'w'), indent=4, sort_keys=True)\r\nprint('done!')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/visdrone/par_crop.py",
    "content": "from os.path import join, isdir, exists\r\nfrom os import listdir, mkdir, makedirs\r\nimport cv2\r\nimport numpy as np\r\nimport glob\r\nfrom concurrent import futures\r\nimport sys\r\nimport time\r\n\r\n\r\nvisdrone_base_path = '/data/home/v-zhipeng/dataset/testing/VISDRONE'\r\nsub_sets = sorted({'VisDrone2019-SOT-train', 'VisDrone2019-SOT-val'})\r\n\r\n\r\n# Print iterations progress (thanks StackOverflow)\r\ndef printProgress(iteration, total, prefix='', suffix='', decimals=1, barLength=100):\r\n    \"\"\"\r\n    Call in a loop to create terminal progress bar\r\n    @params:\r\n        iteration   - Required  : current iteration (Int)\r\n        total       - Required  : total iterations (Int)\r\n        prefix      - Optional  : prefix string (Str)\r\n        suffix      - Optional  : suffix string (Str)\r\n        decimals    - Optional  : positive number of decimals in percent complete (Int)\r\n        barLength   - Optional  : character length of bar (Int)\r\n    \"\"\"\r\n    formatStr       = \"{0:.\" + str(decimals) + \"f}\"\r\n    percents        = formatStr.format(100 * (iteration / float(total)))\r\n    filledLength    = int(round(barLength * iteration / float(total)))\r\n    bar             = '█' * filledLength + '-' * (barLength - filledLength)\r\n    sys.stdout.write('\\r%s |%s| %s%s %s' % (prefix, bar, percents, '%', suffix))\r\n    if iteration == total:\r\n        sys.stdout.write('\\x1b[2K\\r')\r\n    sys.stdout.flush()\r\n\r\n\r\ndef crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):\r\n    a = (out_sz-1) / (bbox[2]-bbox[0])\r\n    b = (out_sz-1) / (bbox[3]-bbox[1])\r\n    c = -a * bbox[0]\r\n    d = -b * bbox[1]\r\n    mapping = np.array([[a, 0, c],\r\n                        [0, b, d]]).astype(float)  # np.float was removed in NumPy 1.24; use the builtin\r\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n    return crop\r\n\r\n\r\ndef pos_s_2_bbox(pos, s):\r\n    return [pos[0]-s/2, pos[1]-s/2, pos[0]+s/2, 
pos[1]+s/2]\r\n\r\n\r\ndef crop_like_SiamFC(image, bbox, context_amount=0.5, exemplar_size=127, instanc_size=255, padding=(0, 0, 0)):\r\n    target_pos = [(bbox[2]+bbox[0])/2., (bbox[3]+bbox[1])/2.]\r\n    target_size = [bbox[2]-bbox[0], bbox[3]-bbox[1]]   # width, height\r\n    wc_z = target_size[1] + context_amount * sum(target_size)\r\n    hc_z = target_size[0] + context_amount * sum(target_size)\r\n    s_z = np.sqrt(wc_z * hc_z)\r\n    scale_z = exemplar_size / s_z\r\n    d_search = (instanc_size - exemplar_size) / 2\r\n    pad = d_search / scale_z\r\n    s_x = s_z + 2 * pad\r\n\r\n    z = crop_hwc(image, pos_s_2_bbox(target_pos, s_z), exemplar_size, padding)\r\n    x = crop_hwc(image, pos_s_2_bbox(target_pos, s_x), instanc_size, padding)\r\n    return z, x\r\n\r\n\r\ndef crop_video(sub_set, video, crop_path, instanc_size):\r\n    video_crop_base_path = join(crop_path, sub_set, video)\r\n    if not exists(video_crop_base_path): makedirs(video_crop_base_path)\r\n\r\n    sub_set_base_path = join(visdrone_base_path, sub_set)\r\n    video_base_path = join(sub_set_base_path, 'sequences', video)\r\n    gts_path = join(sub_set_base_path, 'annotations', '{}.txt'.format(video))\r\n    gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n    jpgs = sorted(glob.glob(join(video_base_path, '*.jpg')))\r\n\r\n    if not jpgs:\r\n        print('no jpg files, try png files')\r\n        jpgs = sorted(glob.glob(join(video_base_path, '*.png')))\r\n        if not jpgs:\r\n            print('no jpg and png files, check data please')\r\n\r\n    for idx, img_path in enumerate(jpgs):\r\n        im = cv2.imread(img_path)\r\n        avg_chans = np.mean(im, axis=(0, 1))\r\n        gt = gts[idx]\r\n        bbox = [int(g) for g in gt]  # (x,y,w,h)\r\n        bbox = [bbox[0], bbox[1], bbox[0]+bbox[2], bbox[1]+bbox[3]]   # (xmin, ymin, xmax, ymax)\r\n\r\n        z, x = crop_like_SiamFC(im, bbox, instanc_size=instanc_size, padding=avg_chans)\r\n        
cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.z.jpg'.format(int(idx), 0)), z)\r\n        cv2.imwrite(join(video_crop_base_path, '{:06d}.{:02d}.x.jpg'.format(int(idx), 0)), x)\r\n\r\n\r\ndef main(instanc_size=511, num_threads=24):\r\n    crop_path = '/data/home/v-zhipeng/dataset/training/VISDRONE/crop{:d}'.format(instanc_size)\r\n    if not exists(crop_path): makedirs(crop_path)\r\n\r\n    for sub_set in sub_sets:\r\n        sub_set_base_path = join(visdrone_base_path, sub_set)\r\n        videos = sorted(listdir(join(sub_set_base_path, 'sequences')))\r\n        n_videos = len(videos)\r\n        with futures.ProcessPoolExecutor(max_workers=num_threads) as executor:\r\n            fs = [executor.submit(crop_video, sub_set, video, crop_path, instanc_size) for video in videos]\r\n            for i, f in enumerate(futures.as_completed(fs)):\r\n                # Write progress to error so that it can be seen\r\n                printProgress(i, n_videos, prefix=sub_set, suffix='Done ', barLength=40)\r\n\r\n\r\nif __name__ == '__main__':\r\n    since = time.time()\r\n    main(int(sys.argv[1]), int(sys.argv[2]))\r\n    time_elapsed = time.time() - since\r\n    print('Total complete in {:.0f}m {:.0f}s'.format(\r\n        time_elapsed // 60, time_elapsed % 60))\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/visdrone/parser_visdrone.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n# __author__ = 'zzp'\r\n\r\nimport cv2\r\nimport json\r\nimport glob\r\nimport numpy as np\r\nfrom os.path import join\r\nfrom os import listdir\r\n\r\nimport argparse\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument('--dir',type=str, default='/data/home/v-zhipeng/dataset/testing/VISDRONE', help='your vid data dir')\r\nargs = parser.parse_args()\r\n\r\nvisdrone_base_path = args.dir\r\nsub_sets = sorted({'VisDrone2019-SOT-train', 'VisDrone2019-SOT-val'})\r\n\r\nvisdrone = []\r\nfor sub_set in sub_sets:\r\n    sub_set_base_path = join(visdrone_base_path, sub_set)\r\n    videos = sorted(listdir(join(sub_set_base_path, 'sequences')))\r\n    s = []\r\n    for vi, video in enumerate(videos):\r\n        print('subset: {} video id: {:04d} / {:04d}'.format(sub_set, vi, len(videos)))\r\n        v = dict()\r\n        v['base_path'] = join(sub_set, 'sequences', video)\r\n        v['frame'] = []\r\n        video_base_path = join(sub_set_base_path, 'sequences', video)\r\n        gts_path = join(sub_set_base_path, 'annotations', '{}.txt'.format(video))\r\n        # gts_file = open(gts_path, 'r')\r\n        # gts = gts_file.readlines()\r\n        gts = np.loadtxt(open(gts_path, \"rb\"), delimiter=',')\r\n\r\n        # get image size\r\n        im_path = join(video_base_path, 'img0000001.jpg')\r\n        im = cv2.imread(im_path)\r\n        size = im.shape  # height, width\r\n        frame_sz = [size[1], size[0]]  # width,height\r\n\r\n        # get all im name\r\n        jpgs = sorted(glob.glob(join(video_base_path, '*.jpg')))\r\n\r\n        f = dict()\r\n        for idx, img_path in enumerate(jpgs):\r\n            f['frame_sz'] = frame_sz\r\n            f['img_path'] = img_path.split('/')[-1]\r\n\r\n            gt = gts[idx]\r\n            bbox = [int(g) for g in gt]   # (x,y,w,h)\r\n            f['bbox'] = bbox\r\n            v['frame'].append(f.copy())\r\n        s.append(v)\r\n    
visdrone.append(s)\r\nprint('save json (raw visdrone info), please wait 1 min~')\r\njson.dump(visdrone, open('visdrone.json', 'w'), indent=4, sort_keys=True)\r\nprint('visdrone.json has been saved in ./')\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/crop/visdrone/readme.md",
    "content": "# Preprocessing VisDrone (train and val)\r\n\r\n### Crop & Generate data info (20 min)\r\n\r\n````shell\r\nrm ./train/list.txt\r\nrm ./val/list.txt\r\n\r\npython parser_visdrone.py\r\npython par_crop.py 511 16\r\npython gen_json.py\r\n````\r\n"
  },
  {
    "path": "tracker/sot/lib/dataset/ocean.py",
    "content": "from __future__ import division\n\nimport os\nimport cv2\nimport json\nimport torch\nimport random\nimport logging\nimport numpy as np\nimport torchvision.transforms as transforms\nfrom scipy.ndimage.filters import gaussian_filter\nfrom os.path import join\nfrom easydict import EasyDict as edict\nfrom torch.utils.data import Dataset\n\nimport sys\nsys.path.append('../')\nfrom utils.utils import *\nfrom utils.cutout import Cutout\nfrom core.config_ocean import config\n\nsample_random = random.Random()\n\n\nclass OceanDataset(Dataset):\n    def __init__(self, cfg):\n        super(OceanDataset, self).__init__()\n        # pair information\n        self.template_size = cfg.OCEAN.TRAIN.TEMPLATE_SIZE\n        self.search_size = cfg.OCEAN.TRAIN.SEARCH_SIZE\n\n        self.size = 25\n        self.stride = cfg.OCEAN.TRAIN.STRIDE\n\n        # aug information\n        self.color = cfg.OCEAN.DATASET.COLOR\n        self.flip = cfg.OCEAN.DATASET.FLIP\n        self.rotation = cfg.OCEAN.DATASET.ROTATION\n        self.blur = cfg.OCEAN.DATASET.BLUR\n        self.shift = cfg.OCEAN.DATASET.SHIFT\n        self.scale = cfg.OCEAN.DATASET.SCALE\n        self.gray = cfg.OCEAN.DATASET.GRAY\n        self.label_smooth = cfg.OCEAN.DATASET.LABELSMOOTH\n        self.mixup = cfg.OCEAN.DATASET.MIXUP\n        self.cutout = cfg.OCEAN.DATASET.CUTOUT\n\n        # aug for search image\n        self.shift_s = cfg.OCEAN.DATASET.SHIFTs\n        self.scale_s = cfg.OCEAN.DATASET.SCALEs\n\n        self.grids()\n\n        self.transform_extra = transforms.Compose(\n            [transforms.ToPILImage(), ] +\n            ([transforms.ColorJitter(0.05, 0.05, 0.05, 0.05), ] if self.color > random.random() else [])\n            + ([transforms.RandomHorizontalFlip(), ] if self.flip > random.random() else [])\n            + ([transforms.RandomRotation(degrees=10), ] if self.rotation > random.random() else [])\n            + ([transforms.Grayscale(num_output_channels=3), ] if self.gray > 
random.random() else [])\n            + ([Cutout(n_holes=1, length=16)] if self.cutout > random.random() else [])\n        )\n\n        # train data information\n        print('train datasets: {}'.format(cfg.OCEAN.TRAIN.WHICH_USE))\n        self.train_datas = []    # all train dataset\n        start = 0\n        self.num = 0\n        for data_name in cfg.OCEAN.TRAIN.WHICH_USE:\n            dataset = subData(cfg, data_name, start)\n            self.train_datas.append(dataset)\n            start += dataset.num         # real video number\n            self.num += dataset.num_use  # the number used for subset shuffle\n\n        self._shuffle()\n        print(cfg)\n\n    def __len__(self):\n        return self.num\n\n    def __getitem__(self, index):\n        \"\"\"\n        pick a video/frame --> pairs --> data aug --> label\n        \"\"\"\n        index = self.pick[index]\n        dataset, index = self._choose_dataset(index)\n\n        template, search = dataset._get_pairs(index, dataset.data_name)\n        template, search = self.check_exists(index, dataset, template, search)\n\n        template_image = cv2.imread(template[0])\n        search_image = cv2.imread(search[0])\n\n        template_box = self._toBBox(template_image, template[1])\n        search_box = self._toBBox(search_image, search[1])\n\n        template, _, _ = self._augmentation(template_image, template_box, self.template_size)\n        search, bbox, dag_param = self._augmentation(search_image, search_box, self.search_size, search=True)\n\n        # from PIL image to numpy\n        template = np.array(template)\n        search = np.array(search)\n\n        out_label = self._dynamic_label([self.size, self.size], dag_param.shift)\n\n        reg_label, reg_weight = self.reg_label(bbox)\n\n        template, search = map(lambda x: np.transpose(x, (2, 0, 1)).astype(np.float32), [template, search])\n\n        return template, search, out_label, reg_label, reg_weight, np.array(bbox, np.float32)  # 
self.label 15*15/17*17\n\n    # ------------------------------------\n    # function groups for selecting pairs\n    # ------------------------------------\n    def grids(self):\n        \"\"\"\n        each element of feature map on input search image\n        :return: H*W*2 (position for each element)\n        \"\"\"\n        sz = self.size\n\n        sz_x = sz // 2\n        sz_y = sz // 2\n\n        x, y = np.meshgrid(np.arange(0, sz) - np.floor(float(sz_x)),\n                           np.arange(0, sz) - np.floor(float(sz_y)))\n\n        self.grid_to_search = {}\n        self.grid_to_search_x = x * self.stride + self.search_size // 2\n        self.grid_to_search_y = y * self.stride + self.search_size // 2\n\n    def reg_label(self, bbox):\n        \"\"\"\n        generate regression label\n        :param bbox: [x1, y1, x2, y2]\n        :return: [l, t, r, b]\n        \"\"\"\n        x1, y1, x2, y2 = bbox\n        l = self.grid_to_search_x - x1  # [17, 17]\n        t = self.grid_to_search_y - y1\n        r = x2 - self.grid_to_search_x\n        b = y2 - self.grid_to_search_y\n\n        l, t, r, b = map(lambda x: np.expand_dims(x, axis=-1), [l, t, r, b])\n        reg_label = np.concatenate((l, t, r, b), axis=-1)  # [17, 17, 4]\n        reg_label_min = np.min(reg_label, axis=-1)\n        inds_nonzero = (reg_label_min > 0).astype(float)\n\n        return reg_label, inds_nonzero\n\n\n    def check_exists(self, index, dataset, template, search):\n        name = dataset.data_name\n        while True:\n            # note the parentheses: 'or' binds looser than 'and', so without them the\n            # RGBTRGB/RGBTT exclusions were silently skipped for any 'RGBT' dataset\n            if ('RGBT' in name or 'GTOT' in name) and 'RGBTRGB' not in name and 'RGBTT' not in name:\n                if not (os.path.exists(template[0][0]) and os.path.exists(search[0][0])):\n                    index = random.randint(0, 100)\n                    template, search = dataset._get_pairs(index, name)\n                    continue\n                else:\n                    return template, search\n            else:\n                if not 
(os.path.exists(template[0]) and os.path.exists(search[0])):\n                    index = random.randint(0, 100)\n                    template, search = dataset._get_pairs(index, name)\n                    continue\n                else:\n                    return template, search\n\n    def _shuffle(self):\n        \"\"\"\n        random shuffle\n        \"\"\"\n        pick = []\n        m = 0\n        while m < self.num:\n            p = []\n            for subset in self.train_datas:\n                sub_p = subset.pick\n                p += sub_p\n            sample_random.shuffle(p)\n\n            pick += p\n            m = len(pick)\n        self.pick = pick\n        print(\"dataset length {}\".format(self.num))\n\n    def _choose_dataset(self, index):\n        for dataset in self.train_datas:\n            if dataset.start + dataset.num > index:\n                return dataset, index - dataset.start\n\n    def _get_image_anno(self, video, track, frame, RGBT_FLAG=False):\n        \"\"\"\n        get image and annotation\n        \"\"\"\n        frame = \"{:06d}\".format(frame)\n        if not RGBT_FLAG:\n            image_path = join(self.root, video, \"{}.{}.x.jpg\".format(frame, track))\n            image_anno = self.labels[video][track][frame]\n            return image_path, image_anno\n        else:  # rgb\n            in_image_path = join(self.root, video, \"{}.{}.in.x.jpg\".format(frame, track))\n            rgb_image_path = join(self.root, video, \"{}.{}.rgb.x.jpg\".format(frame, track))\n            image_anno = self.labels[video][track][frame]\n            in_anno = np.array(image_anno[-1][0])\n            rgb_anno = np.array(image_anno[-1][1])\n\n            return [in_image_path, rgb_image_path], (in_anno + rgb_anno) / 2\n\n    def _get_pairs(self, index):\n        \"\"\"\n        get training pairs\n        \"\"\"\n        video_name = self.videos[index]\n        video = self.labels[video_name]\n        track = 
random.choice(list(video.keys()))\n        track_info = video[track]\n        try:\n            frames = track_info['frames']\n        except KeyError:\n            frames = list(track_info.keys())\n\n        template_frame = random.randint(0, len(frames)-1)\n\n        left = max(template_frame - self.frame_range, 0)\n        right = min(template_frame + self.frame_range, len(frames)-1) + 1\n        search_range = frames[left:right]\n        template_frame = int(frames[template_frame])\n        search_frame = int(random.choice(search_range))\n\n        return self._get_image_anno(video_name, track, template_frame), \\\n               self._get_image_anno(video_name, track, search_frame)\n\n    def _posNegRandom(self):\n        \"\"\"\n        random number from [-1, 1]\n        \"\"\"\n        return random.random() * 2 - 1.0\n\n    def _toBBox(self, image, shape):\n        imh, imw = image.shape[:2]\n        if len(shape) == 4:\n            w, h = shape[2] - shape[0], shape[3] - shape[1]\n        else:\n            w, h = shape\n        context_amount = 0.5\n        exemplar_size = self.template_size\n        wc_z = w + context_amount * (w + h)\n        hc_z = h + context_amount * (w + h)\n        s_z = np.sqrt(wc_z * hc_z)\n        scale_z = exemplar_size / s_z\n        w = w * scale_z\n        h = h * scale_z\n        cx, cy = imw // 2, imh // 2\n        bbox = center2corner(Center(cx, cy, w, h))\n        return bbox\n\n    def _crop_hwc(self, image, bbox, out_sz, padding=(0, 0, 0)):\n        \"\"\"\n        crop image\n        \"\"\"\n        bbox = [float(x) for x in bbox]\n        a = (out_sz - 1) / (bbox[2] - bbox[0])\n        b = (out_sz - 1) / (bbox[3] - bbox[1])\n        c = -a * bbox[0]\n        d = -b * bbox[1]\n        mapping = np.array([[a, 0, c],\n                            [0, b, d]]).astype(np.float32)  # np.float was removed in NumPy >= 1.24\n        crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\n        return crop\n\n    
def _draw(self, image, box, name):\n        \"\"\"\n        draw image for debugging\n        \"\"\"\n        draw_image = np.array(image.copy())\n        x1, y1, x2, y2 = map(lambda x: int(round(x)), box)\n        cx, cy = int(round((x1 + x2) / 2)), int(round((y1 + y2) / 2))\n        cv2.rectangle(draw_image, (x1, y1), (x2, y2), (0, 255, 0))\n        cv2.circle(draw_image, (cx, cy), 3, (0, 0, 255))\n        cv2.putText(draw_image, '[x: {}, y: {}]'.format(cx, cy), (cx - 3, cy - 3), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (255, 255, 255), 1)\n        cv2.imwrite(name, draw_image)\n\n    def _draw_reg(self, image, grid_x, grid_y, reg_label, reg_weight, save_path, index):\n        \"\"\"\n        visualization\n        reg_label: [l, t, r, b]\n        \"\"\"\n        draw_image = image.copy()\n        save_name = join(save_path, '{:06d}.jpg'.format(index))\n        h, w = reg_weight.shape\n        for i in range(h):\n            for j in range(w):\n                if not reg_weight[i, j] > 0:\n                    continue\n                else:\n                    x1 = int(grid_x[i, j] - reg_label[i, j, 0])\n                    y1 = int(grid_y[i, j] - reg_label[i, j, 1])\n                    x2 = int(grid_x[i, j] + reg_label[i, j, 2])\n                    y2 = int(grid_y[i, j] + reg_label[i, j, 3])\n\n                    draw_image = cv2.rectangle(draw_image, (x1, y1), (x2, y2), (0, 255, 0))\n\n        cv2.imwrite(save_name, draw_image)\n\n    def _mixupRandom(self):\n        \"\"\"\n        uniform random in [0.3, 0.7]\n        \"\"\"\n        return random.random() * 0.4 + 0.3\n\n    # ------------------------------------\n    # function for data augmentation\n    # ------------------------------------\n    def _augmentation(self, image, bbox, size, search=False):\n        \"\"\"\n        data augmentation for input pairs (modified from SiamRPN)\n        \"\"\"\n        shape = image.shape\n        crop_bbox = 
center2corner((shape[0] // 2, shape[1] // 2, size, size))\n        param = edict()\n\n        if search:\n            param.shift = (self._posNegRandom() * self.shift_s, self._posNegRandom() * self.shift_s)  # shift\n            param.scale = ((1.0 + self._posNegRandom() * self.scale_s), (1.0 + self._posNegRandom() * self.scale_s))  # scale change\n        else:\n            param.shift = (self._posNegRandom() * self.shift, self._posNegRandom() * self.shift)   # shift\n            param.scale = ((1.0 + self._posNegRandom() * self.scale), (1.0 + self._posNegRandom() * self.scale))  # scale change\n\n        crop_bbox, _ = aug_apply(Corner(*crop_bbox), param, shape)\n\n        x1, y1 = crop_bbox.x1, crop_bbox.y1\n        bbox = BBox(bbox.x1 - x1, bbox.y1 - y1, bbox.x2 - x1, bbox.y2 - y1)\n\n        scale_x, scale_y = param.scale\n        bbox = Corner(bbox.x1 / scale_x, bbox.y1 / scale_y, bbox.x2 / scale_x, bbox.y2 / scale_y)\n\n        image = self._crop_hwc(image, crop_bbox, size)   # shift and scale\n\n        if self.blur > random.random():\n            image = gaussian_filter(image, sigma=(1, 1, 0))\n\n        image = self.transform_extra(image)        # other data augmentation\n        return image, bbox, param\n\n    def _mixupShift(self, image, size):\n        \"\"\"\n        random shift mixed-up image\n        \"\"\"\n        shape = image.shape\n        crop_bbox = center2corner((shape[0] // 2, shape[1] // 2, size, size))\n        param = edict()\n\n        param.shift = (self._posNegRandom() * 64, self._posNegRandom() * 64)  # shift\n        crop_bbox, _ = aug_apply(Corner(*crop_bbox), param, shape)\n\n        image = self._crop_hwc(image, crop_bbox, size)  # shift and scale\n\n        return image\n\n    # ------------------------------------\n    # function for creating training label\n    # ------------------------------------\n    def _dynamic_label(self, fixedLabelSize, c_shift, rPos=2, rNeg=0):\n        if isinstance(fixedLabelSize, int):\n          
  fixedLabelSize = [fixedLabelSize, fixedLabelSize]\n\n        assert (fixedLabelSize[0] % 2 == 1)\n\n        d_label = self._create_dynamic_logisticloss_label(fixedLabelSize, c_shift, rPos, rNeg)\n\n        return d_label\n\n    def _create_dynamic_logisticloss_label(self, label_size, c_shift, rPos=2, rNeg=0):\n        if isinstance(label_size, int):\n            sz = label_size\n        else:\n            sz = label_size[0]\n\n        sz_x = sz // 2 + int(-c_shift[0] / 8)  # 8 is the stride\n        sz_y = sz // 2 + int(-c_shift[1] / 8)\n\n        x, y = np.meshgrid(np.arange(0, sz) - np.floor(float(sz_x)),\n                           np.arange(0, sz) - np.floor(float(sz_y)))\n\n        dist_to_center = np.abs(x) + np.abs(y)  # block (L1) metric\n        label = np.where(dist_to_center <= rPos,\n                         np.ones_like(y),\n                         np.where(dist_to_center < rNeg,\n                                  0.5 * np.ones_like(y),\n                                  np.zeros_like(y)))\n        return label\n\n\n# ---------------------\n# for a single dataset\n# ---------------------\nclass subData(object):\n    \"\"\"\n    for training with multiple datasets, modified from SiamRPN\n    \"\"\"\n    def __init__(self, cfg, data_name, start):\n        self.data_name = data_name\n        self.start = start\n\n        info = cfg.OCEAN.DATASET[data_name]\n        self.frame_range = info.RANGE\n        self.num_use = info.USE\n        self.root = info.PATH\n\n        with open(info.ANNOTATION) as fin:\n            self.labels = json.load(fin)\n            self._clean()\n            self.num = len(self.labels)    # video number\n\n        self._shuffle()\n\n    def _clean(self):\n        \"\"\"\n        remove empty videos/frames/annos in dataset\n        \"\"\"\n        # no frames\n        to_del = []\n        for video in self.labels:\n            for track in self.labels[video]:\n                frames = self.labels[video][track]\n                frames = 
list(map(int, frames.keys()))\n                frames.sort()\n                self.labels[video][track]['frames'] = frames\n                if len(frames) <= 0:\n                    print(\"warning {}/{} has no frames.\".format(video, track))\n                    to_del.append((video, track))\n\n        for video, track in to_del:\n            try:\n                del self.labels[video][track]\n            except KeyError:\n                pass\n\n        # no track/annos\n        to_del = []\n\n        if self.data_name == 'YTB':\n            to_del.append('train/1/YyE0clBPamU')  # This video has no bounding box.\n\n        for video in self.labels:\n            if len(self.labels[video]) <= 0:\n                print(\"warning {} has no tracks\".format(video))\n                to_del.append(video)\n\n        for video in to_del:\n            try:\n                del self.labels[video]\n            except KeyError:\n                pass\n\n        self.videos = list(self.labels.keys())\n        print('{} loaded.'.format(self.data_name))\n\n    def _shuffle(self):\n        \"\"\"\n        shuffle to get random pair indices (video)\n        \"\"\"\n        lists = list(range(self.start, self.start + self.num))\n        m = 0\n        pick = []\n        while m < self.num_use:\n            sample_random.shuffle(lists)\n            pick += lists\n            m += self.num\n\n        self.pick = pick[:self.num_use]\n        return self.pick\n\n    def _get_image_anno(self, video, track, frame):\n        \"\"\"\n        get image and annotation\n        \"\"\"\n        frame = \"{:06d}\".format(frame)\n        image_path = join(self.root, video, \"{}.{}.x.jpg\".format(frame, track))\n        image_anno = self.labels[video][track][frame]\n        return image_path, image_anno\n\n    def _get_pairs(self, index, data_name):\n        \"\"\"\n        get training pairs\n        \"\"\"\n        video_name = self.videos[index]\n        video = 
self.labels[video_name]\n        track = random.choice(list(video.keys()))\n        track_info = video[track]\n        try:\n            frames = track_info['frames']\n        except KeyError:\n            frames = list(track_info.keys())\n\n        template_frame = random.randint(0, len(frames)-1)\n\n        left = max(template_frame - self.frame_range, 0)\n        right = min(template_frame + self.frame_range, len(frames)-1) + 1\n        search_range = frames[left:right]\n\n        template_frame = int(frames[template_frame])\n        search_frame = int(random.choice(search_range))\n\n        return self._get_image_anno(video_name, track, template_frame), \\\n               self._get_image_anno(video_name, track, search_frame)\n\n    def _get_negative_target(self, index=-1):\n        \"\"\"\n        negative pair sampling (DaSiamRPN style)\n        \"\"\"\n        if index == -1:\n            index = random.randint(0, self.num - 1)\n        video_name = self.videos[index]\n        video = self.labels[video_name]\n        track = random.choice(list(video.keys()))\n        track_info = video[track]\n\n        frames = track_info['frames']\n        frame = random.choice(frames)\n\n        return self._get_image_anno(video_name, track, frame)\n\n\nif __name__ == '__main__':\n    import os\n    from torch.utils.data import DataLoader\n    from core.config import config\n\n    train_set = OceanDataset(config)\n    train_loader = DataLoader(train_set, batch_size=16, num_workers=1, pin_memory=False)\n\n    for batch_idx, batch in enumerate(train_loader):\n        # label_cls = batch[2].numpy()  # BCE needs float\n        template = batch[0]\n        search = batch[1]\n        print(template.size())\n        print(search.size())\n        print('dataset test')\n"
  },
  {
    "path": "tracker/sot/lib/dataset/siamfc.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\r\n# ------------------------------------------------------------------------------\r\nfrom __future__ import division\r\n\r\nimport cv2\r\nimport json\r\nimport torch\r\nimport random\r\nimport logging\r\nimport numpy as np\r\nimport torchvision.transforms as transforms\r\nfrom scipy.ndimage import gaussian_filter  # scipy.ndimage.filters was removed in SciPy >= 1.13\r\nfrom os.path import join\r\nfrom easydict import EasyDict as edict\r\nfrom torch.utils.data import Dataset\r\n\r\nimport sys\r\nsys.path.append('../')\r\nfrom utils.utils import *\r\nfrom core.config_siamdw import config\r\n\r\nsample_random = random.Random()\r\n# sample_random.seed(123456)\r\n\r\nclass SiamFCDataset(Dataset):\r\n    def __init__(self, cfg):\r\n        super(SiamFCDataset, self).__init__()\r\n        # pair information\r\n        self.template_size = cfg.SIAMFC.TRAIN.TEMPLATE_SIZE\r\n        self.search_size = cfg.SIAMFC.TRAIN.SEARCH_SIZE\r\n        self.size = (self.search_size - self.template_size) // cfg.SIAMFC.TRAIN.STRIDE + 1   # from cross-correlation\r\n\r\n        # aug information\r\n        self.color = cfg.SIAMFC.DATASET.COLOR\r\n        self.flip = cfg.SIAMFC.DATASET.FLIP\r\n        self.rotation = cfg.SIAMFC.DATASET.ROTATION\r\n        self.blur = cfg.SIAMFC.DATASET.BLUR\r\n        self.shift = cfg.SIAMFC.DATASET.SHIFT\r\n        self.scale = cfg.SIAMFC.DATASET.SCALE\r\n\r\n        self.transform_extra = transforms.Compose(\r\n            [transforms.ToPILImage(), ] +\r\n            ([transforms.ColorJitter(0.05, 0.05, 0.05, 0.05), ] if self.color > random.random() else [])\r\n            + ([transforms.RandomHorizontalFlip(), ] if self.flip > random.random() else [])\r\n            + ([transforms.RandomRotation(degrees=10), ] if self.rotation > random.random() else [])\r\n        )\r\n\r\n        
# train data information\r\n        if cfg.SIAMFC.TRAIN.WHICH_USE == 'VID':\r\n            self.anno = cfg.SIAMFC.DATASET.VID.ANNOTATION\r\n            self.num_use = cfg.SIAMFC.TRAIN.PAIRS\r\n            self.root = cfg.SIAMFC.DATASET.VID.PATH\r\n        elif cfg.SIAMFC.TRAIN.WHICH_USE == 'GOT10K':\r\n            self.anno = cfg.SIAMFC.DATASET.GOT10K.ANNOTATION\r\n            self.num_use = cfg.SIAMFC.TRAIN.PAIRS\r\n            self.root = cfg.SIAMFC.DATASET.GOT10K.PATH\r\n        else:\r\n            raise ValueError('not a supported training dataset')\r\n\r\n        with open(self.anno, 'r') as fin:\r\n            self.labels = json.load(fin)\r\n        self.videos = list(self.labels.keys())\r\n        self.num = len(self.videos)   # video number\r\n        self.frame_range = 100\r\n        self.pick = self._shuffle()\r\n\r\n    def __len__(self):\r\n        return self.num_use\r\n\r\n    def __getitem__(self, index):\r\n        \"\"\"\r\n        pick a video/frame --> pairs --> data aug --> label\r\n        \"\"\"\r\n        index = self.pick[index]\r\n        template, search = self._get_pairs(index)\r\n\r\n        template_image = cv2.imread(template[0])\r\n        search_image = cv2.imread(search[0])\r\n\r\n        template_box = self._toBBox(template_image, template[1])\r\n        search_box = self._toBBox(search_image, search[1])\r\n\r\n        template, _, _ = self._augmentation(template_image, template_box, self.template_size)\r\n        search, bbox, dag_param = self._augmentation(search_image, search_box, self.search_size)\r\n\r\n        # from PIL image to numpy\r\n        template = np.array(template)\r\n        search = np.array(search)\r\n\r\n        out_label = self._dynamic_label([self.size, self.size], dag_param.shift)\r\n\r\n        template, search = map(lambda x: np.transpose(x, (2, 0, 1)).astype(np.float32), [template, search])\r\n\r\n        return template, search, out_label, np.array(bbox, np.float32)  # self.label 15*15/17*17\r\n\r\n    # 
------------------------------------\r\n    # function groups for selecting pairs\r\n    # ------------------------------------\r\n    def _shuffle(self):\r\n        \"\"\"\r\n        shuffle to get random pair indices\r\n        \"\"\"\r\n        lists = list(range(0, self.num))\r\n        m = 0\r\n        pick = []\r\n        while m < self.num_use:\r\n            sample_random.shuffle(lists)\r\n            pick += lists\r\n            m += self.num\r\n\r\n        self.pick = pick[:self.num_use]\r\n        return self.pick\r\n\r\n    def _get_image_anno(self, video, track, frame):\r\n        \"\"\"\r\n        get image and annotation\r\n        \"\"\"\r\n        frame = \"{:06d}\".format(frame)\r\n        image_path = join(self.root, video, \"{}.{}.x.jpg\".format(frame, track))\r\n        image_anno = self.labels[video][track][frame]\r\n\r\n        return image_path, image_anno\r\n\r\n    def _get_pairs(self, index):\r\n        \"\"\"\r\n        get training pairs\r\n        \"\"\"\r\n        video_name = self.videos[index]\r\n        video = self.labels[video_name]\r\n        track = random.choice(list(video.keys()))\r\n        track_info = video[track]\r\n        try:\r\n            frames = track_info['frames']\r\n        except KeyError:\r\n            frames = list(track_info.keys())\r\n\r\n        template_frame = random.randint(0, len(frames)-1)\r\n\r\n        left = max(template_frame - self.frame_range, 0)\r\n        right = min(template_frame + self.frame_range, len(frames)-1) + 1\r\n        search_range = frames[left:right]\r\n        template_frame = int(frames[template_frame])\r\n        search_frame = int(random.choice(search_range))\r\n\r\n        return self._get_image_anno(video_name, track, template_frame), \\\r\n               self._get_image_anno(video_name, track, search_frame)\r\n\r\n    def _posNegRandom(self):\r\n        \"\"\"\r\n        random number from [-1, 1]\r\n        \"\"\"\r\n        return random.random() * 2 - 1.0\r\n\r\n    def 
_toBBox(self, image, shape):\r\n        imh, imw = image.shape[:2]\r\n        if len(shape) == 4:\r\n            w, h = shape[2] - shape[0], shape[3] - shape[1]\r\n        else:\r\n            w, h = shape\r\n        context_amount = 0.5\r\n        exemplar_size = self.template_size\r\n        wc_z = w + context_amount * (w + h)\r\n        hc_z = h + context_amount * (w + h)\r\n        s_z = np.sqrt(wc_z * hc_z)\r\n        scale_z = exemplar_size / s_z\r\n        w = w * scale_z\r\n        h = h * scale_z\r\n        cx, cy = imw // 2, imh // 2\r\n        bbox = center2corner(Center(cx, cy, w, h))\r\n        return bbox\r\n\r\n    def _crop_hwc(self, image, bbox, out_sz, padding=(0, 0, 0)):\r\n        \"\"\"\r\n        crop image\r\n        \"\"\"\r\n        bbox = [float(x) for x in bbox]\r\n        a = (out_sz - 1) / (bbox[2] - bbox[0])\r\n        b = (out_sz - 1) / (bbox[3] - bbox[1])\r\n        c = -a * bbox[0]\r\n        d = -b * bbox[1]\r\n        mapping = np.array([[a, 0, c],\r\n                            [0, b, d]]).astype(np.float32)  # np.float was removed in NumPy >= 1.24\r\n        crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\r\n        return crop\r\n\r\n    def _draw(self, image, box, name):\r\n        \"\"\"\r\n        draw image for debugging\r\n        \"\"\"\r\n        draw_image = image.copy()\r\n        x1, y1, x2, y2 = map(lambda x: int(round(x)), box)\r\n        cx, cy = int(round((x1 + x2) / 2)), int(round((y1 + y2) / 2))\r\n        cv2.rectangle(draw_image, (x1, y1), (x2, y2), (0, 255, 0))\r\n        cv2.circle(draw_image, (cx, cy), 3, (0, 0, 255))\r\n        cv2.putText(draw_image, '[x: {}, y: {}]'.format(cx, cy), (cx - 3, cy - 3), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (255, 255, 255), 1)\r\n        cv2.imwrite(name, draw_image)\r\n\r\n    # ------------------------------------\r\n    # function for data augmentation\r\n    # ------------------------------------\r\n    def 
_augmentation(self, image, bbox, size):\r\n        \"\"\"\r\n        data augmentation for input pairs\r\n        \"\"\"\r\n        shape = image.shape\r\n        crop_bbox = center2corner((shape[0] // 2, shape[1] // 2, size, size))\r\n        param = edict()\r\n\r\n        param.shift = (self._posNegRandom() * self.shift, self._posNegRandom() * self.shift)   # shift\r\n        param.scale = ((1.0 + self._posNegRandom() * self.scale), (1.0 + self._posNegRandom() * self.scale))  # scale change\r\n\r\n        crop_bbox, _ = aug_apply(Corner(*crop_bbox), param, shape)\r\n\r\n        x1, y1 = crop_bbox.x1, crop_bbox.y1\r\n        bbox = BBox(bbox.x1 - x1, bbox.y1 - y1, bbox.x2 - x1, bbox.y2 - y1)\r\n\r\n        scale_x, scale_y = param.scale\r\n        bbox = Corner(bbox.x1 / scale_x, bbox.y1 / scale_y, bbox.x2 / scale_x, bbox.y2 / scale_y)\r\n\r\n        image = self._crop_hwc(image, crop_bbox, size)   # shift and scale\r\n\r\n        if self.blur > random.random():\r\n            image = gaussian_filter(image, sigma=(1, 1, 0))\r\n\r\n        image = self.transform_extra(image)        # other data augmentation\r\n        return image, bbox, param\r\n\r\n    # ------------------------------------\r\n    # function for creating training label\r\n    # ------------------------------------\r\n    def _dynamic_label(self, fixedLabelSize, c_shift, rPos=2, rNeg=0):\r\n        if isinstance(fixedLabelSize, int):\r\n            fixedLabelSize = [fixedLabelSize, fixedLabelSize]\r\n\r\n        assert (fixedLabelSize[0] % 2 == 1)\r\n\r\n        d_label = self._create_dynamic_logisticloss_label(fixedLabelSize, c_shift, rPos, rNeg)\r\n\r\n        return d_label\r\n\r\n    def _create_dynamic_logisticloss_label(self, label_size, c_shift, rPos=2, rNeg=0):\r\n        if isinstance(label_size, int):\r\n            sz = label_size\r\n        else:\r\n            sz = label_size[0]\r\n\r\n        # the real shift is -param['shifts']\r\n        sz_x = sz // 2 + round(-c_shift[0]) // 8  # 
8 is strides\r\n        sz_y = sz // 2 + round(-c_shift[1]) // 8\r\n\r\n        x, y = np.meshgrid(np.arange(0, sz) - np.floor(float(sz_x)),\r\n                           np.arange(0, sz) - np.floor(float(sz_y)))\r\n\r\n        dist_to_center = np.abs(x) + np.abs(y)  # Block metric\r\n        label = np.where(dist_to_center <= rPos,\r\n                         np.ones_like(y),\r\n                         np.where(dist_to_center < rNeg,\r\n                                  0.5 * np.ones_like(y),\r\n                                  np.zeros_like(y)))\r\n        return label\r\n\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/bin/_init_paths.py",
    "content": "\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path as osp\nimport sys\n\n\ndef add_path(path):\n    if path not in sys.path:\n        sys.path.insert(0, path)\n\n\nthis_dir = osp.dirname(__file__)\n\nlib_path = osp.join(this_dir, '../..', 'eval_toolkit')\nadd_path(lib_path)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/bin/eval.py",
    "content": "import _init_paths\nimport os\nimport sys\nimport time\nimport argparse\nimport functools\nsys.path.append(\"./\")\n\nfrom glob import glob\nfrom tqdm import tqdm\nfrom multiprocessing import Pool\nfrom pysot.datasets import OTBDataset, UAVDataset, LaSOTDataset, VOTDataset, NFSDataset, VOTLTDataset\nfrom pysot.evaluation import OPEBenchmark, AccuracyRobustnessBenchmark, EAOBenchmark, F1Benchmark\nfrom pysot.visualization import draw_success_precision, draw_eao, draw_f1\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(description='Single Object Tracking Evaluation')\n    parser.add_argument('--dataset_dir', type=str, help='dataset root directory')\n    parser.add_argument('--dataset', type=str, help='dataset name')\n    parser.add_argument('--tracker_result_dir', type=str, help='tracker result root')\n    parser.add_argument('--trackers', nargs='+')\n    parser.add_argument('--vis', dest='vis', action='store_true', help='whether to visualize results')\n    parser.add_argument('--show_video_level', dest='show_video_level', action='store_true', help='whether to show per-video results')\n    parser.add_argument('--num', type=int, help='number of processes to eval', default=1)\n    args = parser.parse_args()\n\n    tracker_dir = args.tracker_result_dir\n    trackers = args.trackers\n    root = args.dataset_dir\n\n    assert len(trackers) > 0\n    args.num = min(args.num, len(trackers))\n\n    if 'OTB' in args.dataset:\n        dataset = OTBDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        benchmark = OPEBenchmark(dataset)\n        success_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,\n                trackers), desc='eval success', total=len(trackers), ncols=100):\n                success_ret.update(ret)\n        precision_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,\n                trackers), desc='eval precision', 
total=len(trackers), ncols=100):\n                precision_ret.update(ret)\n        benchmark.show_result(success_ret, precision_ret,\n                show_video_level=args.show_video_level)\n        if args.vis:\n            for attr, videos in dataset.attr.items():\n                draw_success_precision(success_ret,\n                            name=dataset.name,\n                            videos=videos,\n                            attr=attr,\n                            precision_ret=precision_ret)\n    elif 'LaSOT' == args.dataset:\n        dataset = LaSOTDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        benchmark = OPEBenchmark(dataset)\n        success_ret = {}\n        # success_ret = benchmark.eval_success(trackers)\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,\n                trackers), desc='eval success', total=len(trackers), ncols=100):\n                success_ret.update(ret)\n        precision_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,\n                trackers), desc='eval precision', total=len(trackers), ncols=100):\n                precision_ret.update(ret)\n        norm_precision_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_norm_precision,\n                trackers), desc='eval norm precision', total=len(trackers), ncols=100):\n                norm_precision_ret.update(ret)\n        benchmark.show_result(success_ret, precision_ret, norm_precision_ret,\n                show_video_level=args.show_video_level)\n        if args.vis:\n            draw_success_precision(success_ret,\n                    name=dataset.name,\n                    videos=dataset.attr['ALL'],\n                    attr='ALL',\n                    precision_ret=precision_ret,\n                    
norm_precision_ret=norm_precision_ret)\n    elif 'UAV' in args.dataset:\n        dataset = UAVDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        benchmark = OPEBenchmark(dataset)\n        success_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,\n                trackers), desc='eval success', total=len(trackers), ncols=100):\n                success_ret.update(ret)\n        precision_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,\n                trackers), desc='eval precision', total=len(trackers), ncols=100):\n                precision_ret.update(ret)\n        benchmark.show_result(success_ret, precision_ret,\n                show_video_level=args.show_video_level)\n        if args.vis:\n            for attr, videos in dataset.attr.items():\n                draw_success_precision(success_ret,\n                        name=dataset.name,\n                        videos=videos,\n                        attr=attr,\n                        precision_ret=precision_ret)\n    elif 'NFS' in args.dataset:\n        dataset = NFSDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        benchmark = OPEBenchmark(dataset)\n        success_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_success,\n                trackers), desc='eval success', total=len(trackers), ncols=100):\n                success_ret.update(ret)\n        precision_ret = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval_precision,\n                trackers), desc='eval precision', total=len(trackers), ncols=100):\n                precision_ret.update(ret)\n        benchmark.show_result(success_ret, precision_ret,\n                
show_video_level=args.show_video_level)\n        if args.vis:\n            for attr, videos in dataset.attr.items():\n                draw_success_precision(success_ret,\n                            name=dataset.name,\n                            videos=videos,\n                            attr=attr,\n                            precision_ret=precision_ret)\n    elif 'VOT' in args.dataset:\n        dataset = VOTDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        ar_benchmark = AccuracyRobustnessBenchmark(dataset)\n        ar_result = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(ar_benchmark.eval,\n                trackers), desc='eval ar', total=len(trackers), ncols=100):\n                ar_result.update(ret)\n\n        benchmark = EAOBenchmark(dataset)\n        eao_result = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval,\n                trackers), desc='eval eao', total=len(trackers), ncols=100):\n                eao_result.update(ret)\n        ar_benchmark.show_result(ar_result, eao_result,\n                show_video_level=args.show_video_level)\n    elif 'VOT2018-LT' == args.dataset:\n        dataset = VOTLTDataset(args.dataset, root)\n        dataset.set_tracker(tracker_dir, trackers)\n        benchmark = F1Benchmark(dataset)\n        f1_result = {}\n        with Pool(processes=args.num) as pool:\n            for ret in tqdm(pool.imap_unordered(benchmark.eval,\n                trackers), desc='eval f1', total=len(trackers), ncols=100):\n                f1_result.update(ret)\n        benchmark.show_result(f1_result,\n                show_video_level=args.show_video_level)\n        if args.vis:\n            draw_f1(f1_result)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/__init__.py",
    "content": ""
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/__init__.py",
"content": "from .vot import VOTDataset, VOTLTDataset\nfrom .otb import OTBDataset\nfrom .uav import UAVDataset\nfrom .lasot import LaSOTDataset\nfrom .nfs import NFSDataset\nfrom .trackingnet import TrackingNetDataset\nfrom .got10k import GOT10kDataset\n\nclass DatasetFactory(object):\n    @staticmethod\n    def create_dataset(**kwargs):\n        \"\"\"\n        Args:\n            name: dataset name 'OTB2015', 'LaSOT', 'UAV123', 'NFS240', 'NFS30',\n                'VOT2018', 'VOT2016', 'VOT2018-LT'\n            dataset_root: dataset root\n            load_img: whether to load image\n        Returns:\n            dataset\n        \"\"\"\n        assert 'name' in kwargs, \"should provide dataset name\"\n        name = kwargs['name']\n        if 'OTB' in name:\n            dataset = OTBDataset(**kwargs)\n        elif 'LaSOT' == name:\n            dataset = LaSOTDataset(**kwargs)\n        elif 'UAV' in name:\n            dataset = UAVDataset(**kwargs)\n        elif 'NFS' in name:\n            dataset = NFSDataset(**kwargs)\n        elif 'VOT2018' == name or 'VOT2016' == name:\n            dataset = VOTDataset(**kwargs)\n        elif 'VOT2018-LT' == name:\n            dataset = VOTLTDataset(**kwargs)\n        elif 'TrackingNet' == name:\n            dataset = TrackingNetDataset(**kwargs)\n        elif 'GOT-10k' == name:\n            dataset = GOT10kDataset(**kwargs)\n        else:\n            raise Exception(\"unknown dataset {}\".format(kwargs['name']))\n        return dataset\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/dataset.py",
"content": "from tqdm import tqdm\n\nclass Dataset(object):\n    def __init__(self, name, dataset_root):\n        self.name = name\n        self.dataset_root = dataset_root\n        self.videos = None\n\n    def __getitem__(self, idx):\n        if isinstance(idx, str):\n            return self.videos[idx]\n        elif isinstance(idx, int):\n            return self.videos[sorted(list(self.videos.keys()))[idx]]\n\n    def __len__(self):\n        return len(self.videos)\n\n    def __iter__(self):\n        keys = sorted(list(self.videos.keys()))\n        for key in keys:\n            yield self.videos[key]\n\n    def set_tracker(self, path, tracker_names):\n        \"\"\"\n        Args:\n            path: path to tracker results\n            tracker_names: list of tracker names\n        \"\"\"\n        self.tracker_path = path\n        self.tracker_names = tracker_names\n        for video in tqdm(self.videos.values(),\n                desc='loading tracker result', ncols=100):\n            video.load_tracker(path, tracker_names)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/got10k.py",
"content": "\nimport json\nimport os\nimport numpy as np\n\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\nclass GOT10kVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        super(GOT10kVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n\n    # def load_tracker(self, path, tracker_names=None):\n    #     \"\"\"\n    #     Args:\n    #         path(str): path to result\n    #         tracker_name(list): name of tracker\n    #     \"\"\"\n    #     if not tracker_names:\n    #         tracker_names = [x.split('/')[-1] for x in glob(path)\n    #                 if os.path.isdir(x)]\n    #     if isinstance(tracker_names, str):\n    #         tracker_names = [tracker_names]\n    #     # self.pred_trajs = {}\n    #     for name in tracker_names:\n    #         traj_file = os.path.join(path, name, self.name+'.txt')\n    #         if os.path.exists(traj_file):\n    #             with open(traj_file, 'r') as f :\n    #                 self.pred_trajs[name] = [list(map(float, x.strip().split(',')))\n    #                         for x in f.readlines()]\n    #             if len(self.pred_trajs[name]) != len(self.gt_traj):\n    #                 print(name, len(self.pred_trajs[name]), len(self.gt_traj), self.name)\n    #         else:\n\n    #     self.tracker_names = list(self.pred_trajs.keys())\n\nclass GOT10kDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'GOT-10k'\n        dataset_root: dataset root dir\n    \"\"\"\n    def __init__(self, name, dataset_root, 
load_img=False):\n        super(GOT10kDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = GOT10kVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          None)\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/lasot.py",
"content": "import os\nimport json\nimport numpy as np\n\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\nclass LaSOTVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, absent, load_img=False):\n        super(LaSOTVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n        self.absent = np.array(absent, np.int8)\n\n    def load_tracker(self, path, tracker_names=None, store=True):\n        \"\"\"\n        Args:\n            path(str): path to result\n            tracker_names(list): name of tracker\n        \"\"\"\n        if not tracker_names:\n            tracker_names = [x.split('/')[-1] for x in glob(path)\n                    if os.path.isdir(x)]\n        if isinstance(tracker_names, str):\n            tracker_names = [tracker_names]\n        for name in tracker_names:\n            traj_file = os.path.join(path, name, self.name+'.txt')\n            if os.path.exists(traj_file):\n                with open(traj_file, 'r') as f:\n                    pred_traj = [list(map(float, x.strip().split(',')))\n                            for x in f.readlines()]\n            else:\n                print(\"File does not exist:\", traj_file)\n                continue\n            if self.name == 'monkey-17':\n                pred_traj = pred_traj[:len(self.gt_traj)]\n            if store:\n                self.pred_trajs[name] = pred_traj\n            else:\n                return pred_traj\n        self.tracker_names = list(self.pred_trajs.keys())\n\n\n\nclass LaSOTDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'LaSOT'\n        dataset_root: dataset root\n        load_img: whether to load all imgs\n    \"\"\"\n    def __init__(self, name, dataset_root, load_img=False):\n        super(LaSOTDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = LaSOTVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          meta_data[video]['attr'],\n                                          meta_data[video]['absent'])\n\n        # set attr\n        attr = []\n        for x in self.videos.values():\n            attr += x.attr\n        attr = set(attr)\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n        for x in attr:\n            self.attr[x] = []\n        for k, v in self.videos.items():\n            for attr_ in v.attr:\n                self.attr[attr_].append(k)\n\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/nfs.py",
    "content": "import json\nimport os\nimport numpy as np\n\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\n\nclass NFSVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        super(NFSVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n\n    # def load_tracker(self, path, tracker_names=None):\n    #     \"\"\"\n    #     Args:\n    #         path(str): path to result\n    #         tracker_name(list): name of tracker\n    #     \"\"\"\n    #     if not tracker_names:\n    #         tracker_names = [x.split('/')[-1] for x in glob(path)\n    #                 if os.path.isdir(x)]\n    #     if isinstance(tracker_names, str):\n    #         tracker_names = [tracker_names]\n    #     # self.pred_trajs = {}\n    #     for name in tracker_names:\n    #         traj_file = os.path.join(path, name, self.name+'.txt')\n    #         if os.path.exists(traj_file):\n    #             with open(traj_file, 'r') as f :\n    #                 self.pred_trajs[name] = [list(map(float, x.strip().split(',')))\n    #                         for x in f.readlines()]\n    #             if len(self.pred_trajs[name]) != len(self.gt_traj):\n    #                 print(name, len(self.pred_trajs[name]), len(self.gt_traj), self.name)\n    #         else:\n\n    #     self.tracker_names = list(self.pred_trajs.keys())\n\nclass NFSDataset(Dataset):\n    \"\"\"\n    Args:\n        name:  dataset name, should be \"NFS30\" or \"NFS240\"\n        dataset_root, dataset root dir\n    \"\"\"\n    def __init__(self, name, dataset_root, 
load_img=False):\n        super(NFSDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = NFSVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          None)\n\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/otb.py",
"content": "import json\nimport os\nimport numpy as np\n\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\n\nclass OTBVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        super(OTBVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n\n    def load_tracker(self, path, tracker_names=None, store=True):\n        \"\"\"\n        Args:\n            path(str): path to result\n            tracker_names(list): name of tracker\n        \"\"\"\n        if not tracker_names:\n            tracker_names = [x.split('/')[-1] for x in glob(path)\n                    if os.path.isdir(x)]\n        if isinstance(tracker_names, str):\n            tracker_names = [tracker_names]\n        for name in tracker_names:\n            traj_file = os.path.join(path, name, self.name+'.txt')\n            if not os.path.exists(traj_file):\n                if self.name == 'FleetFace':\n                    txt_name = 'fleetface.txt'\n                elif self.name == 'Jogging-1':\n                    txt_name = 'jogging_1.txt'\n                elif self.name == 'Jogging-2':\n                    txt_name = 'jogging_2.txt'\n                elif self.name == 'Skating2-1':\n                    txt_name = 'skating2_1.txt'\n                elif self.name == 'Skating2-2':\n                    txt_name = 'skating2_2.txt'\n                elif self.name == 'FaceOcc1':\n                    txt_name = 'faceocc1.txt'\n                elif self.name == 'FaceOcc2':\n                    txt_name = 'faceocc2.txt'\n                elif self.name == 'Human4-2':\n                    txt_name = 'human4_2.txt'\n                else:\n                    txt_name = self.name[0].lower()+self.name[1:]+'.txt'\n                traj_file = os.path.join(path, name, txt_name)\n            if os.path.exists(traj_file):\n                with open(traj_file, 'r') as f:\n                    pred_traj = [list(map(float, x.strip().split(',')))\n                            for x in f.readlines()]\n                    if len(pred_traj) != len(self.gt_traj):\n                        print(name, len(pred_traj), len(self.gt_traj), self.name)\n                    if store:\n                        self.pred_trajs[name] = pred_traj\n                    else:\n                        return pred_traj\n            else:\n                print(\"File does not exist:\", traj_file)\n        self.tracker_names = list(self.pred_trajs.keys())\n\n\n\nclass OTBDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'OTB100', 'CVPR13', 'OTB50'\n        dataset_root: dataset root\n        load_img: whether to load all imgs\n    \"\"\"\n    def __init__(self, name, dataset_root, load_img=False):\n        super(OTBDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = OTBVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          meta_data[video]['attr'],\n                                          load_img)\n\n        # set attr\n        attr = []\n        for x in self.videos.values():\n            attr += x.attr\n        attr = set(attr)\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n        for x in attr:\n            self.attr[x] = []\n        for k, v in self.videos.items():\n            for attr_ in v.attr:\n                self.attr[attr_].append(k)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/trackingnet.py",
"content": "import json\nimport os\nimport numpy as np\n\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\nclass TrackingNetVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        super(TrackingNetVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n\n    # def load_tracker(self, path, tracker_names=None):\n    #     \"\"\"\n    #     Args:\n    #         path(str): path to result\n    #         tracker_name(list): name of tracker\n    #     \"\"\"\n    #     if not tracker_names:\n    #         tracker_names = [x.split('/')[-1] for x in glob(path)\n    #                 if os.path.isdir(x)]\n    #     if isinstance(tracker_names, str):\n    #         tracker_names = [tracker_names]\n    #     # self.pred_trajs = {}\n    #     for name in tracker_names:\n    #         traj_file = os.path.join(path, name, self.name+'.txt')\n    #         if os.path.exists(traj_file):\n    #             with open(traj_file, 'r') as f :\n    #                 self.pred_trajs[name] = [list(map(float, x.strip().split(',')))\n    #                         for x in f.readlines()]\n    #             if len(self.pred_trajs[name]) != len(self.gt_traj):\n    #                 print(name, len(self.pred_trajs[name]), len(self.gt_traj), self.name)\n    #         else:\n\n    #     self.tracker_names = list(self.pred_trajs.keys())\n\nclass TrackingNetDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'TrackingNet'\n        dataset_root: dataset root dir\n    \"\"\"\n    def __init__(self, name, 
dataset_root, load_img=False):\n        super(TrackingNetDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = TrackingNetVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          None)\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/uav.py",
"content": "import os\nimport json\n\nfrom tqdm import tqdm\nfrom glob import glob\n\nfrom .dataset import Dataset\nfrom .video import Video\n\nclass UAVVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        attr: attribute of video\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        super(UAVVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, attr, load_img)\n\n\nclass UAVDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'UAV123', 'UAV20L'\n        dataset_root: dataset root\n        load_img: whether to load all imgs\n    \"\"\"\n    def __init__(self, name, dataset_root, load_img=False):\n        super(UAVDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = UAVVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          meta_data[video]['attr'])\n\n        # set attr\n        attr = []\n        for x in self.videos.values():\n            attr += x.attr\n        attr = set(attr)\n        self.attr = {}\n        self.attr['ALL'] = list(self.videos.keys())\n        for x in attr:\n            self.attr[x] = []\n        for k, v in self.videos.items():\n            for attr_ in v.attr:\n                self.attr[attr_].append(k)\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/video.py",
"content": "import os\nimport cv2\nimport re\nimport numpy as np\nimport json\n\nfrom glob import glob\n\nclass Video(object):\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, attr, load_img=False):\n        self.name = name\n        self.video_dir = video_dir\n        self.init_rect = init_rect\n        self.gt_traj = gt_rect\n        self.attr = attr\n        self.pred_trajs = {}\n        self.img_names = [os.path.join(root, x) for x in img_names]\n        self.imgs = None\n        # default to the full sequence; show() slices with these bounds\n        self.start_frame = 0\n        self.end_frame = len(self.img_names) - 1\n\n        if load_img:\n            self.imgs = [cv2.imread(x)\n                            for x in self.img_names]\n            self.width = self.imgs[0].shape[1]\n            self.height = self.imgs[0].shape[0]\n        else:\n            img = cv2.imread(self.img_names[0])\n            assert img is not None, self.img_names[0]\n            self.width = img.shape[1]\n            self.height = img.shape[0]\n\n    def load_tracker(self, path, tracker_names=None, store=True):\n        \"\"\"\n        Args:\n            path(str): path to result\n            tracker_names(list): name of tracker\n        \"\"\"\n        if not tracker_names:\n            tracker_names = [x.split('/')[-1] for x in glob(path)\n                    if os.path.isdir(x)]\n        if isinstance(tracker_names, str):\n            tracker_names = [tracker_names]\n        for name in tracker_names:\n            traj_file = os.path.join(path, name, self.name+'.txt')\n            if os.path.exists(traj_file):\n                with open(traj_file, 'r') as f:\n                    pred_traj = [list(map(float, x.strip().split(',')))\n                            for x in f.readlines()]\n                if len(pred_traj) != len(self.gt_traj):\n                    print(name, len(pred_traj), len(self.gt_traj), self.name)\n                if store:\n                    self.pred_trajs[name] = pred_traj\n                else:\n                    return pred_traj\n            else:\n                print(\"File does not exist:\", traj_file)\n        self.tracker_names = list(self.pred_trajs.keys())\n\n    def load_img(self):\n        if self.imgs is None:\n            self.imgs = [cv2.imread(x)\n                            for x in self.img_names]\n            self.width = self.imgs[0].shape[1]\n            self.height = self.imgs[0].shape[0]\n\n    def free_img(self):\n        self.imgs = None\n\n    def __len__(self):\n        return len(self.img_names)\n\n    def __getitem__(self, idx):\n        if self.imgs is None:\n            return cv2.imread(self.img_names[idx]), \\\n                    self.gt_traj[idx]\n        else:\n            return self.imgs[idx], self.gt_traj[idx]\n\n    def __iter__(self):\n        for i in range(len(self.img_names)):\n            if self.imgs is not None:\n                yield self.imgs[i], self.gt_traj[i]\n            else:\n                yield cv2.imread(self.img_names[i]), \\\n                        self.gt_traj[i]\n\n    def draw_box(self, roi, img, linewidth, color, name=None):\n        \"\"\"\n            roi: rectangle or polygon\n            img: numpy array img\n            linewidth: line width of the bbox\n        \"\"\"\n        if len(roi) > 6 and len(roi) % 2 == 0:\n            pts = np.array(roi, np.int32).reshape(-1, 1, 2)\n            color = tuple(map(int, color))\n            img = cv2.polylines(img, [pts], True, color, linewidth)\n            pt = (pts[0, 0, 0], pts[0, 0, 1]-5)\n            if name:\n                img = cv2.putText(img, name, pt, cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, color, 1)\n        elif len(roi) == 4:\n            if not np.isnan(roi[0]):\n                roi = list(map(int, roi))\n                color = tuple(map(int, color))\n                img = cv2.rectangle(img, (roi[0], roi[1]), (roi[0]+roi[2], roi[1]+roi[3]),\n                         color, linewidth)\n                if name:\n                    img = cv2.putText(img, name, (roi[0], roi[1]-5), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, color, 1)\n        return img\n\n    def show(self, pred_trajs={}, linewidth=2, show_name=False):\n        \"\"\"\n            pred_trajs: dict of pred_traj, {'tracker_name': list of traj}\n                        pred_traj should contain polygon or rectangle(x, y, width, height)\n            linewidth: line width of the bbox\n        \"\"\"\n        assert self.imgs is not None\n        video = []\n        cv2.namedWindow(self.name, cv2.WINDOW_NORMAL)\n        colors = {}\n        if len(pred_trajs) == 0 and len(self.pred_trajs) > 0:\n            pred_trajs = self.pred_trajs\n        for i, (roi, img) in enumerate(zip(self.gt_traj,\n                self.imgs[self.start_frame:self.end_frame+1])):\n            img = img.copy()\n            if len(img.shape) == 2:\n                img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)\n            else:\n                img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)\n            img = self.draw_box(roi, img, linewidth, (0, 255, 0),\n                    'gt' if show_name else None)\n            for name, trajs in pred_trajs.items():\n                if name not in colors:\n                    color = tuple(np.random.randint(0, 256, 3))\n                    colors[name] = color\n                else:\n                    color = colors[name]\n                img = self.draw_box(trajs[0][i], img, linewidth, color,\n                        name if show_name else None)\n            cv2.putText(img, str(i+self.start_frame), (5, 20),\n                    cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (255, 255, 0), 2)\n            cv2.imshow(self.name, img)\n            cv2.waitKey(40)\n            video.append(img.copy())\n        return video\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/datasets/vot.py",
    "content": "import os\nimport cv2\nimport json\nimport numpy as np\n\nfrom glob import glob\nfrom tqdm import tqdm\nfrom PIL import Image\n\nfrom .dataset import Dataset\nfrom .video import Video\n\nclass VOTVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n        camera_motion: camera motion tag\n        illum_change: illum change tag\n        motion_change: motion change tag\n        size_change: size change\n        occlusion: occlusion\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names, gt_rect,\n            camera_motion, illum_change, motion_change, size_change, occlusion, load_img=False):\n        super(VOTVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, None, load_img)\n        self.tags= {'all': [1] * len(gt_rect)}\n        self.tags['camera_motion'] = camera_motion\n        self.tags['illum_change'] = illum_change\n        self.tags['motion_change'] = motion_change\n        self.tags['size_change'] = size_change\n        self.tags['occlusion'] = occlusion\n\n        # TODO\n        # if len(self.gt_traj[0]) == 4:\n        #     self.gt_traj = [[x[0], x[1], x[0], x[1]+x[3]-1,\n        #                     x[0]+x[2]-1, x[1]+x[3]-1, x[0]+x[2]-1, x[1]]\n        #                         for x in self.gt_traj]\n\n        # empty tag\n        all_tag = [v for k, v in self.tags.items() if len(v) > 0 ]\n        self.tags['empty'] = np.all(1 - np.array(all_tag), axis=1).astype(np.int32).tolist()\n        # self.tags['empty'] = np.all(1 - np.array(list(self.tags.values())),\n        #         axis=1).astype(np.int32).tolist()\n\n        self.tag_names = list(self.tags.keys())\n        if not load_img:\n            #img_name = os.path.join(root, self.img_names[0])\n            img_name = self.img_names[0] # 
zzp\n            img = np.array(Image.open(img_name), np.uint8)\n            self.width = img.shape[1]\n            self.height = img.shape[0]\n\n    def select_tag(self, tag, start=0, end=0):\n        if tag == 'empty':\n            return self.tags[tag]\n        return self.tags[tag][start:end]\n\n    def load_tracker(self, path, tracker_names=None, store=True):\n        \"\"\"\n        Args:\n            path(str): path to result\n            tracker_name(list): name of tracker\n        \"\"\"\n        if not tracker_names:\n            tracker_names = [x.split('/')[-1] for x in glob(path)\n                    if os.path.isdir(x)]\n        if isinstance(tracker_names, str):\n            tracker_names = [tracker_names]\n        for name in tracker_names:\n            traj_files = glob(os.path.join(path, name, 'baseline', self.name, '*0*.txt'))\n            if len(traj_files) == 15:\n                traj_files = traj_files\n            else:\n                traj_files = traj_files[0:1]\n            pred_traj = []\n            for traj_file in traj_files:\n                with open(traj_file, 'r') as f:\n                    traj = [list(map(float, x.strip().split(',')))\n                            for x in f.readlines()]\n                    pred_traj.append(traj)\n            if store:\n                self.pred_trajs[name] = pred_traj\n            else:\n                return pred_traj\n\nclass VOTDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, should be 'VOT2018', 'VOT2016'\n        dataset_root: dataset root\n        load_img: wether to load all imgs\n    \"\"\"\n    def __init__(self, name, dataset_root, load_img=False):\n        super(VOTDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in 
pbar:\n            pbar.set_postfix_str(video)\n\n            self.videos[video] = VOTVideo(video,\n                                          os.path.join(dataset_root, name),\n                                          meta_data[video]['video_dir'],\n                                          meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'],\n                                          meta_data[video]['camera_motion'],\n                                          meta_data[video]['illum_change'],\n                                          meta_data[video]['motion_change'],\n                                          meta_data[video]['size_change'],\n                                          meta_data[video]['occlusion'],\n                                          load_img=load_img)\n\n        self.tags = ['all', 'camera_motion', 'illum_change', 'motion_change',\n                     'size_change', 'occlusion', 'empty']\n\n\nclass VOTLTVideo(Video):\n    \"\"\"\n    Args:\n        name: video name\n        root: dataset root\n        video_dir: video directory\n        init_rect: init rectangle\n        img_names: image names\n        gt_rect: groundtruth rectangle\n    \"\"\"\n    def __init__(self, name, root, video_dir, init_rect, img_names,\n            gt_rect, load_img=False):\n        super(VOTLTVideo, self).__init__(name, root, video_dir,\n                init_rect, img_names, gt_rect, None, load_img)\n        self.gt_traj = [[0] if np.isnan(bbox[0]) else bbox\n                for bbox in self.gt_traj]\n        if not load_img:\n            img_name = os.path.join(root, self.img_names[0])\n            img = np.array(Image.open(img_name), np.uint8)\n            self.width = img.shape[1]\n            self.height = img.shape[0]\n        self.confidence = {}\n\n    def load_tracker(self, path, tracker_names=None, store=True):\n        
\"\"\"\n        Args:\n            path(str): path to result\n            tracker_name(list): name of tracker\n        \"\"\"\n        if not tracker_names:\n            tracker_names = [x.split('/')[-1] for x in glob(path)\n                    if os.path.isdir(x)]\n        if isinstance(tracker_names, str):\n            tracker_names = [tracker_names]\n        for name in tracker_names:\n            traj_file = os.path.join(path, name, 'longterm',\n                    self.name, self.name+'_001.txt')\n            with open(traj_file, 'r') as f:\n                traj = [list(map(float, x.strip().split(',')))\n                        for x in f.readlines()]\n            if store:\n                self.pred_trajs[name] = traj\n            confidence_file = os.path.join(path, name, 'longterm',\n                    self.name, self.name+'_001_confidence.value')\n            with open(confidence_file, 'r') as f:\n                score = [float(x.strip()) for x in f.readlines()[1:]]\n                score.insert(0, float('nan'))\n            if store:\n                self.confidence[name] = score\n        return traj, score\n\nclass VOTLTDataset(Dataset):\n    \"\"\"\n    Args:\n        name: dataset name, 'VOT2018-LT'\n        dataset_root: dataset root\n        load_img: wether to load all imgs\n    \"\"\"\n    def __init__(self, name, dataset_root, load_img=False):\n        super(VOTLTDataset, self).__init__(name, dataset_root)\n        with open(os.path.join(dataset_root, name+'.json'), 'r') as f:\n            meta_data = json.load(f)\n\n        # load videos\n        pbar = tqdm(meta_data.keys(), desc='loading '+name, ncols=100)\n        self.videos = {}\n        for video in pbar:\n            pbar.set_postfix_str(video)\n            self.videos[video] = VOTLTVideo(video,\n                                          dataset_root,\n                                          meta_data[video]['video_dir'],\n                                          
meta_data[video]['init_rect'],\n                                          meta_data[video]['img_names'],\n                                          meta_data[video]['gt_rect'])\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/evaluation/__init__.py",
    "content": "from .ar_benchmark import AccuracyRobustnessBenchmark\nfrom .eao_benchmark import EAOBenchmark\nfrom .ope_benchmark import OPEBenchmark\nfrom .f1_benchmark import F1Benchmark\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/evaluation/ar_benchmark.py",
    "content": "\"\"\"\n    @author\n\"\"\"\n\nimport warnings\nimport itertools\nimport numpy as np\n\nfrom colorama import Style, Fore\nfrom ..utils import calculate_failures, calculate_accuracy\n\nclass AccuracyRobustnessBenchmark:\n    \"\"\"\n    Args:\n        dataset:\n        burnin:\n    \"\"\"\n    def __init__(self, dataset, burnin=10):\n        self.dataset = dataset\n        self.burnin = burnin\n\n    def eval(self, eval_trackers=None):\n        \"\"\"\n        Args:\n            eval_tags: list of tag\n            eval_trackers: list of tracker name\n        Returns:\n            ret: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n        result = {}\n        for tracker_name in eval_trackers:\n            accuracy, failures = self._calculate_accuracy_robustness(tracker_name)\n            result[tracker_name] = {'overlaps': accuracy,\n                                    'failures': failures}\n        return result\n\n    def show_result(self, result, eao_result=None, show_video_level=False, helight_threshold=0.5):\n        \"\"\"pretty print result\n        Args:\n            result: returned dict from function eval\n        \"\"\"\n        tracker_name_len = max((max([len(x) for x in result.keys()])+2), 12)\n        if eao_result is not None:\n            header = \"|{:^\"+str(tracker_name_len)+\"}|{:^10}|{:^12}|{:^13}|{:^7}|\"\n            header = header.format('Tracker Name',\n                    'Accuracy', 'Robustness', 'Lost Number', 'EAO')\n            formatter = \"|{:^\"+str(tracker_name_len)+\"}|{:^10.3f}|{:^12.3f}|{:^13.1f}|{:^7.3f}|\"\n        else:\n            header = \"|{:^\"+str(tracker_name_len)+\"}|{:^10}|{:^12}|{:^13}|\"\n            header = header.format('Tracker Name',\n                    'Accuracy', 'Robustness', 'Lost Number')\n            formatter = 
\"|{:^\"+str(tracker_name_len)+\"}|{:^10.3f}|{:^12.3f}|{:^13.1f}|\"\n        bar = '-'*len(header)\n        print(bar)\n        print(header)\n        print(bar)\n        if eao_result is not None:\n            tracker_eao = sorted(eao_result.items(),\n                                 key=lambda x:x[1]['all'],\n                                 reverse=True)[:20]\n            tracker_names = [x[0] for x in tracker_eao]\n        else:\n            tracker_names = list(result.keys())\n        for tracker_name in tracker_names:\n        # for tracker_name, ret in result.items():\n            ret = result[tracker_name]\n            overlaps = list(itertools.chain(*ret['overlaps'].values()))\n            accuracy = np.nanmean(overlaps)\n            length = sum([len(x) for x in ret['overlaps'].values()])\n            failures = list(ret['failures'].values())\n            lost_number = np.mean(np.sum(failures, axis=0))\n            robustness = np.mean(np.sum(np.array(failures), axis=0) / length) * 100\n            if eao_result is None:\n                print(formatter.format(tracker_name, accuracy, robustness, lost_number))\n            else:\n                print(formatter.format(tracker_name, accuracy, robustness, lost_number, eao_result[tracker_name]['all']))\n        print(bar)\n\n        if show_video_level and len(result) < 10:\n            print('\\n\\n')\n            header1 = \"|{:^14}|\".format(\"Tracker name\")\n            header2 = \"|{:^14}|\".format(\"Video name\")\n            for tracker_name in result.keys():\n                header1 += (\"{:^17}|\").format(tracker_name)\n                header2 += \"{:^8}|{:^8}|\".format(\"Acc\", \"LN\")\n            print('-'*len(header1))\n            print(header1)\n            print('-'*len(header1))\n            print(header2)\n            print('-'*len(header1))\n            videos = list(result[tracker_name]['overlaps'].keys())\n            for video in videos:\n                row = 
\"|{:^14}|\".format(video)\n                for tracker_name in result.keys():\n                    overlaps = result[tracker_name]['overlaps'][video]\n                    accuracy = np.nanmean(overlaps)\n                    failures = result[tracker_name]['failures'][video]\n                    lost_number = np.mean(failures)\n\n                    accuracy_str = \"{:^8.3f}\".format(accuracy)\n                    if accuracy < helight_threshold:\n                        row += f'{Fore.RED}{accuracy_str}{Style.RESET_ALL}|'\n                    else:\n                        row += accuracy_str+'|'\n                    lost_num_str = \"{:^8.3f}\".format(lost_number)\n                    if lost_number > 0:\n                        row += f'{Fore.RED}{lost_num_str}{Style.RESET_ALL}|'\n                    else:\n                        row += lost_num_str+'|'\n                print(row)\n            print('-'*len(header1))\n\n    def _calculate_accuracy_robustness(self, tracker_name):\n        overlaps = {}\n        failures = {}\n        all_length = {}\n        for i in range(len(self.dataset)):\n            video = self.dataset[i]\n            gt_traj = video.gt_traj\n            if tracker_name not in video.pred_trajs:\n                tracker_trajs = video.load_tracker(self.dataset.tracker_path, tracker_name, False)\n            else:\n                tracker_trajs = video.pred_trajs[tracker_name]\n            overlaps_group = []\n            num_failures_group = []\n            for tracker_traj in tracker_trajs:\n                num_failures = calculate_failures(tracker_traj)[0]\n                overlaps_ = calculate_accuracy(tracker_traj, gt_traj,\n                        burnin=10, bound=(video.width, video.height))[1]\n                overlaps_group.append(overlaps_)\n                num_failures_group.append(num_failures)\n            with warnings.catch_warnings():\n                warnings.simplefilter(\"ignore\", category=RuntimeWarning)\n                
overlaps[video.name] = np.nanmean(overlaps_group, axis=0).tolist()\n                failures[video.name] = num_failures_group\n        return overlaps, failures\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/evaluation/eao_benchmark.py",
    "content": "import os\nimport time\nimport numpy as np\n\nfrom glob import glob\n\nfrom ..utils import calculate_failures, calculate_accuracy, calculate_expected_overlap\n\nclass EAOBenchmark:\n    \"\"\"\n    Args:\n        dataset:\n    \"\"\"\n    def __init__(self, dataset, skipping=5, tags=['all']):\n        self.dataset = dataset\n        self.skipping = skipping\n        self.tags = tags\n        # NOTE we not use gmm to generate low, high, peak value\n        if dataset.name == 'VOT2018' or dataset.name == 'VOT2017':\n            self.low = 100\n            self.high = 356\n            self.peak =  160\n        elif dataset.name == 'VOT2016':\n            self.low = 108 #TODO\n            self.high = 371\n            self.peak = 168\n        elif dataset.name == 'VOT2019':\n            self.low = 46\n            self.high = 291\n            self.peak = 128\n\n    def eval(self, eval_trackers=None):\n        \"\"\"\n        Args:\n            eval_tags: list of tag\n            eval_trackers: list of tracker name\n        Returns:\n            eao: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n        ret = {}\n        for tracker_name in eval_trackers:\n            eao = self._calculate_eao(tracker_name, self.tags)\n            ret[tracker_name] = eao\n        return ret\n\n    def show_result(self, result, topk=10):\n        \"\"\"pretty print result\n        Args:\n            result: returned dict from function eval\n        \"\"\"\n        if len(self.tags) == 1:\n            tracker_name_len = max((max([len(x) for x in result.keys()])+2), 12)\n            header = (\"|{:^\"+str(tracker_name_len)+\"}|{:^10}|\").format('Tracker Name', 'EAO')\n            bar = '-'*len(header)\n            formatter = \"|{:^20}|{:^10.3f}|\"\n            print(bar)\n            print(header)\n           
 print(bar)\n            tracker_eao = sorted(result.items(), \n                                 key=lambda x: x[1]['all'], \n                                 reverse=True)[:topk]\n            for tracker_name, eao in tracker_eao:\n            # for tracker_name, ret in result.items():\n                print(formatter.format(tracker_name, eao))\n            print(bar)\n        else:\n            header = \"|{:^20}|\".format('Tracker Name')\n            header += \"{:^7}|{:^15}|{:^14}|{:^15}|{:^13}|{:^11}|{:^7}|\".format(*self.tags)\n            bar = '-'*len(header)\n            formatter = \"{:^7.3f}|{:^15.3f}|{:^14.3f}|{:^15.3f}|{:^13.3f}|{:^11.3f}|{:^7.3f}|\"\n            print(bar)\n            print(header)\n            print(bar)\n            sorted_tacker = sorted(result.items(), \n                                   key=lambda x: x[1]['all'],\n                                   reverse=True)[:topk]\n            sorted_tacker = [x[0] for x in sorted_tacker]\n            for tracker_name in sorted_tacker:\n            # for tracker_name, ret in result.items():\n                print(\"|{:^20}|\".format(tracker_name)+formatter.format(\n                    *[result[tracker_name][x] for x in self.tags]))\n            print(bar)\n\n    def _calculate_eao(self, tracker_name, tags):\n        all_overlaps = []\n        all_failures = []\n        video_names = []\n        gt_traj_length = []\n        # for i in range(len(self.dataset)):\n        for video in self.dataset:\n            # video = self.dataset[i]\n            gt_traj = video.gt_traj\n            if tracker_name not in video.pred_trajs:\n                tracker_trajs = video.load_tracker(self.dataset.tracker_path, tracker_name, False)\n            else:\n                tracker_trajs = video.pred_trajs[tracker_name]\n            for tracker_traj in tracker_trajs:\n                gt_traj_length.append(len(gt_traj))\n                video_names.append(video.name)\n                overlaps = 
calculate_accuracy(tracker_traj, gt_traj, bound=(video.width-1, video.height-1))[1]\n                failures = calculate_failures(tracker_traj)[1]\n                all_overlaps.append(overlaps)\n                all_failures.append(failures)\n        fragment_num = sum([len(x)+1 for x in all_failures])\n        max_len = max([len(x) for x in all_overlaps])\n        seq_weight = 1 / len(tracker_trajs)\n\n        eao = {}\n        for tag in tags:\n            # prepare segments\n            fweights = np.ones((fragment_num)) * np.nan\n            fragments = np.ones((fragment_num, max_len)) * np.nan\n            seg_counter = 0\n            for name, traj_len, failures, overlaps in zip(video_names, gt_traj_length,\n                    all_failures, all_overlaps):\n                if len(failures) > 0:\n                    points = [x+self.skipping for x in failures if\n                            x+self.skipping <= len(overlaps)]\n                    points.insert(0, 0)\n                    for i in range(len(points)):\n                        if i != len(points) - 1:\n                            fragment = np.array(overlaps[points[i]:points[i+1]+1])\n                            fragments[seg_counter, :] = 0\n                        else:\n                            fragment = np.array(overlaps[points[i]:])\n                        fragment[np.isnan(fragment)] = 0\n                        fragments[seg_counter, :len(fragment)] = fragment\n                        if i != len(points) - 1:\n                            # tag_value = self.dataset[name].tags[tag][points[i]:points[i+1]+1]\n                            tag_value = self.dataset[name].select_tag(tag, points[i], points[i+1]+1)\n                            w = sum(tag_value) / (points[i+1] - points[i]+1)\n                            fweights[seg_counter] = seq_weight * w\n                        else:\n                            # tag_value = self.dataset[name].tags[tag][points[i]:len(overlaps)]\n              
              tag_value = self.dataset[name].select_tag(tag, points[i], len(overlaps))\n                            w = sum(tag_value) / (traj_len - points[i]+1e-16)\n                            fweights[seg_counter] = seq_weight * w# (len(fragment) / (traj_len-points[i]))\n                        seg_counter += 1\n                else:\n                    # no failure\n                    max_idx = min(len(overlaps), max_len)\n                    fragments[seg_counter, :max_idx] = overlaps[:max_idx]\n                    # tag_value = self.dataset[name].tags[tag][:max_idx]\n                    tag_value = self.dataset[name].select_tag(tag, 0, max_idx)\n                    w = sum(tag_value) / max_idx\n                    fweights[seg_counter] = seq_weight * w\n                    seg_counter += 1\n\n            expected_overlaps = calculate_expected_overlap(fragments, fweights)\n            # caculate eao\n            weight = np.zeros((len(expected_overlaps)))\n            weight[self.low-1:self.high-1+1] = 1\n            is_valid = np.logical_not(np.isnan(expected_overlaps))\n            eao_ = np.sum(expected_overlaps[is_valid] * weight[is_valid]) / np.sum(weight[is_valid])\n            eao[tag] = eao_\n        return eao\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/evaluation/f1_benchmark.py",
    "content": "import os\nimport numpy as np\n\nfrom glob import glob\nfrom tqdm import tqdm\nfrom colorama import Style, Fore\n\nfrom ..utils import determine_thresholds, calculate_accuracy, calculate_f1\n\nclass F1Benchmark:\n    def __init__(self, dataset):\n        \"\"\"\n        Args:\n            result_path:\n        \"\"\"\n        self.dataset = dataset\n\n    def eval(self, eval_trackers=None):\n        \"\"\"\n        Args:\n            eval_tags: list of tag\n            eval_trackers: list of tracker name\n        Returns:\n            eao: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n        ret = {}\n        for tracker_name in eval_trackers:\n            precision, recall, f1 = self._cal_precision_reall(tracker_name)\n            ret[tracker_name] = {\"precision\": precision,\n                                 \"recall\": recall,\n                                 \"f1\": f1\n                                }\n        return ret\n\n    def _cal_precision_reall(self, tracker_name):\n        score = []\n        # for i in range(len(self.dataset)):\n        #     video = self.dataset[i]\n        for video in self.dataset:\n            if tracker_name not in video.confidence:\n                score += video.load_tracker(self.dataset.tracker_path, tracker_name, False)[1]\n            else:\n                score += video.confidence[tracker_name]\n        score = np.array(score)\n        thresholds = determine_thresholds(score)[::-1]\n\n        precision = {}\n        recall = {}\n        f1 = {}\n        for i in range(len(self.dataset)):\n            video = self.dataset[i]\n            gt_traj = video.gt_traj\n            N = sum([1 for x in gt_traj if len(x) > 1])\n            if tracker_name not in video.pred_trajs:\n                tracker_traj, score = 
video.load_tracker(self.dataset.tracker_path, tracker_name, False)\n            else:\n                tracker_traj = video.pred_trajs[tracker_name]\n                score = video.confidence[tracker_name]\n            overlaps = calculate_accuracy(tracker_traj, gt_traj, \\\n                    bound=(video.width,video.height))[1]\n            f1[video.name], precision[video.name], recall[video.name] = \\\n                    calculate_f1(overlaps, score, (video.width,video.height),thresholds, N)\n        return precision, recall, f1\n\n    def show_result(self, result, show_video_level=False, helight_threshold=0.5):\n        \"\"\"pretty print result\n        Args:\n            result: returned dict from function eval\n        \"\"\"\n        # sort tracker according to f1\n        sorted_tracker = {}\n        for tracker_name, ret in result.items():\n            precision = np.mean(list(ret['precision'].values()), axis=0)\n            recall = np.mean(list(ret['recall'].values()), axis=0)\n            f1 = 2 * precision * recall / (precision + recall)\n            max_idx = np.argmax(f1)\n            sorted_tracker[tracker_name] = (precision[max_idx], recall[max_idx],\n                    f1[max_idx])\n        sorted_tracker_ = sorted(sorted_tracker.items(),\n                                 key=lambda x:x[1][2],\n                                 reverse=True)[:20]\n        tracker_names = [x[0] for x in sorted_tracker_]\n\n        tracker_name_len = max((max([len(x) for x in result.keys()])+2), 12)\n        header = \"|{:^\"+str(tracker_name_len)+\"}|{:^11}|{:^8}|{:^7}|\"\n        header = header.format('Tracker Name',\n                'Precision', 'Recall', 'F1')\n        bar = '-' * len(header)\n        formatter = \"|{:^\"+str(tracker_name_len)+\"}|{:^11.3f}|{:^8.3f}|{:^7.3f}|\"\n        print(bar)\n        print(header)\n        print(bar)\n        # for tracker_name, ret in result.items():\n        #     precision = np.mean(list(ret['precision'].values()), 
axis=0)\n        #     recall = np.mean(list(ret['recall'].values()), axis=0)\n        #     f1 = 2 * precision * recall / (precision + recall)\n        #     max_idx = np.argmax(f1)\n        for tracker_name in tracker_names:\n            precision = sorted_tracker[tracker_name][0]\n            recall = sorted_tracker[tracker_name][1]\n            f1 = sorted_tracker[tracker_name][2]\n            print(formatter.format(tracker_name, precision, recall, f1))\n        print(bar)\n\n        if show_video_level and len(result) < 10:\n            print('\\n\\n')\n            header1 = \"|{:^14}|\".format(\"Tracker name\")\n            header2 = \"|{:^14}|\".format(\"Video name\")\n            for tracker_name in result.keys():\n                # col_len = max(20, len(tracker_name))\n                header1 += (\"{:^28}|\").format(tracker_name)\n                header2 += \"{:^11}|{:^8}|{:^7}|\".format(\"Precision\", \"Recall\", \"F1\")\n            print('-'*len(header1))\n            print(header1)\n            print('-'*len(header1))\n            print(header2)\n            print('-'*len(header1))\n            videos = list(result[tracker_name]['precision'].keys())\n            for video in videos:\n                row = \"|{:^14}|\".format(video)\n                for tracker_name in result.keys():\n                    precision = result[tracker_name]['precision'][video]\n                    recall = result[tracker_name]['recall'][video]\n                    f1 = result[tracker_name]['f1'][video]\n                    max_idx = np.argmax(f1)\n                    precision_str = \"{:^11.3f}\".format(precision[max_idx])\n                    if precision[max_idx] < helight_threshold:\n                        row += f'{Fore.RED}{precision_str}{Style.RESET_ALL}|'\n                    else:\n                        row += precision_str+'|'\n                    recall_str = \"{:^8.3f}\".format(recall[max_idx])\n                    if recall[max_idx] < helight_threshold:\n     
                   row += f'{Fore.RED}{recall_str}{Style.RESET_ALL}|'\n                    else:\n                        row += recall_str+'|'\n                    f1_str = \"{:^7.3f}\".format(f1[max_idx])\n                    if f1[max_idx] < helight_threshold:\n                        row += f'{Fore.RED}{f1_str}{Style.RESET_ALL}|'\n                    else:\n                        row += f1_str+'|'\n                print(row)\n            print('-'*len(header1))\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/evaluation/ope_benchmark.py",
    "content": "import numpy as np\n\nfrom colorama import Style, Fore\n\nfrom ..utils import overlap_ratio, success_overlap, success_error\n\nclass OPEBenchmark:\n    \"\"\"\n    Args:\n        result_path: result path of your tracker\n                should the same format like VOT\n    \"\"\"\n    def __init__(self, dataset):\n        self.dataset = dataset\n\n    def convert_bb_to_center(self, bboxes):\n        return np.array([(bboxes[:, 0] + (bboxes[:, 2] - 1) / 2),\n                         (bboxes[:, 1] + (bboxes[:, 3] - 1) / 2)]).T\n\n    def convert_bb_to_norm_center(self, bboxes, gt_wh):\n        return self.convert_bb_to_center(bboxes) / (gt_wh+1e-16)\n\n    def eval_success(self, eval_trackers=None):\n        \"\"\"\n        Args: \n            eval_trackers: list of tracker name or single tracker name\n        Return:\n            res: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n        success_ret = {}\n        for tracker_name in eval_trackers:\n            success_ret_ = {}\n            for video in self.dataset:\n                gt_traj = np.array(video.gt_traj)\n                if tracker_name not in video.pred_trajs:\n                    tracker_traj = video.load_tracker(self.dataset.tracker_path,\n                            tracker_name, False)\n                    tracker_traj = np.array(tracker_traj)\n                else:\n                    tracker_traj = np.array(video.pred_trajs[tracker_name])\n                n_frame = len(gt_traj)\n                if hasattr(video, 'absent'):\n                    gt_traj = gt_traj[video.absent == 1]\n                    tracker_traj = tracker_traj[video.absent == 1]\n                success_ret_[video.name] = success_overlap(gt_traj, tracker_traj, n_frame)\n            success_ret[tracker_name] = success_ret_\n        return 
success_ret\n\n    def eval_precision(self, eval_trackers=None):\n        \"\"\"\n        Args:\n            eval_trackers: list of tracker name or single tracker name\n        Return:\n            res: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n        precision_ret = {}\n        for tracker_name in eval_trackers:\n            precision_ret_ = {}\n            for video in self.dataset:\n                gt_traj = np.array(video.gt_traj)\n                if tracker_name not in video.pred_trajs:\n                    tracker_traj = video.load_tracker(self.dataset.tracker_path,\n                            tracker_name, False)\n                    tracker_traj = np.array(tracker_traj)\n                else:\n                    tracker_traj = np.array(video.pred_trajs[tracker_name])\n                n_frame = len(gt_traj)\n                if hasattr(video, 'absent'):\n                    gt_traj = gt_traj[video.absent == 1]\n                    tracker_traj = tracker_traj[video.absent == 1]\n                gt_center = self.convert_bb_to_center(gt_traj)\n                tracker_center = self.convert_bb_to_center(tracker_traj)\n                thresholds = np.arange(0, 51, 1)\n                precision_ret_[video.name] = success_error(gt_center, tracker_center,\n                        thresholds, n_frame)\n            precision_ret[tracker_name] = precision_ret_\n        return precision_ret\n\n    def eval_norm_precision(self, eval_trackers=None):\n        \"\"\"\n        Args:\n            eval_trackers: list of tracker name or single tracker name\n        Return:\n            res: dict of results\n        \"\"\"\n        if eval_trackers is None:\n            eval_trackers = self.dataset.tracker_names\n        if isinstance(eval_trackers, str):\n            eval_trackers = [eval_trackers]\n\n      
  norm_precision_ret = {}\n        for tracker_name in eval_trackers:\n            norm_precision_ret_ = {}\n            for video in self.dataset:\n                gt_traj = np.array(video.gt_traj)\n                if tracker_name not in video.pred_trajs:\n                    tracker_traj = video.load_tracker(self.dataset.tracker_path, \n                            tracker_name, False)\n                    tracker_traj = np.array(tracker_traj)\n                else:\n                    tracker_traj = np.array(video.pred_trajs[tracker_name])\n                n_frame = len(gt_traj)\n                if hasattr(video, 'absent'):\n                    gt_traj = gt_traj[video.absent == 1]\n                    tracker_traj = tracker_traj[video.absent == 1]\n                gt_center_norm = self.convert_bb_to_norm_center(gt_traj, gt_traj[:, 2:4])\n                tracker_center_norm = self.convert_bb_to_norm_center(tracker_traj, gt_traj[:, 2:4])\n                thresholds = np.arange(0, 51, 1) / 100\n                norm_precision_ret_[video.name] = success_error(gt_center_norm,\n                        tracker_center_norm, thresholds, n_frame)\n            norm_precision_ret[tracker_name] = norm_precision_ret_\n        return norm_precision_ret\n\n    def show_result(self, success_ret, precision_ret=None,\n            norm_precision_ret=None, show_video_level=False, helight_threshold=0.6):\n        \"\"\"pretty print result\n        Args:\n            result: returned dict from function eval\n        \"\"\"\n        # sort tracker\n        tracker_auc = {}\n        for tracker_name in success_ret.keys():\n            auc = np.mean(list(success_ret[tracker_name].values()))\n            tracker_auc[tracker_name] = auc\n        tracker_auc_ = sorted(tracker_auc.items(),\n                             key=lambda x:x[1],\n                             reverse=True)[:20]\n        tracker_names = [x[0] for x in tracker_auc_]\n\n\n        tracker_name_len = max((max([len(x) for x 
in success_ret.keys()])+2), 12)\n        header = (\"|{:^\"+str(tracker_name_len)+\"}|{:^9}|{:^16}|{:^11}|\").format(\n                \"Tracker name\", \"Success\", \"Norm Precision\", \"Precision\")\n        formatter = \"|{:^\"+str(tracker_name_len)+\"}|{:^9.3f}|{:^16.3f}|{:^11.3f}|\"\n        print('-'*len(header))\n        print(header)\n        print('-'*len(header))\n        for tracker_name in tracker_names:\n            # success = np.mean(list(success_ret[tracker_name].values()))\n            success = tracker_auc[tracker_name]\n            if precision_ret is not None:\n                precision = np.mean(list(precision_ret[tracker_name].values()), axis=0)[20]\n            else:\n                precision = 0\n            if norm_precision_ret is not None:\n                norm_precision = np.mean(list(norm_precision_ret[tracker_name].values()),\n                        axis=0)[20]\n            else:\n                norm_precision = 0\n            print(formatter.format(tracker_name, success, norm_precision, precision))\n        print('-'*len(header))\n\n        if show_video_level and len(success_ret) < 10 \\\n                and precision_ret is not None \\\n                and len(precision_ret) < 10:\n            print(\"\\n\\n\")\n            header1 = \"|{:^21}|\".format(\"Tracker name\")\n            header2 = \"|{:^21}|\".format(\"Video name\")\n            for tracker_name in success_ret.keys():\n                # col_len = max(20, len(tracker_name))\n                header1 += (\"{:^21}|\").format(tracker_name)\n                header2 += \"{:^9}|{:^11}|\".format(\"success\", \"precision\")\n            print('-'*len(header1))\n            print(header1)\n            print('-'*len(header1))\n            print(header2)\n            print('-'*len(header1))\n            videos = list(success_ret[tracker_name].keys())\n            for video in videos:\n                row = \"|{:^21}|\".format(video)\n                for tracker_name in 
success_ret.keys():\n                    success = np.mean(success_ret[tracker_name][video])\n                    precision = np.mean(precision_ret[tracker_name][video])\n                    success_str = \"{:^9.3f}\".format(success)\n                    if success < helight_threshold:\n                        row += f'{Fore.RED}{success_str}{Style.RESET_ALL}|'\n                    else:\n                        row += success_str+'|'\n                    precision_str = \"{:^11.3f}\".format(precision)\n                    if precision < helight_threshold:\n                        row += f'{Fore.RED}{precision_str}{Style.RESET_ALL}|'\n                    else:\n                        row += precision_str+'|'\n                print(row)\n            print('-'*len(header1))\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/__init__.py",
    "content": "from . import region\nfrom .statistics import *\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/c_region.pxd",
    "content": "cdef extern from \"src/region.h\":\n    ctypedef enum region_type \"RegionType\":\n        EMTPY\n        SPECIAL\n        RECTANGEL\n        POLYGON\n        MASK\n\n    ctypedef struct region_bounds:\n        float top\n        float bottom\n        float left\n        float right\n\n    ctypedef struct region_rectangle:\n        float x\n        float y\n        float width\n        float height\n\n    # ctypedef struct region_mask:\n    #     int x\n    #     int y\n    #     int width\n    #     int height\n    #     char *data\n\n    ctypedef struct region_polygon:\n        int count\n        float *x\n        float *y\n\n    ctypedef union region_container_data:\n        region_rectangle rectangle\n        region_polygon polygon\n        # region_mask mask\n        int special\n\n    ctypedef struct region_container:\n        region_type type\n        region_container_data data\n\n    # ctypedef struct region_overlap:\n    #     float overlap\n    #     float only1\n    #     float only2\n\n    # region_overlap region_compute_overlap(const region_container* ra, const region_container* rb, region_bounds bounds)\n\n    float compute_polygon_overlap(const region_polygon* p1, const region_polygon* p2, float *only1, float *only2, region_bounds bounds)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/misc.py",
    "content": "\nimport numpy as np\n\ndef determine_thresholds(confidence, resolution=100):\n    \"\"\"Choose thresholds according to confidence values.\n\n    Args:\n        confidence: list or numpy array of confidence scores\n        resolution: number of thresholds to choose\n\n    Returns:\n        thresholds: numpy array\n    \"\"\"\n    if isinstance(confidence, list):\n        confidence = np.array(confidence)\n    confidence = confidence.flatten()\n    confidence = confidence[~np.isnan(confidence)]\n    confidence.sort()\n\n    assert len(confidence) > resolution and resolution > 2\n\n    thresholds = np.ones((resolution))\n    thresholds[0] = - np.inf\n    thresholds[-1] = np.inf\n    delta = np.floor(len(confidence) / (resolution - 2))\n    idxs = np.linspace(delta, len(confidence)-delta, resolution-2, dtype=np.int32)\n    thresholds[1:-1] = confidence[idxs]\n    return thresholds\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/region.c",
    "content": "/* Generated by Cython 0.29.14 */\n\n/* BEGIN: Cython Metadata\n{\n    \"distutils\": {\n        \"depends\": [\n            \"src/region.h\"\n        ],\n        \"include_dirs\": [\n            \".\"\n        ],\n        \"name\": \"region\",\n        \"sources\": [\n            \"region.pyx\",\n            \"src/region.c\"\n        ]\n    },\n    \"module_name\": \"region\"\n}\nEND: Cython Metadata */\n\n#define PY_SSIZE_T_CLEAN\n#include \"Python.h\"\n#ifndef Py_PYTHON_H\n    #error Python headers needed to compile C extensions, please install development version of Python.\n#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)\n    #error Cython requires Python 2.6+ or Python 3.3+.\n#else\n#define CYTHON_ABI \"0_29_14\"\n#define CYTHON_HEX_VERSION 0x001D0EF0\n#define CYTHON_FUTURE_DIVISION 0\n#include <stddef.h>\n#ifndef offsetof\n  #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )\n#endif\n#if !defined(WIN32) && !defined(MS_WINDOWS)\n  #ifndef __stdcall\n    #define __stdcall\n  #endif\n  #ifndef __cdecl\n    #define __cdecl\n  #endif\n  #ifndef __fastcall\n    #define __fastcall\n  #endif\n#endif\n#ifndef DL_IMPORT\n  #define DL_IMPORT(t) t\n#endif\n#ifndef DL_EXPORT\n  #define DL_EXPORT(t) t\n#endif\n#define __PYX_COMMA ,\n#ifndef HAVE_LONG_LONG\n  #if PY_VERSION_HEX >= 0x02070000\n    #define HAVE_LONG_LONG\n  #endif\n#endif\n#ifndef PY_LONG_LONG\n  #define PY_LONG_LONG LONG_LONG\n#endif\n#ifndef Py_HUGE_VAL\n  #define Py_HUGE_VAL HUGE_VAL\n#endif\n#ifdef PYPY_VERSION\n  #define CYTHON_COMPILING_IN_PYPY 1\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #undef CYTHON_USE_TYPE_SLOTS\n  #define CYTHON_USE_TYPE_SLOTS 0\n  #undef CYTHON_USE_PYTYPE_LOOKUP\n  #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #if PY_VERSION_HEX < 0x03050000\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    
#define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_INTERNALS\n  #define CYTHON_USE_UNICODE_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #undef CYTHON_AVOID_BORROWED_REFS\n  #define CYTHON_AVOID_BORROWED_REFS 1\n  #undef CYTHON_ASSUME_SAFE_MACROS\n  #define CYTHON_ASSUME_SAFE_MACROS 0\n  #undef CYTHON_UNPACK_METHODS\n  #define CYTHON_UNPACK_METHODS 0\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n  #undef CYTHON_PEP489_MULTI_PHASE_INIT\n  #define CYTHON_PEP489_MULTI_PHASE_INIT 0\n  #undef CYTHON_USE_TP_FINALIZE\n  #define CYTHON_USE_TP_FINALIZE 0\n  #undef CYTHON_USE_DICT_VERSIONS\n  #define CYTHON_USE_DICT_VERSIONS 0\n  #undef CYTHON_USE_EXC_INFO_STACK\n  #define CYTHON_USE_EXC_INFO_STACK 0\n#elif defined(PYSTON_VERSION)\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 1\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #undef CYTHON_USE_PYTYPE_LOOKUP\n  #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #undef 
CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n  #undef CYTHON_PEP489_MULTI_PHASE_INIT\n  #define CYTHON_PEP489_MULTI_PHASE_INIT 0\n  #undef CYTHON_USE_TP_FINALIZE\n  #define CYTHON_USE_TP_FINALIZE 0\n  #undef CYTHON_USE_DICT_VERSIONS\n  #define CYTHON_USE_DICT_VERSIONS 0\n  #undef CYTHON_USE_EXC_INFO_STACK\n  #define CYTHON_USE_EXC_INFO_STACK 0\n#else\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 1\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYTYPE_LOOKUP\n    #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)\n    #define CYTHON_USE_PYTYPE_LOOKUP 1\n  #endif\n  #if PY_MAJOR_VERSION < 3\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYLONG_INTERNALS\n    #define CYTHON_USE_PYLONG_INTERNALS 0\n  #elif !defined(CYTHON_USE_PYLONG_INTERNALS)\n    #define CYTHON_USE_PYLONG_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_PYLIST_INTERNALS\n    #define CYTHON_USE_PYLIST_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #if PY_VERSION_HEX < 0x030300F0\n    #undef CYTHON_USE_UNICODE_WRITER\n    #define CYTHON_USE_UNICODE_WRITER 0\n  #elif !defined(CYTHON_USE_UNICODE_WRITER)\n    #define CYTHON_USE_UNICODE_WRITER 1\n  #endif\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #ifndef CYTHON_FAST_THREAD_STATE\n    #define CYTHON_FAST_THREAD_STATE 1\n  #endif\n  #ifndef 
CYTHON_FAST_PYCALL\n    #define CYTHON_FAST_PYCALL 1\n  #endif\n  #ifndef CYTHON_PEP489_MULTI_PHASE_INIT\n    #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)\n  #endif\n  #ifndef CYTHON_USE_TP_FINALIZE\n    #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)\n  #endif\n  #ifndef CYTHON_USE_DICT_VERSIONS\n    #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)\n  #endif\n  #ifndef CYTHON_USE_EXC_INFO_STACK\n    #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)\n  #endif\n#endif\n#if !defined(CYTHON_FAST_PYCCALL)\n#define CYTHON_FAST_PYCCALL  (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)\n#endif\n#if CYTHON_USE_PYLONG_INTERNALS\n  #include \"longintrepr.h\"\n  #undef SHIFT\n  #undef BASE\n  #undef MASK\n  #ifdef SIZEOF_VOID_P\n    enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };\n  #endif\n#endif\n#ifndef __has_attribute\n  #define __has_attribute(x) 0\n#endif\n#ifndef __has_cpp_attribute\n  #define __has_cpp_attribute(x) 0\n#endif\n#ifndef CYTHON_RESTRICT\n  #if defined(__GNUC__)\n    #define CYTHON_RESTRICT __restrict__\n  #elif defined(_MSC_VER) && _MSC_VER >= 1400\n    #define CYTHON_RESTRICT __restrict\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_RESTRICT restrict\n  #else\n    #define CYTHON_RESTRICT\n  #endif\n#endif\n#ifndef CYTHON_UNUSED\n# if defined(__GNUC__)\n#   if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))\n#     define CYTHON_UNUSED __attribute__ ((__unused__))\n#   else\n#     define CYTHON_UNUSED\n#   endif\n# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))\n#   define CYTHON_UNUSED __attribute__ ((__unused__))\n# else\n#   define CYTHON_UNUSED\n# endif\n#endif\n#ifndef CYTHON_MAYBE_UNUSED_VAR\n#  if defined(__cplusplus)\n     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }\n#  else\n#    define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)\n#  
endif\n#endif\n#ifndef CYTHON_NCP_UNUSED\n# if CYTHON_COMPILING_IN_CPYTHON\n#  define CYTHON_NCP_UNUSED\n# else\n#  define CYTHON_NCP_UNUSED CYTHON_UNUSED\n# endif\n#endif\n#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)\n#ifdef _MSC_VER\n    #ifndef _MSC_STDINT_H_\n        #if _MSC_VER < 1300\n           typedef unsigned char     uint8_t;\n           typedef unsigned int      uint32_t;\n        #else\n           typedef unsigned __int8   uint8_t;\n           typedef unsigned __int32  uint32_t;\n        #endif\n    #endif\n#else\n   #include <stdint.h>\n#endif\n#ifndef CYTHON_FALLTHROUGH\n  #if defined(__cplusplus) && __cplusplus >= 201103L\n    #if __has_cpp_attribute(fallthrough)\n      #define CYTHON_FALLTHROUGH [[fallthrough]]\n    #elif __has_cpp_attribute(clang::fallthrough)\n      #define CYTHON_FALLTHROUGH [[clang::fallthrough]]\n    #elif __has_cpp_attribute(gnu::fallthrough)\n      #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]\n    #endif\n  #endif\n  #ifndef CYTHON_FALLTHROUGH\n    #if __has_attribute(fallthrough)\n      #define CYTHON_FALLTHROUGH __attribute__((fallthrough))\n    #else\n      #define CYTHON_FALLTHROUGH\n    #endif\n  #endif\n  #if defined(__clang__ ) && defined(__apple_build_version__)\n    #if __apple_build_version__ < 7000000\n      #undef  CYTHON_FALLTHROUGH\n      #define CYTHON_FALLTHROUGH\n    #endif\n  #endif\n#endif\n\n#ifndef CYTHON_INLINE\n  #if defined(__clang__)\n    #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))\n  #elif defined(__GNUC__)\n    #define CYTHON_INLINE __inline__\n  #elif defined(_MSC_VER)\n    #define CYTHON_INLINE __inline\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_INLINE inline\n  #else\n    #define CYTHON_INLINE\n  #endif\n#endif\n\n#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)\n  #define Py_OptimizeFlag 0\n#endif\n#define __PYX_BUILD_PY_SSIZE_T \"n\"\n#define 
CYTHON_FORMAT_SSIZE_T \"z\"\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_BUILTIN_MODULE_NAME \"__builtin__\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyClass_Type\n#else\n  #define __Pyx_BUILTIN_MODULE_NAME \"builtins\"\n#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n#else\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n#endif\n  #define __Pyx_DefaultClassType PyType_Type\n#endif\n#ifndef Py_TPFLAGS_CHECKTYPES\n  #define Py_TPFLAGS_CHECKTYPES 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_INDEX\n  #define Py_TPFLAGS_HAVE_INDEX 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_NEWBUFFER\n  #define Py_TPFLAGS_HAVE_NEWBUFFER 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_FINALIZE\n  #define Py_TPFLAGS_HAVE_FINALIZE 0\n#endif\n#ifndef METH_STACKLESS\n  #define METH_STACKLESS 0\n#endif\n#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)\n  #ifndef METH_FASTCALL\n     #define METH_FASTCALL 0x80\n  #endif\n  typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);\n  typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,\n                                                          Py_ssize_t nargs, PyObject *kwnames);\n#else\n  #define __Pyx_PyCFunctionFast _PyCFunctionFast\n  #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords\n#endif\n#if CYTHON_FAST_PYCCALL\n#define __Pyx_PyFastCFunction_Check(func)\\\n    ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC 
| METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))\n#else\n#define __Pyx_PyFastCFunction_Check(func) 0\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)\n  #define PyObject_Malloc(s)   PyMem_Malloc(s)\n  #define PyObject_Free(p)     PyMem_Free(p)\n  #define PyObject_Realloc(p)  PyMem_Realloc(p)\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1\n  #define PyMem_RawMalloc(n)           PyMem_Malloc(n)\n  #define PyMem_RawRealloc(p, n)       PyMem_Realloc(p, n)\n  #define PyMem_RawFree(p)             PyMem_Free(p)\n#endif\n#if CYTHON_COMPILING_IN_PYSTON\n  #define __Pyx_PyCode_HasFreeVars(co)  PyCode_HasFreeVars(co)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)\n#else\n  #define __Pyx_PyCode_HasFreeVars(co)  (PyCode_GetNumFree(co) > 0)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno)  (frame)->f_lineno = (lineno)\n#endif\n#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000\n  #define __Pyx_PyThreadState_Current PyThreadState_GET()\n#elif PY_VERSION_HEX >= 0x03060000\n  #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()\n#elif PY_VERSION_HEX >= 0x03000000\n  #define __Pyx_PyThreadState_Current PyThreadState_GET()\n#else\n  #define __Pyx_PyThreadState_Current _PyThreadState_Current\n#endif\n#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)\n#include \"pythread.h\"\n#define Py_tss_NEEDS_INIT 0\ntypedef int Py_tss_t;\nstatic CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {\n  *key = PyThread_create_key();\n  return 0;\n}\nstatic CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {\n  Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));\n  *key = Py_tss_NEEDS_INIT;\n  return key;\n}\nstatic CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {\n  PyObject_Free(key);\n}\nstatic CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {\n  return *key != Py_tss_NEEDS_INIT;\n}\nstatic CYTHON_INLINE void 
PyThread_tss_delete(Py_tss_t *key) {\n  PyThread_delete_key(*key);\n  *key = Py_tss_NEEDS_INIT;\n}\nstatic CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {\n  return PyThread_set_key_value(*key, value);\n}\nstatic CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {\n  return PyThread_get_key_value(*key);\n}\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)\n#define __Pyx_PyDict_NewPresized(n)  ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))\n#else\n#define __Pyx_PyDict_NewPresized(n)  PyDict_New()\n#endif\n#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_TrueDivide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceTrueDivide(x,y)\n#else\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_Divide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceDivide(x,y)\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS\n#define __Pyx_PyDict_GetItemStr(dict, name)  _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)\n#else\n#define __Pyx_PyDict_GetItemStr(dict, name)  PyDict_GetItem(dict, name)\n#endif\n#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)\n  #define CYTHON_PEP393_ENABLED 1\n  #define __Pyx_PyUnicode_READY(op)       (likely(PyUnicode_IS_READY(op)) ?\\\n                                              0 : _PyUnicode_Ready((PyObject *)(op)))\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_LENGTH(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   PyUnicode_MAX_CHAR_VALUE(u)\n  #define __Pyx_PyUnicode_KIND(u)         PyUnicode_KIND(u)\n  #define __Pyx_PyUnicode_DATA(u)         PyUnicode_DATA(u)\n  #define __Pyx_PyUnicode_READ(k, d, i)   PyUnicode_READ(k, d, i)\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  PyUnicode_WRITE(k, d, i, ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 
!= (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))\n#else\n  #define CYTHON_PEP393_ENABLED 0\n  #define PyUnicode_1BYTE_KIND  1\n  #define PyUnicode_2BYTE_KIND  2\n  #define PyUnicode_4BYTE_KIND  4\n  #define __Pyx_PyUnicode_READY(op)       (0)\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_SIZE(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)\n  #define __Pyx_PyUnicode_KIND(u)         (sizeof(Py_UNICODE))\n  #define __Pyx_PyUnicode_DATA(u)         ((void*)PyUnicode_AS_UNICODE(u))\n  #define __Pyx_PyUnicode_READ(k, d, i)   ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  (((void)(k)), ((Py_UNICODE*)d)[i] = ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != PyUnicode_GET_SIZE(u))\n#endif\n#if CYTHON_COMPILING_IN_PYPY\n  #define __Pyx_PyUnicode_Concat(a, b)      PyNumber_Add(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  PyNumber_Add(a, b)\n#else\n  #define __Pyx_PyUnicode_Concat(a, b)      PyUnicode_Concat(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\\\n      PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)\n  #define PyUnicode_Contains(u, s)  PySequence_Contains(u, s)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)\n  #define PyByteArray_Check(obj)  PyObject_TypeCheck(obj, &PyByteArray_Type)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)\n  #define PyObject_Format(obj, fmt)  PyObject_CallMethod(obj, \"__format__\", \"O\", fmt)\n#endif\n#define __Pyx_PyString_FormatSafe(a, b)   ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))\n#define __Pyx_PyUnicode_FormatSafe(a, b)  ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyString_Format(a, b)  PyUnicode_Format(a, b)\n#else\n  #define __Pyx_PyString_Format(a, b)  PyString_Format(a, b)\n#endif\n#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)\n  #define PyObject_ASCII(o)            PyObject_Repr(o)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBaseString_Type            PyUnicode_Type\n  #define PyStringObject               PyUnicodeObject\n  #define PyString_Type                PyUnicode_Type\n  #define PyString_Check               PyUnicode_Check\n  #define PyString_CheckExact          PyUnicode_CheckExact\n  #define PyObject_Unicode             PyObject_Str\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)\n  #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)\n#else\n  #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))\n  #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))\n#endif\n#ifndef PySet_CheckExact\n  #define PySet_CheckExact(obj)        (Py_TYPE(obj) == &PySet_Type)\n#endif\n#if CYTHON_ASSUME_SAFE_MACROS\n  #define __Pyx_PySequence_SIZE(seq)  Py_SIZE(seq)\n#else\n  #define __Pyx_PySequence_SIZE(seq)  PySequence_Size(seq)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyIntObject                  PyLongObject\n  #define PyInt_Type                   PyLong_Type\n  #define PyInt_Check(op)              PyLong_Check(op)\n  #define PyInt_CheckExact(op)         PyLong_CheckExact(op)\n  #define PyInt_FromString             PyLong_FromString\n  #define PyInt_FromUnicode            PyLong_FromUnicode\n  #define PyInt_FromLong               PyLong_FromLong\n  #define PyInt_FromSize_t             PyLong_FromSize_t\n 
 #define PyInt_FromSsize_t            PyLong_FromSsize_t\n  #define PyInt_AsLong                 PyLong_AsLong\n  #define PyInt_AS_LONG                PyLong_AS_LONG\n  #define PyInt_AsSsize_t              PyLong_AsSsize_t\n  #define PyInt_AsUnsignedLongMask     PyLong_AsUnsignedLongMask\n  #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask\n  #define PyNumber_Int                 PyNumber_Long\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBoolObject                 PyLongObject\n#endif\n#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY\n  #ifndef PyUnicode_InternFromString\n    #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)\n  #endif\n#endif\n#if PY_VERSION_HEX < 0x030200A4\n  typedef long Py_hash_t;\n  #define __Pyx_PyInt_FromHash_t PyInt_FromLong\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsLong\n#else\n  #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsSsize_t\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : (Py_INCREF(func), func))\n#else\n  #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)\n#endif\n#if CYTHON_USE_ASYNC_SLOTS\n  #if PY_VERSION_HEX >= 0x030500B1\n    #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods\n    #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)\n  #else\n    #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))\n  #endif\n#else\n  #define __Pyx_PyType_AsAsync(obj) NULL\n#endif\n#ifndef __Pyx_PyAsyncMethodsStruct\n    typedef struct {\n        unaryfunc am_await;\n        unaryfunc am_aiter;\n        unaryfunc am_anext;\n    } __Pyx_PyAsyncMethodsStruct;\n#endif\n\n#if defined(WIN32) || defined(MS_WINDOWS)\n  #define _USE_MATH_DEFINES\n#endif\n#include <math.h>\n#ifdef NAN\n#define __PYX_NAN() ((float) NAN)\n#else\nstatic CYTHON_INLINE float __PYX_NAN() {\n  float value;\n  memset(&value, 0xFF, sizeof(value));\n  return value;\n}\n#endif\n#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)\n#define __Pyx_truncl trunc\n#else\n#define __Pyx_truncl truncl\n#endif\n\n\n#define __PYX_ERR(f_index, lineno, Ln_error) \\\n{ \\\n  __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \\\n}\n\n#ifndef __PYX_EXTERN_C\n  #ifdef __cplusplus\n    #define __PYX_EXTERN_C extern \"C\"\n  #else\n    #define __PYX_EXTERN_C extern\n  #endif\n#endif\n\n#define __PYX_HAVE__region\n#define __PYX_HAVE_API__region\n/* Early includes */\n#include <string.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include \"src/region.h\"\n#ifdef _OPENMP\n#include <omp.h>\n#endif /* _OPENMP */\n\n#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)\n#define CYTHON_WITHOUT_ASSERTIONS\n#endif\n\ntypedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;\n                const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;\n\n#define 
__PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)\n#define __PYX_DEFAULT_STRING_ENCODING \"\"\n#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString\n#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#define __Pyx_uchar_cast(c) ((unsigned char)c)\n#define __Pyx_long_cast(x) ((long)x)\n#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\\\n    (sizeof(type) < sizeof(Py_ssize_t))  ||\\\n    (sizeof(type) > sizeof(Py_ssize_t) &&\\\n          likely(v < (type)PY_SSIZE_T_MAX ||\\\n                 v == (type)PY_SSIZE_T_MAX)  &&\\\n          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\\\n                                v == (type)PY_SSIZE_T_MIN)))  ||\\\n    (sizeof(type) == sizeof(Py_ssize_t) &&\\\n          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\\\n                               v == (type)PY_SSIZE_T_MAX)))  )\nstatic CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {\n    return (size_t) i < (size_t) limit;\n}\n#if defined (__cplusplus) && __cplusplus >= 201103L\n    #include <cstdlib>\n    #define __Pyx_sst_abs(value) std::abs(value)\n#elif SIZEOF_INT >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) abs(value)\n#elif SIZEOF_LONG >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) labs(value)\n#elif defined (_MSC_VER)\n    #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define __Pyx_sst_abs(value) llabs(value)\n#elif defined (__GNUC__)\n    #define __Pyx_sst_abs(value) __builtin_llabs(value)\n#else\n    #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value)\n#endif\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);\n#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))\n#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)\n#define __Pyx_PyBytes_FromString        PyBytes_FromString\n#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);\n#if PY_MAJOR_VERSION < 3\n    #define __Pyx_PyStr_FromString        __Pyx_PyBytes_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#else\n    #define __Pyx_PyStr_FromString        __Pyx_PyUnicode_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize\n#endif\n#define __Pyx_PyBytes_AsWritableString(s)     ((char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsWritableSString(s)    ((signed char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsWritableUString(s)    ((unsigned char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsString(s)     ((const char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsSString(s)    ((const signed char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsUString(s)    ((const unsigned char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyObject_AsWritableString(s)    ((char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsWritableSString(s)    ((signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsWritableUString(s)    ((unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsSString(s)    ((const signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsUString(s)    ((const unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_FromCString(s)  __Pyx_PyObject_FromString((const char*)s)\n#define __Pyx_PyBytes_FromCString(s)  
 __Pyx_PyBytes_FromString((const char*)s)\n#define __Pyx_PyByteArray_FromCString(s)   __Pyx_PyByteArray_FromString((const char*)s)\n#define __Pyx_PyStr_FromCString(s)     __Pyx_PyStr_FromString((const char*)s)\n#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)\nstatic CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {\n    const Py_UNICODE *u_end = u;\n    while (*u_end++) ;\n    return (size_t)(u_end - u - 1);\n}\n#define __Pyx_PyUnicode_FromUnicode(u)       PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))\n#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode\n#define __Pyx_PyUnicode_AsUnicode            PyUnicode_AsUnicode\n#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)\n#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)\nstatic CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);\n#define __Pyx_PySequence_Tuple(obj)\\\n    (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);\n#if CYTHON_ASSUME_SAFE_MACROS\n#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))\n#else\n#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)\n#endif\n#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))\n#else\n#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))\n#endif\n#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x))\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\nstatic int __Pyx_sys_getdefaultencoding_not_ascii;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    PyObject* ascii_chars_u = NULL;\n    PyObject* ascii_chars_b = NULL;\n    const char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    if (strcmp(default_encoding_c, \"ascii\") == 0) {\n        __Pyx_sys_getdefaultencoding_not_ascii = 0;\n    } else {\n        char ascii_chars[128];\n        int c;\n        for (c = 0; c < 128; c++) {\n            ascii_chars[c] = c;\n        }\n        __Pyx_sys_getdefaultencoding_not_ascii = 1;\n        ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);\n        if (!ascii_chars_u) goto bad;\n        ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);\n        if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {\n            PyErr_Format(\n                PyExc_ValueError,\n                \"This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.\",\n                default_encoding_c);\n            goto bad;\n        }\n        Py_DECREF(ascii_chars_u);\n        Py_DECREF(ascii_chars_b);\n    }\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    Py_XDECREF(ascii_chars_u);\n    Py_XDECREF(ascii_chars_b);\n    return -1;\n}\n#endif\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) 
PyUnicode_DecodeUTF8(c_str, size, NULL)\n#else\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\nstatic char* __PYX_DEFAULT_STRING_ENCODING;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) (const char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);\n    if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;\n    strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    return -1;\n}\n#endif\n#endif\n\n\n/* Test for GCC > 2.95 */\n#if defined(__GNUC__)     && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))\n  #define likely(x)   __builtin_expect(!!(x), 1)\n  #define unlikely(x) __builtin_expect(!!(x), 0)\n#else /* !__GNUC__ or GCC < 2.95 */\n  #define likely(x)   (x)\n  #define unlikely(x) (x)\n#endif /* __GNUC__ */\nstatic CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }\n\nstatic PyObject *__pyx_m = NULL;\nstatic PyObject *__pyx_d;\nstatic PyObject *__pyx_b;\nstatic PyObject *__pyx_cython_runtime = NULL;\nstatic PyObject *__pyx_empty_tuple;\nstatic PyObject *__pyx_empty_bytes;\nstatic PyObject *__pyx_empty_unicode;\nstatic int __pyx_lineno;\nstatic int __pyx_clineno = 0;\nstatic const char * __pyx_cfilenm= __FILE__;\nstatic const char *__pyx_filename;\n\n\nstatic const char *__pyx_f[] = {\n  \"region.pyx\",\n  \"stringsource\",\n};\n\n/*--- Type declarations ---*/\nstruct 
__pyx_obj_6region_RegionBounds;\nstruct __pyx_obj_6region_Rectangle;\nstruct __pyx_obj_6region_Polygon;\nstruct __pyx_obj___Pyx_EnumMeta;\n\n/* \"region.pyx\":13\n * cimport c_region\n * \n * cpdef enum RegionType:             # <<<<<<<<<<<<<<\n *     EMTPY\n *     SPECIAL\n */\nenum __pyx_t_6region_RegionType {\n  __pyx_e_6region_EMTPY,\n  __pyx_e_6region_SPECIAL,\n  __pyx_e_6region_RECTANGEL,\n  __pyx_e_6region_POLYGON,\n  __pyx_e_6region_MASK\n};\n\n/* \"region.pyx\":20\n *     MASK\n * \n * cdef class RegionBounds:             # <<<<<<<<<<<<<<\n *     cdef c_region.region_bounds* _c_region_bounds\n * \n */\nstruct __pyx_obj_6region_RegionBounds {\n  PyObject_HEAD\n  region_bounds *_c_region_bounds;\n};\n\n\n/* \"region.pyx\":57\n *         self._c_region_bounds.right = right\n * \n * cdef class Rectangle:             # <<<<<<<<<<<<<<\n *     cdef c_region.region_rectangle* _c_region_rectangle\n * \n */\nstruct __pyx_obj_6region_Rectangle {\n  PyObject_HEAD\n  region_rectangle *_c_region_rectangle;\n};\n\n\n/* \"region.pyx\":98\n *                 self._c_region_rectangle.height)\n * \n * cdef class Polygon:             # <<<<<<<<<<<<<<\n *     cdef c_region.region_polygon* _c_region_polygon\n * \n */\nstruct __pyx_obj_6region_Polygon {\n  PyObject_HEAD\n  region_polygon *_c_region_polygon;\n};\n\n\n/* \"EnumBase\":15\n * \n * @cython.internal\n * cdef class __Pyx_EnumMeta(type):             # <<<<<<<<<<<<<<\n *     def __init__(cls, name, parents, dct):\n *         type.__init__(cls, name, parents, dct)\n */\nstruct __pyx_obj___Pyx_EnumMeta {\n  PyHeapTypeObject __pyx_base;\n};\n\n\n/* --- Runtime support code (head) --- */\n/* Refnanny.proto */\n#ifndef CYTHON_REFNANNY\n  #define CYTHON_REFNANNY 0\n#endif\n#if CYTHON_REFNANNY\n  typedef struct {\n    void (*INCREF)(void*, PyObject*, int);\n    void (*DECREF)(void*, PyObject*, int);\n    void (*GOTREF)(void*, PyObject*, int);\n    void (*GIVEREF)(void*, PyObject*, int);\n    void* (*SetupContext)(const char*, 
int, const char*);\n    void (*FinishContext)(void**);\n  } __Pyx_RefNannyAPIStruct;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);\n  #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;\n#ifdef WITH_THREAD\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          if (acquire_gil) {\\\n              PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n              PyGILState_Release(__pyx_gilstate_save);\\\n          } else {\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n          }\n#else\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)\n#endif\n  #define __Pyx_RefNannyFinishContext()\\\n          __Pyx_RefNanny->FinishContext(&__pyx_refnanny)\n  #define __Pyx_INCREF(r)  __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_DECREF(r)  __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GOTREF(r)  __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_XINCREF(r)  do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)\n  #define __Pyx_XDECREF(r)  do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)\n  #define __Pyx_XGOTREF(r)  do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)\n  #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)\n#else\n  #define __Pyx_RefNannyDeclarations\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\n  #define __Pyx_RefNannyFinishContext()\n  #define __Pyx_INCREF(r) Py_INCREF(r)\n  #define __Pyx_DECREF(r) Py_DECREF(r)\n  #define __Pyx_GOTREF(r)\n  #define __Pyx_GIVEREF(r)\n  
#define __Pyx_XINCREF(r) Py_XINCREF(r)\n  #define __Pyx_XDECREF(r) Py_XDECREF(r)\n  #define __Pyx_XGOTREF(r)\n  #define __Pyx_XGIVEREF(r)\n#endif\n#define __Pyx_XDECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_XDECREF(tmp);\\\n    } while (0)\n#define __Pyx_DECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_DECREF(tmp);\\\n    } while (0)\n#define __Pyx_CLEAR(r)    do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)\n#define __Pyx_XCLEAR(r)   do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)\n\n/* PyObjectGetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);\n#else\n#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)\n#endif\n\n/* GetBuiltinName.proto */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name);\n\n/* RaiseArgTupleInvalid.proto */\nstatic void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,\n    Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);\n\n/* KeywordStringCheck.proto */\nstatic int __Pyx_CheckKeywordStrings(PyObject *kwdict, const char* function_name, int kw_allowed);\n\n/* RaiseDoubleKeywords.proto */\nstatic void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);\n\n/* ParseKeywords.proto */\nstatic int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\\\n    PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\\\n    const char* function_name);\n\n/* PyFunctionFastCall.proto */\n#if CYTHON_FAST_PYCALL\n#define __Pyx_PyFunction_FastCall(func, args, nargs)\\\n    __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)\n#if 1 || PY_VERSION_HEX < 0x030600B1\nstatic PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);\n#else\n#define 
__Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)\n#endif\n#define __Pyx_BUILD_ASSERT_EXPR(cond)\\\n    (sizeof(char [1 - 2*!(cond)]) - 1)\n#ifndef Py_MEMBER_SIZE\n#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)\n#endif\n  static size_t __pyx_pyframe_localsplus_offset = 0;\n  #include \"frameobject.h\"\n  #define __Pxy_PyFrame_Initialize_Offsets()\\\n    ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\\\n     (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))\n  #define __Pyx_PyFrame_GetLocalsplus(frame)\\\n    (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))\n#endif\n\n/* PyCFunctionFastCall.proto */\n#if CYTHON_FAST_PYCCALL\nstatic CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);\n#else\n#define __Pyx_PyCFunction_FastCall(func, args, nargs)  (assert(0), NULL)\n#endif\n\n/* PyObjectCall.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);\n#else\n#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)\n#endif\n\n/* PyThreadStateGet.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyThreadState_declare  PyThreadState *__pyx_tstate;\n#define __Pyx_PyThreadState_assign  __pyx_tstate = __Pyx_PyThreadState_Current;\n#define __Pyx_PyErr_Occurred()  __pyx_tstate->curexc_type\n#else\n#define __Pyx_PyThreadState_declare\n#define __Pyx_PyThreadState_assign\n#define __Pyx_PyErr_Occurred()  PyErr_Occurred()\n#endif\n\n/* PyErrFetchRestore.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)\n#define __Pyx_ErrRestoreWithState(type, value, tb)  
__Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)    __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)    __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#if CYTHON_COMPILING_IN_CPYTHON\n#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))\n#else\n#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)\n#endif\n#else\n#define __Pyx_PyErr_Clear() PyErr_Clear()\n#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)\n#define __Pyx_ErrRestoreWithState(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestoreInState(tstate, type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchInState(tstate, type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)  PyErr_Fetch(type, value, tb)\n#endif\n\n/* RaiseException.proto */\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);\n\n/* None.proto */\nstatic CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);\n\n/* PyObjectCallMethO.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);\n#endif\n\n/* PyObjectCallOneArg.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);\n\n/* GetItemInt.proto */\n#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, 
is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\\\n    (is_list ? (PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL) :\\\n               __Pyx_GetItemInt_Generic(o, to_py_func(i))))\n#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\n#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"tuple index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\nstatic PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,\n                                                     int is_list, int wraparound, int boundscheck);\n\n/* ObjectGetItem.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key);\n#else\n#define __Pyx_PyObject_GetItem(obj, key)  PyObject_GetItem(obj, key)\n#endif\n\n/* PyIntBinop.proto */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int 
zerodivision_check);\n#else\n#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\\\n    (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))\n#endif\n\n/* pyobject_as_double.proto */\nstatic double __Pyx__PyObject_AsDouble(PyObject* obj);\n#if CYTHON_COMPILING_IN_PYPY\n#define __Pyx_PyObject_AsDouble(obj)\\\n(likely(PyFloat_CheckExact(obj)) ? PyFloat_AS_DOUBLE(obj) :\\\n likely(PyInt_CheckExact(obj)) ?\\\n PyFloat_AsDouble(obj) : __Pyx__PyObject_AsDouble(obj))\n#else\n#define __Pyx_PyObject_AsDouble(obj)\\\n((likely(PyFloat_CheckExact(obj))) ?\\\n PyFloat_AS_DOUBLE(obj) : __Pyx__PyObject_AsDouble(obj))\n#endif\n\n/* PyDictVersioning.proto */\n#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS\n#define __PYX_DICT_VERSION_INIT  ((PY_UINT64_T) -1)\n#define __PYX_GET_DICT_VERSION(dict)  (((PyDictObject*)(dict))->ma_version_tag)\n#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\\\n    (version_var) = __PYX_GET_DICT_VERSION(dict);\\\n    (cache_var) = (value);\n#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\\\n    static PY_UINT64_T __pyx_dict_version = 0;\\\n    static PyObject *__pyx_dict_cached_value = NULL;\\\n    if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\\\n        (VAR) = __pyx_dict_cached_value;\\\n    } else {\\\n        (VAR) = __pyx_dict_cached_value = (LOOKUP);\\\n        __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\\\n    }\\\n}\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);\nstatic CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);\n#else\n#define __PYX_GET_DICT_VERSION(dict)  (0)\n#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\n#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP)  (VAR) = (LOOKUP);\n#endif\n\n/* GetModuleGlobalName.proto 
*/\n#if CYTHON_USE_DICT_VERSIONS\n#define __Pyx_GetModuleGlobalName(var, name)  {\\\n    static PY_UINT64_T __pyx_dict_version = 0;\\\n    static PyObject *__pyx_dict_cached_value = NULL;\\\n    (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\\\n        (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\\\n        __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\\\n}\n#define __Pyx_GetModuleGlobalNameUncached(var, name)  {\\\n    PY_UINT64_T __pyx_dict_version;\\\n    PyObject *__pyx_dict_cached_value;\\\n    (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\\\n}\nstatic PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);\n#else\n#define __Pyx_GetModuleGlobalName(var, name)  (var) = __Pyx__GetModuleGlobalName(name)\n#define __Pyx_GetModuleGlobalNameUncached(var, name)  (var) = __Pyx__GetModuleGlobalName(name)\nstatic CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);\n#endif\n\n/* ListAppend.proto */\n#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS\nstatic CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {\n    PyListObject* L = (PyListObject*) list;\n    Py_ssize_t len = Py_SIZE(list);\n    if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {\n        Py_INCREF(x);\n        PyList_SET_ITEM(list, len, x);\n        Py_SIZE(list) = len+1;\n        return 0;\n    }\n    return PyList_Append(list, x);\n}\n#else\n#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)\n#endif\n\n/* PyObjectCallNoArg.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func);\n#else\n#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL)\n#endif\n\n/* IncludeStringH.proto */\n#include <string.h>\n\n/* decode_c_string_utf16.proto 
*/\nstatic CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) {\n    int byteorder = 0;\n    return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);\n}\nstatic CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) {\n    int byteorder = -1;\n    return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);\n}\nstatic CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) {\n    int byteorder = 1;\n    return PyUnicode_DecodeUTF16(s, size, errors, &byteorder);\n}\n\n/* decode_c_string.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_decode_c_string(\n         const char* cstring, Py_ssize_t start, Py_ssize_t stop,\n         const char* encoding, const char* errors,\n         PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors));\n\n/* GetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_GetException(type, value, tb)  __Pyx__GetException(__pyx_tstate, type, value, tb)\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* SwapException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSwap(type, value, tb)  __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* GetTopmostException.proto */\n#if CYTHON_USE_EXC_INFO_STACK\nstatic _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);\n#endif\n\n/* SaveResetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSave(type, value, tb)  __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)\nstatic 
CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#define __Pyx_ExceptionReset(type, value, tb)  __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\n#else\n#define __Pyx_ExceptionSave(type, value, tb)   PyErr_GetExcInfo(type, value, tb)\n#define __Pyx_ExceptionReset(type, value, tb)  PyErr_SetExcInfo(type, value, tb)\n#endif\n\n/* PyObjectSetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\n#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL)\nstatic CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value);\n#else\n#define __Pyx_PyObject_DelAttrStr(o,n)   PyObject_DelAttr(o,n)\n#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v)\n#endif\n\n/* PyErrExceptionMatches.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);\n#else\n#define __Pyx_PyErr_ExceptionMatches(err)  PyErr_ExceptionMatches(err)\n#endif\n\n/* GetAttr.proto */\nstatic CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *);\n\n/* GetAttr3.proto */\nstatic CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *);\n\n/* Import.proto */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);\n\n/* ImportFrom.proto */\nstatic PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);\n\n/* PyObjectCall2Args.proto */\nstatic CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);\n\n/* HasAttr.proto */\nstatic CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *);\n\n/* PyObject_GenericGetAttrNoDict.proto */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && 
PY_VERSION_HEX < 0x03070000\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);\n#else\n#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr\n#endif\n\n/* PyObject_GenericGetAttr.proto */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name);\n#else\n#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr\n#endif\n\n/* SetupReduce.proto */\nstatic int __Pyx_setup_reduce(PyObject* type_obj);\n\n/* CalculateMetaclass.proto */\nstatic PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases);\n\n/* SetNameInClass.proto */\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1\n#define __Pyx_SetNameInClass(ns, name, value)\\\n    (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value))\n#elif CYTHON_COMPILING_IN_CPYTHON\n#define __Pyx_SetNameInClass(ns, name, value)\\\n    (likely(PyDict_CheckExact(ns)) ? 
PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value))\n#else\n#define __Pyx_SetNameInClass(ns, name, value)  PyObject_SetItem(ns, name, value)\n#endif\n\n/* FetchCommonType.proto */\nstatic PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);\n\n/* CythonFunction.proto */\n#define __Pyx_CyFunction_USED 1\n#define __Pyx_CYFUNCTION_STATICMETHOD  0x01\n#define __Pyx_CYFUNCTION_CLASSMETHOD   0x02\n#define __Pyx_CYFUNCTION_CCLASS        0x04\n#define __Pyx_CyFunction_GetClosure(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_closure)\n#define __Pyx_CyFunction_GetClassObj(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_classobj)\n#define __Pyx_CyFunction_Defaults(type, f)\\\n    ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))\n#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\\\n    ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)\ntypedef struct {\n    PyCFunctionObject func;\n#if PY_VERSION_HEX < 0x030500A0\n    PyObject *func_weakreflist;\n#endif\n    PyObject *func_dict;\n    PyObject *func_name;\n    PyObject *func_qualname;\n    PyObject *func_doc;\n    PyObject *func_globals;\n    PyObject *func_code;\n    PyObject *func_closure;\n    PyObject *func_classobj;\n    void *defaults;\n    int defaults_pyobjects;\n    int flags;\n    PyObject *defaults_tuple;\n    PyObject *defaults_kwdict;\n    PyObject *(*defaults_getter)(PyObject *);\n    PyObject *func_annotations;\n} __pyx_CyFunctionObject;\nstatic PyTypeObject *__pyx_CyFunctionType = 0;\n#define __Pyx_CyFunction_Check(obj)  (__Pyx_TypeCheck(obj, __pyx_CyFunctionType))\n#define __Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code)\\\n    __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,\n                                      int flags, PyObject* qualname,\n                                      PyObject *self,\n                                      
PyObject *module, PyObject *globals,\n                                      PyObject* code);\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,\n                                                         size_t size,\n                                                         int pyobjects);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,\n                                                            PyObject *tuple);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,\n                                                             PyObject *dict);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,\n                                                              PyObject *dict);\nstatic int __pyx_CyFunction_init(void);\n\n/* Py3ClassCreate.proto */\nstatic PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname,\n                                           PyObject *mkw, PyObject *modname, PyObject *doc);\nstatic PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict,\n                                      PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass);\n\n/* Globals.proto */\nstatic PyObject* __Pyx_Globals(void);\n\n/* CLineInTraceback.proto */\n#ifdef CYTHON_CLINE_IN_TRACEBACK\n#define __Pyx_CLineForTraceback(tstate, c_line)  (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0)\n#else\nstatic int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);\n#endif\n\n/* CodeObjectCache.proto */\ntypedef struct {\n    PyCodeObject* code_object;\n    int code_line;\n} __Pyx_CodeObjectCacheEntry;\nstruct __Pyx_CodeObjectCache {\n    int count;\n    int max_count;\n    __Pyx_CodeObjectCacheEntry* entries;\n};\nstatic struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);\nstatic PyCodeObject *__pyx_find_code_object(int code_line);\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);\n\n/* AddTraceback.proto */\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(enum __pyx_t_6region_RegionType value);\n\n/* FastTypeChecks.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\n#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)\nstatic CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);\n#else\n#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)\n#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)\n#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) 
(PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))\n#endif\n#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)\n\n/* CheckBinaryVersion.proto */\nstatic int __Pyx_check_binary_version(void);\n\n/* InitStrings.proto */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t);\n\n\n/* Module declarations from 'libc.string' */\n\n/* Module declarations from 'libc.stdlib' */\n\n/* Module declarations from 'libc.stdio' */\n\n/* Module declarations from 'c_region' */\n\n/* Module declarations from 'region' */\nstatic PyTypeObject *__pyx_ptype_6region_RegionBounds = 0;\nstatic PyTypeObject *__pyx_ptype_6region_Rectangle = 0;\nstatic PyTypeObject *__pyx_ptype_6region_Polygon = 0;\nstatic PyTypeObject *__pyx_ptype___Pyx_EnumMeta = 0;\nstatic PyObject *__Pyx_OrderedDict = 0;\nstatic PyObject *__Pyx_EnumBase = 0;\nstatic PyObject *__Pyx_globals = 0;\nstatic PyObject *__pyx_unpickle___Pyx_EnumMeta__set_state(struct __pyx_obj___Pyx_EnumMeta *, PyObject *); /*proto*/\n#define __Pyx_MODULE_NAME \"region\"\nextern int __pyx_module_is_main_region;\nint __pyx_module_is_main_region = 0;\n\n/* Implementation of 'region' */\nstatic PyObject *__pyx_builtin_MemoryError;\nstatic PyObject *__pyx_builtin_TypeError;\nstatic PyObject *__pyx_builtin_range;\nstatic PyObject *__pyx_builtin_ValueError;\nstatic const char __pyx_k_i[] = \"i\";\nstatic const char __pyx_k_v[] = \"v\";\nstatic const char __pyx_k_x[] = \"x\";\nstatic const char __pyx_k_y[] = \"y\";\nstatic const char __pyx_k__5[] = \"\";\nstatic const char __pyx_k_cls[] = \"cls\";\nstatic const char __pyx_k_dct[] = \"dct\";\nstatic const char __pyx_k_doc[] = \"__doc__\";\nstatic const char __pyx_k_inf[] = \"inf\";\nstatic const char __pyx_k_nan[] = \"nan\";\nstatic const char __pyx_k_new[] = \"__new__\";\nstatic const char __pyx_k_res[] = \"res\";\nstatic const char __pyx_k_ret[] = \"ret\";\nstatic const char __pyx_k_s_s[] = \"%s.%s\";\nstatic const char __pyx_k_set[] = 
\"set\";\nstatic const char __pyx_k_str[] = \"__str__\";\nstatic const char __pyx_k_top[] = \"top\";\nstatic const char __pyx_k_MASK[] = \"MASK\";\nstatic const char __pyx_k_dict[] = \"__dict__\";\nstatic const char __pyx_k_enum[] = \"enum\";\nstatic const char __pyx_k_init[] = \"__init__\";\nstatic const char __pyx_k_left[] = \"left\";\nstatic const char __pyx_k_main[] = \"__main__\";\nstatic const char __pyx_k_name[] = \"name\";\nstatic const char __pyx_k_repr[] = \"__repr__\";\nstatic const char __pyx_k_self[] = \"self\";\nstatic const char __pyx_k_test[] = \"__test__\";\nstatic const char __pyx_k_3f_3f[] = \"({:.3f} {:.3f}) \";\nstatic const char __pyx_k_EMTPY[] = \"EMTPY\";\nstatic const char __pyx_k_class[] = \"__class__\";\nstatic const char __pyx_k_only1[] = \"only1\";\nstatic const char __pyx_k_only2[] = \"only2\";\nstatic const char __pyx_k_range[] = \"range\";\nstatic const char __pyx_k_right[] = \"right\";\nstatic const char __pyx_k_s_s_d[] = \"<%s.%s: %d>\";\nstatic const char __pyx_k_value[] = \"value\";\nstatic const char __pyx_k_width[] = \"width\";\nstatic const char __pyx_k_bottom[] = \"bottom\";\nstatic const char __pyx_k_bounds[] = \"bounds\";\nstatic const char __pyx_k_encode[] = \"encode\";\nstatic const char __pyx_k_format[] = \"format\";\nstatic const char __pyx_k_height[] = \"height\";\nstatic const char __pyx_k_import[] = \"__import__\";\nstatic const char __pyx_k_module[] = \"__module__\";\nstatic const char __pyx_k_name_2[] = \"__name__\";\nstatic const char __pyx_k_output[] = \"output\";\nstatic const char __pyx_k_pickle[] = \"pickle\";\nstatic const char __pyx_k_points[] = \"points\";\nstatic const char __pyx_k_reduce[] = \"__reduce__\";\nstatic const char __pyx_k_region[] = \"region\";\nstatic const char __pyx_k_update[] = \"update\";\nstatic const char __pyx_k_values[] = \"values\";\nstatic const char __pyx_k_3f_3f_2[] = \"({:.3f} {:.3f})\";\nstatic const char __pyx_k_IntEnum[] = \"IntEnum\";\nstatic const char __pyx_k_POLYGON[] = 
\"POLYGON\";\nstatic const char __pyx_k_Polygon[] = \"Polygon\";\nstatic const char __pyx_k_SPECIAL[] = \"SPECIAL\";\nstatic const char __pyx_k_members[] = \"__members__\";\nstatic const char __pyx_k_overlap[] = \"overlap\";\nstatic const char __pyx_k_parents[] = \"parents\";\nstatic const char __pyx_k_prepare[] = \"__prepare__\";\nstatic const char __pyx_k_EnumBase[] = \"EnumBase\";\nstatic const char __pyx_k_EnumType[] = \"EnumType\";\nstatic const char __pyx_k_getstate[] = \"__getstate__\";\nstatic const char __pyx_k_overlaps[] = \"overlaps\";\nstatic const char __pyx_k_polygon1[] = \"polygon1\";\nstatic const char __pyx_k_polygon2[] = \"polygon2\";\nstatic const char __pyx_k_pyx_type[] = \"__pyx_type\";\nstatic const char __pyx_k_qualname[] = \"__qualname__\";\nstatic const char __pyx_k_setstate[] = \"__setstate__\";\nstatic const char __pyx_k_template[] = \"template\";\nstatic const char __pyx_k_RECTANGEL[] = \"RECTANGEL\";\nstatic const char __pyx_k_Rectangle[] = \"Rectangle\";\nstatic const char __pyx_k_TypeError[] = \"TypeError\";\nstatic const char __pyx_k_ctemplate[] = \"ctemplate\";\nstatic const char __pyx_k_metaclass[] = \"__metaclass__\";\nstatic const char __pyx_k_no_bounds[] = \"no_bounds\";\nstatic const char __pyx_k_polygons1[] = \"polygons1\";\nstatic const char __pyx_k_polygons2[] = \"polygons2\";\nstatic const char __pyx_k_ptemplate[] = \"ptemplate\";\nstatic const char __pyx_k_pyx_state[] = \"__pyx_state\";\nstatic const char __pyx_k_reduce_ex[] = \"__reduce_ex__\";\nstatic const char __pyx_k_RegionType[] = \"RegionType\";\nstatic const char __pyx_k_ValueError[] = \"ValueError\";\nstatic const char __pyx_k_c_polygon1[] = \"c_polygon1\";\nstatic const char __pyx_k_c_polygon2[] = \"c_polygon2\";\nstatic const char __pyx_k_pno_bounds[] = \"pno_bounds\";\nstatic const char __pyx_k_polygon1_2[] = \"polygon1_\";\nstatic const char __pyx_k_polygon2_2[] = \"polygon2_\";\nstatic const char __pyx_k_pyx_result[] = \"__pyx_result\";\nstatic const char 
__pyx_k_region_pyx[] = \"region.pyx\";\nstatic const char __pyx_k_MemoryError[] = \"MemoryError\";\nstatic const char __pyx_k_OrderedDict[] = \"OrderedDict\";\nstatic const char __pyx_k_PickleError[] = \"PickleError\";\nstatic const char __pyx_k_collections[] = \"collections\";\nstatic const char __pyx_k_vot_overlap[] = \"vot_overlap\";\nstatic const char __pyx_k_Pyx_EnumBase[] = \"__Pyx_EnumBase\";\nstatic const char __pyx_k_RegionBounds[] = \"RegionBounds\";\nstatic const char __pyx_k_pyx_checksum[] = \"__pyx_checksum\";\nstatic const char __pyx_k_stringsource[] = \"stringsource\";\nstatic const char __pyx_k_reduce_cython[] = \"__reduce_cython__\";\nstatic const char __pyx_k_vot_float2str[] = \"vot_float2str\";\nstatic const char __pyx_k_pyx_PickleError[] = \"__pyx_PickleError\";\nstatic const char __pyx_k_setstate_cython[] = \"__setstate_cython__\";\nstatic const char __pyx_k_vot_overlap_traj[] = \"vot_overlap_traj\";\nstatic const char __pyx_k_Pyx_EnumBase___new[] = \"__Pyx_EnumBase.__new__\";\nstatic const char __pyx_k_Pyx_EnumBase___str[] = \"__Pyx_EnumBase.__str__\";\nstatic const char __pyx_k_cline_in_traceback[] = \"cline_in_traceback\";\nstatic const char __pyx_k_Pyx_EnumBase___repr[] = \"__Pyx_EnumBase.__repr__\";\nstatic const char __pyx_k_Unknown_enum_value_s[] = \"Unknown enum value: '%s'\";\nstatic const char __pyx_k_pyx_unpickle___Pyx_EnumMeta[] = \"__pyx_unpickle___Pyx_EnumMeta\";\nstatic const char __pyx_k_x_3f_y_3f_width_3f_height_3f[] = \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\";\nstatic const char __pyx_k_author_fangyi_zhang_vipl_ict_ac[] = \"\\n    @author xx.cn\\n\";\nstatic const char __pyx_k_top_3f_bottom_3f_left_3f_reight[] = \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\";\nstatic const char __pyx_k_Incompatible_checksums_s_vs_0xd4[] = \"Incompatible checksums (%s vs 0xd41d8cd = ())\";\nstatic const char __pyx_k_no_default___reduce___due_to_non[] = \"no default __reduce__ due to non-trivial __cinit__\";\nstatic 
PyObject *__pyx_kp_s_3f_3f;\nstatic PyObject *__pyx_kp_s_3f_3f_2;\nstatic PyObject *__pyx_n_s_EMTPY;\nstatic PyObject *__pyx_n_s_EnumBase;\nstatic PyObject *__pyx_n_s_EnumType;\nstatic PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xd4;\nstatic PyObject *__pyx_n_s_IntEnum;\nstatic PyObject *__pyx_n_s_MASK;\nstatic PyObject *__pyx_n_s_MemoryError;\nstatic PyObject *__pyx_n_s_OrderedDict;\nstatic PyObject *__pyx_n_s_POLYGON;\nstatic PyObject *__pyx_n_s_PickleError;\nstatic PyObject *__pyx_n_s_Polygon;\nstatic PyObject *__pyx_n_s_Pyx_EnumBase;\nstatic PyObject *__pyx_n_s_Pyx_EnumBase___new;\nstatic PyObject *__pyx_n_s_Pyx_EnumBase___repr;\nstatic PyObject *__pyx_n_s_Pyx_EnumBase___str;\nstatic PyObject *__pyx_n_s_RECTANGEL;\nstatic PyObject *__pyx_n_s_Rectangle;\nstatic PyObject *__pyx_n_s_RegionBounds;\nstatic PyObject *__pyx_n_s_RegionType;\nstatic PyObject *__pyx_n_s_SPECIAL;\nstatic PyObject *__pyx_n_s_TypeError;\nstatic PyObject *__pyx_kp_s_Unknown_enum_value_s;\nstatic PyObject *__pyx_n_s_ValueError;\nstatic PyObject *__pyx_kp_s__5;\nstatic PyObject *__pyx_n_s_bottom;\nstatic PyObject *__pyx_n_s_bounds;\nstatic PyObject *__pyx_n_s_c_polygon1;\nstatic PyObject *__pyx_n_s_c_polygon2;\nstatic PyObject *__pyx_n_s_class;\nstatic PyObject *__pyx_n_s_cline_in_traceback;\nstatic PyObject *__pyx_n_s_cls;\nstatic PyObject *__pyx_n_s_collections;\nstatic PyObject *__pyx_n_s_ctemplate;\nstatic PyObject *__pyx_n_s_dct;\nstatic PyObject *__pyx_n_s_dict;\nstatic PyObject *__pyx_n_s_doc;\nstatic PyObject *__pyx_n_s_encode;\nstatic PyObject *__pyx_n_s_enum;\nstatic PyObject *__pyx_n_s_format;\nstatic PyObject *__pyx_n_s_getstate;\nstatic PyObject *__pyx_n_s_height;\nstatic PyObject *__pyx_n_s_i;\nstatic PyObject *__pyx_n_s_import;\nstatic PyObject *__pyx_n_s_inf;\nstatic PyObject *__pyx_n_s_init;\nstatic PyObject *__pyx_n_s_left;\nstatic PyObject *__pyx_n_s_main;\nstatic PyObject *__pyx_n_s_members;\nstatic PyObject *__pyx_n_s_metaclass;\nstatic PyObject 
*__pyx_n_s_module;\nstatic PyObject *__pyx_n_s_name;\nstatic PyObject *__pyx_n_s_name_2;\nstatic PyObject *__pyx_n_s_nan;\nstatic PyObject *__pyx_n_s_new;\nstatic PyObject *__pyx_n_s_no_bounds;\nstatic PyObject *__pyx_kp_s_no_default___reduce___due_to_non;\nstatic PyObject *__pyx_n_s_only1;\nstatic PyObject *__pyx_n_s_only2;\nstatic PyObject *__pyx_n_s_output;\nstatic PyObject *__pyx_n_s_overlap;\nstatic PyObject *__pyx_n_s_overlaps;\nstatic PyObject *__pyx_n_s_parents;\nstatic PyObject *__pyx_n_s_pickle;\nstatic PyObject *__pyx_n_s_pno_bounds;\nstatic PyObject *__pyx_n_s_points;\nstatic PyObject *__pyx_n_s_polygon1;\nstatic PyObject *__pyx_n_s_polygon1_2;\nstatic PyObject *__pyx_n_s_polygon2;\nstatic PyObject *__pyx_n_s_polygon2_2;\nstatic PyObject *__pyx_n_s_polygons1;\nstatic PyObject *__pyx_n_s_polygons2;\nstatic PyObject *__pyx_n_s_prepare;\nstatic PyObject *__pyx_n_s_ptemplate;\nstatic PyObject *__pyx_n_s_pyx_PickleError;\nstatic PyObject *__pyx_n_s_pyx_checksum;\nstatic PyObject *__pyx_n_s_pyx_result;\nstatic PyObject *__pyx_n_s_pyx_state;\nstatic PyObject *__pyx_n_s_pyx_type;\nstatic PyObject *__pyx_n_s_pyx_unpickle___Pyx_EnumMeta;\nstatic PyObject *__pyx_n_s_qualname;\nstatic PyObject *__pyx_n_s_range;\nstatic PyObject *__pyx_n_s_reduce;\nstatic PyObject *__pyx_n_s_reduce_cython;\nstatic PyObject *__pyx_n_s_reduce_ex;\nstatic PyObject *__pyx_n_s_region;\nstatic PyObject *__pyx_kp_s_region_pyx;\nstatic PyObject *__pyx_n_s_repr;\nstatic PyObject *__pyx_n_s_res;\nstatic PyObject *__pyx_n_s_ret;\nstatic PyObject *__pyx_n_s_right;\nstatic PyObject *__pyx_kp_s_s_s;\nstatic PyObject *__pyx_kp_s_s_s_d;\nstatic PyObject *__pyx_n_s_self;\nstatic PyObject *__pyx_n_s_set;\nstatic PyObject *__pyx_n_s_setstate;\nstatic PyObject *__pyx_n_s_setstate_cython;\nstatic PyObject *__pyx_n_s_str;\nstatic PyObject *__pyx_kp_s_stringsource;\nstatic PyObject *__pyx_n_s_template;\nstatic PyObject *__pyx_n_s_test;\nstatic PyObject *__pyx_n_s_top;\nstatic PyObject 
*__pyx_kp_s_top_3f_bottom_3f_left_3f_reight;\nstatic PyObject *__pyx_n_s_update;\nstatic PyObject *__pyx_n_s_v;\nstatic PyObject *__pyx_n_s_value;\nstatic PyObject *__pyx_n_s_values;\nstatic PyObject *__pyx_n_s_vot_float2str;\nstatic PyObject *__pyx_n_s_vot_overlap;\nstatic PyObject *__pyx_n_s_vot_overlap_traj;\nstatic PyObject *__pyx_n_s_width;\nstatic PyObject *__pyx_n_s_x;\nstatic PyObject *__pyx_kp_s_x_3f_y_3f_width_3f_height_3f;\nstatic PyObject *__pyx_n_s_y;\nstatic int __pyx_pf_6region_12RegionBounds___cinit__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self); /* proto */\nstatic int __pyx_pf_6region_12RegionBounds_2__init__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self, PyObject *__pyx_v_top, PyObject *__pyx_v_bottom, PyObject *__pyx_v_left, PyObject *__pyx_v_right); /* proto */\nstatic void __pyx_pf_6region_12RegionBounds_4__dealloc__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_12RegionBounds_6__str__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_12RegionBounds_8get(struct __pyx_obj_6region_RegionBounds *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_12RegionBounds_10set(struct __pyx_obj_6region_RegionBounds *__pyx_v_self, PyObject *__pyx_v_top, PyObject *__pyx_v_bottom, PyObject *__pyx_v_left, PyObject *__pyx_v_right); /* proto */\nstatic PyObject *__pyx_pf_6region_12RegionBounds_12__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_RegionBounds *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_12RegionBounds_14__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_RegionBounds *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */\nstatic int __pyx_pf_6region_9Rectangle___cinit__(struct __pyx_obj_6region_Rectangle *__pyx_v_self); /* proto */\nstatic int __pyx_pf_6region_9Rectangle_2__init__(struct __pyx_obj_6region_Rectangle *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_y, 
PyObject *__pyx_v_width, PyObject *__pyx_v_height); /* proto */\nstatic void __pyx_pf_6region_9Rectangle_4__dealloc__(struct __pyx_obj_6region_Rectangle *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_9Rectangle_6__str__(struct __pyx_obj_6region_Rectangle *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_9Rectangle_8set(struct __pyx_obj_6region_Rectangle *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_y, PyObject *__pyx_v_width, PyObject *__pyx_v_height); /* proto */\nstatic PyObject *__pyx_pf_6region_9Rectangle_10get(struct __pyx_obj_6region_Rectangle *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_9Rectangle_12__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Rectangle *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_9Rectangle_14__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Rectangle *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */\nstatic int __pyx_pf_6region_7Polygon___cinit__(struct __pyx_obj_6region_Polygon *__pyx_v_self, PyObject *__pyx_v_points); /* proto */\nstatic void __pyx_pf_6region_7Polygon_2__dealloc__(struct __pyx_obj_6region_Polygon *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_7Polygon_4__str__(struct __pyx_obj_6region_Polygon *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_7Polygon_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Polygon *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_6region_7Polygon_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Polygon *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */\nstatic PyObject *__pyx_pf_6region_vot_overlap(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_polygon1, PyObject *__pyx_v_polygon2, PyObject *__pyx_v_bounds); /* proto */\nstatic PyObject *__pyx_pf_6region_2vot_overlap_traj(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_polygons1, PyObject *__pyx_v_polygons2, PyObject *__pyx_v_bounds); /* proto 
*/\nstatic PyObject *__pyx_pf_6region_4vot_float2str(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_template, float __pyx_v_value); /* proto */\nstatic int __pyx_pf_8EnumBase_14__Pyx_EnumMeta___init__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls, PyObject *__pyx_v_name, PyObject *__pyx_v_parents, PyObject *__pyx_v_dct); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_2__iter__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_4__getitem__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls, PyObject *__pyx_v_name); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_6__reduce_cython__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_8__setstate_cython__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumBase___new__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_cls, PyObject *__pyx_v_value, PyObject *__pyx_v_name); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumBase_2__repr__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumBase_4__str__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_8EnumBase___pyx_unpickle___Pyx_EnumMeta(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */\nstatic PyObject *__pyx_tp_new_6region_RegionBounds(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_tp_new_6region_Rectangle(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_tp_new_6region_Polygon(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_tp_new___Pyx_EnumMeta(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic 
PyObject *__pyx_int_0;\nstatic PyObject *__pyx_int_1;\nstatic PyObject *__pyx_int_2;\nstatic PyObject *__pyx_int_222419149;\nstatic PyObject *__pyx_tuple_;\nstatic PyObject *__pyx_tuple__2;\nstatic PyObject *__pyx_tuple__3;\nstatic PyObject *__pyx_tuple__4;\nstatic PyObject *__pyx_tuple__6;\nstatic PyObject *__pyx_tuple__7;\nstatic PyObject *__pyx_tuple__8;\nstatic PyObject *__pyx_tuple__10;\nstatic PyObject *__pyx_tuple__12;\nstatic PyObject *__pyx_tuple__14;\nstatic PyObject *__pyx_tuple__16;\nstatic PyObject *__pyx_tuple__17;\nstatic PyObject *__pyx_tuple__19;\nstatic PyObject *__pyx_tuple__21;\nstatic PyObject *__pyx_codeobj__9;\nstatic PyObject *__pyx_codeobj__11;\nstatic PyObject *__pyx_codeobj__13;\nstatic PyObject *__pyx_codeobj__15;\nstatic PyObject *__pyx_codeobj__18;\nstatic PyObject *__pyx_codeobj__20;\nstatic PyObject *__pyx_codeobj__22;\n/* Late includes */\n\n/* \"region.pyx\":23\n *     cdef c_region.region_bounds* _c_region_bounds\n * \n *     def __cinit__(self):             # <<<<<<<<<<<<<<\n *         self._c_region_bounds = <c_region.region_bounds*>malloc(\n *                 sizeof(c_region.region_bounds))\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_6region_12RegionBounds_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_6region_12RegionBounds_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  if (unlikely(PyTuple_GET_SIZE(__pyx_args) > 0)) {\n    __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 0, 0, PyTuple_GET_SIZE(__pyx_args)); return -1;}\n  if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, \"__cinit__\", 0))) return -1;\n  __pyx_r = __pyx_pf_6region_12RegionBounds___cinit__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self));\n\n  /* function exit code */\n  
__Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_6region_12RegionBounds___cinit__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"region.pyx\":24\n * \n *     def __cinit__(self):\n *         self._c_region_bounds = <c_region.region_bounds*>malloc(             # <<<<<<<<<<<<<<\n *                 sizeof(c_region.region_bounds))\n *         if not self._c_region_bounds:\n */\n  __pyx_v_self->_c_region_bounds = ((region_bounds *)malloc((sizeof(region_bounds))));\n\n  /* \"region.pyx\":26\n *         self._c_region_bounds = <c_region.region_bounds*>malloc(\n *                 sizeof(c_region.region_bounds))\n *         if not self._c_region_bounds:             # <<<<<<<<<<<<<<\n *             self._c_region_bounds = NULL\n *             raise MemoryError()\n */\n  __pyx_t_1 = ((!(__pyx_v_self->_c_region_bounds != 0)) != 0);\n  if (unlikely(__pyx_t_1)) {\n\n    /* \"region.pyx\":27\n *                 sizeof(c_region.region_bounds))\n *         if not self._c_region_bounds:\n *             self._c_region_bounds = NULL             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n * \n */\n    __pyx_v_self->_c_region_bounds = NULL;\n\n    /* \"region.pyx\":28\n *         if not self._c_region_bounds:\n *             self._c_region_bounds = NULL\n *             raise MemoryError()             # <<<<<<<<<<<<<<\n * \n *     def __init__(self, top, bottom, left, right):\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 28, __pyx_L1_error)\n\n    /* \"region.pyx\":26\n *         self._c_region_bounds = <c_region.region_bounds*>malloc(\n *                 sizeof(c_region.region_bounds))\n *         if not self._c_region_bounds:             # <<<<<<<<<<<<<<\n *             self._c_region_bounds = NULL\n *             raise MemoryError()\n */\n  }\n\n  /* \"region.pyx\":23\n *     cdef c_region.region_bounds* _c_region_bounds\n 
* \n *     def __cinit__(self):             # <<<<<<<<<<<<<<\n *         self._c_region_bounds = <c_region.region_bounds*>malloc(\n *                 sizeof(c_region.region_bounds))\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_AddTraceback(\"region.RegionBounds.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":30\n *             raise MemoryError()\n * \n *     def __init__(self, top, bottom, left, right):             # <<<<<<<<<<<<<<\n *         self.set(top, bottom, left, right)\n * \n */\n\n/* Python wrapper */\nstatic int __pyx_pw_6region_12RegionBounds_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_6region_12RegionBounds_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_top = 0;\n  PyObject *__pyx_v_bottom = 0;\n  PyObject *__pyx_v_left = 0;\n  PyObject *__pyx_v_right = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__init__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_top,&__pyx_n_s_bottom,&__pyx_n_s_left,&__pyx_n_s_right,0};\n    PyObject* values[4] = {0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch 
(pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_top)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bottom)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 1); __PYX_ERR(0, 30, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_left)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 2); __PYX_ERR(0, 30, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n        if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_right)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 3); __PYX_ERR(0, 30, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__init__\") < 0)) __PYX_ERR(0, 30, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n    }\n    __pyx_v_top = values[0];\n    __pyx_v_bottom = values[1];\n    __pyx_v_left = values[2];\n    __pyx_v_right = values[3];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 30, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.RegionBounds.__init__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  
__Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_12RegionBounds_2__init__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self), __pyx_v_top, __pyx_v_bottom, __pyx_v_left, __pyx_v_right);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_6region_12RegionBounds_2__init__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self, PyObject *__pyx_v_top, PyObject *__pyx_v_bottom, PyObject *__pyx_v_left, PyObject *__pyx_v_right) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__init__\", 0);\n\n  /* \"region.pyx\":31\n * \n *     def __init__(self, top, bottom, left, right):\n *         self.set(top, bottom, left, right)             # <<<<<<<<<<<<<<\n * \n *     def __dealloc__(self):\n */\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 31, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  __pyx_t_4 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n      __pyx_t_4 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, __pyx_v_top, __pyx_v_bottom, __pyx_v_left, __pyx_v_right};\n    __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 31, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if 
(__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, __pyx_v_top, __pyx_v_bottom, __pyx_v_left, __pyx_v_right};\n    __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 31, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  {\n    __pyx_t_5 = PyTuple_New(4+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 31, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    if (__pyx_t_3) {\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); __pyx_t_3 = NULL;\n    }\n    __Pyx_INCREF(__pyx_v_top);\n    __Pyx_GIVEREF(__pyx_v_top);\n    PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_top);\n    __Pyx_INCREF(__pyx_v_bottom);\n    __Pyx_GIVEREF(__pyx_v_bottom);\n    PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_bottom);\n    __Pyx_INCREF(__pyx_v_left);\n    __Pyx_GIVEREF(__pyx_v_left);\n    PyTuple_SET_ITEM(__pyx_t_5, 2+__pyx_t_4, __pyx_v_left);\n    __Pyx_INCREF(__pyx_v_right);\n    __Pyx_GIVEREF(__pyx_v_right);\n    PyTuple_SET_ITEM(__pyx_t_5, 3+__pyx_t_4, __pyx_v_right);\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 31, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"region.pyx\":30\n *             raise MemoryError()\n * \n *     def __init__(self, top, bottom, left, right):             # <<<<<<<<<<<<<<\n *         self.set(top, bottom, left, right)\n * \n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"region.RegionBounds.__init__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  
__pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":33\n *         self.set(top, bottom, left, right)\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_bounds is not NULL:\n *             free(self._c_region_bounds)\n */\n\n/* Python wrapper */\nstatic void __pyx_pw_6region_12RegionBounds_5__dealloc__(PyObject *__pyx_v_self); /*proto*/\nstatic void __pyx_pw_6region_12RegionBounds_5__dealloc__(PyObject *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__dealloc__ (wrapper)\", 0);\n  __pyx_pf_6region_12RegionBounds_4__dealloc__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_6region_12RegionBounds_4__dealloc__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__dealloc__\", 0);\n\n  /* \"region.pyx\":34\n * \n *     def __dealloc__(self):\n *         if self._c_region_bounds is not NULL:             # <<<<<<<<<<<<<<\n *             free(self._c_region_bounds)\n *             self._c_region_bounds = NULL\n */\n  __pyx_t_1 = ((__pyx_v_self->_c_region_bounds != NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":35\n *     def __dealloc__(self):\n *         if self._c_region_bounds is not NULL:\n *             free(self._c_region_bounds)             # <<<<<<<<<<<<<<\n *             self._c_region_bounds = NULL\n * \n */\n    free(__pyx_v_self->_c_region_bounds);\n\n    /* \"region.pyx\":36\n *         if self._c_region_bounds is not NULL:\n *             free(self._c_region_bounds)\n *             self._c_region_bounds = NULL             # <<<<<<<<<<<<<<\n * \n *     def __str__(self):\n */\n    __pyx_v_self->_c_region_bounds = NULL;\n\n    /* \"region.pyx\":34\n * \n *     def __dealloc__(self):\n *         if self._c_region_bounds is not NULL:             # 
<<<<<<<<<<<<<<\n *             free(self._c_region_bounds)\n *             self._c_region_bounds = NULL\n */\n  }\n\n  /* \"region.pyx\":33\n *         self.set(top, bottom, left, right)\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_bounds is not NULL:\n *             free(self._c_region_bounds)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"region.pyx\":38\n *             self._c_region_bounds = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(\n *                 self._c_region_bounds.top,\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_12RegionBounds_7__str__(PyObject *__pyx_v_self); /*proto*/\nstatic PyObject *__pyx_pw_6region_12RegionBounds_7__str__(PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__str__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_12RegionBounds_6__str__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_12RegionBounds_6__str__(struct __pyx_obj_6region_RegionBounds *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  PyObject *__pyx_t_9 = NULL;\n  __Pyx_RefNannySetupContext(\"__str__\", 0);\n\n  /* \"region.pyx\":39\n * \n *     def __str__(self):\n *         return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,\n */\n  __Pyx_XDECREF(__pyx_r);\n  
__pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_kp_s_top_3f_bottom_3f_left_3f_reight, __pyx_n_s_format); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 39, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n\n  /* \"region.pyx\":40\n *     def __str__(self):\n *         return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(\n *                 self._c_region_bounds.top,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,\n */\n  __pyx_t_3 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->top); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n\n  /* \"region.pyx\":41\n *         return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(\n *                 self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.left,\n *                 self._c_region_bounds.right)\n */\n  __pyx_t_4 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->bottom); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n\n  /* \"region.pyx\":42\n *                 self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.right)\n * \n */\n  __pyx_t_5 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->left); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 42, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n\n  /* \"region.pyx\":43\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,\n *                 self._c_region_bounds.right)             # <<<<<<<<<<<<<<\n * \n *     def get(self):\n */\n  __pyx_t_6 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->right); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 43, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __pyx_t_7 = 
NULL;\n  __pyx_t_8 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_7)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_7);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n      __pyx_t_8 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_7, __pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6};\n    __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 4+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 39, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_7, __pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6};\n    __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 4+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 39, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  } else\n  #endif\n  {\n    __pyx_t_9 = PyTuple_New(4+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 39, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    if (__pyx_t_7) {\n      __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;\n    }\n    __Pyx_GIVEREF(__pyx_t_3);\n    PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_3);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_t_4);\n    
__Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_9, 2+__pyx_t_8, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_9, 3+__pyx_t_8, __pyx_t_6);\n    __pyx_t_3 = 0;\n    __pyx_t_4 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 39, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":38\n *             self._c_region_bounds = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(\n *                 self._c_region_bounds.top,\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_9);\n  __Pyx_AddTraceback(\"region.RegionBounds.__str__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":45\n *                 self._c_region_bounds.right)\n * \n *     def get(self):             # <<<<<<<<<<<<<<\n *         return (self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_12RegionBounds_9get(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_6region_12RegionBounds_9get(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"get (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_12RegionBounds_8get(((struct 
__pyx_obj_6region_RegionBounds *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_12RegionBounds_8get(struct __pyx_obj_6region_RegionBounds *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"get\", 0);\n\n  /* \"region.pyx\":46\n * \n *     def get(self):\n *         return (self._c_region_bounds.top,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->top); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 46, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n\n  /* \"region.pyx\":47\n *     def get(self):\n *         return (self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.left,\n *                 self._c_region_bounds.right)\n */\n  __pyx_t_2 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->bottom); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 47, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n\n  /* \"region.pyx\":48\n *         return (self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.right)\n * \n */\n  __pyx_t_3 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->left); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 48, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n\n  /* \"region.pyx\":49\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,\n *                 self._c_region_bounds.right)             # <<<<<<<<<<<<<<\n * \n *     def 
set(self, top, bottom, left, right):\n */\n  __pyx_t_4 = PyFloat_FromDouble(__pyx_v_self->_c_region_bounds->right); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 49, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n\n  /* \"region.pyx\":46\n * \n *     def get(self):\n *         return (self._c_region_bounds.top,             # <<<<<<<<<<<<<<\n *                 self._c_region_bounds.bottom,\n *                 self._c_region_bounds.left,\n */\n  __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 46, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_2 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_4 = 0;\n  __pyx_r = __pyx_t_5;\n  __pyx_t_5 = 0;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":45\n *                 self._c_region_bounds.right)\n * \n *     def get(self):             # <<<<<<<<<<<<<<\n *         return (self._c_region_bounds.top,\n *                 self._c_region_bounds.bottom,\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"region.RegionBounds.get\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":51\n *                 self._c_region_bounds.right)\n * \n *     def set(self, top, bottom, left, right):             # <<<<<<<<<<<<<<\n *         self._c_region_bounds.top = top\n *         self._c_region_bounds.bottom = bottom\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_12RegionBounds_11set(PyObject *__pyx_v_self, PyObject 
*__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyObject *__pyx_pw_6region_12RegionBounds_11set(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_top = 0;\n  PyObject *__pyx_v_bottom = 0;\n  PyObject *__pyx_v_left = 0;\n  PyObject *__pyx_v_right = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"set (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_top,&__pyx_n_s_bottom,&__pyx_n_s_left,&__pyx_n_s_right,0};\n    PyObject* values[4] = {0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_top)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bottom)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 1); __PYX_ERR(0, 51, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_left)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 2); __PYX_ERR(0, 51, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n       
 if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_right)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 3); __PYX_ERR(0, 51, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"set\") < 0)) __PYX_ERR(0, 51, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n    }\n    __pyx_v_top = values[0];\n    __pyx_v_bottom = values[1];\n    __pyx_v_left = values[2];\n    __pyx_v_right = values[3];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 51, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.RegionBounds.set\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_12RegionBounds_10set(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self), __pyx_v_top, __pyx_v_bottom, __pyx_v_left, __pyx_v_right);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_12RegionBounds_10set(struct __pyx_obj_6region_RegionBounds *__pyx_v_self, PyObject *__pyx_v_top, PyObject *__pyx_v_bottom, PyObject *__pyx_v_left, PyObject *__pyx_v_right) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  float __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"set\", 0);\n\n  /* \"region.pyx\":52\n * \n *     def set(self, top, bottom, left, right):\n *         self._c_region_bounds.top = top             # <<<<<<<<<<<<<<\n 
*         self._c_region_bounds.bottom = bottom\n *         self._c_region_bounds.left = left\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_top); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 52, __pyx_L1_error)\n  __pyx_v_self->_c_region_bounds->top = __pyx_t_1;\n\n  /* \"region.pyx\":53\n *     def set(self, top, bottom, left, right):\n *         self._c_region_bounds.top = top\n *         self._c_region_bounds.bottom = bottom             # <<<<<<<<<<<<<<\n *         self._c_region_bounds.left = left\n *         self._c_region_bounds.right = right\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_bottom); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 53, __pyx_L1_error)\n  __pyx_v_self->_c_region_bounds->bottom = __pyx_t_1;\n\n  /* \"region.pyx\":54\n *         self._c_region_bounds.top = top\n *         self._c_region_bounds.bottom = bottom\n *         self._c_region_bounds.left = left             # <<<<<<<<<<<<<<\n *         self._c_region_bounds.right = right\n * \n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_left); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 54, __pyx_L1_error)\n  __pyx_v_self->_c_region_bounds->left = __pyx_t_1;\n\n  /* \"region.pyx\":55\n *         self._c_region_bounds.bottom = bottom\n *         self._c_region_bounds.left = left\n *         self._c_region_bounds.right = right             # <<<<<<<<<<<<<<\n * \n * cdef class Rectangle:\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_right); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 55, __pyx_L1_error)\n  __pyx_v_self->_c_region_bounds->right = __pyx_t_1;\n\n  /* \"region.pyx\":51\n *                 self._c_region_bounds.right)\n * \n *     def set(self, top, bottom, left, right):             # <<<<<<<<<<<<<<\n *         self._c_region_bounds.top = top\n *         self._c_region_bounds.bottom = bottom\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; 
__Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_AddTraceback(\"region.RegionBounds.set\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_12RegionBounds_13__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_6region_12RegionBounds_13__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_12RegionBounds_12__reduce_cython__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_12RegionBounds_12__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_RegionBounds *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 2, __pyx_L1_error)\n\n  
/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.RegionBounds.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_12RegionBounds_15__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_6region_12RegionBounds_15__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_12RegionBounds_14__setstate_cython__(((struct __pyx_obj_6region_RegionBounds *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_12RegionBounds_14__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_RegionBounds *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise 
TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 4, __pyx_L1_error)\n\n  /* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.RegionBounds.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":60\n *     cdef c_region.region_rectangle* _c_region_rectangle\n * \n *     def __cinit__(self):             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle = <c_region.region_rectangle*>malloc(\n *                 sizeof(c_region.region_rectangle))\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_6region_9Rectangle_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_6region_9Rectangle_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  if (unlikely(PyTuple_GET_SIZE(__pyx_args) > 0)) {\n    __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 0, 0, PyTuple_GET_SIZE(__pyx_args)); return -1;}\n  if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, \"__cinit__\", 0))) return -1;\n  __pyx_r = 
__pyx_pf_6region_9Rectangle___cinit__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_6region_9Rectangle___cinit__(struct __pyx_obj_6region_Rectangle *__pyx_v_self) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"region.pyx\":61\n * \n *     def __cinit__(self):\n *         self._c_region_rectangle = <c_region.region_rectangle*>malloc(             # <<<<<<<<<<<<<<\n *                 sizeof(c_region.region_rectangle))\n *         if not self._c_region_rectangle:\n */\n  __pyx_v_self->_c_region_rectangle = ((region_rectangle *)malloc((sizeof(region_rectangle))));\n\n  /* \"region.pyx\":63\n *         self._c_region_rectangle = <c_region.region_rectangle*>malloc(\n *                 sizeof(c_region.region_rectangle))\n *         if not self._c_region_rectangle:             # <<<<<<<<<<<<<<\n *             self._c_region_rectangle = NULL\n *             raise MemoryError()\n */\n  __pyx_t_1 = ((!(__pyx_v_self->_c_region_rectangle != 0)) != 0);\n  if (unlikely(__pyx_t_1)) {\n\n    /* \"region.pyx\":64\n *                 sizeof(c_region.region_rectangle))\n *         if not self._c_region_rectangle:\n *             self._c_region_rectangle = NULL             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n * \n */\n    __pyx_v_self->_c_region_rectangle = NULL;\n\n    /* \"region.pyx\":65\n *         if not self._c_region_rectangle:\n *             self._c_region_rectangle = NULL\n *             raise MemoryError()             # <<<<<<<<<<<<<<\n * \n *     def __init__(self, x, y, width, height):\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 65, __pyx_L1_error)\n\n    /* \"region.pyx\":63\n *         self._c_region_rectangle = <c_region.region_rectangle*>malloc(\n *                 sizeof(c_region.region_rectangle))\n *         if not self._c_region_rectangle:             # 
<<<<<<<<<<<<<<\n *             self._c_region_rectangle = NULL\n *             raise MemoryError()\n */\n  }\n\n  /* \"region.pyx\":60\n *     cdef c_region.region_rectangle* _c_region_rectangle\n * \n *     def __cinit__(self):             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle = <c_region.region_rectangle*>malloc(\n *                 sizeof(c_region.region_rectangle))\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_AddTraceback(\"region.Rectangle.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":67\n *             raise MemoryError()\n * \n *     def __init__(self, x, y, width, height):             # <<<<<<<<<<<<<<\n *         self.set(x, y, width, height)\n * \n */\n\n/* Python wrapper */\nstatic int __pyx_pw_6region_9Rectangle_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_6region_9Rectangle_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_x = 0;\n  PyObject *__pyx_v_y = 0;\n  PyObject *__pyx_v_width = 0;\n  PyObject *__pyx_v_height = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__init__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_x,&__pyx_n_s_y,&__pyx_n_s_width,&__pyx_n_s_height,0};\n    PyObject* values[4] = {0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 
0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_x)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_y)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 1); __PYX_ERR(0, 67, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_width)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 2); __PYX_ERR(0, 67, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n        if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_height)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, 3); __PYX_ERR(0, 67, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__init__\") < 0)) __PYX_ERR(0, 67, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n    }\n    __pyx_v_x = values[0];\n    __pyx_v_y = values[1];\n    __pyx_v_width = values[2];\n    __pyx_v_height = values[3];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 67, 
__pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.Rectangle.__init__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_9Rectangle_2__init__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self), __pyx_v_x, __pyx_v_y, __pyx_v_width, __pyx_v_height);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_6region_9Rectangle_2__init__(struct __pyx_obj_6region_Rectangle *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_y, PyObject *__pyx_v_width, PyObject *__pyx_v_height) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__init__\", 0);\n\n  /* \"region.pyx\":68\n * \n *     def __init__(self, x, y, width, height):\n *         self.set(x, y, width, height)             # <<<<<<<<<<<<<<\n * \n *     def __dealloc__(self):\n */\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_set); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 68, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  __pyx_t_4 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n      __pyx_t_4 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, __pyx_v_x, __pyx_v_y, __pyx_v_width, __pyx_v_height};\n    __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 68, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); 
__pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, __pyx_v_x, __pyx_v_y, __pyx_v_width, __pyx_v_height};\n    __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 68, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  {\n    __pyx_t_5 = PyTuple_New(4+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    if (__pyx_t_3) {\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); __pyx_t_3 = NULL;\n    }\n    __Pyx_INCREF(__pyx_v_x);\n    __Pyx_GIVEREF(__pyx_v_x);\n    PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_x);\n    __Pyx_INCREF(__pyx_v_y);\n    __Pyx_GIVEREF(__pyx_v_y);\n    PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_y);\n    __Pyx_INCREF(__pyx_v_width);\n    __Pyx_GIVEREF(__pyx_v_width);\n    PyTuple_SET_ITEM(__pyx_t_5, 2+__pyx_t_4, __pyx_v_width);\n    __Pyx_INCREF(__pyx_v_height);\n    __Pyx_GIVEREF(__pyx_v_height);\n    PyTuple_SET_ITEM(__pyx_t_5, 3+__pyx_t_4, __pyx_v_height);\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"region.pyx\":67\n *             raise MemoryError()\n * \n *     def __init__(self, x, y, width, height):             # <<<<<<<<<<<<<<\n *         self.set(x, y, width, height)\n * \n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"region.Rectangle.__init__\", 
__pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":70\n *         self.set(x, y, width, height)\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_rectangle is not NULL:\n *             free(self._c_region_rectangle)\n */\n\n/* Python wrapper */\nstatic void __pyx_pw_6region_9Rectangle_5__dealloc__(PyObject *__pyx_v_self); /*proto*/\nstatic void __pyx_pw_6region_9Rectangle_5__dealloc__(PyObject *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__dealloc__ (wrapper)\", 0);\n  __pyx_pf_6region_9Rectangle_4__dealloc__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_6region_9Rectangle_4__dealloc__(struct __pyx_obj_6region_Rectangle *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__dealloc__\", 0);\n\n  /* \"region.pyx\":71\n * \n *     def __dealloc__(self):\n *         if self._c_region_rectangle is not NULL:             # <<<<<<<<<<<<<<\n *             free(self._c_region_rectangle)\n *             self._c_region_rectangle = NULL\n */\n  __pyx_t_1 = ((__pyx_v_self->_c_region_rectangle != NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":72\n *     def __dealloc__(self):\n *         if self._c_region_rectangle is not NULL:\n *             free(self._c_region_rectangle)             # <<<<<<<<<<<<<<\n *             self._c_region_rectangle = NULL\n * \n */\n    free(__pyx_v_self->_c_region_rectangle);\n\n    /* \"region.pyx\":73\n *         if self._c_region_rectangle is not NULL:\n *             free(self._c_region_rectangle)\n *             self._c_region_rectangle = NULL             # <<<<<<<<<<<<<<\n * \n *     def __str__(self):\n */\n    __pyx_v_self->_c_region_rectangle = NULL;\n\n    /* \"region.pyx\":71\n * \n *     def 
__dealloc__(self):\n *         if self._c_region_rectangle is not NULL:             # <<<<<<<<<<<<<<\n *             free(self._c_region_rectangle)\n *             self._c_region_rectangle = NULL\n */\n  }\n\n  /* \"region.pyx\":70\n *         self.set(x, y, width, height)\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_rectangle is not NULL:\n *             free(self._c_region_rectangle)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"region.pyx\":75\n *             self._c_region_rectangle = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(\n *                 self._c_region_rectangle.x,\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_9Rectangle_7__str__(PyObject *__pyx_v_self); /*proto*/\nstatic PyObject *__pyx_pw_6region_9Rectangle_7__str__(PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__str__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_9Rectangle_6__str__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_9Rectangle_6__str__(struct __pyx_obj_6region_Rectangle *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  PyObject *__pyx_t_9 = NULL;\n  __Pyx_RefNannySetupContext(\"__str__\", 0);\n\n  /* \"region.pyx\":76\n * \n *     def __str__(self):\n *         return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.x,\n *                 
self._c_region_rectangle.y,\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_kp_s_x_3f_y_3f_width_3f_height_3f, __pyx_n_s_format); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n\n  /* \"region.pyx\":77\n *     def __str__(self):\n *         return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(\n *                 self._c_region_rectangle.x,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,\n */\n  __pyx_t_3 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->x); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n\n  /* \"region.pyx\":78\n *         return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(\n *                 self._c_region_rectangle.x,\n *                 self._c_region_rectangle.y,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.width,\n *                 self._c_region_rectangle.height)\n */\n  __pyx_t_4 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->y); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 78, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n\n  /* \"region.pyx\":79\n *                 self._c_region_rectangle.x,\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.height)\n * \n */\n  __pyx_t_5 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->width); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 79, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n\n  /* \"region.pyx\":80\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,\n *                 self._c_region_rectangle.height)             # <<<<<<<<<<<<<<\n * \n *     def set(self, x, y, width, height):\n */\n  __pyx_t_6 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->height); if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 80, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __pyx_t_7 = NULL;\n  __pyx_t_8 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_7)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_7);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n      __pyx_t_8 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_7, __pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6};\n    __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 4+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_7, __pyx_t_3, __pyx_t_4, __pyx_t_5, __pyx_t_6};\n    __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_8, 4+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  } else\n  #endif\n  {\n    __pyx_t_9 = PyTuple_New(4+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 76, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    if (__pyx_t_7) {\n      __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;\n    }\n    __Pyx_GIVEREF(__pyx_t_3);\n    PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_3);\n    
__Pyx_GIVEREF(__pyx_t_4);\n    PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_9, 2+__pyx_t_8, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_9, 3+__pyx_t_8, __pyx_t_6);\n    __pyx_t_3 = 0;\n    __pyx_t_4 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":75\n *             self._c_region_rectangle = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(\n *                 self._c_region_rectangle.x,\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_9);\n  __Pyx_AddTraceback(\"region.Rectangle.__str__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":82\n *                 self._c_region_rectangle.height)\n * \n *     def set(self, x, y, width, height):             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle.x = x\n *         self._c_region_rectangle.y = y\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_9Rectangle_9set(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyObject *__pyx_pw_6region_9Rectangle_9set(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_x = 0;\n  PyObject *__pyx_v_y = 0;\n  
PyObject *__pyx_v_width = 0;\n  PyObject *__pyx_v_height = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"set (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_x,&__pyx_n_s_y,&__pyx_n_s_width,&__pyx_n_s_height,0};\n    PyObject* values[4] = {0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_x)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_y)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 1); __PYX_ERR(0, 82, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_width)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 2); __PYX_ERR(0, 82, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n        if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_height)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, 3); __PYX_ERR(0, 82, __pyx_L3_error)\n        }\n      }\n      if 
(unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"set\") < 0)) __PYX_ERR(0, 82, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n    }\n    __pyx_v_x = values[0];\n    __pyx_v_y = values[1];\n    __pyx_v_width = values[2];\n    __pyx_v_height = values[3];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"set\", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 82, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.Rectangle.set\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_9Rectangle_8set(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self), __pyx_v_x, __pyx_v_y, __pyx_v_width, __pyx_v_height);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_9Rectangle_8set(struct __pyx_obj_6region_Rectangle *__pyx_v_self, PyObject *__pyx_v_x, PyObject *__pyx_v_y, PyObject *__pyx_v_width, PyObject *__pyx_v_height) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  float __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"set\", 0);\n\n  /* \"region.pyx\":83\n * \n *     def set(self, x, y, width, height):\n *         self._c_region_rectangle.x = x             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle.y = y\n *         self._c_region_rectangle.width = width\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_x); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 83, __pyx_L1_error)\n  
__pyx_v_self->_c_region_rectangle->x = __pyx_t_1;\n\n  /* \"region.pyx\":84\n *     def set(self, x, y, width, height):\n *         self._c_region_rectangle.x = x\n *         self._c_region_rectangle.y = y             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle.width = width\n *         self._c_region_rectangle.height = height\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_y); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L1_error)\n  __pyx_v_self->_c_region_rectangle->y = __pyx_t_1;\n\n  /* \"region.pyx\":85\n *         self._c_region_rectangle.x = x\n *         self._c_region_rectangle.y = y\n *         self._c_region_rectangle.width = width             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle.height = height\n * \n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_width); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 85, __pyx_L1_error)\n  __pyx_v_self->_c_region_rectangle->width = __pyx_t_1;\n\n  /* \"region.pyx\":86\n *         self._c_region_rectangle.y = y\n *         self._c_region_rectangle.width = width\n *         self._c_region_rectangle.height = height             # <<<<<<<<<<<<<<\n * \n *     def get(self):\n */\n  __pyx_t_1 = __pyx_PyFloat_AsFloat(__pyx_v_height); if (unlikely((__pyx_t_1 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 86, __pyx_L1_error)\n  __pyx_v_self->_c_region_rectangle->height = __pyx_t_1;\n\n  /* \"region.pyx\":82\n *                 self._c_region_rectangle.height)\n * \n *     def set(self, x, y, width, height):             # <<<<<<<<<<<<<<\n *         self._c_region_rectangle.x = x\n *         self._c_region_rectangle.y = y\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_AddTraceback(\"region.Rectangle.set\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return 
__pyx_r;\n}\n\n/* \"region.pyx\":88\n *         self._c_region_rectangle.height = height\n * \n *     def get(self):             # <<<<<<<<<<<<<<\n *         \"\"\"\n *         return:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_9Rectangle_11get(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic char __pyx_doc_6region_9Rectangle_10get[] = \"\\n        return:\\n            (x, y, width, height)\\n        \";\nstatic PyObject *__pyx_pw_6region_9Rectangle_11get(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"get (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_9Rectangle_10get(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_9Rectangle_10get(struct __pyx_obj_6region_Rectangle *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"get\", 0);\n\n  /* \"region.pyx\":93\n *             (x, y, width, height)\n *         \"\"\"\n *         return (self._c_region_rectangle.x,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 93, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n\n  /* \"region.pyx\":94\n *         \"\"\"\n *         return (self._c_region_rectangle.x,\n *                 self._c_region_rectangle.y,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.width,\n *                 self._c_region_rectangle.height)\n */\n  __pyx_t_2 = 
PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->y); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n\n  /* \"region.pyx\":95\n *         return (self._c_region_rectangle.x,\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.height)\n * \n */\n  __pyx_t_3 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->width); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 95, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n\n  /* \"region.pyx\":96\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,\n *                 self._c_region_rectangle.height)             # <<<<<<<<<<<<<<\n * \n * cdef class Polygon:\n */\n  __pyx_t_4 = PyFloat_FromDouble(__pyx_v_self->_c_region_rectangle->height); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 96, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n\n  /* \"region.pyx\":93\n *             (x, y, width, height)\n *         \"\"\"\n *         return (self._c_region_rectangle.x,             # <<<<<<<<<<<<<<\n *                 self._c_region_rectangle.y,\n *                 self._c_region_rectangle.width,\n */\n  __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 93, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_2 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_4 = 0;\n  __pyx_r = __pyx_t_5;\n  __pyx_t_5 = 0;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":88\n *         self._c_region_rectangle.height = height\n * \n *     def get(self):             # <<<<<<<<<<<<<<\n *         \"\"\"\n *         return:\n */\n\n  /* function exit 
code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"region.Rectangle.get\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_9Rectangle_13__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_6region_9Rectangle_13__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_9Rectangle_12__reduce_cython__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_9Rectangle_12__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Rectangle *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 
0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 2, __pyx_L1_error)\n\n  /* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.Rectangle.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_9Rectangle_15__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_6region_9Rectangle_15__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_9Rectangle_14__setstate_cython__(((struct __pyx_obj_6region_Rectangle *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_9Rectangle_14__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Rectangle *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def 
__setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 4, __pyx_L1_error)\n\n  /* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.Rectangle.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":101\n *     cdef c_region.region_polygon* _c_region_polygon\n * \n *     def __cinit__(self, points):             # <<<<<<<<<<<<<<\n *         \"\"\"\n *         args:\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_6region_7Polygon_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_6region_7Polygon_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_points = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_points,0};\n    PyObject* values[1] = {0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  
0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_points)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__cinit__\") < 0)) __PYX_ERR(0, 101, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 1) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n    }\n    __pyx_v_points = values[0];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 101, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.Polygon.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_7Polygon___cinit__(((struct __pyx_obj_6region_Polygon *)__pyx_v_self), __pyx_v_points);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_6region_7Polygon___cinit__(struct __pyx_obj_6region_Polygon *__pyx_v_self, PyObject *__pyx_v_points) {\n  PyObject *__pyx_v_num = NULL;\n  PyObject *__pyx_v_i = NULL;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  int __pyx_t_3;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  size_t __pyx_t_6;\n  PyObject *(*__pyx_t_7)(PyObject *);\n  PyObject *__pyx_t_8 = NULL;\n  float __pyx_t_9;\n  Py_ssize_t __pyx_t_10;\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"region.pyx\":107\n *             points = ((1, 1), (10, 10))\n *         \"\"\"\n *         num = len(points) // 2             # 
<<<<<<<<<<<<<<\n *         self._c_region_polygon = <c_region.region_polygon*>malloc(\n *                 sizeof(c_region.region_polygon))\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_points); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 107, __pyx_L1_error)\n  __pyx_t_2 = PyInt_FromSsize_t(__Pyx_div_Py_ssize_t(__pyx_t_1, 2)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 107, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_v_num = __pyx_t_2;\n  __pyx_t_2 = 0;\n\n  /* \"region.pyx\":108\n *         \"\"\"\n *         num = len(points) // 2\n *         self._c_region_polygon = <c_region.region_polygon*>malloc(             # <<<<<<<<<<<<<<\n *                 sizeof(c_region.region_polygon))\n *         if not self._c_region_polygon:\n */\n  __pyx_v_self->_c_region_polygon = ((region_polygon *)malloc((sizeof(region_polygon))));\n\n  /* \"region.pyx\":110\n *         self._c_region_polygon = <c_region.region_polygon*>malloc(\n *                 sizeof(c_region.region_polygon))\n *         if not self._c_region_polygon:             # <<<<<<<<<<<<<<\n *             self._c_region_polygon = NULL\n *             raise MemoryError()\n */\n  __pyx_t_3 = ((!(__pyx_v_self->_c_region_polygon != 0)) != 0);\n  if (unlikely(__pyx_t_3)) {\n\n    /* \"region.pyx\":111\n *                 sizeof(c_region.region_polygon))\n *         if not self._c_region_polygon:\n *             self._c_region_polygon = NULL             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n *         self._c_region_polygon.count = num\n */\n    __pyx_v_self->_c_region_polygon = NULL;\n\n    /* \"region.pyx\":112\n *         if not self._c_region_polygon:\n *             self._c_region_polygon = NULL\n *             raise MemoryError()             # <<<<<<<<<<<<<<\n *         self._c_region_polygon.count = num\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 112, __pyx_L1_error)\n\n    /* \"region.pyx\":110\n *         
self._c_region_polygon = <c_region.region_polygon*>malloc(\n *                 sizeof(c_region.region_polygon))\n *         if not self._c_region_polygon:             # <<<<<<<<<<<<<<\n *             self._c_region_polygon = NULL\n *             raise MemoryError()\n */\n  }\n\n  /* \"region.pyx\":113\n *             self._c_region_polygon = NULL\n *             raise MemoryError()\n *         self._c_region_polygon.count = num             # <<<<<<<<<<<<<<\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.x:\n */\n  __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_v_num); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 113, __pyx_L1_error)\n  __pyx_v_self->_c_region_polygon->count = __pyx_t_4;\n\n  /* \"region.pyx\":114\n *             raise MemoryError()\n *         self._c_region_polygon.count = num\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)             # <<<<<<<<<<<<<<\n *         if not self._c_region_polygon.x:\n *             raise MemoryError()\n */\n  __pyx_t_2 = __Pyx_PyInt_FromSize_t((sizeof(float))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 114, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_5 = PyNumber_Multiply(__pyx_t_2, __pyx_v_num); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 114, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_6 = __Pyx_PyInt_As_size_t(__pyx_t_5); if (unlikely((__pyx_t_6 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 114, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_self->_c_region_polygon->x = ((float *)malloc(__pyx_t_6));\n\n  /* \"region.pyx\":115\n *         self._c_region_polygon.count = num\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.x:             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * 
num)\n */\n  __pyx_t_3 = ((!(__pyx_v_self->_c_region_polygon->x != 0)) != 0);\n  if (unlikely(__pyx_t_3)) {\n\n    /* \"region.pyx\":116\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.x:\n *             raise MemoryError()             # <<<<<<<<<<<<<<\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.y:\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 116, __pyx_L1_error)\n\n    /* \"region.pyx\":115\n *         self._c_region_polygon.count = num\n *         self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.x:             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n */\n  }\n\n  /* \"region.pyx\":117\n *         if not self._c_region_polygon.x:\n *             raise MemoryError()\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)             # <<<<<<<<<<<<<<\n *         if not self._c_region_polygon.y:\n *             raise MemoryError()\n */\n  __pyx_t_5 = __Pyx_PyInt_FromSize_t((sizeof(float))); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 117, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_2 = PyNumber_Multiply(__pyx_t_5, __pyx_v_num); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 117, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_6 = __Pyx_PyInt_As_size_t(__pyx_t_2); if (unlikely((__pyx_t_6 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 117, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_self->_c_region_polygon->y = ((float *)malloc(__pyx_t_6));\n\n  /* \"region.pyx\":118\n *             raise MemoryError()\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.y:             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n * \n */\n  
__pyx_t_3 = ((!(__pyx_v_self->_c_region_polygon->y != 0)) != 0);\n  if (unlikely(__pyx_t_3)) {\n\n    /* \"region.pyx\":119\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.y:\n *             raise MemoryError()             # <<<<<<<<<<<<<<\n * \n *         for i in range(num):\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 119, __pyx_L1_error)\n\n    /* \"region.pyx\":118\n *             raise MemoryError()\n *         self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n *         if not self._c_region_polygon.y:             # <<<<<<<<<<<<<<\n *             raise MemoryError()\n * \n */\n  }\n\n  /* \"region.pyx\":121\n *             raise MemoryError()\n * \n *         for i in range(num):             # <<<<<<<<<<<<<<\n *             self._c_region_polygon.x[i] = points[i*2]\n *             self._c_region_polygon.y[i] = points[i*2+1]\n */\n  __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_v_num); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 121, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (likely(PyList_CheckExact(__pyx_t_2)) || PyTuple_CheckExact(__pyx_t_2)) {\n    __pyx_t_5 = __pyx_t_2; __Pyx_INCREF(__pyx_t_5); __pyx_t_1 = 0;\n    __pyx_t_7 = NULL;\n  } else {\n    __pyx_t_1 = -1; __pyx_t_5 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_7 = Py_TYPE(__pyx_t_5)->tp_iternext; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 121, __pyx_L1_error)\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  for (;;) {\n    if (likely(!__pyx_t_7)) {\n      if (likely(PyList_CheckExact(__pyx_t_5))) {\n        if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_5)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_2 = PyList_GET_ITEM(__pyx_t_5, __pyx_t_1); __Pyx_INCREF(__pyx_t_2); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_2 = 
PySequence_ITEM(__pyx_t_5, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        #endif\n      } else {\n        if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_5)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_5, __pyx_t_1); __Pyx_INCREF(__pyx_t_2); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_2 = PySequence_ITEM(__pyx_t_5, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_2);\n        #endif\n      }\n    } else {\n      __pyx_t_2 = __pyx_t_7(__pyx_t_5);\n      if (unlikely(!__pyx_t_2)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(0, 121, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_2);\n    }\n    __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_2);\n    __pyx_t_2 = 0;\n\n    /* \"region.pyx\":122\n * \n *         for i in range(num):\n *             self._c_region_polygon.x[i] = points[i*2]             # <<<<<<<<<<<<<<\n *             self._c_region_polygon.y[i] = points[i*2+1]\n * \n */\n    __pyx_t_2 = PyNumber_Multiply(__pyx_v_i, __pyx_int_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_8 = __Pyx_PyObject_GetItem(__pyx_v_points, __pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_9 = __pyx_PyFloat_AsFloat(__pyx_t_8); if (unlikely((__pyx_t_9 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && 
PyErr_Occurred())) __PYX_ERR(0, 122, __pyx_L1_error)\n    (__pyx_v_self->_c_region_polygon->x[__pyx_t_10]) = __pyx_t_9;\n\n    /* \"region.pyx\":123\n *         for i in range(num):\n *             self._c_region_polygon.x[i] = points[i*2]\n *             self._c_region_polygon.y[i] = points[i*2+1]             # <<<<<<<<<<<<<<\n * \n *     def __dealloc__(self):\n */\n    __pyx_t_8 = PyNumber_Multiply(__pyx_v_i, __pyx_int_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 123, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_2 = __Pyx_PyInt_AddObjC(__pyx_t_8, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 123, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_8 = __Pyx_PyObject_GetItem(__pyx_v_points, __pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 123, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_9 = __pyx_PyFloat_AsFloat(__pyx_t_8); if (unlikely((__pyx_t_9 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 123, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 123, __pyx_L1_error)\n    (__pyx_v_self->_c_region_polygon->y[__pyx_t_10]) = __pyx_t_9;\n\n    /* \"region.pyx\":121\n *             raise MemoryError()\n * \n *         for i in range(num):             # <<<<<<<<<<<<<<\n *             self._c_region_polygon.x[i] = points[i*2]\n *             self._c_region_polygon.y[i] = points[i*2+1]\n */\n  }\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n  /* \"region.pyx\":101\n *     cdef c_region.region_polygon* _c_region_polygon\n * \n *     def __cinit__(self, points):             # <<<<<<<<<<<<<<\n *         \"\"\"\n *         args:\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_5);\n  
__Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"region.Polygon.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_num);\n  __Pyx_XDECREF(__pyx_v_i);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":125\n *             self._c_region_polygon.y[i] = points[i*2+1]\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_polygon is not NULL:\n *             if self._c_region_polygon.x is not NULL:\n */\n\n/* Python wrapper */\nstatic void __pyx_pw_6region_7Polygon_3__dealloc__(PyObject *__pyx_v_self); /*proto*/\nstatic void __pyx_pw_6region_7Polygon_3__dealloc__(PyObject *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__dealloc__ (wrapper)\", 0);\n  __pyx_pf_6region_7Polygon_2__dealloc__(((struct __pyx_obj_6region_Polygon *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_6region_7Polygon_2__dealloc__(struct __pyx_obj_6region_Polygon *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__dealloc__\", 0);\n\n  /* \"region.pyx\":126\n * \n *     def __dealloc__(self):\n *         if self._c_region_polygon is not NULL:             # <<<<<<<<<<<<<<\n *             if self._c_region_polygon.x is not NULL:\n *                 free(self._c_region_polygon.x)\n */\n  __pyx_t_1 = ((__pyx_v_self->_c_region_polygon != NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":127\n *     def __dealloc__(self):\n *         if self._c_region_polygon is not NULL:\n *             if self._c_region_polygon.x is not NULL:             # <<<<<<<<<<<<<<\n *                 free(self._c_region_polygon.x)\n *                 self._c_region_polygon.x = NULL\n */\n    __pyx_t_1 = ((__pyx_v_self->_c_region_polygon->x != NULL) != 0);\n    if (__pyx_t_1) {\n\n      /* \"region.pyx\":128\n *         if self._c_region_polygon is not 
NULL:\n *             if self._c_region_polygon.x is not NULL:\n *                 free(self._c_region_polygon.x)             # <<<<<<<<<<<<<<\n *                 self._c_region_polygon.x = NULL\n *             if self._c_region_polygon.y is not NULL:\n */\n      free(__pyx_v_self->_c_region_polygon->x);\n\n      /* \"region.pyx\":129\n *             if self._c_region_polygon.x is not NULL:\n *                 free(self._c_region_polygon.x)\n *                 self._c_region_polygon.x = NULL             # <<<<<<<<<<<<<<\n *             if self._c_region_polygon.y is not NULL:\n *                 free(self._c_region_polygon.y)\n */\n      __pyx_v_self->_c_region_polygon->x = NULL;\n\n      /* \"region.pyx\":127\n *     def __dealloc__(self):\n *         if self._c_region_polygon is not NULL:\n *             if self._c_region_polygon.x is not NULL:             # <<<<<<<<<<<<<<\n *                 free(self._c_region_polygon.x)\n *                 self._c_region_polygon.x = NULL\n */\n    }\n\n    /* \"region.pyx\":130\n *                 free(self._c_region_polygon.x)\n *                 self._c_region_polygon.x = NULL\n *             if self._c_region_polygon.y is not NULL:             # <<<<<<<<<<<<<<\n *                 free(self._c_region_polygon.y)\n *                 self._c_region_polygon.y = NULL\n */\n    __pyx_t_1 = ((__pyx_v_self->_c_region_polygon->y != NULL) != 0);\n    if (__pyx_t_1) {\n\n      /* \"region.pyx\":131\n *                 self._c_region_polygon.x = NULL\n *             if self._c_region_polygon.y is not NULL:\n *                 free(self._c_region_polygon.y)             # <<<<<<<<<<<<<<\n *                 self._c_region_polygon.y = NULL\n *             free(self._c_region_polygon)\n */\n      free(__pyx_v_self->_c_region_polygon->y);\n\n      /* \"region.pyx\":132\n *             if self._c_region_polygon.y is not NULL:\n *                 free(self._c_region_polygon.y)\n *                 self._c_region_polygon.y = NULL             # 
<<<<<<<<<<<<<<\n *             free(self._c_region_polygon)\n *             self._c_region_polygon = NULL\n */\n      __pyx_v_self->_c_region_polygon->y = NULL;\n\n      /* \"region.pyx\":130\n *                 free(self._c_region_polygon.x)\n *                 self._c_region_polygon.x = NULL\n *             if self._c_region_polygon.y is not NULL:             # <<<<<<<<<<<<<<\n *                 free(self._c_region_polygon.y)\n *                 self._c_region_polygon.y = NULL\n */\n    }\n\n    /* \"region.pyx\":133\n *                 free(self._c_region_polygon.y)\n *                 self._c_region_polygon.y = NULL\n *             free(self._c_region_polygon)             # <<<<<<<<<<<<<<\n *             self._c_region_polygon = NULL\n * \n */\n    free(__pyx_v_self->_c_region_polygon);\n\n    /* \"region.pyx\":134\n *                 self._c_region_polygon.y = NULL\n *             free(self._c_region_polygon)\n *             self._c_region_polygon = NULL             # <<<<<<<<<<<<<<\n * \n *     def __str__(self):\n */\n    __pyx_v_self->_c_region_polygon = NULL;\n\n    /* \"region.pyx\":126\n * \n *     def __dealloc__(self):\n *         if self._c_region_polygon is not NULL:             # <<<<<<<<<<<<<<\n *             if self._c_region_polygon.x is not NULL:\n *                 free(self._c_region_polygon.x)\n */\n  }\n\n  /* \"region.pyx\":125\n *             self._c_region_polygon.y[i] = points[i*2+1]\n * \n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._c_region_polygon is not NULL:\n *             if self._c_region_polygon.x is not NULL:\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"region.pyx\":136\n *             self._c_region_polygon = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         ret = \"\"\n *         for i in range(self._c_region_polygon.count-1):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_7Polygon_5__str__(PyObject 
*__pyx_v_self); /*proto*/\nstatic PyObject *__pyx_pw_6region_7Polygon_5__str__(PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__str__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_7Polygon_4__str__(((struct __pyx_obj_6region_Polygon *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_7Polygon_4__str__(struct __pyx_obj_6region_Polygon *__pyx_v_self) {\n  PyObject *__pyx_v_ret = NULL;\n  long __pyx_v_i;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  long __pyx_t_1;\n  long __pyx_t_2;\n  long __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  int __pyx_t_9;\n  PyObject *__pyx_t_10 = NULL;\n  __Pyx_RefNannySetupContext(\"__str__\", 0);\n\n  /* \"region.pyx\":137\n * \n *     def __str__(self):\n *         ret = \"\"             # <<<<<<<<<<<<<<\n *         for i in range(self._c_region_polygon.count-1):\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n */\n  __Pyx_INCREF(__pyx_kp_s__5);\n  __pyx_v_ret = __pyx_kp_s__5;\n\n  /* \"region.pyx\":138\n *     def __str__(self):\n *         ret = \"\"\n *         for i in range(self._c_region_polygon.count-1):             # <<<<<<<<<<<<<<\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n *                     self._c_region_polygon.y[i])\n */\n  __pyx_t_1 = (__pyx_v_self->_c_region_polygon->count - 1);\n  __pyx_t_2 = __pyx_t_1;\n  for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) {\n    __pyx_v_i = __pyx_t_3;\n\n    /* \"region.pyx\":139\n *         ret = \"\"\n *         for i in range(self._c_region_polygon.count-1):\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],             # <<<<<<<<<<<<<<\n *                     
self._c_region_polygon.y[i])\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],\n */\n    __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_kp_s_3f_3f, __pyx_n_s_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 139, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = PyFloat_FromDouble((__pyx_v_self->_c_region_polygon->x[__pyx_v_i])); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 139, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n\n    /* \"region.pyx\":140\n *         for i in range(self._c_region_polygon.count-1):\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n *                     self._c_region_polygon.y[i])             # <<<<<<<<<<<<<<\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],\n *                 self._c_region_polygon.y[i])\n */\n    __pyx_t_7 = PyFloat_FromDouble((__pyx_v_self->_c_region_polygon->y[__pyx_v_i])); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 140, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_8 = NULL;\n    __pyx_t_9 = 0;\n    if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {\n      __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_5);\n      if (likely(__pyx_t_8)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n        __Pyx_INCREF(__pyx_t_8);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_5, function);\n        __pyx_t_9 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_5)) {\n      PyObject *__pyx_temp[3] = {__pyx_t_8, __pyx_t_6, __pyx_t_7};\n      __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_9, 2+__pyx_t_9); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 139, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {\n      
PyObject *__pyx_temp[3] = {__pyx_t_8, __pyx_t_6, __pyx_t_7};\n      __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_9, 2+__pyx_t_9); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 139, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_10 = PyTuple_New(2+__pyx_t_9); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 139, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      if (__pyx_t_8) {\n        __Pyx_GIVEREF(__pyx_t_8); PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_8); __pyx_t_8 = NULL;\n      }\n      __Pyx_GIVEREF(__pyx_t_6);\n      PyTuple_SET_ITEM(__pyx_t_10, 0+__pyx_t_9, __pyx_t_6);\n      __Pyx_GIVEREF(__pyx_t_7);\n      PyTuple_SET_ITEM(__pyx_t_10, 1+__pyx_t_9, __pyx_t_7);\n      __pyx_t_6 = 0;\n      __pyx_t_7 = 0;\n      __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_10, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 139, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n    /* \"region.pyx\":139\n *         ret = \"\"\n *         for i in range(self._c_region_polygon.count-1):\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],             # <<<<<<<<<<<<<<\n *                     self._c_region_polygon.y[i])\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],\n */\n    __pyx_t_5 = PyNumber_InPlaceAdd(__pyx_v_ret, __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 139, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF_SET(__pyx_v_ret, __pyx_t_5);\n    __pyx_t_5 = 0;\n  }\n\n  /* \"region.pyx\":141\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n *                     self._c_region_polygon.y[i])\n *         ret += \"({:.3f} 
{:.3f})\".format(self._c_region_polygon.x[i],             # <<<<<<<<<<<<<<\n *                 self._c_region_polygon.y[i])\n *         return ret\n */\n  __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_kp_s_3f_3f_2, __pyx_n_s_format); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_10 = PyFloat_FromDouble((__pyx_v_self->_c_region_polygon->x[__pyx_v_i])); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 141, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_10);\n\n  /* \"region.pyx\":142\n *                     self._c_region_polygon.y[i])\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],\n *                 self._c_region_polygon.y[i])             # <<<<<<<<<<<<<<\n *         return ret\n * \n */\n  __pyx_t_7 = PyFloat_FromDouble((__pyx_v_self->_c_region_polygon->y[__pyx_v_i])); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_7);\n  __pyx_t_6 = NULL;\n  __pyx_t_9 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {\n    __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4);\n    if (likely(__pyx_t_6)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n      __Pyx_INCREF(__pyx_t_6);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_4, function);\n      __pyx_t_9 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_4)) {\n    PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_t_10, __pyx_t_7};\n    __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-__pyx_t_9, 2+__pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {\n    PyObject *__pyx_temp[3] = {__pyx_t_6, __pyx_t_10, __pyx_t_7};\n    __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_4, 
__pyx_temp+1-__pyx_t_9, 2+__pyx_t_9); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n  } else\n  #endif\n  {\n    __pyx_t_8 = PyTuple_New(2+__pyx_t_9); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 141, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    if (__pyx_t_6) {\n      __Pyx_GIVEREF(__pyx_t_6); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_6); __pyx_t_6 = NULL;\n    }\n    __Pyx_GIVEREF(__pyx_t_10);\n    PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_9, __pyx_t_10);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_9, __pyx_t_7);\n    __pyx_t_10 = 0;\n    __pyx_t_7 = 0;\n    __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_8, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n\n  /* \"region.pyx\":141\n *             ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n *                     self._c_region_polygon.y[i])\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],             # <<<<<<<<<<<<<<\n *                 self._c_region_polygon.y[i])\n *         return ret\n */\n  __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_ret, __pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __Pyx_DECREF_SET(__pyx_v_ret, __pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"region.pyx\":143\n *         ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[i],\n *                 self._c_region_polygon.y[i])\n *         return ret             # <<<<<<<<<<<<<<\n * \n * def vot_overlap(polygon1, polygon2, bounds=None):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_ret);\n  __pyx_r = __pyx_v_ret;\n  goto __pyx_L0;\n\n  /* 
\"region.pyx\":136\n *             self._c_region_polygon = NULL\n * \n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         ret = \"\"\n *         for i in range(self._c_region_polygon.count-1):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_XDECREF(__pyx_t_10);\n  __Pyx_AddTraceback(\"region.Polygon.__str__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_ret);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_7Polygon_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_6region_7Polygon_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_7Polygon_6__reduce_cython__(((struct __pyx_obj_6region_Polygon *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_7Polygon_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Polygon *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n 
*     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 2, __pyx_L1_error)\n\n  /* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.Polygon.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_7Polygon_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_6region_7Polygon_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_6region_7Polygon_8__setstate_cython__(((struct __pyx_obj_6region_Polygon *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_7Polygon_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_6region_Polygon *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {\n  
PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 4, __pyx_L1_error)\n\n  /* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"region.Polygon.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":145\n *         return ret\n * \n * def vot_overlap(polygon1, polygon2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two polygon\n *     Args:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_1vot_overlap(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic char __pyx_doc_6region_vot_overlap[] = \" computing overlap between two polygon\\n    Args:\\n        polygon1: polygon tuple of points\\n        polygon2: polygon tuple of points\\n        bounds: tuple of (left, top, right, bottom) or tuple of (width height)\\n    Return:\\n        overlap: overlap between two polygons\\n    
\";\nstatic PyMethodDef __pyx_mdef_6region_1vot_overlap = {\"vot_overlap\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_6region_1vot_overlap, METH_VARARGS|METH_KEYWORDS, __pyx_doc_6region_vot_overlap};\nstatic PyObject *__pyx_pw_6region_1vot_overlap(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_polygon1 = 0;\n  PyObject *__pyx_v_polygon2 = 0;\n  PyObject *__pyx_v_bounds = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"vot_overlap (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_polygon1,&__pyx_n_s_polygon2,&__pyx_n_s_bounds,0};\n    PyObject* values[3] = {0,0,0};\n    values[2] = ((PyObject *)Py_None);\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_polygon1)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_polygon2)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"vot_overlap\", 0, 2, 3, 1); __PYX_ERR(0, 145, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (kw_args > 0) {\n          PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bounds);\n          if (value) { values[2] = value; kw_args--; }\n        }\n    
  }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"vot_overlap\") < 0)) __PYX_ERR(0, 145, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_polygon1 = values[0];\n    __pyx_v_polygon2 = values[1];\n    __pyx_v_bounds = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"vot_overlap\", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 145, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.vot_overlap\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_vot_overlap(__pyx_self, __pyx_v_polygon1, __pyx_v_polygon2, __pyx_v_bounds);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_vot_overlap(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_polygon1, PyObject *__pyx_v_polygon2, PyObject *__pyx_v_bounds) {\n  struct __pyx_obj_6region_Polygon *__pyx_v_polygon1_ = NULL;\n  struct __pyx_obj_6region_Polygon *__pyx_v_polygon2_ = NULL;\n  struct __pyx_obj_6region_RegionBounds *__pyx_v_pno_bounds = NULL;\n  float __pyx_v_only1;\n  float __pyx_v_only2;\n  region_polygon *__pyx_v_c_polygon1;\n  region_polygon *__pyx_v_c_polygon2;\n  region_bounds __pyx_v_no_bounds;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  Py_ssize_t __pyx_t_2;\n  int __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  
PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  PyObject *__pyx_t_9 = NULL;\n  PyObject *__pyx_t_10 = NULL;\n  PyObject *__pyx_t_11 = NULL;\n  PyObject *__pyx_t_12 = NULL;\n  PyObject *__pyx_t_13 = NULL;\n  int __pyx_t_14;\n  double __pyx_t_15;\n  region_polygon *__pyx_t_16;\n  __Pyx_RefNannySetupContext(\"vot_overlap\", 0);\n\n  /* \"region.pyx\":154\n *         overlap: overlap between two polygons\n *     \"\"\"\n *     if len(polygon1) == 1 or len(polygon2) == 1:             # <<<<<<<<<<<<<<\n *         return float(\"nan\")\n * \n */\n  __pyx_t_2 = PyObject_Length(__pyx_v_polygon1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 154, __pyx_L1_error)\n  __pyx_t_3 = ((__pyx_t_2 == 1) != 0);\n  if (!__pyx_t_3) {\n  } else {\n    __pyx_t_1 = __pyx_t_3;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_2 = PyObject_Length(__pyx_v_polygon2); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 154, __pyx_L1_error)\n  __pyx_t_3 = ((__pyx_t_2 == 1) != 0);\n  __pyx_t_1 = __pyx_t_3;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":155\n *     \"\"\"\n *     if len(polygon1) == 1 or len(polygon2) == 1:\n *         return float(\"nan\")             # <<<<<<<<<<<<<<\n * \n *     if len(polygon1) == 4:\n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_t_4 = __Pyx_PyNumber_Float(__pyx_n_s_nan); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 155, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_r = __pyx_t_4;\n    __pyx_t_4 = 0;\n    goto __pyx_L0;\n\n    /* \"region.pyx\":154\n *         overlap: overlap between two polygons\n *     \"\"\"\n *     if len(polygon1) == 1 or len(polygon2) == 1:             # <<<<<<<<<<<<<<\n *         return float(\"nan\")\n * \n */\n  }\n\n  /* \"region.pyx\":157\n *         return float(\"nan\")\n * \n *     if len(polygon1) == 4:             # <<<<<<<<<<<<<<\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1],\n 
*/\n  __pyx_t_2 = PyObject_Length(__pyx_v_polygon1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 157, __pyx_L1_error)\n  __pyx_t_1 = ((__pyx_t_2 == 4) != 0);\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":158\n * \n *     if len(polygon1) == 4:\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],             # <<<<<<<<<<<<<<\n *                              polygon1[0]+polygon1[2], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],\n */\n    __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_polygon1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 158, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_polygon1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 158, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n\n    /* \"region.pyx\":159\n *     if len(polygon1) == 4:\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1],             # <<<<<<<<<<<<<<\n *                              polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],\n *                              polygon1[0], polygon1[1]+polygon1[3]])\n */\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 159, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_polygon1, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 159, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_8 = PyNumber_Add(__pyx_t_6, __pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 159, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_polygon1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 
159, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n\n    /* \"region.pyx\":160\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],             # <<<<<<<<<<<<<<\n *                              polygon1[0], polygon1[1]+polygon1[3]])\n *     else:\n */\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_polygon1, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_t_10 = PyNumber_Add(__pyx_t_6, __pyx_t_9); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_10);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_polygon1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon1, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_11 = PyNumber_Add(__pyx_t_9, __pyx_t_6); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 160, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n\n    /* \"region.pyx\":161\n *                              polygon1[0]+polygon1[2], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],\n *                              polygon1[0], polygon1[1]+polygon1[3]])             # <<<<<<<<<<<<<<\n *     else:\n *         polygon1_ = Polygon(polygon1)\n */\n    __pyx_t_6 = 
__Pyx_GetItemInt(__pyx_v_polygon1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 161, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_polygon1, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 161, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_t_12 = __Pyx_GetItemInt(__pyx_v_polygon1, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 161, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_12);\n    __pyx_t_13 = PyNumber_Add(__pyx_t_9, __pyx_t_12); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 161, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_13);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n\n    /* \"region.pyx\":158\n * \n *     if len(polygon1) == 4:\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],             # <<<<<<<<<<<<<<\n *                              polygon1[0]+polygon1[2], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],\n */\n    __pyx_t_12 = PyList_New(8); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 158, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_12);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyList_SET_ITEM(__pyx_t_12, 0, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyList_SET_ITEM(__pyx_t_12, 1, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_8);\n    PyList_SET_ITEM(__pyx_t_12, 2, __pyx_t_8);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyList_SET_ITEM(__pyx_t_12, 3, __pyx_t_7);\n    __Pyx_GIVEREF(__pyx_t_10);\n    PyList_SET_ITEM(__pyx_t_12, 4, __pyx_t_10);\n    __Pyx_GIVEREF(__pyx_t_11);\n    PyList_SET_ITEM(__pyx_t_12, 5, __pyx_t_11);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyList_SET_ITEM(__pyx_t_12, 6, __pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_13);\n    PyList_SET_ITEM(__pyx_t_12, 7, __pyx_t_13);\n    __pyx_t_4 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_8 = 0;\n    __pyx_t_7 = 0;\n    __pyx_t_10 = 0;\n    __pyx_t_11 = 0;\n    __pyx_t_6 = 
0;\n    __pyx_t_13 = 0;\n    __pyx_t_13 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_6region_Polygon), __pyx_t_12); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 158, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_13);\n    __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    __pyx_v_polygon1_ = ((struct __pyx_obj_6region_Polygon *)__pyx_t_13);\n    __pyx_t_13 = 0;\n\n    /* \"region.pyx\":157\n *         return float(\"nan\")\n * \n *     if len(polygon1) == 4:             # <<<<<<<<<<<<<<\n *         polygon1_ = Polygon([polygon1[0], polygon1[1],\n *                              polygon1[0]+polygon1[2], polygon1[1],\n */\n    goto __pyx_L6;\n  }\n\n  /* \"region.pyx\":163\n *                              polygon1[0], polygon1[1]+polygon1[3]])\n *     else:\n *         polygon1_ = Polygon(polygon1)             # <<<<<<<<<<<<<<\n * \n *     if len(polygon2) == 4:\n */\n  /*else*/ {\n    __pyx_t_13 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_6region_Polygon), __pyx_v_polygon1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 163, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_13);\n    __pyx_v_polygon1_ = ((struct __pyx_obj_6region_Polygon *)__pyx_t_13);\n    __pyx_t_13 = 0;\n  }\n  __pyx_L6:;\n\n  /* \"region.pyx\":165\n *         polygon1_ = Polygon(polygon1)\n * \n *     if len(polygon2) == 4:             # <<<<<<<<<<<<<<\n *         polygon2_ = Polygon([polygon2[0], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1],\n */\n  __pyx_t_2 = PyObject_Length(__pyx_v_polygon2); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 165, __pyx_L1_error)\n  __pyx_t_1 = ((__pyx_t_2 == 4) != 0);\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":166\n * \n *     if len(polygon2) == 4:\n *         polygon2_ = Polygon([polygon2[0], polygon2[1],             # <<<<<<<<<<<<<<\n *                              polygon2[0]+polygon2[2], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],\n */\n    __pyx_t_13 = 
__Pyx_GetItemInt(__pyx_v_polygon2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 166, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_13);\n    __pyx_t_12 = __Pyx_GetItemInt(__pyx_v_polygon2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 166, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_12);\n\n    /* \"region.pyx\":167\n *     if len(polygon2) == 4:\n *         polygon2_ = Polygon([polygon2[0], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1],             # <<<<<<<<<<<<<<\n *                              polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],\n *                              polygon2[0], polygon2[1]+polygon2[3]])\n */\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 167, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_11 = __Pyx_GetItemInt(__pyx_v_polygon2, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 167, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n    __pyx_t_10 = PyNumber_Add(__pyx_t_6, __pyx_t_11); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 167, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_10);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n    __pyx_t_11 = __Pyx_GetItemInt(__pyx_v_polygon2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 167, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n\n    /* \"region.pyx\":168\n *         polygon2_ = Polygon([polygon2[0], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],             # <<<<<<<<<<<<<<\n *                              polygon2[0], polygon2[1]+polygon2[3]])\n *     else:\n */\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon2, 0, long, 1, __Pyx_PyInt_From_long, 0, 
0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_polygon2, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_8 = PyNumber_Add(__pyx_t_6, __pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_polygon2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon2, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = PyNumber_Add(__pyx_t_7, __pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n\n    /* \"region.pyx\":169\n *                              polygon2[0]+polygon2[2], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],\n *                              polygon2[0], polygon2[1]+polygon2[3]])             # <<<<<<<<<<<<<<\n *     else:\n *         polygon2_ = Polygon(polygon2)\n */\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygon2, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 169, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = __Pyx_GetItemInt(__pyx_v_polygon2, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 169, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_polygon2, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 169, 
__pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_9 = PyNumber_Add(__pyx_t_7, __pyx_t_4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 169, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n\n    /* \"region.pyx\":166\n * \n *     if len(polygon2) == 4:\n *         polygon2_ = Polygon([polygon2[0], polygon2[1],             # <<<<<<<<<<<<<<\n *                              polygon2[0]+polygon2[2], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],\n */\n    __pyx_t_4 = PyList_New(8); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 166, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_13);\n    PyList_SET_ITEM(__pyx_t_4, 0, __pyx_t_13);\n    __Pyx_GIVEREF(__pyx_t_12);\n    PyList_SET_ITEM(__pyx_t_4, 1, __pyx_t_12);\n    __Pyx_GIVEREF(__pyx_t_10);\n    PyList_SET_ITEM(__pyx_t_4, 2, __pyx_t_10);\n    __Pyx_GIVEREF(__pyx_t_11);\n    PyList_SET_ITEM(__pyx_t_4, 3, __pyx_t_11);\n    __Pyx_GIVEREF(__pyx_t_8);\n    PyList_SET_ITEM(__pyx_t_4, 4, __pyx_t_8);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyList_SET_ITEM(__pyx_t_4, 5, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyList_SET_ITEM(__pyx_t_4, 6, __pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_9);\n    PyList_SET_ITEM(__pyx_t_4, 7, __pyx_t_9);\n    __pyx_t_13 = 0;\n    __pyx_t_12 = 0;\n    __pyx_t_10 = 0;\n    __pyx_t_11 = 0;\n    __pyx_t_8 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_9 = 0;\n    __pyx_t_9 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_6region_Polygon), __pyx_t_4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 166, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_v_polygon2_ = ((struct __pyx_obj_6region_Polygon *)__pyx_t_9);\n    __pyx_t_9 = 0;\n\n    /* \"region.pyx\":165\n *         polygon1_ = Polygon(polygon1)\n * \n *     if len(polygon2) == 4:             # <<<<<<<<<<<<<<\n *         polygon2_ = 
Polygon([polygon2[0], polygon2[1],\n *                              polygon2[0]+polygon2[2], polygon2[1],\n */\n    goto __pyx_L7;\n  }\n\n  /* \"region.pyx\":171\n *                              polygon2[0], polygon2[1]+polygon2[3]])\n *     else:\n *         polygon2_ = Polygon(polygon2)             # <<<<<<<<<<<<<<\n * \n *     if bounds is not None and len(bounds) == 4:\n */\n  /*else*/ {\n    __pyx_t_9 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_6region_Polygon), __pyx_v_polygon2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_v_polygon2_ = ((struct __pyx_obj_6region_Polygon *)__pyx_t_9);\n    __pyx_t_9 = 0;\n  }\n  __pyx_L7:;\n\n  /* \"region.pyx\":173\n *         polygon2_ = Polygon(polygon2)\n * \n *     if bounds is not None and len(bounds) == 4:             # <<<<<<<<<<<<<<\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n *     elif bounds is not None and len(bounds) == 2:\n */\n  __pyx_t_3 = (__pyx_v_bounds != Py_None);\n  __pyx_t_14 = (__pyx_t_3 != 0);\n  if (__pyx_t_14) {\n  } else {\n    __pyx_t_1 = __pyx_t_14;\n    goto __pyx_L9_bool_binop_done;\n  }\n  __pyx_t_2 = PyObject_Length(__pyx_v_bounds); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 173, __pyx_L1_error)\n  __pyx_t_14 = ((__pyx_t_2 == 4) != 0);\n  __pyx_t_1 = __pyx_t_14;\n  __pyx_L9_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":174\n * \n *     if bounds is not None and len(bounds) == 4:\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])             # <<<<<<<<<<<<<<\n *     elif bounds is not None and len(bounds) == 2:\n *         pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])\n */\n    __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_bounds, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_bounds, 1, long, 1, 
__Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_bounds, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_bounds, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_8 = PyTuple_New(4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_GIVEREF(__pyx_t_9);\n    PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_9);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_8, 3, __pyx_t_5);\n    __pyx_t_9 = 0;\n    __pyx_t_4 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_6region_RegionBounds), __pyx_t_8, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 174, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_v_pno_bounds = ((struct __pyx_obj_6region_RegionBounds *)__pyx_t_5);\n    __pyx_t_5 = 0;\n\n    /* \"region.pyx\":173\n *         polygon2_ = Polygon(polygon2)\n * \n *     if bounds is not None and len(bounds) == 4:             # <<<<<<<<<<<<<<\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n *     elif bounds is not None and len(bounds) == 2:\n */\n    goto __pyx_L8;\n  }\n\n  /* \"region.pyx\":175\n *     if bounds is not None and len(bounds) == 4:\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n *     elif bounds is not None and len(bounds) == 2:             # <<<<<<<<<<<<<<\n *         pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])\n *     else:\n 
*/\n  __pyx_t_14 = (__pyx_v_bounds != Py_None);\n  __pyx_t_3 = (__pyx_t_14 != 0);\n  if (__pyx_t_3) {\n  } else {\n    __pyx_t_1 = __pyx_t_3;\n    goto __pyx_L11_bool_binop_done;\n  }\n  __pyx_t_2 = PyObject_Length(__pyx_v_bounds); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 175, __pyx_L1_error)\n  __pyx_t_3 = ((__pyx_t_2 == 2) != 0);\n  __pyx_t_1 = __pyx_t_3;\n  __pyx_L11_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"region.pyx\":176\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n *     elif bounds is not None and len(bounds) == 2:\n *         pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])             # <<<<<<<<<<<<<<\n *     else:\n *         pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),\n */\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_bounds, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_bounds, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_INCREF(__pyx_int_0);\n    __Pyx_GIVEREF(__pyx_int_0);\n    PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_int_0);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_5);\n    __Pyx_INCREF(__pyx_int_0);\n    __Pyx_GIVEREF(__pyx_int_0);\n    PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_int_0);\n    __Pyx_GIVEREF(__pyx_t_8);\n    PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_8);\n    __pyx_t_5 = 0;\n    __pyx_t_8 = 0;\n    __pyx_t_8 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_6region_RegionBounds), __pyx_t_6, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_v_pno_bounds = ((struct 
__pyx_obj_6region_RegionBounds *)__pyx_t_8);\n    __pyx_t_8 = 0;\n\n    /* \"region.pyx\":175\n *     if bounds is not None and len(bounds) == 4:\n *         pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n *     elif bounds is not None and len(bounds) == 2:             # <<<<<<<<<<<<<<\n *         pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])\n *     else:\n */\n    goto __pyx_L8;\n  }\n\n  /* \"region.pyx\":178\n *         pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])\n *     else:\n *         pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),             # <<<<<<<<<<<<<<\n *                                   -float(\"inf\"), float(\"inf\"))\n *     cdef float only1 = 0\n */\n  /*else*/ {\n    __pyx_t_15 = __Pyx_PyObject_AsDouble(__pyx_n_s_inf); if (unlikely(__pyx_t_15 == ((double)((double)-1)) && PyErr_Occurred())) __PYX_ERR(0, 178, __pyx_L1_error)\n    __pyx_t_8 = PyFloat_FromDouble((-__pyx_t_15)); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 178, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_6 = __Pyx_PyNumber_Float(__pyx_n_s_inf); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 178, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n\n    /* \"region.pyx\":179\n *     else:\n *         pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),\n *                                   -float(\"inf\"), float(\"inf\"))             # <<<<<<<<<<<<<<\n *     cdef float only1 = 0\n *     cdef float only2 = 0\n */\n    __pyx_t_15 = __Pyx_PyObject_AsDouble(__pyx_n_s_inf); if (unlikely(__pyx_t_15 == ((double)((double)-1)) && PyErr_Occurred())) __PYX_ERR(0, 179, __pyx_L1_error)\n    __pyx_t_5 = PyFloat_FromDouble((-__pyx_t_15)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 179, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_4 = __Pyx_PyNumber_Float(__pyx_n_s_inf); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 179, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n\n    /* \"region.pyx\":178\n *         pno_bounds = RegionBounds(0, 
bounds[1], 0, bounds[0])\n *     else:\n *         pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),             # <<<<<<<<<<<<<<\n *                                   -float(\"inf\"), float(\"inf\"))\n *     cdef float only1 = 0\n */\n    __pyx_t_9 = PyTuple_New(4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 178, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_GIVEREF(__pyx_t_8);\n    PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_8);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyTuple_SET_ITEM(__pyx_t_9, 3, __pyx_t_4);\n    __pyx_t_8 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_4 = 0;\n    __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_6region_RegionBounds), __pyx_t_9, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 178, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __pyx_v_pno_bounds = ((struct __pyx_obj_6region_RegionBounds *)__pyx_t_4);\n    __pyx_t_4 = 0;\n  }\n  __pyx_L8:;\n\n  /* \"region.pyx\":180\n *         pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),\n *                                   -float(\"inf\"), float(\"inf\"))\n *     cdef float only1 = 0             # <<<<<<<<<<<<<<\n *     cdef float only2 = 0\n *     cdef c_region.region_polygon* c_polygon1 = polygon1_._c_region_polygon\n */\n  __pyx_v_only1 = 0.0;\n\n  /* \"region.pyx\":181\n *                                   -float(\"inf\"), float(\"inf\"))\n *     cdef float only1 = 0\n *     cdef float only2 = 0             # <<<<<<<<<<<<<<\n *     cdef c_region.region_polygon* c_polygon1 = polygon1_._c_region_polygon\n *     cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon\n */\n  __pyx_v_only2 = 0.0;\n\n  /* \"region.pyx\":182\n *     cdef float only1 = 0\n *     cdef float only2 = 0\n *     cdef c_region.region_polygon* c_polygon1 = 
polygon1_._c_region_polygon             # <<<<<<<<<<<<<<\n *     cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon\n *     cdef c_region.region_bounds no_bounds = pno_bounds._c_region_bounds[0] # deference\n */\n  __pyx_t_16 = __pyx_v_polygon1_->_c_region_polygon;\n  __pyx_v_c_polygon1 = __pyx_t_16;\n\n  /* \"region.pyx\":183\n *     cdef float only2 = 0\n *     cdef c_region.region_polygon* c_polygon1 = polygon1_._c_region_polygon\n *     cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon             # <<<<<<<<<<<<<<\n *     cdef c_region.region_bounds no_bounds = pno_bounds._c_region_bounds[0] # deference\n *     return c_region.compute_polygon_overlap(c_polygon1,\n */\n  __pyx_t_16 = __pyx_v_polygon2_->_c_region_polygon;\n  __pyx_v_c_polygon2 = __pyx_t_16;\n\n  /* \"region.pyx\":184\n *     cdef c_region.region_polygon* c_polygon1 = polygon1_._c_region_polygon\n *     cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon\n *     cdef c_region.region_bounds no_bounds = pno_bounds._c_region_bounds[0] # deference             # <<<<<<<<<<<<<<\n *     return c_region.compute_polygon_overlap(c_polygon1,\n *                                             c_polygon2,\n */\n  __pyx_v_no_bounds = (__pyx_v_pno_bounds->_c_region_bounds[0]);\n\n  /* \"region.pyx\":185\n *     cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon\n *     cdef c_region.region_bounds no_bounds = pno_bounds._c_region_bounds[0] # deference\n *     return c_region.compute_polygon_overlap(c_polygon1,             # <<<<<<<<<<<<<<\n *                                             c_polygon2,\n *                                             &only1,\n */\n  __Pyx_XDECREF(__pyx_r);\n\n  /* \"region.pyx\":189\n *                                             &only1,\n *                                             &only2,\n *                                             no_bounds)             # <<<<<<<<<<<<<<\n * \n * def 
vot_overlap_traj(polygons1, polygons2, bounds=None):\n */\n  __pyx_t_4 = PyFloat_FromDouble(compute_polygon_overlap(__pyx_v_c_polygon1, __pyx_v_c_polygon2, (&__pyx_v_only1), (&__pyx_v_only2), __pyx_v_no_bounds)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 185, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_r = __pyx_t_4;\n  __pyx_t_4 = 0;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":145\n *         return ret\n * \n * def vot_overlap(polygon1, polygon2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two polygon\n *     Args:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_XDECREF(__pyx_t_9);\n  __Pyx_XDECREF(__pyx_t_10);\n  __Pyx_XDECREF(__pyx_t_11);\n  __Pyx_XDECREF(__pyx_t_12);\n  __Pyx_XDECREF(__pyx_t_13);\n  __Pyx_AddTraceback(\"region.vot_overlap\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_polygon1_);\n  __Pyx_XDECREF((PyObject *)__pyx_v_polygon2_);\n  __Pyx_XDECREF((PyObject *)__pyx_v_pno_bounds);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":191\n *                                             no_bounds)\n * \n * def vot_overlap_traj(polygons1, polygons2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two trajectory\n *     Args:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_3vot_overlap_traj(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic char __pyx_doc_6region_2vot_overlap_traj[] = \" computing overlap between two trajectory\\n    Args:\\n        polygons1: list of polygon\\n        polygons2: list of polygon\\n        bounds: tuple of (left, top, right, bottom) or tuple of (width height)\\n    Return:\\n        overlaps: overlaps between all 
pair of polygons\\n    \";\nstatic PyMethodDef __pyx_mdef_6region_3vot_overlap_traj = {\"vot_overlap_traj\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_6region_3vot_overlap_traj, METH_VARARGS|METH_KEYWORDS, __pyx_doc_6region_2vot_overlap_traj};\nstatic PyObject *__pyx_pw_6region_3vot_overlap_traj(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_polygons1 = 0;\n  PyObject *__pyx_v_polygons2 = 0;\n  PyObject *__pyx_v_bounds = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"vot_overlap_traj (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_polygons1,&__pyx_n_s_polygons2,&__pyx_n_s_bounds,0};\n    PyObject* values[3] = {0,0,0};\n    values[2] = ((PyObject *)Py_None);\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_polygons1)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_polygons2)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"vot_overlap_traj\", 0, 2, 3, 1); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (kw_args > 0) {\n          PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bounds);\n      
    if (value) { values[2] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"vot_overlap_traj\") < 0)) __PYX_ERR(0, 191, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_polygons1 = values[0];\n    __pyx_v_polygons2 = values[1];\n    __pyx_v_bounds = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"vot_overlap_traj\", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 191, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.vot_overlap_traj\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_2vot_overlap_traj(__pyx_self, __pyx_v_polygons1, __pyx_v_polygons2, __pyx_v_bounds);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_2vot_overlap_traj(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_polygons1, PyObject *__pyx_v_polygons2, PyObject *__pyx_v_bounds) {\n  PyObject *__pyx_v_overlaps = NULL;\n  Py_ssize_t __pyx_v_i;\n  PyObject *__pyx_v_overlap = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  Py_ssize_t __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  Py_ssize_t __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  __Pyx_RefNannySetupContext(\"vot_overlap_traj\", 0);\n\n  /* \"region.pyx\":200\n *         overlaps: 
overlaps between all pair of polygons\n *     \"\"\"\n *     assert len(polygons1) == len(polygons2)             # <<<<<<<<<<<<<<\n *     overlaps = []\n *     for i in range(len(polygons1)):\n */\n  #ifndef CYTHON_WITHOUT_ASSERTIONS\n  if (unlikely(!Py_OptimizeFlag)) {\n    __pyx_t_1 = PyObject_Length(__pyx_v_polygons1); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 200, __pyx_L1_error)\n    __pyx_t_2 = PyObject_Length(__pyx_v_polygons2); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 200, __pyx_L1_error)\n    if (unlikely(!((__pyx_t_1 == __pyx_t_2) != 0))) {\n      PyErr_SetNone(PyExc_AssertionError);\n      __PYX_ERR(0, 200, __pyx_L1_error)\n    }\n  }\n  #endif\n\n  /* \"region.pyx\":201\n *     \"\"\"\n *     assert len(polygons1) == len(polygons2)\n *     overlaps = []             # <<<<<<<<<<<<<<\n *     for i in range(len(polygons1)):\n *         overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)\n */\n  __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 201, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_v_overlaps = ((PyObject*)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"region.pyx\":202\n *     assert len(polygons1) == len(polygons2)\n *     overlaps = []\n *     for i in range(len(polygons1)):             # <<<<<<<<<<<<<<\n *         overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)\n *         overlaps.append(overlap)\n */\n  __pyx_t_2 = PyObject_Length(__pyx_v_polygons1); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(0, 202, __pyx_L1_error)\n  __pyx_t_1 = __pyx_t_2;\n  for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_1; __pyx_t_4+=1) {\n    __pyx_v_i = __pyx_t_4;\n\n    /* \"region.pyx\":203\n *     overlaps = []\n *     for i in range(len(polygons1)):\n *         overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)             # <<<<<<<<<<<<<<\n *         overlaps.append(overlap)\n *     return overlaps\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_3, 
__pyx_n_s_vot_overlap); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_polygons1, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_v_polygons2, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_6);\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_bounds, __pyx_v_bounds) < 0) __PYX_ERR(0, 203, __pyx_L1_error)\n    __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_7, __pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_overlap, __pyx_t_5);\n    __pyx_t_5 = 0;\n\n    /* \"region.pyx\":204\n *     for i in range(len(polygons1)):\n *         overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)\n *         overlaps.append(overlap)             # <<<<<<<<<<<<<<\n *     return overlaps\n * \n */\n    __pyx_t_8 = __Pyx_PyList_Append(__pyx_v_overlaps, __pyx_v_overlap); if (unlikely(__pyx_t_8 == ((int)-1))) __PYX_ERR(0, 204, __pyx_L1_error)\n  }\n\n  /* \"region.pyx\":205\n *         overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)\n *         overlaps.append(overlap)\n *     return 
overlaps             # <<<<<<<<<<<<<<\n * \n * \n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_overlaps);\n  __pyx_r = __pyx_v_overlaps;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":191\n *                                             no_bounds)\n * \n * def vot_overlap_traj(polygons1, polygons2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two trajectory\n *     Args:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_AddTraceback(\"region.vot_overlap_traj\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_overlaps);\n  __Pyx_XDECREF(__pyx_v_overlap);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"region.pyx\":208\n * \n * \n * def vot_float2str(template, float value):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Args:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_6region_5vot_float2str(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic char __pyx_doc_6region_4vot_float2str[] = \"\\n    Args:\\n        tempate: like \\\"%.3f\\\" in C syntax\\n        value: float value\\n    \";\nstatic PyMethodDef __pyx_mdef_6region_5vot_float2str = {\"vot_float2str\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_6region_5vot_float2str, METH_VARARGS|METH_KEYWORDS, __pyx_doc_6region_4vot_float2str};\nstatic PyObject *__pyx_pw_6region_5vot_float2str(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_template = 0;\n  float __pyx_v_value;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"vot_float2str (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_template,&__pyx_n_s_value,0};\n    PyObject* values[2] = {0,0};\n    if (unlikely(__pyx_kwds)) {\n      
Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_template)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_value)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"vot_float2str\", 1, 2, 2, 1); __PYX_ERR(0, 208, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"vot_float2str\") < 0)) __PYX_ERR(0, 208, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n    }\n    __pyx_v_template = values[0];\n    __pyx_v_value = __pyx_PyFloat_AsFloat(values[1]); if (unlikely((__pyx_v_value == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 208, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"vot_float2str\", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 208, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"region.vot_float2str\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_6region_4vot_float2str(__pyx_self, __pyx_v_template, 
__pyx_v_value);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_6region_4vot_float2str(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_template, float __pyx_v_value) {\n  PyObject *__pyx_v_ptemplate = 0;\n  char const *__pyx_v_ctemplate;\n  char *__pyx_v_output;\n  PyObject *__pyx_v_ret = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  char const *__pyx_t_4;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  int __pyx_t_7;\n  char const *__pyx_t_8;\n  PyObject *__pyx_t_9 = NULL;\n  PyObject *__pyx_t_10 = NULL;\n  PyObject *__pyx_t_11 = NULL;\n  PyObject *__pyx_t_12 = NULL;\n  PyObject *__pyx_t_13 = NULL;\n  PyObject *__pyx_t_14 = NULL;\n  __Pyx_RefNannySetupContext(\"vot_float2str\", 0);\n\n  /* \"region.pyx\":214\n *         value: float value\n *     \"\"\"\n *     cdef bytes ptemplate = template.encode()             # <<<<<<<<<<<<<<\n *     cdef const char* ctemplate = ptemplate\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)\n */\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_template, __pyx_n_s_encode); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 214, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) ? 
__Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(PyBytes_CheckExact(__pyx_t_1))||((__pyx_t_1) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"bytes\", Py_TYPE(__pyx_t_1)->tp_name), 0))) __PYX_ERR(0, 214, __pyx_L1_error)\n  __pyx_v_ptemplate = ((PyObject*)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"region.pyx\":215\n *     \"\"\"\n *     cdef bytes ptemplate = template.encode()\n *     cdef const char* ctemplate = ptemplate             # <<<<<<<<<<<<<<\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)\n *     if not output:\n */\n  if (unlikely(__pyx_v_ptemplate == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"expected bytes, NoneType found\");\n    __PYX_ERR(0, 215, __pyx_L1_error)\n  }\n  __pyx_t_4 = __Pyx_PyBytes_AsString(__pyx_v_ptemplate); if (unlikely((!__pyx_t_4) && PyErr_Occurred())) __PYX_ERR(0, 215, __pyx_L1_error)\n  __pyx_v_ctemplate = __pyx_t_4;\n\n  /* \"region.pyx\":216\n *     cdef bytes ptemplate = template.encode()\n *     cdef const char* ctemplate = ptemplate\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)             # <<<<<<<<<<<<<<\n *     if not output:\n *         raise MemoryError()\n */\n  __pyx_v_output = ((char *)malloc(((sizeof(char)) * 0x64)));\n\n  /* \"region.pyx\":217\n *     cdef const char* ctemplate = ptemplate\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)\n *     if not output:             # <<<<<<<<<<<<<<\n *         raise MemoryError()\n *     sprintf(output, ctemplate, value)\n */\n  __pyx_t_5 = ((!(__pyx_v_output != 0)) != 0);\n  if (unlikely(__pyx_t_5)) {\n\n    /* \"region.pyx\":218\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)\n *     if not output:\n *         raise MemoryError()             # 
<<<<<<<<<<<<<<\n *     sprintf(output, ctemplate, value)\n *     try:\n */\n    PyErr_NoMemory(); __PYX_ERR(0, 218, __pyx_L1_error)\n\n    /* \"region.pyx\":217\n *     cdef const char* ctemplate = ptemplate\n *     cdef char* output = <char*>malloc(sizeof(char) * 100)\n *     if not output:             # <<<<<<<<<<<<<<\n *         raise MemoryError()\n *     sprintf(output, ctemplate, value)\n */\n  }\n\n  /* \"region.pyx\":219\n *     if not output:\n *         raise MemoryError()\n *     sprintf(output, ctemplate, value)             # <<<<<<<<<<<<<<\n *     try:\n *         ret = output[:strlen(output)].decode()\n */\n  (void)(sprintf(__pyx_v_output, __pyx_v_ctemplate, __pyx_v_value));\n\n  /* \"region.pyx\":220\n *         raise MemoryError()\n *     sprintf(output, ctemplate, value)\n *     try:             # <<<<<<<<<<<<<<\n *         ret = output[:strlen(output)].decode()\n *     finally:\n */\n  /*try:*/ {\n\n    /* \"region.pyx\":221\n *     sprintf(output, ctemplate, value)\n *     try:\n *         ret = output[:strlen(output)].decode()             # <<<<<<<<<<<<<<\n *     finally:\n *         free(output)\n */\n    __pyx_t_1 = __Pyx_decode_c_string(__pyx_v_output, 0, strlen(__pyx_v_output), NULL, NULL, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 221, __pyx_L5_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_v_ret = __pyx_t_1;\n    __pyx_t_1 = 0;\n  }\n\n  /* \"region.pyx\":223\n *         ret = output[:strlen(output)].decode()\n *     finally:\n *         free(output)             # <<<<<<<<<<<<<<\n *     return ret\n */\n  /*finally:*/ {\n    /*normal exit:*/{\n      free(__pyx_v_output);\n      goto __pyx_L6;\n    }\n    __pyx_L5_error:;\n    /*exception exit:*/{\n      __Pyx_PyThreadState_declare\n      __Pyx_PyThreadState_assign\n      __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __pyx_t_13 = 0; __pyx_t_14 = 0;\n      __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n      __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n      
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_12, &__pyx_t_13, &__pyx_t_14);\n      if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11) < 0)) __Pyx_ErrFetch(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11);\n      __Pyx_XGOTREF(__pyx_t_9);\n      __Pyx_XGOTREF(__pyx_t_10);\n      __Pyx_XGOTREF(__pyx_t_11);\n      __Pyx_XGOTREF(__pyx_t_12);\n      __Pyx_XGOTREF(__pyx_t_13);\n      __Pyx_XGOTREF(__pyx_t_14);\n      __pyx_t_6 = __pyx_lineno; __pyx_t_7 = __pyx_clineno; __pyx_t_8 = __pyx_filename;\n      {\n        free(__pyx_v_output);\n      }\n      if (PY_MAJOR_VERSION >= 3) {\n        __Pyx_XGIVEREF(__pyx_t_12);\n        __Pyx_XGIVEREF(__pyx_t_13);\n        __Pyx_XGIVEREF(__pyx_t_14);\n        __Pyx_ExceptionReset(__pyx_t_12, __pyx_t_13, __pyx_t_14);\n      }\n      __Pyx_XGIVEREF(__pyx_t_9);\n      __Pyx_XGIVEREF(__pyx_t_10);\n      __Pyx_XGIVEREF(__pyx_t_11);\n      __Pyx_ErrRestore(__pyx_t_9, __pyx_t_10, __pyx_t_11);\n      __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; __pyx_t_13 = 0; __pyx_t_14 = 0;\n      __pyx_lineno = __pyx_t_6; __pyx_clineno = __pyx_t_7; __pyx_filename = __pyx_t_8;\n      goto __pyx_L1_error;\n    }\n    __pyx_L6:;\n  }\n\n  /* \"region.pyx\":224\n *     finally:\n *         free(output)\n *     return ret             # <<<<<<<<<<<<<<\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_ret);\n  __pyx_r = __pyx_v_ret;\n  goto __pyx_L0;\n\n  /* \"region.pyx\":208\n * \n * \n * def vot_float2str(template, float value):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Args:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"region.vot_float2str\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_ptemplate);\n  __Pyx_XDECREF(__pyx_v_ret);\n  
__Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":16\n * @cython.internal\n * cdef class __Pyx_EnumMeta(type):\n *     def __init__(cls, name, parents, dct):             # <<<<<<<<<<<<<<\n *         type.__init__(cls, name, parents, dct)\n *         cls.__members__ = __Pyx_OrderedDict()\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_8EnumBase_14__Pyx_EnumMeta_1__init__(PyObject *__pyx_v_cls, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_8EnumBase_14__Pyx_EnumMeta_1__init__(PyObject *__pyx_v_cls, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_name = 0;\n  PyObject *__pyx_v_parents = 0;\n  PyObject *__pyx_v_dct = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__init__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,&__pyx_n_s_parents,&__pyx_n_s_dct,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_parents)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 3, 3, 1); __PYX_ERR(1, 16, __pyx_L3_error)\n        }\n      
  CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dct)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 3, 3, 2); __PYX_ERR(1, 16, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__init__\") < 0)) __PYX_ERR(1, 16, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_name = values[0];\n    __pyx_v_parents = values[1];\n    __pyx_v_dct = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__init__\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 16, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__init__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumMeta___init__(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v_cls), __pyx_v_name, __pyx_v_parents, __pyx_v_dct);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_8EnumBase_14__Pyx_EnumMeta___init__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls, PyObject *__pyx_v_name, PyObject *__pyx_v_parents, PyObject *__pyx_v_dct) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__init__\", 0);\n\n  /* \"EnumBase\":17\n * cdef class __Pyx_EnumMeta(type):\n *     def __init__(cls, name, 
parents, dct):\n *         type.__init__(cls, name, parents, dct)             # <<<<<<<<<<<<<<\n *         cls.__members__ = __Pyx_OrderedDict()\n *     def __iter__(cls):\n */\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)(&PyType_Type)), __pyx_n_s_init); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  __pyx_t_4 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n      __pyx_t_4 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, ((PyObject *)__pyx_v_cls), __pyx_v_name, __pyx_v_parents, __pyx_v_dct};\n    __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {\n    PyObject *__pyx_temp[5] = {__pyx_t_3, ((PyObject *)__pyx_v_cls), __pyx_v_name, __pyx_v_parents, __pyx_v_dct};\n    __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-__pyx_t_4, 4+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_GOTREF(__pyx_t_1);\n  } else\n  #endif\n  {\n    __pyx_t_5 = PyTuple_New(4+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 17, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    if (__pyx_t_3) {\n      __Pyx_GIVEREF(__pyx_t_3); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); __pyx_t_3 = NULL;\n    }\n    __Pyx_INCREF(((PyObject *)__pyx_v_cls));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_cls));\n    PyTuple_SET_ITEM(__pyx_t_5, 
0+__pyx_t_4, ((PyObject *)__pyx_v_cls));\n    __Pyx_INCREF(__pyx_v_name);\n    __Pyx_GIVEREF(__pyx_v_name);\n    PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_name);\n    __Pyx_INCREF(__pyx_v_parents);\n    __Pyx_GIVEREF(__pyx_v_parents);\n    PyTuple_SET_ITEM(__pyx_t_5, 2+__pyx_t_4, __pyx_v_parents);\n    __Pyx_INCREF(__pyx_v_dct);\n    __Pyx_GIVEREF(__pyx_v_dct);\n    PyTuple_SET_ITEM(__pyx_t_5, 3+__pyx_t_4, __pyx_v_dct);\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"EnumBase\":18\n *     def __init__(cls, name, parents, dct):\n *         type.__init__(cls, name, parents, dct)\n *         cls.__members__ = __Pyx_OrderedDict()             # <<<<<<<<<<<<<<\n *     def __iter__(cls):\n *         return iter(cls.__members__.values())\n */\n  __Pyx_INCREF(__Pyx_OrderedDict);\n  __pyx_t_2 = __Pyx_OrderedDict; __pyx_t_5 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_5)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_5);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_5) ? 
__Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5) : __Pyx_PyObject_CallNoArg(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 18, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (__Pyx_PyObject_SetAttrStr(((PyObject *)__pyx_v_cls), __pyx_n_s_members, __pyx_t_1) < 0) __PYX_ERR(1, 18, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"EnumBase\":16\n * @cython.internal\n * cdef class __Pyx_EnumMeta(type):\n *     def __init__(cls, name, parents, dct):             # <<<<<<<<<<<<<<\n *         type.__init__(cls, name, parents, dct)\n *         cls.__members__ = __Pyx_OrderedDict()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__init__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":19\n *         type.__init__(cls, name, parents, dct)\n *         cls.__members__ = __Pyx_OrderedDict()\n *     def __iter__(cls):             # <<<<<<<<<<<<<<\n *         return iter(cls.__members__.values())\n *     def __getitem__(cls, name):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_3__iter__(PyObject *__pyx_v_cls); /*proto*/\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_3__iter__(PyObject *__pyx_v_cls) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__iter__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumMeta_2__iter__(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v_cls));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_2__iter__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls) {\n  
PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"__iter__\", 0);\n\n  /* \"EnumBase\":20\n *         cls.__members__ = __Pyx_OrderedDict()\n *     def __iter__(cls):\n *         return iter(cls.__members__.values())             # <<<<<<<<<<<<<<\n *     def __getitem__(cls, name):\n *         return cls.__members__[name]\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_cls), __pyx_n_s_members); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 20, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_values); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 20, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = NULL;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {\n    __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);\n    if (likely(__pyx_t_2)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_2);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_3, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_2) ? 
__Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_2) : __Pyx_PyObject_CallNoArg(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 20, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_t_3 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 20, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_r = __pyx_t_3;\n  __pyx_t_3 = 0;\n  goto __pyx_L0;\n\n  /* \"EnumBase\":19\n *         type.__init__(cls, name, parents, dct)\n *         cls.__members__ = __Pyx_OrderedDict()\n *     def __iter__(cls):             # <<<<<<<<<<<<<<\n *         return iter(cls.__members__.values())\n *     def __getitem__(cls, name):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__iter__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":21\n *     def __iter__(cls):\n *         return iter(cls.__members__.values())\n *     def __getitem__(cls, name):             # <<<<<<<<<<<<<<\n *         return cls.__members__[name]\n * \n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_5__getitem__(PyObject *__pyx_v_cls, PyObject *__pyx_v_name); /*proto*/\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_5__getitem__(PyObject *__pyx_v_cls, PyObject *__pyx_v_name) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getitem__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumMeta_4__getitem__(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v_cls), ((PyObject *)__pyx_v_name));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject 
*__pyx_pf_8EnumBase_14__Pyx_EnumMeta_4__getitem__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_cls, PyObject *__pyx_v_name) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  __Pyx_RefNannySetupContext(\"__getitem__\", 0);\n\n  /* \"EnumBase\":22\n *         return iter(cls.__members__.values())\n *     def __getitem__(cls, name):\n *         return cls.__members__[name]             # <<<<<<<<<<<<<<\n * \n * \n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_cls), __pyx_n_s_members); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 22, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_name); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 22, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_r = __pyx_t_2;\n  __pyx_t_2 = 0;\n  goto __pyx_L0;\n\n  /* \"EnumBase\":21\n *     def __iter__(cls):\n *         return iter(cls.__members__.values())\n *     def __getitem__(cls, name):             # <<<<<<<<<<<<<<\n *         return cls.__members__[name]\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__getitem__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     cdef tuple state\n *     cdef object _dict\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  
__Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumMeta_6__reduce_cython__(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_6__reduce_cython__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_self) {\n  PyObject *__pyx_v_state = 0;\n  PyObject *__pyx_v__dict = 0;\n  int __pyx_v_use_setstate;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  int __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":5\n *     cdef object _dict\n *     cdef bint use_setstate\n *     state = ()             # <<<<<<<<<<<<<<\n *     _dict = getattr(self, '__dict__', None)\n *     if _dict is not None:\n */\n  __Pyx_INCREF(__pyx_empty_tuple);\n  __pyx_v_state = __pyx_empty_tuple;\n\n  /* \"(tree fragment)\":6\n *     cdef bint use_setstate\n *     state = ()\n *     _dict = getattr(self, '__dict__', None)             # <<<<<<<<<<<<<<\n *     if _dict is not None:\n *         state += (_dict,)\n */\n  __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__dict = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"(tree fragment)\":7\n *     state = ()\n *     _dict = getattr(self, '__dict__', None)\n *     if _dict is not None:             # <<<<<<<<<<<<<<\n *         state += (_dict,)\n *         use_setstate = True\n */\n  __pyx_t_2 = (__pyx_v__dict != Py_None);\n  __pyx_t_3 = (__pyx_t_2 != 0);\n  if (__pyx_t_3) {\n\n    /* \"(tree fragment)\":8\n *     _dict = getattr(self, '__dict__', None)\n *     if _dict is not None:\n *         state += (_dict,)             # <<<<<<<<<<<<<<\n *         use_setstate = True\n *     else:\n 
*/\n    __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(__pyx_v__dict);\n    __Pyx_GIVEREF(__pyx_v__dict);\n    PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict);\n    __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4));\n    __pyx_t_4 = 0;\n\n    /* \"(tree fragment)\":9\n *     if _dict is not None:\n *         state += (_dict,)\n *         use_setstate = True             # <<<<<<<<<<<<<<\n *     else:\n *         use_setstate = False\n */\n    __pyx_v_use_setstate = 1;\n\n    /* \"(tree fragment)\":7\n *     state = ()\n *     _dict = getattr(self, '__dict__', None)\n *     if _dict is not None:             # <<<<<<<<<<<<<<\n *         state += (_dict,)\n *         use_setstate = True\n */\n    goto __pyx_L3;\n  }\n\n  /* \"(tree fragment)\":11\n *         use_setstate = True\n *     else:\n *         use_setstate = False             # <<<<<<<<<<<<<<\n *     if use_setstate:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, None), state\n */\n  /*else*/ {\n    __pyx_v_use_setstate = 0;\n  }\n  __pyx_L3:;\n\n  /* \"(tree fragment)\":12\n *     else:\n *         use_setstate = False\n *     if use_setstate:             # <<<<<<<<<<<<<<\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, None), state\n *     else:\n */\n  __pyx_t_3 = (__pyx_v_use_setstate != 0);\n  if (__pyx_t_3) {\n\n    /* \"(tree fragment)\":13\n *         use_setstate = False\n *     if use_setstate:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, None), state             # <<<<<<<<<<<<<<\n *     else:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, state)\n */\n    __Pyx_XDECREF(__pyx_r);\n    
__Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle___Pyx_EnumMeta); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    __Pyx_INCREF(__pyx_int_222419149);\n    __Pyx_GIVEREF(__pyx_int_222419149);\n    PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_222419149);\n    __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(Py_None);\n    PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None);\n    __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1);\n    __Pyx_INCREF(__pyx_v_state);\n    __Pyx_GIVEREF(__pyx_v_state);\n    PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state);\n    __pyx_t_4 = 0;\n    __pyx_t_1 = 0;\n    __pyx_r = __pyx_t_5;\n    __pyx_t_5 = 0;\n    goto __pyx_L0;\n\n    /* \"(tree fragment)\":12\n *     else:\n *         use_setstate = False\n *     if use_setstate:             # <<<<<<<<<<<<<<\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, None), state\n *     else:\n */\n  }\n\n  /* \"(tree fragment)\":15\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, None), state\n *     else:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, state)             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     __pyx_unpickle___Pyx_EnumMeta__set_state(self, __pyx_state)\n */\n  /*else*/ {\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle___Pyx_EnumMeta); 
if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));\n    __Pyx_INCREF(__pyx_int_222419149);\n    __Pyx_GIVEREF(__pyx_int_222419149);\n    PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_222419149);\n    __Pyx_INCREF(__pyx_v_state);\n    __Pyx_GIVEREF(__pyx_v_state);\n    PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);\n    __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);\n    __pyx_t_5 = 0;\n    __pyx_t_1 = 0;\n    __pyx_r = __pyx_t_4;\n    __pyx_t_4 = 0;\n    goto __pyx_L0;\n  }\n\n  /* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     cdef tuple state\n *     cdef object _dict\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_state);\n  __Pyx_XDECREF(__pyx_v__dict);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":16\n *     else:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, state)\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     __pyx_unpickle___Pyx_EnumMeta__set_state(self, __pyx_state)\n */\n\n/* Python wrapper */\nstatic PyObject 
*__pyx_pw_8EnumBase_14__Pyx_EnumMeta_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumMeta_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumMeta_8__setstate_cython__(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumMeta_8__setstate_cython__(struct __pyx_obj___Pyx_EnumMeta *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* \"(tree fragment)\":17\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, state)\n * def __setstate_cython__(self, __pyx_state):\n *     __pyx_unpickle___Pyx_EnumMeta__set_state(self, __pyx_state)             # <<<<<<<<<<<<<<\n */\n  if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"tuple\", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error)\n  __pyx_t_1 = __pyx_unpickle___Pyx_EnumMeta__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"(tree fragment)\":16\n *     else:\n *         return __pyx_unpickle___Pyx_EnumMeta, (type(self), 0xd41d8cd, state)\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     __pyx_unpickle___Pyx_EnumMeta__set_state(self, __pyx_state)\n */\n\n  /* function exit code */\n  __pyx_r = 
Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumMeta.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":28\n * class __Pyx_EnumBase(int):\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):             # <<<<<<<<<<<<<<\n *         for v in cls:\n *             if v == value:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_1__new__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_8EnumBase_14__Pyx_EnumBase_1__new__ = {\"__new__\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_8EnumBase_14__Pyx_EnumBase_1__new__, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_1__new__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_cls = 0;\n  PyObject *__pyx_v_value = 0;\n  PyObject *__pyx_v_name = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__new__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_cls,&__pyx_n_s_value,&__pyx_n_s_name,0};\n    PyObject* values[3] = {0,0,0};\n    values[2] = ((PyObject *)((PyObject *)Py_None));\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = 
PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_cls)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_value)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__new__\", 0, 2, 3, 1); __PYX_ERR(1, 28, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (kw_args > 0) {\n          PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name);\n          if (value) { values[2] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__new__\") < 0)) __PYX_ERR(1, 28, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_cls = values[0];\n    __pyx_v_value = values[1];\n    __pyx_v_name = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__new__\", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 28, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumBase.__new__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumBase___new__(__pyx_self, __pyx_v_cls, __pyx_v_value, __pyx_v_name);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject 
*__pyx_pf_8EnumBase_14__Pyx_EnumBase___new__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_cls, PyObject *__pyx_v_value, PyObject *__pyx_v_name) {\n  PyObject *__pyx_v_v = NULL;\n  PyObject *__pyx_v_res = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  Py_ssize_t __pyx_t_2;\n  PyObject *(*__pyx_t_3)(PyObject *);\n  PyObject *__pyx_t_4 = NULL;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  PyObject *__pyx_t_9 = NULL;\n  int __pyx_t_10;\n  __Pyx_RefNannySetupContext(\"__new__\", 0);\n\n  /* \"EnumBase\":29\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):\n *         for v in cls:             # <<<<<<<<<<<<<<\n *             if v == value:\n *                 return v\n */\n  if (likely(PyList_CheckExact(__pyx_v_cls)) || PyTuple_CheckExact(__pyx_v_cls)) {\n    __pyx_t_1 = __pyx_v_cls; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0;\n    __pyx_t_3 = NULL;\n  } else {\n    __pyx_t_2 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_cls); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 29, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_3 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 29, __pyx_L1_error)\n  }\n  for (;;) {\n    if (likely(!__pyx_t_3)) {\n      if (likely(PyList_CheckExact(__pyx_t_1))) {\n        if (__pyx_t_2 >= PyList_GET_SIZE(__pyx_t_1)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_4 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 29, __pyx_L1_error)\n        #else\n        __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 29, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        #endif\n      } else {\n        if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n      
  __pyx_t_4 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_4); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 29, __pyx_L1_error)\n        #else\n        __pyx_t_4 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 29, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        #endif\n      }\n    } else {\n      __pyx_t_4 = __pyx_t_3(__pyx_t_1);\n      if (unlikely(!__pyx_t_4)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(1, 29, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_4);\n    }\n    __Pyx_XDECREF_SET(__pyx_v_v, __pyx_t_4);\n    __pyx_t_4 = 0;\n\n    /* \"EnumBase\":30\n *     def __new__(cls, value, name=None):\n *         for v in cls:\n *             if v == value:             # <<<<<<<<<<<<<<\n *                 return v\n *         if name is None:\n */\n    __pyx_t_4 = PyObject_RichCompare(__pyx_v_v, __pyx_v_value, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 30, __pyx_L1_error)\n    __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_5 < 0)) __PYX_ERR(1, 30, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    if (__pyx_t_5) {\n\n      /* \"EnumBase\":31\n *         for v in cls:\n *             if v == value:\n *                 return v             # <<<<<<<<<<<<<<\n *         if name is None:\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)\n */\n      __Pyx_XDECREF(__pyx_r);\n      __Pyx_INCREF(__pyx_v_v);\n      __pyx_r = __pyx_v_v;\n      __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n      goto __pyx_L0;\n\n      /* \"EnumBase\":30\n *     def __new__(cls, value, name=None):\n *         for v in cls:\n *             if v == value:             # <<<<<<<<<<<<<<\n *                 return v\n *         if name is None:\n */\n    
}\n\n    /* \"EnumBase\":29\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):\n *         for v in cls:             # <<<<<<<<<<<<<<\n *             if v == value:\n *                 return v\n */\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"EnumBase\":32\n *             if v == value:\n *                 return v\n *         if name is None:             # <<<<<<<<<<<<<<\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)\n *         res = int.__new__(cls, value)\n */\n  __pyx_t_5 = (__pyx_v_name == Py_None);\n  __pyx_t_6 = (__pyx_t_5 != 0);\n  if (unlikely(__pyx_t_6)) {\n\n    /* \"EnumBase\":33\n *                 return v\n *         if name is None:\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)             # <<<<<<<<<<<<<<\n *         res = int.__new__(cls, value)\n *         res.name = name\n */\n    __pyx_t_1 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Unknown_enum_value_s, __pyx_v_value); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 33, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 33, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __PYX_ERR(1, 33, __pyx_L1_error)\n\n    /* \"EnumBase\":32\n *             if v == value:\n *                 return v\n *         if name is None:             # <<<<<<<<<<<<<<\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)\n *         res = int.__new__(cls, value)\n */\n  }\n\n  /* \"EnumBase\":34\n *         if name is None:\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)\n *         res = int.__new__(cls, value)             # <<<<<<<<<<<<<<\n *         res.name = name\n *         setattr(cls, name, res)\n */\n  __pyx_t_1 = 
__Pyx_PyObject_GetAttrStr(((PyObject *)(&PyInt_Type)), __pyx_n_s_new); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 34, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_7 = NULL;\n  __pyx_t_8 = 0;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {\n    __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_1);\n    if (likely(__pyx_t_7)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n      __Pyx_INCREF(__pyx_t_7);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_1, function);\n      __pyx_t_8 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_1)) {\n    PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_cls, __pyx_v_value};\n    __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 34, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {\n    PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_v_cls, __pyx_v_value};\n    __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 34, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n  } else\n  #endif\n  {\n    __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 34, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    if (__pyx_t_7) {\n      __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL;\n    }\n    __Pyx_INCREF(__pyx_v_cls);\n    __Pyx_GIVEREF(__pyx_v_cls);\n    PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_v_cls);\n    __Pyx_INCREF(__pyx_v_value);\n    __Pyx_GIVEREF(__pyx_v_value);\n    PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_value);\n    __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_9, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 34, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n 
   __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_res = __pyx_t_4;\n  __pyx_t_4 = 0;\n\n  /* \"EnumBase\":35\n *             raise ValueError(\"Unknown enum value: '%s'\" % value)\n *         res = int.__new__(cls, value)\n *         res.name = name             # <<<<<<<<<<<<<<\n *         setattr(cls, name, res)\n *         cls.__members__[name] = res\n */\n  if (__Pyx_PyObject_SetAttrStr(__pyx_v_res, __pyx_n_s_name, __pyx_v_name) < 0) __PYX_ERR(1, 35, __pyx_L1_error)\n\n  /* \"EnumBase\":36\n *         res = int.__new__(cls, value)\n *         res.name = name\n *         setattr(cls, name, res)             # <<<<<<<<<<<<<<\n *         cls.__members__[name] = res\n *         return res\n */\n  __pyx_t_10 = PyObject_SetAttr(__pyx_v_cls, __pyx_v_name, __pyx_v_res); if (unlikely(__pyx_t_10 == ((int)-1))) __PYX_ERR(1, 36, __pyx_L1_error)\n\n  /* \"EnumBase\":37\n *         res.name = name\n *         setattr(cls, name, res)\n *         cls.__members__[name] = res             # <<<<<<<<<<<<<<\n *         return res\n *     def __repr__(self):\n */\n  __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_cls, __pyx_n_s_members); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 37, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  if (unlikely(PyObject_SetItem(__pyx_t_4, __pyx_v_name, __pyx_v_res) < 0)) __PYX_ERR(1, 37, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n\n  /* \"EnumBase\":38\n *         setattr(cls, name, res)\n *         cls.__members__[name] = res\n *         return res             # <<<<<<<<<<<<<<\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_res);\n  __pyx_r = __pyx_v_res;\n  goto __pyx_L0;\n\n  /* \"EnumBase\":28\n * class __Pyx_EnumBase(int):\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):             # <<<<<<<<<<<<<<\n *         for v in cls:\n *            
 if v == value:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_9);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumBase.__new__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_v);\n  __Pyx_XDECREF(__pyx_v_res);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":39\n *         cls.__members__[name] = res\n *         return res\n *     def __repr__(self):             # <<<<<<<<<<<<<<\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_3__repr__(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/\nstatic PyMethodDef __pyx_mdef_8EnumBase_14__Pyx_EnumBase_3__repr__ = {\"__repr__\", (PyCFunction)__pyx_pw_8EnumBase_14__Pyx_EnumBase_3__repr__, METH_O, 0};\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_3__repr__(PyObject *__pyx_self, PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__repr__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumBase_2__repr__(__pyx_self, ((PyObject *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumBase_2__repr__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"__repr__\", 0);\n\n  /* \"EnumBase\":40\n *         return res\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)             # <<<<<<<<<<<<<<\n *     def __str__(self):\n *         return 
\"%s.%s\" % (self.__class__.__name__, self.name)\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_class); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_name_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_name); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);\n  __Pyx_INCREF(__pyx_v_self);\n  __Pyx_GIVEREF(__pyx_v_self);\n  PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_self);\n  __pyx_t_2 = 0;\n  __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_s_s_d, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"EnumBase\":39\n *         cls.__members__[name] = res\n *         return res\n *     def __repr__(self):             # <<<<<<<<<<<<<<\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumBase.__repr__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"EnumBase\":41\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % 
(self.__class__.__name__, self.name, self)\n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_5__str__(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/\nstatic PyMethodDef __pyx_mdef_8EnumBase_14__Pyx_EnumBase_5__str__ = {\"__str__\", (PyCFunction)__pyx_pw_8EnumBase_14__Pyx_EnumBase_5__str__, METH_O, 0};\nstatic PyObject *__pyx_pw_8EnumBase_14__Pyx_EnumBase_5__str__(PyObject *__pyx_self, PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__str__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_8EnumBase_14__Pyx_EnumBase_4__str__(__pyx_self, ((PyObject *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase_14__Pyx_EnumBase_4__str__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"__str__\", 0);\n\n  /* \"EnumBase\":42\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)             # <<<<<<<<<<<<<<\n * \n * if PY_VERSION_HEX >= 0x03040000:\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_class); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 42, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_name_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 42, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_name); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 42, 
__pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 42, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);\n  __pyx_t_2 = 0;\n  __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_s_s, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 42, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"EnumBase\":41\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"EnumBase.__Pyx_EnumBase.__str__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __pyx_unpickle___Pyx_EnumMeta(__pyx_type, long __pyx_checksum, __pyx_state):             # <<<<<<<<<<<<<<\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_8EnumBase_1__pyx_unpickle___Pyx_EnumMeta(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_8EnumBase_1__pyx_unpickle___Pyx_EnumMeta = {\"__pyx_unpickle___Pyx_EnumMeta\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_8EnumBase_1__pyx_unpickle___Pyx_EnumMeta, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_8EnumBase_1__pyx_unpickle___Pyx_EnumMeta(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  
PyObject *__pyx_v___pyx_type = 0;\n  long __pyx_v___pyx_checksum;\n  PyObject *__pyx_v___pyx_state = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__pyx_unpickle___Pyx_EnumMeta (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__pyx_unpickle___Pyx_EnumMeta\", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__pyx_unpickle___Pyx_EnumMeta\", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__pyx_unpickle___Pyx_EnumMeta\") < 0)) __PYX_ERR(1, 1, __pyx_L3_error)\n      }\n    } else if 
(PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v___pyx_type = values[0];\n    __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error)\n    __pyx_v___pyx_state = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__pyx_unpickle___Pyx_EnumMeta\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"EnumBase.__pyx_unpickle___Pyx_EnumMeta\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_8EnumBase___pyx_unpickle___Pyx_EnumMeta(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_8EnumBase___pyx_unpickle___Pyx_EnumMeta(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_v___pyx_PickleError = 0;\n  PyObject *__pyx_v___pyx_result = 0;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  int __pyx_t_6;\n  __Pyx_RefNannySetupContext(\"__pyx_unpickle___Pyx_EnumMeta\", 0);\n\n  /* \"(tree fragment)\":4\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n *     if __pyx_checksum != 0xd41d8cd:             # <<<<<<<<<<<<<<\n *         from pickle import PickleError as __pyx_PickleError\n *         raise 
__pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n */\n  __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xd41d8cd) != 0);\n  if (__pyx_t_1) {\n\n    /* \"(tree fragment)\":5\n *     cdef object __pyx_result\n *     if __pyx_checksum != 0xd41d8cd:\n *         from pickle import PickleError as __pyx_PickleError             # <<<<<<<<<<<<<<\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)\n */\n    __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_INCREF(__pyx_n_s_PickleError);\n    __Pyx_GIVEREF(__pyx_n_s_PickleError);\n    PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError);\n    __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, -1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_INCREF(__pyx_t_2);\n    __pyx_v___pyx_PickleError = __pyx_t_2;\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n    /* \"(tree fragment)\":6\n *     if __pyx_checksum != 0xd41d8cd:\n *         from pickle import PickleError as __pyx_PickleError\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)             # <<<<<<<<<<<<<<\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)\n *     if __pyx_state is not None:\n */\n    __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xd4, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)\n    
__Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_INCREF(__pyx_v___pyx_PickleError);\n    __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n      __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);\n      if (likely(__pyx_t_5)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n        __Pyx_INCREF(__pyx_t_5);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_2, function);\n      }\n    }\n    __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4);\n    __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 6, __pyx_L1_error)\n\n    /* \"(tree fragment)\":4\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n *     if __pyx_checksum != 0xd41d8cd:             # <<<<<<<<<<<<<<\n *         from pickle import PickleError as __pyx_PickleError\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n */\n  }\n\n  /* \"(tree fragment)\":7\n *         from pickle import PickleError as __pyx_PickleError\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)             # <<<<<<<<<<<<<<\n *     if __pyx_state is not None:\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n */\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_ptype___Pyx_EnumMeta), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_4 = NULL;\n  
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_4)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_4);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);\n  __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v___pyx_result = __pyx_t_3;\n  __pyx_t_3 = 0;\n\n  /* \"(tree fragment)\":8\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)\n *     if __pyx_state is not None:             # <<<<<<<<<<<<<<\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n *     return __pyx_result\n */\n  __pyx_t_1 = (__pyx_v___pyx_state != Py_None);\n  __pyx_t_6 = (__pyx_t_1 != 0);\n  if (__pyx_t_6) {\n\n    /* \"(tree fragment)\":9\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)\n *     if __pyx_state is not None:\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)             # <<<<<<<<<<<<<<\n *     return __pyx_result\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):\n */\n    if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"tuple\", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)\n    __pyx_t_3 = __pyx_unpickle___Pyx_EnumMeta__set_state(((struct __pyx_obj___Pyx_EnumMeta *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n    /* \"(tree fragment)\":8\n *         raise __pyx_PickleError(\"Incompatible checksums (%s vs 0xd41d8cd = ())\" % __pyx_checksum)\n *     __pyx_result = __Pyx_EnumMeta.__new__(__pyx_type)\n *     if __pyx_state is not None:             # <<<<<<<<<<<<<<\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n *     return __pyx_result\n */\n  }\n\n  /* \"(tree fragment)\":10\n *     if __pyx_state is not None:\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n *     return __pyx_result             # <<<<<<<<<<<<<<\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v___pyx_result);\n  __pyx_r = __pyx_v___pyx_result;\n  goto __pyx_L0;\n\n  /* \"(tree fragment)\":1\n * def __pyx_unpickle___Pyx_EnumMeta(__pyx_type, long __pyx_checksum, __pyx_state):             # <<<<<<<<<<<<<<\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"EnumBase.__pyx_unpickle___Pyx_EnumMeta\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v___pyx_PickleError);\n  __Pyx_XDECREF(__pyx_v___pyx_result);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":11\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n *     return __pyx_result\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):         
    # <<<<<<<<<<<<<<\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):\n *         __pyx_result.__dict__.update(__pyx_state[0])\n */\n\nstatic PyObject *__pyx_unpickle___Pyx_EnumMeta__set_state(struct __pyx_obj___Pyx_EnumMeta *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  Py_ssize_t __pyx_t_2;\n  int __pyx_t_3;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"__pyx_unpickle___Pyx_EnumMeta__set_state\", 0);\n\n  /* \"(tree fragment)\":12\n *     return __pyx_result\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):             # <<<<<<<<<<<<<<\n *         __pyx_result.__dict__.update(__pyx_state[0])\n */\n  if (unlikely(__pyx_v___pyx_state == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"object of type 'NoneType' has no len()\");\n    __PYX_ERR(1, 12, __pyx_L1_error)\n  }\n  __pyx_t_2 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(1, 12, __pyx_L1_error)\n  __pyx_t_3 = ((__pyx_t_2 > 0) != 0);\n  if (__pyx_t_3) {\n  } else {\n    __pyx_t_1 = __pyx_t_3;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_3 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 12, __pyx_L1_error)\n  __pyx_t_4 = (__pyx_t_3 != 0);\n  __pyx_t_1 = __pyx_t_4;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"(tree fragment)\":13\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):\n *         __pyx_result.__dict__.update(__pyx_state[0])             # <<<<<<<<<<<<<<\n */\n    __pyx_t_6 = 
__Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (unlikely(__pyx_v___pyx_state == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 13, __pyx_L1_error)\n    }\n    __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_8 = NULL;\n    if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {\n      __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7);\n      if (likely(__pyx_t_8)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);\n        __Pyx_INCREF(__pyx_t_8);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_7, function);\n      }\n    }\n    __pyx_t_5 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);\n    __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n    /* \"(tree fragment)\":12\n *     return __pyx_result\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):             # <<<<<<<<<<<<<<\n *         __pyx_result.__dict__.update(__pyx_state[0])\n */\n  }\n\n  /* \"(tree fragment)\":11\n *         __pyx_unpickle___Pyx_EnumMeta__set_state(<__Pyx_EnumMeta> __pyx_result, __pyx_state)\n *     return __pyx_result\n * cdef __pyx_unpickle___Pyx_EnumMeta__set_state(__Pyx_EnumMeta __pyx_result, tuple __pyx_state):             # <<<<<<<<<<<<<<\n *     if len(__pyx_state) > 0 and hasattr(__pyx_result, '__dict__'):\n *         __pyx_result.__dict__.update(__pyx_state[0])\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"EnumBase.__pyx_unpickle___Pyx_EnumMeta__set_state\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_tp_new_6region_RegionBounds(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if 
(unlikely(__pyx_pw_6region_12RegionBounds_1__cinit__(o, __pyx_empty_tuple, NULL) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_6region_RegionBounds(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  {\n    PyObject *etype, *eval, *etb;\n    PyErr_Fetch(&etype, &eval, &etb);\n    ++Py_REFCNT(o);\n    __pyx_pw_6region_12RegionBounds_5__dealloc__(o);\n    --Py_REFCNT(o);\n    PyErr_Restore(etype, eval, etb);\n  }\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyMethodDef __pyx_methods_6region_RegionBounds[] = {\n  {\"get\", (PyCFunction)__pyx_pw_6region_12RegionBounds_9get, METH_NOARGS, 0},\n  {\"set\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_6region_12RegionBounds_11set, METH_VARARGS|METH_KEYWORDS, 0},\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_6region_12RegionBounds_13__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", (PyCFunction)__pyx_pw_6region_12RegionBounds_15__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_6region_RegionBounds = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"region.RegionBounds\", /*tp_name*/\n  sizeof(struct __pyx_obj_6region_RegionBounds), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_6region_RegionBounds, /*tp_dealloc*/\n  #if PY_VERSION_HEX < 0x030800b4\n  0, /*tp_print*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4\n  0, /*tp_vectorcall_offset*/\n  #endif\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  __pyx_pw_6region_12RegionBounds_7__str__, 
/*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_6region_RegionBounds, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  __pyx_pw_6region_12RegionBounds_3__init__, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_6region_RegionBounds, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000\n  0, /*tp_print*/\n  #endif\n};\n\nstatic PyObject *__pyx_tp_new_6region_Rectangle(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_6region_9Rectangle_1__cinit__(o, __pyx_empty_tuple, NULL) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_6region_Rectangle(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  {\n    PyObject *etype, *eval, *etb;\n    
PyErr_Fetch(&etype, &eval, &etb);\n    ++Py_REFCNT(o);\n    __pyx_pw_6region_9Rectangle_5__dealloc__(o);\n    --Py_REFCNT(o);\n    PyErr_Restore(etype, eval, etb);\n  }\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyMethodDef __pyx_methods_6region_Rectangle[] = {\n  {\"set\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_6region_9Rectangle_9set, METH_VARARGS|METH_KEYWORDS, 0},\n  {\"get\", (PyCFunction)__pyx_pw_6region_9Rectangle_11get, METH_NOARGS, __pyx_doc_6region_9Rectangle_10get},\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_6region_9Rectangle_13__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", (PyCFunction)__pyx_pw_6region_9Rectangle_15__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_6region_Rectangle = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"region.Rectangle\", /*tp_name*/\n  sizeof(struct __pyx_obj_6region_Rectangle), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_6region_Rectangle, /*tp_dealloc*/\n  #if PY_VERSION_HEX < 0x030800b4\n  0, /*tp_print*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4\n  0, /*tp_vectorcall_offset*/\n  #endif\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  __pyx_pw_6region_9Rectangle_7__str__, /*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_6region_Rectangle, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  
0, /*tp_dictoffset*/\n  __pyx_pw_6region_9Rectangle_3__init__, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_6region_Rectangle, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000\n  0, /*tp_print*/\n  #endif\n};\n\nstatic PyObject *__pyx_tp_new_6region_Polygon(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_6region_7Polygon_1__cinit__(o, a, k) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_6region_Polygon(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  {\n    PyObject *etype, *eval, *etb;\n    PyErr_Fetch(&etype, &eval, &etb);\n    ++Py_REFCNT(o);\n    __pyx_pw_6region_7Polygon_3__dealloc__(o);\n    --Py_REFCNT(o);\n    PyErr_Restore(etype, eval, etb);\n  }\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyMethodDef __pyx_methods_6region_Polygon[] = {\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_6region_7Polygon_7__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", (PyCFunction)__pyx_pw_6region_7Polygon_9__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_6region_Polygon = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"region.Polygon\", /*tp_name*/\n  
sizeof(struct __pyx_obj_6region_Polygon), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_6region_Polygon, /*tp_dealloc*/\n  #if PY_VERSION_HEX < 0x030800b4\n  0, /*tp_print*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4\n  0, /*tp_vectorcall_offset*/\n  #endif\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  __pyx_pw_6region_7Polygon_5__str__, /*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_6region_Polygon, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  0, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_6region_Polygon, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000\n  0, /*tp_print*/\n  #endif\n};\n\nstatic PyObject *__pyx_tp_new___Pyx_EnumMeta(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o = (&PyType_Type)->tp_new(t, a, k);\n  if (unlikely(!o)) return 0;\n  return o;\n}\n\nstatic void __pyx_tp_dealloc___Pyx_EnumMeta(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), 
Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  PyObject_GC_UnTrack(o);\n  PyObject_GC_Track(o);\n  (&PyType_Type)->tp_dealloc(o);\n}\n\nstatic int __pyx_tp_traverse___Pyx_EnumMeta(PyObject *o, visitproc v, void *a) {\n  int e;\n  if (!(&PyType_Type)->tp_traverse); else { e = (&PyType_Type)->tp_traverse(o,v,a); if (e) return e; }\n  return 0;\n}\n\nstatic int __pyx_tp_clear___Pyx_EnumMeta(PyObject *o) {\n  if (!(&PyType_Type)->tp_clear); else (&PyType_Type)->tp_clear(o);\n  return 0;\n}\nstatic PyObject *__pyx_sq_item___Pyx_EnumMeta(PyObject *o, Py_ssize_t i) {\n  PyObject *r;\n  PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0;\n  r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x);\n  Py_DECREF(x);\n  return r;\n}\n\nstatic PyMethodDef __pyx_methods___Pyx_EnumMeta[] = {\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_8EnumBase_14__Pyx_EnumMeta_7__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", (PyCFunction)__pyx_pw_8EnumBase_14__Pyx_EnumMeta_9__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PySequenceMethods __pyx_tp_as_sequence___Pyx_EnumMeta = {\n  0, /*sq_length*/\n  0, /*sq_concat*/\n  0, /*sq_repeat*/\n  __pyx_sq_item___Pyx_EnumMeta, /*sq_item*/\n  0, /*sq_slice*/\n  0, /*sq_ass_item*/\n  0, /*sq_ass_slice*/\n  0, /*sq_contains*/\n  0, /*sq_inplace_concat*/\n  0, /*sq_inplace_repeat*/\n};\n\nstatic PyMappingMethods __pyx_tp_as_mapping___Pyx_EnumMeta = {\n  0, /*mp_length*/\n  __pyx_pw_8EnumBase_14__Pyx_EnumMeta_5__getitem__, /*mp_subscript*/\n  0, /*mp_ass_subscript*/\n};\n\nstatic PyTypeObject __Pyx_EnumMeta = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"region.__Pyx_EnumMeta\", /*tp_name*/\n  sizeof(struct __pyx_obj___Pyx_EnumMeta), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc___Pyx_EnumMeta, /*tp_dealloc*/\n  #if PY_VERSION_HEX < 0x030800b4\n  0, /*tp_print*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4\n  0, 
/*tp_vectorcall_offset*/\n  #endif\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  &__pyx_tp_as_sequence___Pyx_EnumMeta, /*tp_as_sequence*/\n  &__pyx_tp_as_mapping___Pyx_EnumMeta, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  0, /*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/\n  0, /*tp_doc*/\n  __pyx_tp_traverse___Pyx_EnumMeta, /*tp_traverse*/\n  __pyx_tp_clear___Pyx_EnumMeta, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  __pyx_pw_8EnumBase_14__Pyx_EnumMeta_3__iter__, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods___Pyx_EnumMeta, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  __pyx_pw_8EnumBase_14__Pyx_EnumMeta_1__init__, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new___Pyx_EnumMeta, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000\n  0, /*tp_print*/\n  #endif\n};\n\nstatic PyMethodDef __pyx_methods[] = {\n  {0, 0, 0, 0}\n};\n\n#if PY_MAJOR_VERSION >= 3\n#if CYTHON_PEP489_MULTI_PHASE_INIT\nstatic PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/\nstatic int __pyx_pymod_exec_region(PyObject* module); /*proto*/\nstatic PyModuleDef_Slot __pyx_moduledef_slots[] = {\n  {Py_mod_create, (void*)__pyx_pymod_create},\n  
{Py_mod_exec, (void*)__pyx_pymod_exec_region},\n  {0, NULL}\n};\n#endif\n\nstatic struct PyModuleDef __pyx_moduledef = {\n    PyModuleDef_HEAD_INIT,\n    \"region\",\n    __pyx_k_author_fangyi_zhang_vipl_ict_ac, /* m_doc */\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n    0, /* m_size */\n  #else\n    -1, /* m_size */\n  #endif\n    __pyx_methods /* m_methods */,\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n    __pyx_moduledef_slots, /* m_slots */\n  #else\n    NULL, /* m_reload */\n  #endif\n    NULL, /* m_traverse */\n    NULL, /* m_clear */\n    NULL /* m_free */\n};\n#endif\n#ifndef CYTHON_SMALL_CODE\n#if defined(__clang__)\n    #define CYTHON_SMALL_CODE\n#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))\n    #define CYTHON_SMALL_CODE __attribute__((cold))\n#else\n    #define CYTHON_SMALL_CODE\n#endif\n#endif\n\nstatic __Pyx_StringTabEntry __pyx_string_tab[] = {\n  {&__pyx_kp_s_3f_3f, __pyx_k_3f_3f, sizeof(__pyx_k_3f_3f), 0, 0, 1, 0},\n  {&__pyx_kp_s_3f_3f_2, __pyx_k_3f_3f_2, sizeof(__pyx_k_3f_3f_2), 0, 0, 1, 0},\n  {&__pyx_n_s_EMTPY, __pyx_k_EMTPY, sizeof(__pyx_k_EMTPY), 0, 0, 1, 1},\n  {&__pyx_n_s_EnumBase, __pyx_k_EnumBase, sizeof(__pyx_k_EnumBase), 0, 0, 1, 1},\n  {&__pyx_n_s_EnumType, __pyx_k_EnumType, sizeof(__pyx_k_EnumType), 0, 0, 1, 1},\n  {&__pyx_kp_s_Incompatible_checksums_s_vs_0xd4, __pyx_k_Incompatible_checksums_s_vs_0xd4, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xd4), 0, 0, 1, 0},\n  {&__pyx_n_s_IntEnum, __pyx_k_IntEnum, sizeof(__pyx_k_IntEnum), 0, 0, 1, 1},\n  {&__pyx_n_s_MASK, __pyx_k_MASK, sizeof(__pyx_k_MASK), 0, 0, 1, 1},\n  {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1},\n  {&__pyx_n_s_OrderedDict, __pyx_k_OrderedDict, sizeof(__pyx_k_OrderedDict), 0, 0, 1, 1},\n  {&__pyx_n_s_POLYGON, __pyx_k_POLYGON, sizeof(__pyx_k_POLYGON), 0, 0, 1, 1},\n  {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1},\n  {&__pyx_n_s_Polygon, __pyx_k_Polygon, 
sizeof(__pyx_k_Polygon), 0, 0, 1, 1},\n  {&__pyx_n_s_Pyx_EnumBase, __pyx_k_Pyx_EnumBase, sizeof(__pyx_k_Pyx_EnumBase), 0, 0, 1, 1},\n  {&__pyx_n_s_Pyx_EnumBase___new, __pyx_k_Pyx_EnumBase___new, sizeof(__pyx_k_Pyx_EnumBase___new), 0, 0, 1, 1},\n  {&__pyx_n_s_Pyx_EnumBase___repr, __pyx_k_Pyx_EnumBase___repr, sizeof(__pyx_k_Pyx_EnumBase___repr), 0, 0, 1, 1},\n  {&__pyx_n_s_Pyx_EnumBase___str, __pyx_k_Pyx_EnumBase___str, sizeof(__pyx_k_Pyx_EnumBase___str), 0, 0, 1, 1},\n  {&__pyx_n_s_RECTANGEL, __pyx_k_RECTANGEL, sizeof(__pyx_k_RECTANGEL), 0, 0, 1, 1},\n  {&__pyx_n_s_Rectangle, __pyx_k_Rectangle, sizeof(__pyx_k_Rectangle), 0, 0, 1, 1},\n  {&__pyx_n_s_RegionBounds, __pyx_k_RegionBounds, sizeof(__pyx_k_RegionBounds), 0, 0, 1, 1},\n  {&__pyx_n_s_RegionType, __pyx_k_RegionType, sizeof(__pyx_k_RegionType), 0, 0, 1, 1},\n  {&__pyx_n_s_SPECIAL, __pyx_k_SPECIAL, sizeof(__pyx_k_SPECIAL), 0, 0, 1, 1},\n  {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1},\n  {&__pyx_kp_s_Unknown_enum_value_s, __pyx_k_Unknown_enum_value_s, sizeof(__pyx_k_Unknown_enum_value_s), 0, 0, 1, 0},\n  {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},\n  {&__pyx_kp_s__5, __pyx_k__5, sizeof(__pyx_k__5), 0, 0, 1, 0},\n  {&__pyx_n_s_bottom, __pyx_k_bottom, sizeof(__pyx_k_bottom), 0, 0, 1, 1},\n  {&__pyx_n_s_bounds, __pyx_k_bounds, sizeof(__pyx_k_bounds), 0, 0, 1, 1},\n  {&__pyx_n_s_c_polygon1, __pyx_k_c_polygon1, sizeof(__pyx_k_c_polygon1), 0, 0, 1, 1},\n  {&__pyx_n_s_c_polygon2, __pyx_k_c_polygon2, sizeof(__pyx_k_c_polygon2), 0, 0, 1, 1},\n  {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1},\n  {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1},\n  {&__pyx_n_s_cls, __pyx_k_cls, sizeof(__pyx_k_cls), 0, 0, 1, 1},\n  {&__pyx_n_s_collections, __pyx_k_collections, sizeof(__pyx_k_collections), 0, 0, 1, 1},\n  {&__pyx_n_s_ctemplate, __pyx_k_ctemplate, 
sizeof(__pyx_k_ctemplate), 0, 0, 1, 1},\n  {&__pyx_n_s_dct, __pyx_k_dct, sizeof(__pyx_k_dct), 0, 0, 1, 1},\n  {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1},\n  {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1},\n  {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1},\n  {&__pyx_n_s_enum, __pyx_k_enum, sizeof(__pyx_k_enum), 0, 0, 1, 1},\n  {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1},\n  {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1},\n  {&__pyx_n_s_height, __pyx_k_height, sizeof(__pyx_k_height), 0, 0, 1, 1},\n  {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1},\n  {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},\n  {&__pyx_n_s_inf, __pyx_k_inf, sizeof(__pyx_k_inf), 0, 0, 1, 1},\n  {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1},\n  {&__pyx_n_s_left, __pyx_k_left, sizeof(__pyx_k_left), 0, 0, 1, 1},\n  {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},\n  {&__pyx_n_s_members, __pyx_k_members, sizeof(__pyx_k_members), 0, 0, 1, 1},\n  {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1},\n  {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1},\n  {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1},\n  {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1},\n  {&__pyx_n_s_nan, __pyx_k_nan, sizeof(__pyx_k_nan), 0, 0, 1, 1},\n  {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1},\n  {&__pyx_n_s_no_bounds, __pyx_k_no_bounds, sizeof(__pyx_k_no_bounds), 0, 0, 1, 1},\n  {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0},\n  {&__pyx_n_s_only1, __pyx_k_only1, sizeof(__pyx_k_only1), 0, 0, 1, 1},\n  {&__pyx_n_s_only2, __pyx_k_only2, sizeof(__pyx_k_only2), 0, 0, 1, 1},\n  {&__pyx_n_s_output, __pyx_k_output, 
sizeof(__pyx_k_output), 0, 0, 1, 1},\n  {&__pyx_n_s_overlap, __pyx_k_overlap, sizeof(__pyx_k_overlap), 0, 0, 1, 1},\n  {&__pyx_n_s_overlaps, __pyx_k_overlaps, sizeof(__pyx_k_overlaps), 0, 0, 1, 1},\n  {&__pyx_n_s_parents, __pyx_k_parents, sizeof(__pyx_k_parents), 0, 0, 1, 1},\n  {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1},\n  {&__pyx_n_s_pno_bounds, __pyx_k_pno_bounds, sizeof(__pyx_k_pno_bounds), 0, 0, 1, 1},\n  {&__pyx_n_s_points, __pyx_k_points, sizeof(__pyx_k_points), 0, 0, 1, 1},\n  {&__pyx_n_s_polygon1, __pyx_k_polygon1, sizeof(__pyx_k_polygon1), 0, 0, 1, 1},\n  {&__pyx_n_s_polygon1_2, __pyx_k_polygon1_2, sizeof(__pyx_k_polygon1_2), 0, 0, 1, 1},\n  {&__pyx_n_s_polygon2, __pyx_k_polygon2, sizeof(__pyx_k_polygon2), 0, 0, 1, 1},\n  {&__pyx_n_s_polygon2_2, __pyx_k_polygon2_2, sizeof(__pyx_k_polygon2_2), 0, 0, 1, 1},\n  {&__pyx_n_s_polygons1, __pyx_k_polygons1, sizeof(__pyx_k_polygons1), 0, 0, 1, 1},\n  {&__pyx_n_s_polygons2, __pyx_k_polygons2, sizeof(__pyx_k_polygons2), 0, 0, 1, 1},\n  {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1},\n  {&__pyx_n_s_ptemplate, __pyx_k_ptemplate, sizeof(__pyx_k_ptemplate), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1},\n  {&__pyx_n_s_pyx_unpickle___Pyx_EnumMeta, __pyx_k_pyx_unpickle___Pyx_EnumMeta, sizeof(__pyx_k_pyx_unpickle___Pyx_EnumMeta), 0, 0, 1, 1},\n  {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1},\n  {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1},\n  {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 
0, 0, 1, 1},\n  {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1},\n  {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1},\n  {&__pyx_n_s_region, __pyx_k_region, sizeof(__pyx_k_region), 0, 0, 1, 1},\n  {&__pyx_kp_s_region_pyx, __pyx_k_region_pyx, sizeof(__pyx_k_region_pyx), 0, 0, 1, 0},\n  {&__pyx_n_s_repr, __pyx_k_repr, sizeof(__pyx_k_repr), 0, 0, 1, 1},\n  {&__pyx_n_s_res, __pyx_k_res, sizeof(__pyx_k_res), 0, 0, 1, 1},\n  {&__pyx_n_s_ret, __pyx_k_ret, sizeof(__pyx_k_ret), 0, 0, 1, 1},\n  {&__pyx_n_s_right, __pyx_k_right, sizeof(__pyx_k_right), 0, 0, 1, 1},\n  {&__pyx_kp_s_s_s, __pyx_k_s_s, sizeof(__pyx_k_s_s), 0, 0, 1, 0},\n  {&__pyx_kp_s_s_s_d, __pyx_k_s_s_d, sizeof(__pyx_k_s_s_d), 0, 0, 1, 0},\n  {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1},\n  {&__pyx_n_s_set, __pyx_k_set, sizeof(__pyx_k_set), 0, 0, 1, 1},\n  {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1},\n  {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1},\n  {&__pyx_n_s_str, __pyx_k_str, sizeof(__pyx_k_str), 0, 0, 1, 1},\n  {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0},\n  {&__pyx_n_s_template, __pyx_k_template, sizeof(__pyx_k_template), 0, 0, 1, 1},\n  {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},\n  {&__pyx_n_s_top, __pyx_k_top, sizeof(__pyx_k_top), 0, 0, 1, 1},\n  {&__pyx_kp_s_top_3f_bottom_3f_left_3f_reight, __pyx_k_top_3f_bottom_3f_left_3f_reight, sizeof(__pyx_k_top_3f_bottom_3f_left_3f_reight), 0, 0, 1, 0},\n  {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1},\n  {&__pyx_n_s_v, __pyx_k_v, sizeof(__pyx_k_v), 0, 0, 1, 1},\n  {&__pyx_n_s_value, __pyx_k_value, sizeof(__pyx_k_value), 0, 0, 1, 1},\n  {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1},\n  {&__pyx_n_s_vot_float2str, __pyx_k_vot_float2str, 
sizeof(__pyx_k_vot_float2str), 0, 0, 1, 1},\n  {&__pyx_n_s_vot_overlap, __pyx_k_vot_overlap, sizeof(__pyx_k_vot_overlap), 0, 0, 1, 1},\n  {&__pyx_n_s_vot_overlap_traj, __pyx_k_vot_overlap_traj, sizeof(__pyx_k_vot_overlap_traj), 0, 0, 1, 1},\n  {&__pyx_n_s_width, __pyx_k_width, sizeof(__pyx_k_width), 0, 0, 1, 1},\n  {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1},\n  {&__pyx_kp_s_x_3f_y_3f_width_3f_height_3f, __pyx_k_x_3f_y_3f_width_3f_height_3f, sizeof(__pyx_k_x_3f_y_3f_width_3f_height_3f), 0, 0, 1, 0},\n  {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1},\n  {0, 0, 0, 0, 0, 0, 0}\n};\nstatic CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {\n  __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(0, 28, __pyx_L1_error)\n  __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error)\n  __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 121, __pyx_L1_error)\n  __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 33, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_InitCachedConstants\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple_);\n  __Pyx_GIVEREF(__pyx_tuple_);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default 
__reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__2);\n  __Pyx_GIVEREF(__pyx_tuple__2);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__3);\n  __Pyx_GIVEREF(__pyx_tuple__3);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__4);\n  __Pyx_GIVEREF(__pyx_tuple__4);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__6);\n  __Pyx_GIVEREF(__pyx_tuple__6);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no 
default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__7);\n  __Pyx_GIVEREF(__pyx_tuple__7);\n\n  /* \"region.pyx\":145\n *         return ret\n * \n * def vot_overlap(polygon1, polygon2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two polygon\n *     Args:\n */\n  __pyx_tuple__8 = PyTuple_Pack(11, __pyx_n_s_polygon1, __pyx_n_s_polygon2, __pyx_n_s_bounds, __pyx_n_s_polygon1_2, __pyx_n_s_polygon2_2, __pyx_n_s_pno_bounds, __pyx_n_s_only1, __pyx_n_s_only2, __pyx_n_s_c_polygon1, __pyx_n_s_c_polygon2, __pyx_n_s_no_bounds); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 145, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__8);\n  __Pyx_GIVEREF(__pyx_tuple__8);\n  __pyx_codeobj__9 = (PyObject*)__Pyx_PyCode_New(3, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__8, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_region_pyx, __pyx_n_s_vot_overlap, 145, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__9)) __PYX_ERR(0, 145, __pyx_L1_error)\n\n  /* \"region.pyx\":191\n *                                             no_bounds)\n * \n * def vot_overlap_traj(polygons1, polygons2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two trajectory\n *     Args:\n */\n  __pyx_tuple__10 = PyTuple_Pack(6, __pyx_n_s_polygons1, __pyx_n_s_polygons2, __pyx_n_s_bounds, __pyx_n_s_overlaps, __pyx_n_s_i, __pyx_n_s_overlap); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__10);\n  __Pyx_GIVEREF(__pyx_tuple__10);\n  __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(3, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, 
__pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_region_pyx, __pyx_n_s_vot_overlap_traj, 191, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 191, __pyx_L1_error)\n\n  /* \"region.pyx\":208\n * \n * \n * def vot_float2str(template, float value):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Args:\n */\n  __pyx_tuple__12 = PyTuple_Pack(6, __pyx_n_s_template, __pyx_n_s_value, __pyx_n_s_ptemplate, __pyx_n_s_ctemplate, __pyx_n_s_output, __pyx_n_s_ret); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__12);\n  __Pyx_GIVEREF(__pyx_tuple__12);\n  __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(2, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_region_pyx, __pyx_n_s_vot_float2str, 208, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 208, __pyx_L1_error)\n\n  /* \"EnumBase\":28\n * class __Pyx_EnumBase(int):\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):             # <<<<<<<<<<<<<<\n *         for v in cls:\n *             if v == value:\n */\n  __pyx_tuple__14 = PyTuple_Pack(5, __pyx_n_s_cls, __pyx_n_s_value, __pyx_n_s_name, __pyx_n_s_v, __pyx_n_s_res); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 28, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__14);\n  __Pyx_GIVEREF(__pyx_tuple__14);\n  __pyx_codeobj__15 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__14, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_new, 28, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__15)) __PYX_ERR(1, 28, __pyx_L1_error)\n  __pyx_tuple__16 = PyTuple_Pack(1, ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(1, 28, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__16);\n  __Pyx_GIVEREF(__pyx_tuple__16);\n\n  /* \"EnumBase\":39\n *         cls.__members__[name] = res\n *         return res\n *     def __repr__(self):             # <<<<<<<<<<<<<<\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):\n */\n  __pyx_tuple__17 = PyTuple_Pack(1, __pyx_n_s_self); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 39, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__17);\n  __Pyx_GIVEREF(__pyx_tuple__17);\n  __pyx_codeobj__18 = (PyObject*)__Pyx_PyCode_New(1, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__17, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_repr, 39, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__18)) __PYX_ERR(1, 39, __pyx_L1_error)\n\n  /* \"EnumBase\":41\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n */\n  __pyx_tuple__19 = PyTuple_Pack(1, __pyx_n_s_self); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__19);\n  __Pyx_GIVEREF(__pyx_tuple__19);\n  __pyx_codeobj__20 = (PyObject*)__Pyx_PyCode_New(1, 0, 1, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__19, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_str, 41, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__20)) __PYX_ERR(1, 41, __pyx_L1_error)\n\n  /* \"(tree fragment)\":1\n * def __pyx_unpickle___Pyx_EnumMeta(__pyx_type, long __pyx_checksum, __pyx_state):             # <<<<<<<<<<<<<<\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n */\n  __pyx_tuple__21 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, 
__pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__21);\n  __Pyx_GIVEREF(__pyx_tuple__21);\n  __pyx_codeobj__22 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__21, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle___Pyx_EnumMeta, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__22)) __PYX_ERR(1, 1, __pyx_L1_error)\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {\n  if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_222419149 = PyInt_FromLong(222419149L); if (unlikely(!__pyx_int_222419149)) __PYX_ERR(0, 1, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/\n\nstatic int __Pyx_modinit_global_init_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_global_init_code\", 0);\n  /*--- Global init code ---*/\n  __Pyx_OrderedDict = 
Py_None; Py_INCREF(Py_None);\n  __Pyx_EnumBase = Py_None; Py_INCREF(Py_None);\n  __Pyx_globals = ((PyObject*)Py_None); Py_INCREF(Py_None);\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_variable_export_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_variable_export_code\", 0);\n  /*--- Variable export code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_function_export_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_function_export_code\", 0);\n  /*--- Function export code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_type_init_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_type_init_code\", 0);\n  /*--- Type init code ---*/\n  if (PyType_Ready(&__pyx_type_6region_RegionBounds) < 0) __PYX_ERR(0, 20, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  __pyx_type_6region_RegionBounds.tp_print = 0;\n  #endif\n  if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_6region_RegionBounds.tp_dictoffset && __pyx_type_6region_RegionBounds.tp_getattro == PyObject_GenericGetAttr)) {\n    __pyx_type_6region_RegionBounds.tp_getattro = __Pyx_PyObject_GenericGetAttr;\n  }\n  if (PyObject_SetAttr(__pyx_m, __pyx_n_s_RegionBounds, (PyObject *)&__pyx_type_6region_RegionBounds) < 0) __PYX_ERR(0, 20, __pyx_L1_error)\n  if (__Pyx_setup_reduce((PyObject*)&__pyx_type_6region_RegionBounds) < 0) __PYX_ERR(0, 20, __pyx_L1_error)\n  __pyx_ptype_6region_RegionBounds = &__pyx_type_6region_RegionBounds;\n  if (PyType_Ready(&__pyx_type_6region_Rectangle) < 0) __PYX_ERR(0, 57, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  __pyx_type_6region_Rectangle.tp_print = 0;\n  #endif\n  if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_6region_Rectangle.tp_dictoffset && __pyx_type_6region_Rectangle.tp_getattro == 
PyObject_GenericGetAttr)) {\n    __pyx_type_6region_Rectangle.tp_getattro = __Pyx_PyObject_GenericGetAttr;\n  }\n  if (PyObject_SetAttr(__pyx_m, __pyx_n_s_Rectangle, (PyObject *)&__pyx_type_6region_Rectangle) < 0) __PYX_ERR(0, 57, __pyx_L1_error)\n  if (__Pyx_setup_reduce((PyObject*)&__pyx_type_6region_Rectangle) < 0) __PYX_ERR(0, 57, __pyx_L1_error)\n  __pyx_ptype_6region_Rectangle = &__pyx_type_6region_Rectangle;\n  if (PyType_Ready(&__pyx_type_6region_Polygon) < 0) __PYX_ERR(0, 98, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  __pyx_type_6region_Polygon.tp_print = 0;\n  #endif\n  if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_6region_Polygon.tp_dictoffset && __pyx_type_6region_Polygon.tp_getattro == PyObject_GenericGetAttr)) {\n    __pyx_type_6region_Polygon.tp_getattro = __Pyx_PyObject_GenericGetAttr;\n  }\n  if (PyObject_SetAttr(__pyx_m, __pyx_n_s_Polygon, (PyObject *)&__pyx_type_6region_Polygon) < 0) __PYX_ERR(0, 98, __pyx_L1_error)\n  if (__Pyx_setup_reduce((PyObject*)&__pyx_type_6region_Polygon) < 0) __PYX_ERR(0, 98, __pyx_L1_error)\n  __pyx_ptype_6region_Polygon = &__pyx_type_6region_Polygon;\n  __Pyx_EnumMeta.tp_base = (&PyType_Type);\n  if (PyType_Ready(&__Pyx_EnumMeta) < 0) __PYX_ERR(1, 15, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  __Pyx_EnumMeta.tp_print = 0;\n  #endif\n  if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__Pyx_EnumMeta.tp_dictoffset && __Pyx_EnumMeta.tp_getattro == PyObject_GenericGetAttr)) {\n    __Pyx_EnumMeta.tp_getattro = __Pyx_PyObject_GenericGetAttr;\n  }\n  if (__Pyx_setup_reduce((PyObject*)&__Pyx_EnumMeta) < 0) __PYX_ERR(1, 15, __pyx_L1_error)\n  __pyx_ptype___Pyx_EnumMeta = &__Pyx_EnumMeta;\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_modinit_type_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_type_import_code\", 0);\n  
/*--- Type import code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_variable_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_variable_import_code\", 0);\n  /*--- Variable import code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_function_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_function_import_code\", 0);\n  /*--- Function import code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\n\n#if PY_MAJOR_VERSION < 3\n#ifdef CYTHON_NO_PYINIT_EXPORT\n#define __Pyx_PyMODINIT_FUNC void\n#else\n#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC\n#endif\n#else\n#ifdef CYTHON_NO_PYINIT_EXPORT\n#define __Pyx_PyMODINIT_FUNC PyObject *\n#else\n#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC\n#endif\n#endif\n\n\n#if PY_MAJOR_VERSION < 3\n__Pyx_PyMODINIT_FUNC initregion(void) CYTHON_SMALL_CODE; /*proto*/\n__Pyx_PyMODINIT_FUNC initregion(void)\n#else\n__Pyx_PyMODINIT_FUNC PyInit_region(void) CYTHON_SMALL_CODE; /*proto*/\n__Pyx_PyMODINIT_FUNC PyInit_region(void)\n#if CYTHON_PEP489_MULTI_PHASE_INIT\n{\n  return PyModuleDef_Init(&__pyx_moduledef);\n}\nstatic CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {\n    #if PY_VERSION_HEX >= 0x030700A1\n    static PY_INT64_T main_interpreter_id = -1;\n    PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);\n    if (main_interpreter_id == -1) {\n        main_interpreter_id = current_id;\n        return (unlikely(current_id == -1)) ? 
-1 : 0;\n    } else if (unlikely(main_interpreter_id != current_id))\n    #else\n    static PyInterpreterState *main_interpreter = NULL;\n    PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;\n    if (!main_interpreter) {\n        main_interpreter = current_interpreter;\n    } else if (unlikely(main_interpreter != current_interpreter))\n    #endif\n    {\n        PyErr_SetString(\n            PyExc_ImportError,\n            \"Interpreter change detected - this module can only be loaded into one interpreter per process.\");\n        return -1;\n    }\n    return 0;\n}\nstatic CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {\n    PyObject *value = PyObject_GetAttrString(spec, from_name);\n    int result = 0;\n    if (likely(value)) {\n        if (allow_none || value != Py_None) {\n            result = PyDict_SetItemString(moddict, to_name, value);\n        }\n        Py_DECREF(value);\n    } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {\n        PyErr_Clear();\n    } else {\n        result = -1;\n    }\n    return result;\n}\nstatic CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {\n    PyObject *module = NULL, *moddict, *modname;\n    if (__Pyx_check_single_interpreter())\n        return NULL;\n    if (__pyx_m)\n        return __Pyx_NewRef(__pyx_m);\n    modname = PyObject_GetAttrString(spec, \"name\");\n    if (unlikely(!modname)) goto bad;\n    module = PyModule_NewObject(modname);\n    Py_DECREF(modname);\n    if (unlikely(!module)) goto bad;\n    moddict = PyModule_GetDict(module);\n    if (unlikely(!moddict)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"loader\", \"__loader__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"origin\", \"__file__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, 
\"parent\", \"__package__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"submodule_search_locations\", \"__path__\", 0) < 0)) goto bad;\n    return module;\nbad:\n    Py_XDECREF(module);\n    return NULL;\n}\n\n\nstatic CYTHON_SMALL_CODE int __pyx_pymod_exec_region(PyObject *__pyx_pyinit_module)\n#endif\n#endif\n{\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  __Pyx_RefNannyDeclarations\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  if (__pyx_m) {\n    if (__pyx_m == __pyx_pyinit_module) return 0;\n    PyErr_SetString(PyExc_RuntimeError, \"Module 'region' has already been imported. Re-initialisation is not supported.\");\n    return -1;\n  }\n  #elif PY_MAJOR_VERSION >= 3\n  if (__pyx_m) return __Pyx_NewRef(__pyx_m);\n  #endif\n  #if CYTHON_REFNANNY\n__Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"refnanny\");\nif (!__Pyx_RefNanny) {\n  PyErr_Clear();\n  __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");\n  if (!__Pyx_RefNanny)\n      Py_FatalError(\"failed to import 'refnanny' module\");\n}\n#endif\n  __Pyx_RefNannySetupContext(\"__Pyx_PyMODINIT_FUNC PyInit_region(void)\", 0);\n  if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pxy_PyFrame_Initialize_Offsets\n  __Pxy_PyFrame_Initialize_Offsets();\n  #endif\n  __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_bytes = PyBytes_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_unicode = PyUnicode_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pyx_CyFunction_USED\n  if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_FusedFunction_USED\n  if 
(__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Coroutine_USED\n  if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Generator_USED\n  if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_AsyncGen_USED\n  if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_StopAsyncIteration_USED\n  if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  /*--- Library function declarations ---*/\n  /*--- Threads initialization code ---*/\n  #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS\n  #ifdef WITH_THREAD /* Python build with threading support? */\n  PyEval_InitThreads();\n  #endif\n  #endif\n  /*--- Module creation code ---*/\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  __pyx_m = __pyx_pyinit_module;\n  Py_INCREF(__pyx_m);\n  #else\n  #if PY_MAJOR_VERSION < 3\n  __pyx_m = Py_InitModule4(\"region\", __pyx_methods, __pyx_k_author_fangyi_zhang_vipl_ict_ac, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);\n  #else\n  __pyx_m = PyModule_Create(&__pyx_moduledef);\n  #endif\n  if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_d);\n  __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_b);\n  __pyx_cython_runtime = PyImport_AddModule((char *) \"cython_runtime\"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_cython_runtime);\n  if (PyObject_SetAttrString(__pyx_m, \"__builtins__\", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  /*--- Initialize various global constants etc. 
---*/\n  if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n  if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  if (__pyx_module_is_main_region) {\n    if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  }\n  #if PY_MAJOR_VERSION >= 3\n  {\n    PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)\n    if (!PyDict_GetItemString(modules, \"region\")) {\n      if (unlikely(PyDict_SetItemString(modules, \"region\", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)\n    }\n  }\n  #endif\n  /*--- Builtin init code ---*/\n  if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;\n  /*--- Constants init code ---*/\n  if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;\n  /*--- Global type/function init code ---*/\n  (void)__Pyx_modinit_global_init_code();\n  (void)__Pyx_modinit_variable_export_code();\n  (void)__Pyx_modinit_function_export_code();\n  if (unlikely(__Pyx_modinit_type_init_code() != 0)) goto __pyx_L1_error;\n  (void)__Pyx_modinit_type_import_code();\n  (void)__Pyx_modinit_variable_import_code();\n  (void)__Pyx_modinit_function_import_code();\n  /*--- Execution code ---*/\n  #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)\n  if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n\n  /* \"region.pyx\":145\n *         return ret\n * \n * def vot_overlap(polygon1, polygon2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two polygon\n *     Args:\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_6region_1vot_overlap, NULL, __pyx_n_s_region); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_vot_overlap, __pyx_t_1) < 0) __PYX_ERR(0, 145, 
__pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"region.pyx\":191\n *                                             no_bounds)\n * \n * def vot_overlap_traj(polygons1, polygons2, bounds=None):             # <<<<<<<<<<<<<<\n *     \"\"\" computing overlap between two trajectory\n *     Args:\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_6region_3vot_overlap_traj, NULL, __pyx_n_s_region); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_vot_overlap_traj, __pyx_t_1) < 0) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"region.pyx\":208\n * \n * \n * def vot_float2str(template, float value):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Args:\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_6region_5vot_float2str, NULL, __pyx_n_s_region); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_vot_float2str, __pyx_t_1) < 0) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"region.pyx\":1\n */\n  __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"EnumBase\":9\n * \n * cdef object __Pyx_OrderedDict\n * if PY_VERSION_HEX >= 0x02070000:             # <<<<<<<<<<<<<<\n *     from collections import OrderedDict as __Pyx_OrderedDict\n * else:\n */\n  __pyx_t_2 = ((PY_VERSION_HEX >= 0x02070000) != 0);\n  if (__pyx_t_2) {\n\n    /* \"EnumBase\":10\n * cdef object __Pyx_OrderedDict\n * if PY_VERSION_HEX >= 0x02070000:\n *     from collections import OrderedDict as __Pyx_OrderedDict             # <<<<<<<<<<<<<<\n * else:\n *     __Pyx_OrderedDict = dict\n */\n    __pyx_t_1 = PyList_New(1); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 10, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(__pyx_n_s_OrderedDict);\n    __Pyx_GIVEREF(__pyx_n_s_OrderedDict);\n    PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_OrderedDict);\n    __pyx_t_3 = __Pyx_Import(__pyx_n_s_collections, __pyx_t_1, -1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 10, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_OrderedDict); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 10, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(__pyx_t_1);\n    __Pyx_XGOTREF(__Pyx_OrderedDict);\n    __Pyx_DECREF_SET(__Pyx_OrderedDict, __pyx_t_1);\n    __Pyx_GIVEREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n    /* \"EnumBase\":9\n * \n * cdef object __Pyx_OrderedDict\n * if PY_VERSION_HEX >= 0x02070000:             # <<<<<<<<<<<<<<\n *     from collections import OrderedDict as __Pyx_OrderedDict\n * else:\n */\n    goto __pyx_L2;\n  }\n\n  /* \"EnumBase\":12\n *     from collections import OrderedDict as __Pyx_OrderedDict\n * else:\n *     __Pyx_OrderedDict = dict             # <<<<<<<<<<<<<<\n * \n * @cython.internal\n */\n  /*else*/ {\n    __Pyx_INCREF(((PyObject *)(&PyDict_Type)));\n    __Pyx_XGOTREF(__Pyx_OrderedDict);\n    __Pyx_DECREF_SET(__Pyx_OrderedDict, ((PyObject *)(&PyDict_Type)));\n    __Pyx_GIVEREF(((PyObject *)(&PyDict_Type)));\n  }\n  __pyx_L2:;\n\n  /* \"EnumBase\":26\n * \n * cdef object __Pyx_EnumBase\n * class __Pyx_EnumBase(int):             # <<<<<<<<<<<<<<\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):\n */\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 26, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_INCREF(((PyObject *)(&PyInt_Type)));\n  __Pyx_GIVEREF(((PyObject *)(&PyInt_Type)));\n  PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)(&PyInt_Type)));\n  __pyx_t_1 
= __Pyx_CalculateMetaclass(NULL, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 26, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_4 = __Pyx_Py3MetaclassPrepare(__pyx_t_1, __pyx_t_3, __pyx_n_s_Pyx_EnumBase, __pyx_n_s_Pyx_EnumBase, (PyObject *) NULL, __pyx_n_s_EnumBase, (PyObject *) NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 26, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n\n  /* \"EnumBase\":27\n * cdef object __Pyx_EnumBase\n * class __Pyx_EnumBase(int):\n *     __metaclass__ = __Pyx_EnumMeta             # <<<<<<<<<<<<<<\n *     def __new__(cls, value, name=None):\n *         for v in cls:\n */\n  if (__Pyx_SetNameInClass(__pyx_t_4, __pyx_n_s_metaclass, ((PyObject *)__pyx_ptype___Pyx_EnumMeta)) < 0) __PYX_ERR(1, 27, __pyx_L1_error)\n\n  /* \"EnumBase\":28\n * class __Pyx_EnumBase(int):\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):             # <<<<<<<<<<<<<<\n *         for v in cls:\n *             if v == value:\n */\n  __pyx_t_5 = __Pyx_CyFunction_NewEx(&__pyx_mdef_8EnumBase_14__Pyx_EnumBase_1__new__, __Pyx_CYFUNCTION_STATICMETHOD, __pyx_n_s_Pyx_EnumBase___new, NULL, __pyx_n_s_EnumBase, __pyx_d, ((PyObject *)__pyx_codeobj__15)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 28, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_5, __pyx_tuple__16);\n  if (__Pyx_SetNameInClass(__pyx_t_4, __pyx_n_s_new, __pyx_t_5) < 0) __PYX_ERR(1, 28, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n  /* \"EnumBase\":39\n *         cls.__members__[name] = res\n *         return res\n *     def __repr__(self):             # <<<<<<<<<<<<<<\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):\n */\n  __pyx_t_5 = __Pyx_CyFunction_NewEx(&__pyx_mdef_8EnumBase_14__Pyx_EnumBase_3__repr__, 0, __pyx_n_s_Pyx_EnumBase___repr, NULL, __pyx_n_s_EnumBase, __pyx_d, ((PyObject *)__pyx_codeobj__18)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 39, 
__pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (__Pyx_SetNameInClass(__pyx_t_4, __pyx_n_s_repr, __pyx_t_5) < 0) __PYX_ERR(1, 39, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n  /* \"EnumBase\":41\n *     def __repr__(self):\n *         return \"<%s.%s: %d>\" % (self.__class__.__name__, self.name, self)\n *     def __str__(self):             # <<<<<<<<<<<<<<\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n */\n  __pyx_t_5 = __Pyx_CyFunction_NewEx(&__pyx_mdef_8EnumBase_14__Pyx_EnumBase_5__str__, 0, __pyx_n_s_Pyx_EnumBase___str, NULL, __pyx_n_s_EnumBase, __pyx_d, ((PyObject *)__pyx_codeobj__20)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (__Pyx_SetNameInClass(__pyx_t_4, __pyx_n_s_str, __pyx_t_5) < 0) __PYX_ERR(1, 41, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n\n  /* \"EnumBase\":26\n * \n * cdef object __Pyx_EnumBase\n * class __Pyx_EnumBase(int):             # <<<<<<<<<<<<<<\n *     __metaclass__ = __Pyx_EnumMeta\n *     def __new__(cls, value, name=None):\n */\n  __pyx_t_5 = __Pyx_Py3ClassCreate(__pyx_t_1, __pyx_n_s_Pyx_EnumBase, __pyx_t_3, __pyx_t_4, NULL, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 26, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_XGOTREF(__Pyx_EnumBase);\n  __Pyx_DECREF_SET(__Pyx_EnumBase, __pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_5);\n  __pyx_t_5 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n  /* \"EnumBase\":44\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n * if PY_VERSION_HEX >= 0x03040000:             # <<<<<<<<<<<<<<\n *     from enum import IntEnum as __Pyx_EnumBase\n * \n */\n  __pyx_t_2 = ((PY_VERSION_HEX >= 0x03040000) != 0);\n  if (__pyx_t_2) {\n\n    /* \"EnumBase\":45\n * \n * if PY_VERSION_HEX >= 0x03040000:\n *     from enum import IntEnum as __Pyx_EnumBase             # <<<<<<<<<<<<<<\n * \n */\n    
__pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 45, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_INCREF(__pyx_n_s_IntEnum);\n    __Pyx_GIVEREF(__pyx_n_s_IntEnum);\n    PyList_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_IntEnum);\n    __pyx_t_1 = __Pyx_Import(__pyx_n_s_enum, __pyx_t_3, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 45, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_3 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_IntEnum); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 45, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_INCREF(__pyx_t_3);\n    __Pyx_XGOTREF(__Pyx_EnumBase);\n    __Pyx_DECREF_SET(__Pyx_EnumBase, __pyx_t_3);\n    __Pyx_GIVEREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumBase\":44\n *         return \"%s.%s\" % (self.__class__.__name__, self.name)\n * \n * if PY_VERSION_HEX >= 0x03040000:             # <<<<<<<<<<<<<<\n *     from enum import IntEnum as __Pyx_EnumBase\n * \n */\n  }\n\n  /* \"(tree fragment)\":1\n * def __pyx_unpickle___Pyx_EnumMeta(__pyx_type, long __pyx_checksum, __pyx_state):             # <<<<<<<<<<<<<<\n *     cdef object __pyx_PickleError\n *     cdef object __pyx_result\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_8EnumBase_1__pyx_unpickle___Pyx_EnumMeta, NULL, __pyx_n_s_EnumBase); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle___Pyx_EnumMeta, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"EnumType\":50\n * \n * \n * cdef dict __Pyx_globals = globals()             # <<<<<<<<<<<<<<\n * if PY_VERSION_HEX >= 0x03040000:\n * \n */\n  __pyx_t_1 = __Pyx_Globals(); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 50, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (!(likely(PyDict_CheckExact(__pyx_t_1))||((__pyx_t_1) == 
Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"dict\", Py_TYPE(__pyx_t_1)->tp_name), 0))) __PYX_ERR(1, 50, __pyx_L1_error)\n  __Pyx_XGOTREF(__Pyx_globals);\n  __Pyx_DECREF_SET(__Pyx_globals, ((PyObject*)__pyx_t_1));\n  __Pyx_GIVEREF(__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"EnumType\":51\n * \n * cdef dict __Pyx_globals = globals()\n * if PY_VERSION_HEX >= 0x03040000:             # <<<<<<<<<<<<<<\n * \n *     RegionType = __Pyx_EnumBase('RegionType', __Pyx_OrderedDict([\n */\n  __pyx_t_2 = ((PY_VERSION_HEX >= 0x03040000) != 0);\n  if (__pyx_t_2) {\n\n    /* \"EnumType\":54\n * \n *     RegionType = __Pyx_EnumBase('RegionType', __Pyx_OrderedDict([\n *         ('EMTPY', EMTPY),             # <<<<<<<<<<<<<<\n *         ('SPECIAL', SPECIAL),\n *         ('RECTANGEL', RECTANGEL),\n */\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_EMTPY); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 54, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 54, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_INCREF(__pyx_n_s_EMTPY);\n    __Pyx_GIVEREF(__pyx_n_s_EMTPY);\n    PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_n_s_EMTPY);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);\n    __pyx_t_1 = 0;\n\n    /* \"EnumType\":55\n *     RegionType = __Pyx_EnumBase('RegionType', __Pyx_OrderedDict([\n *         ('EMTPY', EMTPY),\n *         ('SPECIAL', SPECIAL),             # <<<<<<<<<<<<<<\n *         ('RECTANGEL', RECTANGEL),\n *         ('POLYGON', POLYGON),\n */\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_SPECIAL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 55, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 55, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_INCREF(__pyx_n_s_SPECIAL);\n    __Pyx_GIVEREF(__pyx_n_s_SPECIAL);\n    
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_n_s_SPECIAL);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1);\n    __pyx_t_1 = 0;\n\n    /* \"EnumType\":56\n *         ('EMTPY', EMTPY),\n *         ('SPECIAL', SPECIAL),\n *         ('RECTANGEL', RECTANGEL),             # <<<<<<<<<<<<<<\n *         ('POLYGON', POLYGON),\n *         ('MASK', MASK),\n */\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_RECTANGEL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 56, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 56, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_INCREF(__pyx_n_s_RECTANGEL);\n    __Pyx_GIVEREF(__pyx_n_s_RECTANGEL);\n    PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_n_s_RECTANGEL);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1);\n    __pyx_t_1 = 0;\n\n    /* \"EnumType\":57\n *         ('SPECIAL', SPECIAL),\n *         ('RECTANGEL', RECTANGEL),\n *         ('POLYGON', POLYGON),             # <<<<<<<<<<<<<<\n *         ('MASK', MASK),\n *     ]))\n */\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_POLYGON); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 57, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 57, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_INCREF(__pyx_n_s_POLYGON);\n    __Pyx_GIVEREF(__pyx_n_s_POLYGON);\n    PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_n_s_POLYGON);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_1);\n    __pyx_t_1 = 0;\n\n    /* \"EnumType\":58\n *         ('RECTANGEL', RECTANGEL),\n *         ('POLYGON', POLYGON),\n *         ('MASK', MASK),             # <<<<<<<<<<<<<<\n *     ]))\n *     __Pyx_globals['EMTPY'] = RegionType.EMTPY\n */\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_MASK); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(1, 58, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 58, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_INCREF(__pyx_n_s_MASK);\n    __Pyx_GIVEREF(__pyx_n_s_MASK);\n    PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_n_s_MASK);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_1);\n    __pyx_t_1 = 0;\n\n    /* \"EnumType\":53\n * if PY_VERSION_HEX >= 0x03040000:\n * \n *     RegionType = __Pyx_EnumBase('RegionType', __Pyx_OrderedDict([             # <<<<<<<<<<<<<<\n *         ('EMTPY', EMTPY),\n *         ('SPECIAL', SPECIAL),\n */\n    __pyx_t_1 = PyList_New(5); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 53, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_GIVEREF(__pyx_t_3);\n    PyList_SET_ITEM(__pyx_t_1, 0, __pyx_t_3);\n    __Pyx_GIVEREF(__pyx_t_4);\n    PyList_SET_ITEM(__pyx_t_1, 1, __pyx_t_4);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyList_SET_ITEM(__pyx_t_1, 2, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyList_SET_ITEM(__pyx_t_1, 3, __pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyList_SET_ITEM(__pyx_t_1, 4, __pyx_t_7);\n    __pyx_t_3 = 0;\n    __pyx_t_4 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_PyObject_CallOneArg(__Pyx_OrderedDict, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 53, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 53, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(__pyx_n_s_RegionType);\n    __Pyx_GIVEREF(__pyx_n_s_RegionType);\n    PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_RegionType);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_7);\n    __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_PyObject_Call(__Pyx_EnumBase, __pyx_t_1, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 53, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (PyDict_SetItem(__pyx_d, __pyx_n_s_RegionType, __pyx_t_7) < 0) __PYX_ERR(1, 53, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"EnumType\":60\n *         ('MASK', MASK),\n *     ]))\n *     __Pyx_globals['EMTPY'] = RegionType.EMTPY             # <<<<<<<<<<<<<<\n *     __Pyx_globals['SPECIAL'] = RegionType.SPECIAL\n *     __Pyx_globals['RECTANGEL'] = RegionType.RECTANGEL\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 60, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_EMTPY); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 60, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 60, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_EMTPY, __pyx_t_1) < 0)) __PYX_ERR(1, 60, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumType\":61\n *     ]))\n *     __Pyx_globals['EMTPY'] = RegionType.EMTPY\n *     __Pyx_globals['SPECIAL'] = RegionType.SPECIAL             # <<<<<<<<<<<<<<\n *     __Pyx_globals['RECTANGEL'] = RegionType.RECTANGEL\n *     __Pyx_globals['POLYGON'] = RegionType.POLYGON\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 61, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_SPECIAL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 61, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 61, __pyx_L1_error)\n    }\n    if 
(unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_SPECIAL, __pyx_t_7) < 0)) __PYX_ERR(1, 61, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"EnumType\":62\n *     __Pyx_globals['EMTPY'] = RegionType.EMTPY\n *     __Pyx_globals['SPECIAL'] = RegionType.SPECIAL\n *     __Pyx_globals['RECTANGEL'] = RegionType.RECTANGEL             # <<<<<<<<<<<<<<\n *     __Pyx_globals['POLYGON'] = RegionType.POLYGON\n *     __Pyx_globals['MASK'] = RegionType.MASK\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 62, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_RECTANGEL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 62, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 62, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_RECTANGEL, __pyx_t_1) < 0)) __PYX_ERR(1, 62, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumType\":63\n *     __Pyx_globals['SPECIAL'] = RegionType.SPECIAL\n *     __Pyx_globals['RECTANGEL'] = RegionType.RECTANGEL\n *     __Pyx_globals['POLYGON'] = RegionType.POLYGON             # <<<<<<<<<<<<<<\n *     __Pyx_globals['MASK'] = RegionType.MASK\n * else:\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 63, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_POLYGON); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 63, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 63, 
__pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_POLYGON, __pyx_t_7) < 0)) __PYX_ERR(1, 63, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"EnumType\":64\n *     __Pyx_globals['RECTANGEL'] = RegionType.RECTANGEL\n *     __Pyx_globals['POLYGON'] = RegionType.POLYGON\n *     __Pyx_globals['MASK'] = RegionType.MASK             # <<<<<<<<<<<<<<\n * else:\n *     class RegionType(__Pyx_EnumBase):\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 64, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_MASK); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 64, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 64, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_MASK, __pyx_t_1) < 0)) __PYX_ERR(1, 64, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumType\":51\n * \n * cdef dict __Pyx_globals = globals()\n * if PY_VERSION_HEX >= 0x03040000:             # <<<<<<<<<<<<<<\n * \n *     RegionType = __Pyx_EnumBase('RegionType', __Pyx_OrderedDict([\n */\n    goto __pyx_L4;\n  }\n\n  /* \"EnumType\":66\n *     __Pyx_globals['MASK'] = RegionType.MASK\n * else:\n *     class RegionType(__Pyx_EnumBase):             # <<<<<<<<<<<<<<\n *         pass\n *     __Pyx_globals['EMTPY'] = RegionType(EMTPY, 'EMTPY')\n */\n  /*else*/ {\n    __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 66, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_INCREF(__Pyx_EnumBase);\n    __Pyx_GIVEREF(__Pyx_EnumBase);\n    PyTuple_SET_ITEM(__pyx_t_1, 0, __Pyx_EnumBase);\n    __pyx_t_7 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 66, 
__pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = __Pyx_Py3MetaclassPrepare(__pyx_t_7, __pyx_t_1, __pyx_n_s_RegionType, __pyx_n_s_RegionType, (PyObject *) NULL, __pyx_n_s_EnumType, (PyObject *) NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 66, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = __Pyx_Py3ClassCreate(__pyx_t_7, __pyx_n_s_RegionType, __pyx_t_1, __pyx_t_6, NULL, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 66, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    if (PyDict_SetItem(__pyx_d, __pyx_n_s_RegionType, __pyx_t_5) < 0) __PYX_ERR(1, 66, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumType\":68\n *     class RegionType(__Pyx_EnumBase):\n *         pass\n *     __Pyx_globals['EMTPY'] = RegionType(EMTPY, 'EMTPY')             # <<<<<<<<<<<<<<\n *     __Pyx_globals['SPECIAL'] = RegionType(SPECIAL, 'SPECIAL')\n *     __Pyx_globals['RECTANGEL'] = RegionType(RECTANGEL, 'RECTANGEL')\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_EMTPY); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_7);\n    __Pyx_INCREF(__pyx_n_s_EMTPY);\n    __Pyx_GIVEREF(__pyx_n_s_EMTPY);\n    PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_n_s_EMTPY);\n    __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 68, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 68, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_EMTPY, __pyx_t_7) < 0)) __PYX_ERR(1, 68, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"EnumType\":69\n *         pass\n *     __Pyx_globals['EMTPY'] = RegionType(EMTPY, 'EMTPY')\n *     __Pyx_globals['SPECIAL'] = RegionType(SPECIAL, 'SPECIAL')             # <<<<<<<<<<<<<<\n *     __Pyx_globals['RECTANGEL'] = RegionType(RECTANGEL, 'RECTANGEL')\n *     __Pyx_globals['POLYGON'] = RegionType(POLYGON, 'POLYGON')\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_SPECIAL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_6);\n    __Pyx_INCREF(__pyx_n_s_SPECIAL);\n    __Pyx_GIVEREF(__pyx_n_s_SPECIAL);\n    PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_SPECIAL);\n    __pyx_t_6 = 0;\n    __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_1, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 69, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_SPECIAL, __pyx_t_6) < 0)) __PYX_ERR(1, 69, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n\n 
   /* \"EnumType\":70\n *     __Pyx_globals['EMTPY'] = RegionType(EMTPY, 'EMTPY')\n *     __Pyx_globals['SPECIAL'] = RegionType(SPECIAL, 'SPECIAL')\n *     __Pyx_globals['RECTANGEL'] = RegionType(RECTANGEL, 'RECTANGEL')             # <<<<<<<<<<<<<<\n *     __Pyx_globals['POLYGON'] = RegionType(POLYGON, 'POLYGON')\n *     __Pyx_globals['MASK'] = RegionType(MASK, 'MASK')\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 70, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_1 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_RECTANGEL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 70, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = PyTuple_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 70, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1);\n    __Pyx_INCREF(__pyx_n_s_RECTANGEL);\n    __Pyx_GIVEREF(__pyx_n_s_RECTANGEL);\n    PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_n_s_RECTANGEL);\n    __pyx_t_1 = 0;\n    __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 70, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 70, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_RECTANGEL, __pyx_t_1) < 0)) __PYX_ERR(1, 70, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"EnumType\":71\n *     __Pyx_globals['SPECIAL'] = RegionType(SPECIAL, 'SPECIAL')\n *     __Pyx_globals['RECTANGEL'] = RegionType(RECTANGEL, 'RECTANGEL')\n *     __Pyx_globals['POLYGON'] = RegionType(POLYGON, 'POLYGON')             # <<<<<<<<<<<<<<\n *     __Pyx_globals['MASK'] = RegionType(MASK, 'MASK')\n * \n */\n    
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 71, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_7 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_POLYGON); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 71, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 71, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_GIVEREF(__pyx_t_7);\n    PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_7);\n    __Pyx_INCREF(__pyx_n_s_POLYGON);\n    __Pyx_GIVEREF(__pyx_n_s_POLYGON);\n    PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_n_s_POLYGON);\n    __pyx_t_7 = 0;\n    __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 71, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 71, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_POLYGON, __pyx_t_7) < 0)) __PYX_ERR(1, 71, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"EnumType\":72\n *     __Pyx_globals['RECTANGEL'] = RegionType(RECTANGEL, 'RECTANGEL')\n *     __Pyx_globals['POLYGON'] = RegionType(POLYGON, 'POLYGON')\n *     __Pyx_globals['MASK'] = RegionType(MASK, 'MASK')             # <<<<<<<<<<<<<<\n * \n */\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_RegionType); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 72, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_6 = __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(__pyx_e_6region_MASK); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 72, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 72, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    
__Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_6);\n    __Pyx_INCREF(__pyx_n_s_MASK);\n    __Pyx_GIVEREF(__pyx_n_s_MASK);\n    PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_MASK);\n    __pyx_t_6 = 0;\n    __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_1, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 72, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (unlikely(__Pyx_globals == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 72, __pyx_L1_error)\n    }\n    if (unlikely(PyDict_SetItem(__Pyx_globals, __pyx_n_s_MASK, __pyx_t_6) < 0)) __PYX_ERR(1, 72, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  }\n  __pyx_L4:;\n\n  /*--- Wrapped vars code ---*/\n\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  if (__pyx_m) {\n    if (__pyx_d) {\n      __Pyx_AddTraceback(\"init region\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n    }\n    Py_CLEAR(__pyx_m);\n  } else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_ImportError, \"init region\");\n  }\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  return (__pyx_m != NULL) ? 
0 : -1;\n  #elif PY_MAJOR_VERSION >= 3\n  return __pyx_m;\n  #else\n  return;\n  #endif\n}\n\n/* --- Runtime support code --- */\n/* Refnanny */\n#if CYTHON_REFNANNY\nstatic __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {\n    PyObject *m = NULL, *p = NULL;\n    void *r = NULL;\n    m = PyImport_ImportModule(modname);\n    if (!m) goto end;\n    p = PyObject_GetAttrString(m, \"RefNannyAPI\");\n    if (!p) goto end;\n    r = PyLong_AsVoidPtr(p);\nend:\n    Py_XDECREF(p);\n    Py_XDECREF(m);\n    return (__Pyx_RefNannyAPIStruct *)r;\n}\n#endif\n\n/* PyObjectGetAttrStr */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_getattro))\n        return tp->tp_getattro(obj, attr_name);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_getattr))\n        return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));\n#endif\n    return PyObject_GetAttr(obj, attr_name);\n}\n#endif\n\n/* GetBuiltinName */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name) {\n    PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);\n    if (unlikely(!result)) {\n        PyErr_Format(PyExc_NameError,\n#if PY_MAJOR_VERSION >= 3\n            \"name '%U' is not defined\", name);\n#else\n            \"name '%.200s' is not defined\", PyString_AS_STRING(name));\n#endif\n    }\n    return result;\n}\n\n/* RaiseArgTupleInvalid */\nstatic void __Pyx_RaiseArgtupleInvalid(\n    const char* func_name,\n    int exact,\n    Py_ssize_t num_min,\n    Py_ssize_t num_max,\n    Py_ssize_t num_found)\n{\n    Py_ssize_t num_expected;\n    const char *more_or_less;\n    if (num_found < num_min) {\n        num_expected = num_min;\n        more_or_less = \"at least\";\n    } else {\n        num_expected = num_max;\n        more_or_less = \"at most\";\n    }\n    if (exact) {\n        more_or_less = \"exactly\";\n    }\n    PyErr_Format(PyExc_TypeError,\n    
             \"%.200s() takes %.8s %\" CYTHON_FORMAT_SSIZE_T \"d positional argument%.1s (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                 func_name, more_or_less, num_expected,\n                 (num_expected == 1) ? \"\" : \"s\", num_found);\n}\n\n/* KeywordStringCheck */\nstatic int __Pyx_CheckKeywordStrings(\n    PyObject *kwdict,\n    const char* function_name,\n    int kw_allowed)\n{\n    PyObject* key = 0;\n    Py_ssize_t pos = 0;\n#if CYTHON_COMPILING_IN_PYPY\n    if (!kw_allowed && PyDict_Next(kwdict, &pos, &key, 0))\n        goto invalid_keyword;\n    return 1;\n#else\n    while (PyDict_Next(kwdict, &pos, &key, 0)) {\n        #if PY_MAJOR_VERSION < 3\n        if (unlikely(!PyString_Check(key)))\n        #endif\n            if (unlikely(!PyUnicode_Check(key)))\n                goto invalid_keyword_type;\n    }\n    if ((!kw_allowed) && unlikely(key))\n        goto invalid_keyword;\n    return 1;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    return 0;\n#endif\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\n    return 0;\n}\n\n/* RaiseDoubleKeywords */\nstatic void __Pyx_RaiseDoubleKeywordsError(\n    const char* func_name,\n    PyObject* kw_name)\n{\n    PyErr_Format(PyExc_TypeError,\n        #if PY_MAJOR_VERSION >= 3\n        \"%s() got multiple values for keyword argument '%U'\", func_name, kw_name);\n        #else\n        \"%s() got multiple values for keyword argument '%s'\", func_name,\n        PyString_AsString(kw_name));\n        #endif\n}\n\n/* ParseKeywords */\nstatic int __Pyx_ParseOptionalKeywords(\n    PyObject *kwds,\n    PyObject **argnames[],\n    PyObject *kwds2,\n    PyObject *values[],\n    
Py_ssize_t num_pos_args,\n    const char* function_name)\n{\n    PyObject *key = 0, *value = 0;\n    Py_ssize_t pos = 0;\n    PyObject*** name;\n    PyObject*** first_kw_arg = argnames + num_pos_args;\n    while (PyDict_Next(kwds, &pos, &key, &value)) {\n        name = first_kw_arg;\n        while (*name && (**name != key)) name++;\n        if (*name) {\n            values[name-argnames] = value;\n            continue;\n        }\n        name = first_kw_arg;\n        #if PY_MAJOR_VERSION < 3\n        if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) {\n            while (*name) {\n                if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))\n                        && _PyString_Eq(**name, key)) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    if ((**argname == key) || (\n                            (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))\n                             && _PyString_Eq(**argname, key))) {\n                        goto arg_passed_twice;\n                    }\n                    argname++;\n                }\n            }\n        } else\n        #endif\n        if (likely(PyUnicode_Check(key))) {\n            while (*name) {\n                int cmp = (**name == key) ? 0 :\n                #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 :\n                #endif\n                    PyUnicode_Compare(**name, key);\n                if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                if (cmp == 0) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    int cmp = (**argname == key) ? 0 :\n                    #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                        (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\n                    #endif\n                        PyUnicode_Compare(**argname, key);\n                    if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                    if (cmp == 0) goto arg_passed_twice;\n                    argname++;\n                }\n            }\n        } else\n            goto invalid_keyword_type;\n        if (kwds2) {\n            if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;\n        } else {\n            goto invalid_keyword;\n        }\n    }\n    return 0;\narg_passed_twice:\n    __Pyx_RaiseDoubleKeywordsError(function_name, key);\n    goto bad;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    goto bad;\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\nbad:\n    return -1;\n}\n\n/* PyFunctionFastCall */\n#if CYTHON_FAST_PYCALL\nstatic PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,\n                                               PyObject 
*globals) {\n    PyFrameObject *f;\n    PyThreadState *tstate = __Pyx_PyThreadState_Current;\n    PyObject **fastlocals;\n    Py_ssize_t i;\n    PyObject *result;\n    assert(globals != NULL);\n    /* XXX Perhaps we should create a specialized\n       PyFrame_New() that doesn't take locals, but does\n       take builtins without sanity checking them.\n       */\n    assert(tstate != NULL);\n    f = PyFrame_New(tstate, co, globals, NULL);\n    if (f == NULL) {\n        return NULL;\n    }\n    fastlocals = __Pyx_PyFrame_GetLocalsplus(f);\n    for (i = 0; i < na; i++) {\n        Py_INCREF(*args);\n        fastlocals[i] = *args++;\n    }\n    result = PyEval_EvalFrameEx(f,0);\n    ++tstate->recursion_depth;\n    Py_DECREF(f);\n    --tstate->recursion_depth;\n    return result;\n}\n#if 1 || PY_VERSION_HEX < 0x030600B1\nstatic PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) {\n    PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);\n    PyObject *globals = PyFunction_GET_GLOBALS(func);\n    PyObject *argdefs = PyFunction_GET_DEFAULTS(func);\n    PyObject *closure;\n#if PY_MAJOR_VERSION >= 3\n    PyObject *kwdefs;\n#endif\n    PyObject *kwtuple, **k;\n    PyObject **d;\n    Py_ssize_t nd;\n    Py_ssize_t nk;\n    PyObject *result;\n    assert(kwargs == NULL || PyDict_Check(kwargs));\n    nk = kwargs ? 
PyDict_Size(kwargs) : 0;\n    if (Py_EnterRecursiveCall((char*)\" while calling a Python object\")) {\n        return NULL;\n    }\n    if (\n#if PY_MAJOR_VERSION >= 3\n            co->co_kwonlyargcount == 0 &&\n#endif\n            likely(kwargs == NULL || nk == 0) &&\n            co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) {\n        if (argdefs == NULL && co->co_argcount == nargs) {\n            result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals);\n            goto done;\n        }\n        else if (nargs == 0 && argdefs != NULL\n                 && co->co_argcount == Py_SIZE(argdefs)) {\n            /* function called with no arguments, but all parameters have\n               a default value: use default values as arguments .*/\n            args = &PyTuple_GET_ITEM(argdefs, 0);\n            result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals);\n            goto done;\n        }\n    }\n    if (kwargs != NULL) {\n        Py_ssize_t pos, i;\n        kwtuple = PyTuple_New(2 * nk);\n        if (kwtuple == NULL) {\n            result = NULL;\n            goto done;\n        }\n        k = &PyTuple_GET_ITEM(kwtuple, 0);\n        pos = i = 0;\n        while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) {\n            Py_INCREF(k[i]);\n            Py_INCREF(k[i+1]);\n            i += 2;\n        }\n        nk = i / 2;\n    }\n    else {\n        kwtuple = NULL;\n        k = NULL;\n    }\n    closure = PyFunction_GET_CLOSURE(func);\n#if PY_MAJOR_VERSION >= 3\n    kwdefs = PyFunction_GET_KW_DEFAULTS(func);\n#endif\n    if (argdefs != NULL) {\n        d = &PyTuple_GET_ITEM(argdefs, 0);\n        nd = Py_SIZE(argdefs);\n    }\n    else {\n        d = NULL;\n        nd = 0;\n    }\n#if PY_MAJOR_VERSION >= 3\n    result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,\n                               args, (int)nargs,\n                               k, (int)nk,\n                               d, (int)nd, kwdefs, 
closure);\n#else\n    result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,\n                               args, (int)nargs,\n                               k, (int)nk,\n                               d, (int)nd, closure);\n#endif\n    Py_XDECREF(kwtuple);\ndone:\n    Py_LeaveRecursiveCall();\n    return result;\n}\n#endif\n#endif\n\n/* PyCFunctionFastCall */\n#if CYTHON_FAST_PYCCALL\nstatic CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) {\n    PyCFunctionObject *func = (PyCFunctionObject*)func_obj;\n    PyCFunction meth = PyCFunction_GET_FUNCTION(func);\n    PyObject *self = PyCFunction_GET_SELF(func);\n    int flags = PyCFunction_GET_FLAGS(func);\n    assert(PyCFunction_Check(func));\n    assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)));\n    assert(nargs >= 0);\n    assert(nargs == 0 || args != NULL);\n    /* _PyCFunction_FastCallDict() must not be called with an exception set,\n       because it may clear it (directly or indirectly) and so the\n       caller loses its exception */\n    assert(!PyErr_Occurred());\n    if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) {\n        return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL);\n    } else {\n        return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs);\n    }\n}\n#endif\n\n/* PyObjectCall */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    PyObject *result;\n    ternaryfunc call = func->ob_type->tp_call;\n    if (unlikely(!call))\n        return PyObject_Call(func, arg, kw);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = (*call)(func, arg, kw);\n    Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        
PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return result;\n}\n#endif\n\n/* PyErrFetchRestore */\n#if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->curexc_type;\n    tmp_value = tstate->curexc_value;\n    tmp_tb = tstate->curexc_traceback;\n    tstate->curexc_type = type;\n    tstate->curexc_value = value;\n    tstate->curexc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->curexc_type;\n    *value = tstate->curexc_value;\n    *tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n}\n#endif\n\n/* RaiseException */\n#if PY_MAJOR_VERSION < 3\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,\n                        CYTHON_UNUSED PyObject *cause) {\n    __Pyx_PyThreadState_declare\n    Py_XINCREF(type);\n    if (!value || value == Py_None)\n        value = NULL;\n    else\n        Py_INCREF(value);\n    if (!tb || tb == Py_None)\n        tb = NULL;\n    else {\n        Py_INCREF(tb);\n        if (!PyTraceBack_Check(tb)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: arg 3 must be a traceback or None\");\n            goto raise_error;\n        }\n    }\n    if (PyType_Check(type)) {\n#if CYTHON_COMPILING_IN_PYPY\n        if (!value) {\n            Py_INCREF(Py_None);\n            value = Py_None;\n        }\n#endif\n        PyErr_NormalizeException(&type, &value, &tb);\n    } else {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate 
value\");\n            goto raise_error;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(type);\n        Py_INCREF(type);\n        if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: exception class must be a subclass of BaseException\");\n            goto raise_error;\n        }\n    }\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrRestore(type, value, tb);\n    return;\nraise_error:\n    Py_XDECREF(value);\n    Py_XDECREF(type);\n    Py_XDECREF(tb);\n    return;\n}\n#else\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {\n    PyObject* owned_instance = NULL;\n    if (tb == Py_None) {\n        tb = 0;\n    } else if (tb && !PyTraceBack_Check(tb)) {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: arg 3 must be a traceback or None\");\n        goto bad;\n    }\n    if (value == Py_None)\n        value = 0;\n    if (PyExceptionInstance_Check(type)) {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto bad;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(value);\n    } else if (PyExceptionClass_Check(type)) {\n        PyObject *instance_class = NULL;\n        if (value && PyExceptionInstance_Check(value)) {\n            instance_class = (PyObject*) Py_TYPE(value);\n            if (instance_class != type) {\n                int is_subclass = PyObject_IsSubclass(instance_class, type);\n                if (!is_subclass) {\n                    instance_class = NULL;\n                } else if (unlikely(is_subclass == -1)) {\n                    goto bad;\n                } else {\n                    type = instance_class;\n                }\n            }\n        }\n        if (!instance_class) {\n            PyObject *args;\n            if (!value)\n                
args = PyTuple_New(0);\n            else if (PyTuple_Check(value)) {\n                Py_INCREF(value);\n                args = value;\n            } else\n                args = PyTuple_Pack(1, value);\n            if (!args)\n                goto bad;\n            owned_instance = PyObject_Call(type, args, NULL);\n            Py_DECREF(args);\n            if (!owned_instance)\n                goto bad;\n            value = owned_instance;\n            if (!PyExceptionInstance_Check(value)) {\n                PyErr_Format(PyExc_TypeError,\n                             \"calling %R should have returned an instance of \"\n                             \"BaseException, not %R\",\n                             type, Py_TYPE(value));\n                goto bad;\n            }\n        }\n    } else {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: exception class must be a subclass of BaseException\");\n        goto bad;\n    }\n    if (cause) {\n        PyObject *fixed_cause;\n        if (cause == Py_None) {\n            fixed_cause = NULL;\n        } else if (PyExceptionClass_Check(cause)) {\n            fixed_cause = PyObject_CallObject(cause, NULL);\n            if (fixed_cause == NULL)\n                goto bad;\n        } else if (PyExceptionInstance_Check(cause)) {\n            fixed_cause = cause;\n            Py_INCREF(fixed_cause);\n        } else {\n            PyErr_SetString(PyExc_TypeError,\n                            \"exception causes must derive from \"\n                            \"BaseException\");\n            goto bad;\n        }\n        PyException_SetCause(value, fixed_cause);\n    }\n    PyErr_SetObject(type, value);\n    if (tb) {\n#if CYTHON_COMPILING_IN_PYPY\n        PyObject *tmp_type, *tmp_value, *tmp_tb;\n        PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);\n        Py_INCREF(tb);\n        PyErr_Restore(tmp_type, tmp_value, tb);\n        Py_XDECREF(tmp_tb);\n#else\n        PyThreadState *tstate = 
__Pyx_PyThreadState_Current;\n        PyObject* tmp_tb = tstate->curexc_traceback;\n        if (tb != tmp_tb) {\n            Py_INCREF(tb);\n            tstate->curexc_traceback = tb;\n            Py_XDECREF(tmp_tb);\n        }\n#endif\n    }\nbad:\n    Py_XDECREF(owned_instance);\n    return;\n}\n#endif\n\n/* None */\nstatic CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) {\n    Py_ssize_t q = a / b;\n    Py_ssize_t r = a - q*b;\n    q -= ((r != 0) & ((r ^ b) < 0));\n    return q;\n}\n\n/* PyObjectCallMethO */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) {\n    PyObject *self, *result;\n    PyCFunction cfunc;\n    cfunc = PyCFunction_GET_FUNCTION(func);\n    self = PyCFunction_GET_SELF(func);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = cfunc(self, arg);\n    Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return result;\n}\n#endif\n\n/* PyObjectCallOneArg */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n    PyObject *result;\n    PyObject *args = PyTuple_New(1);\n    if (unlikely(!args)) return NULL;\n    Py_INCREF(arg);\n    PyTuple_SET_ITEM(args, 0, arg);\n    result = __Pyx_PyObject_Call(func, args, NULL);\n    Py_DECREF(args);\n    return result;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n#if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(func)) {\n        return __Pyx_PyFunction_FastCall(func, &arg, 1);\n    }\n#endif\n    if (likely(PyCFunction_Check(func))) {\n        if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) {\n            return __Pyx_PyObject_CallMethO(func, arg);\n#if CYTHON_FAST_PYCCALL\n 
       } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) {\n            return __Pyx_PyCFunction_FastCall(func, &arg, 1);\n#endif\n        }\n    }\n    return __Pyx__PyObject_CallOneArg(func, arg);\n}\n#else\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {\n    PyObject *result;\n    PyObject *args = PyTuple_Pack(1, arg);\n    if (unlikely(!args)) return NULL;\n    result = __Pyx_PyObject_Call(func, args, NULL);\n    Py_DECREF(args);\n    return result;\n}\n#endif\n\n/* GetItemInt */\nstatic PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {\n    PyObject *r;\n    if (!j) return NULL;\n    r = PyObject_GetItem(o, j);\n    Py_DECREF(j);\n    return r;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,\n                                                              CYTHON_NCP_UNUSED int wraparound,\n                                                              CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    Py_ssize_t wrapped_i = i;\n    if (wraparound & unlikely(i < 0)) {\n        wrapped_i += PyList_GET_SIZE(o);\n    }\n    if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) {\n        PyObject *r = PyList_GET_ITEM(o, wrapped_i);\n        Py_INCREF(r);\n        return r;\n    }\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n#else\n    return PySequence_GetItem(o, i);\n#endif\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,\n                                                              CYTHON_NCP_UNUSED int wraparound,\n                                                              CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    Py_ssize_t wrapped_i = i;\n    if (wraparound & unlikely(i < 0)) {\n        wrapped_i += PyTuple_GET_SIZE(o);\n    }\n    if ((!boundscheck) || 
likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) {\n        PyObject *r = PyTuple_GET_ITEM(o, wrapped_i);\n        Py_INCREF(r);\n        return r;\n    }\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n#else\n    return PySequence_GetItem(o, i);\n#endif\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list,\n                                                     CYTHON_NCP_UNUSED int wraparound,\n                                                     CYTHON_NCP_UNUSED int boundscheck) {\n#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS\n    if (is_list || PyList_CheckExact(o)) {\n        Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o);\n        if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) {\n            PyObject *r = PyList_GET_ITEM(o, n);\n            Py_INCREF(r);\n            return r;\n        }\n    }\n    else if (PyTuple_CheckExact(o)) {\n        Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o);\n        if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) {\n            PyObject *r = PyTuple_GET_ITEM(o, n);\n            Py_INCREF(r);\n            return r;\n        }\n    } else {\n        PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence;\n        if (likely(m && m->sq_item)) {\n            if (wraparound && unlikely(i < 0) && likely(m->sq_length)) {\n                Py_ssize_t l = m->sq_length(o);\n                if (likely(l >= 0)) {\n                    i += l;\n                } else {\n                    if (!PyErr_ExceptionMatches(PyExc_OverflowError))\n                        return NULL;\n                    PyErr_Clear();\n                }\n            }\n            return m->sq_item(o, i);\n        }\n    }\n#else\n    if (is_list || PySequence_Check(o)) {\n        return PySequence_GetItem(o, i);\n    }\n#endif\n    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));\n}\n\n/* ObjectGetItem */\n#if CYTHON_USE_TYPE_SLOTS\nstatic PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) {\n    PyObject *runerr;\n    Py_ssize_t key_value;\n    PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence;\n    if (unlikely(!(m && m->sq_item))) {\n        PyErr_Format(PyExc_TypeError, \"'%.200s' object is not subscriptable\", Py_TYPE(obj)->tp_name);\n        return NULL;\n    }\n    key_value = __Pyx_PyIndex_AsSsize_t(index);\n    if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) {\n        return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1);\n    }\n    if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) {\n        PyErr_Clear();\n        PyErr_Format(PyExc_IndexError, \"cannot fit '%.200s' into an index-sized integer\", Py_TYPE(index)->tp_name);\n    }\n    return NULL;\n}\nstatic PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) {\n    PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping;\n    if (likely(m && m->mp_subscript)) {\n        return 
m->mp_subscript(obj, key);\n    }\n    return __Pyx_PyObject_GetIndex(obj, key);\n}\n#endif\n\n/* PyIntBinop */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) {\n    (void)inplace;\n    (void)zerodivision_check;\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_CheckExact(op1))) {\n        const long b = intval;\n        long x;\n        long a = PyInt_AS_LONG(op1);\n            x = (long)((unsigned long)a + b);\n            if (likely((x^a) >= 0 || (x^b) >= 0))\n                return PyInt_FromLong(x);\n            return PyLong_Type.tp_as_number->nb_add(op1, op2);\n    }\n    #endif\n    #if CYTHON_USE_PYLONG_INTERNALS\n    if (likely(PyLong_CheckExact(op1))) {\n        const long b = intval;\n        long a, x;\n#ifdef HAVE_LONG_LONG\n        const PY_LONG_LONG llb = intval;\n        PY_LONG_LONG lla, llx;\n#endif\n        const digit* digits = ((PyLongObject*)op1)->ob_digit;\n        const Py_ssize_t size = Py_SIZE(op1);\n        if (likely(__Pyx_sst_abs(size) <= 1)) {\n            a = likely(size) ? 
digits[0] : 0;\n            if (size == -1) a = -a;\n        } else {\n            switch (size) {\n                case -2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n                case 2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n    
            case 3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {\n                        lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n                case 4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n#ifdef HAVE_LONG_LONG\n                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {\n                        lla = (PY_LONG_LONG) (((((((((unsigned 
PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));\n                        goto long_long;\n#endif\n                    }\n                    CYTHON_FALLTHROUGH;\n                default: return PyLong_Type.tp_as_number->nb_add(op1, op2);\n            }\n        }\n                x = a + b;\n            return PyLong_FromLong(x);\n#ifdef HAVE_LONG_LONG\n        long_long:\n                llx = lla + llb;\n            return PyLong_FromLongLong(llx);\n#endif\n        \n        \n    }\n    #endif\n    if (PyFloat_CheckExact(op1)) {\n        const long b = intval;\n        double a = PyFloat_AS_DOUBLE(op1);\n            double result;\n            PyFPE_START_PROTECT(\"add\", return NULL)\n            result = ((double)a) + (double)b;\n            PyFPE_END_PROTECT(result)\n            return PyFloat_FromDouble(result);\n    }\n    return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2);\n}\n#endif\n\n/* pyobject_as_double */\nstatic double __Pyx__PyObject_AsDouble(PyObject* obj) {\n    PyObject* float_value;\n#if !CYTHON_USE_TYPE_SLOTS\n    float_value = PyNumber_Float(obj);  if ((0)) goto bad;\n#else\n    PyNumberMethods *nb = Py_TYPE(obj)->tp_as_number;\n    if (likely(nb) && likely(nb->nb_float)) {\n        float_value = nb->nb_float(obj);\n        if (likely(float_value) && unlikely(!PyFloat_Check(float_value))) {\n            PyErr_Format(PyExc_TypeError,\n                \"__float__ returned non-float (type %.200s)\",\n                Py_TYPE(float_value)->tp_name);\n            Py_DECREF(float_value);\n            goto bad;\n        }\n    } else if (PyUnicode_CheckExact(obj) || PyBytes_CheckExact(obj)) {\n#if PY_MAJOR_VERSION >= 3\n        float_value = PyFloat_FromString(obj);\n#else\n        float_value = PyFloat_FromString(obj, 0);\n#endif\n    } else {\n        PyObject* args = PyTuple_New(1);\n        if 
(unlikely(!args)) goto bad;\n        PyTuple_SET_ITEM(args, 0, obj);\n        float_value = PyObject_Call((PyObject*)&PyFloat_Type, args, 0);\n        PyTuple_SET_ITEM(args, 0, 0);\n        Py_DECREF(args);\n    }\n#endif\n    if (likely(float_value)) {\n        double value = PyFloat_AS_DOUBLE(float_value);\n        Py_DECREF(float_value);\n        return value;\n    }\nbad:\n    return (double)-1;\n}\n\n/* PyDictVersioning */\n#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {\n    PyObject *dict = Py_TYPE(obj)->tp_dict;\n    return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;\n}\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) {\n    PyObject **dictptr = NULL;\n    Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset;\n    if (offset) {\n#if CYTHON_COMPILING_IN_CPYTHON\n        dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj);\n#else\n        dictptr = _PyObject_GetDictPtr(obj);\n#endif\n    }\n    return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0;\n}\nstatic CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) {\n    PyObject *dict = Py_TYPE(obj)->tp_dict;\n    if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))\n        return 0;\n    return obj_dict_version == __Pyx_get_object_dict_version(obj);\n}\n#endif\n\n/* GetModuleGlobalName */\n#if CYTHON_USE_DICT_VERSIONS\nstatic PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value)\n#else\nstatic CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)\n#endif\n{\n    PyObject *result;\n#if !CYTHON_AVOID_BORROWED_REFS\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1\n    result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    } else if (unlikely(PyErr_Occurred())) {\n        return NULL;\n    }\n#else\n    result = PyDict_GetItem(__pyx_d, name);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    }\n#endif\n#else\n    result = PyObject_GetItem(__pyx_d, name);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    }\n    PyErr_Clear();\n#endif\n    return __Pyx_GetBuiltinName(name);\n}\n\n/* PyObjectCallNoArg */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) {\n#if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(func)) {\n        return __Pyx_PyFunction_FastCall(func, NULL, 0);\n    }\n#endif\n#ifdef __Pyx_CyFunction_USED\n    if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func)))\n#else\n    if 
(likely(PyCFunction_Check(func)))\n#endif\n    {\n        if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) {\n            return __Pyx_PyObject_CallMethO(func, NULL);\n        }\n    }\n    return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL);\n}\n#endif\n\n/* decode_c_string */\nstatic CYTHON_INLINE PyObject* __Pyx_decode_c_string(\n         const char* cstring, Py_ssize_t start, Py_ssize_t stop,\n         const char* encoding, const char* errors,\n         PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {\n    Py_ssize_t length;\n    if (unlikely((start < 0) | (stop < 0))) {\n        size_t slen = strlen(cstring);\n        if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) {\n            PyErr_SetString(PyExc_OverflowError,\n                            \"c-string too long to convert to Python\");\n            return NULL;\n        }\n        length = (Py_ssize_t) slen;\n        if (start < 0) {\n            start += length;\n            if (start < 0)\n                start = 0;\n        }\n        if (stop < 0)\n            stop += length;\n    }\n    length = stop - start;\n    if (unlikely(length <= 0))\n        return PyUnicode_FromUnicode(NULL, 0);\n    cstring += start;\n    if (decode_func) {\n        return decode_func(cstring, length, errors);\n    } else {\n        return PyUnicode_Decode(cstring, length, encoding, errors);\n    }\n}\n\n/* GetException */\n#if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb)\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)\n#endif\n{\n    PyObject *local_type, *local_value, *local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    local_type = tstate->curexc_type;\n    local_value = tstate->curexc_value;\n    local_tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    
tstate->curexc_traceback = 0;\n#else\n    PyErr_Fetch(&local_type, &local_value, &local_tb);\n#endif\n    PyErr_NormalizeException(&local_type, &local_value, &local_tb);\n#if CYTHON_FAST_THREAD_STATE\n    if (unlikely(tstate->curexc_type))\n#else\n    if (unlikely(PyErr_Occurred()))\n#endif\n        goto bad;\n    #if PY_MAJOR_VERSION >= 3\n    if (local_tb) {\n        if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))\n            goto bad;\n    }\n    #endif\n    Py_XINCREF(local_tb);\n    Py_XINCREF(local_type);\n    Py_XINCREF(local_value);\n    *type = local_type;\n    *value = local_value;\n    *tb = local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    #if CYTHON_USE_EXC_INFO_STACK\n    {\n        _PyErr_StackItem *exc_info = tstate->exc_info;\n        tmp_type = exc_info->exc_type;\n        tmp_value = exc_info->exc_value;\n        tmp_tb = exc_info->exc_traceback;\n        exc_info->exc_type = local_type;\n        exc_info->exc_value = local_value;\n        exc_info->exc_traceback = local_tb;\n    }\n    #else\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = local_type;\n    tstate->exc_value = local_value;\n    tstate->exc_traceback = local_tb;\n    #endif\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n#else\n    PyErr_SetExcInfo(local_type, local_value, local_tb);\n#endif\n    return 0;\nbad:\n    *type = 0;\n    *value = 0;\n    *tb = 0;\n    Py_XDECREF(local_type);\n    Py_XDECREF(local_value);\n    Py_XDECREF(local_tb);\n    return -1;\n}\n\n/* SwapException */\n#if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    #if CYTHON_USE_EXC_INFO_STACK\n    _PyErr_StackItem *exc_info = tstate->exc_info;\n    tmp_type = exc_info->exc_type;\n    tmp_value = exc_info->exc_value;\n    tmp_tb = 
exc_info->exc_traceback;\n    exc_info->exc_type = *type;\n    exc_info->exc_value = *value;\n    exc_info->exc_traceback = *tb;\n    #else\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = *type;\n    tstate->exc_value = *value;\n    tstate->exc_traceback = *tb;\n    #endif\n    *type = tmp_type;\n    *value = tmp_value;\n    *tb = tmp_tb;\n}\n#else\nstatic CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb);\n    PyErr_SetExcInfo(*type, *value, *tb);\n    *type = tmp_type;\n    *value = tmp_value;\n    *tb = tmp_tb;\n}\n#endif\n\n/* GetTopmostException */\n#if CYTHON_USE_EXC_INFO_STACK\nstatic _PyErr_StackItem *\n__Pyx_PyErr_GetTopmostException(PyThreadState *tstate)\n{\n    _PyErr_StackItem *exc_info = tstate->exc_info;\n    while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&\n           exc_info->previous_item != NULL)\n    {\n        exc_info = exc_info->previous_item;\n    }\n    return exc_info;\n}\n#endif\n\n/* SaveResetException */\n#if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    #if CYTHON_USE_EXC_INFO_STACK\n    _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);\n    *type = exc_info->exc_type;\n    *value = exc_info->exc_value;\n    *tb = exc_info->exc_traceback;\n    #else\n    *type = tstate->exc_type;\n    *value = tstate->exc_value;\n    *tb = tstate->exc_traceback;\n    #endif\n    Py_XINCREF(*type);\n    Py_XINCREF(*value);\n    Py_XINCREF(*tb);\n}\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    #if CYTHON_USE_EXC_INFO_STACK\n    _PyErr_StackItem *exc_info = 
tstate->exc_info;\n    tmp_type = exc_info->exc_type;\n    tmp_value = exc_info->exc_value;\n    tmp_tb = exc_info->exc_traceback;\n    exc_info->exc_type = type;\n    exc_info->exc_value = value;\n    exc_info->exc_traceback = tb;\n    #else\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = type;\n    tstate->exc_value = value;\n    tstate->exc_traceback = tb;\n    #endif\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\n#endif\n\n/* PyObjectSetAttrStr */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_setattro))\n        return tp->tp_setattro(obj, attr_name, value);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_setattr))\n        return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value);\n#endif\n    return PyObject_SetAttr(obj, attr_name, value);\n}\n#endif\n\n/* PyErrExceptionMatches */\n#if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {\n    Py_ssize_t i, n;\n    n = PyTuple_GET_SIZE(tuple);\n#if PY_MAJOR_VERSION >= 3\n    for (i=0; i<n; i++) {\n        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;\n    }\n#endif\n    for (i=0; i<n; i++) {\n        if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;\n    }\n    return 0;\n}\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {\n    PyObject *exc_type = tstate->curexc_type;\n    if (exc_type == err) return 1;\n    if (unlikely(!exc_type)) return 0;\n    if (unlikely(PyTuple_Check(err)))\n        return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);\n    return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);\n}\n#endif\n\n/* GetAttr */\nstatic CYTHON_INLINE PyObject 
*__Pyx_GetAttr(PyObject *o, PyObject *n) {\n#if CYTHON_USE_TYPE_SLOTS\n#if PY_MAJOR_VERSION >= 3\n    if (likely(PyUnicode_Check(n)))\n#else\n    if (likely(PyString_Check(n)))\n#endif\n        return __Pyx_PyObject_GetAttrStr(o, n);\n#endif\n    return PyObject_GetAttr(o, n);\n}\n\n/* GetAttr3 */\nstatic PyObject *__Pyx_GetAttr3Default(PyObject *d) {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError)))\n        return NULL;\n    __Pyx_PyErr_Clear();\n    Py_INCREF(d);\n    return d;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) {\n    PyObject *r = __Pyx_GetAttr(o, n);\n    return (likely(r)) ? r : __Pyx_GetAttr3Default(d);\n}\n\n/* Import */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {\n    PyObject *empty_list = 0;\n    PyObject *module = 0;\n    PyObject *global_dict = 0;\n    PyObject *empty_dict = 0;\n    PyObject *list;\n    #if PY_MAJOR_VERSION < 3\n    PyObject *py_import;\n    py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);\n    if (!py_import)\n        goto bad;\n    #endif\n    if (from_list)\n        list = from_list;\n    else {\n        empty_list = PyList_New(0);\n        if (!empty_list)\n            goto bad;\n        list = empty_list;\n    }\n    global_dict = PyModule_GetDict(__pyx_m);\n    if (!global_dict)\n        goto bad;\n    empty_dict = PyDict_New();\n    if (!empty_dict)\n        goto bad;\n    {\n        #if PY_MAJOR_VERSION >= 3\n        if (level == -1) {\n            if (strchr(__Pyx_MODULE_NAME, '.')) {\n                module = PyImport_ImportModuleLevelObject(\n                    name, global_dict, empty_dict, list, 1);\n                if (!module) {\n                    if (!PyErr_ExceptionMatches(PyExc_ImportError))\n                        goto bad;\n                    PyErr_Clear();\n                }\n            }\n            level = 0;\n    
    }\n        #endif\n        if (!module) {\n            #if PY_MAJOR_VERSION < 3\n            PyObject *py_level = PyInt_FromLong(level);\n            if (!py_level)\n                goto bad;\n            module = PyObject_CallFunctionObjArgs(py_import,\n                name, global_dict, empty_dict, list, py_level, (PyObject *)NULL);\n            Py_DECREF(py_level);\n            #else\n            module = PyImport_ImportModuleLevelObject(\n                name, global_dict, empty_dict, list, level);\n            #endif\n        }\n    }\nbad:\n    #if PY_MAJOR_VERSION < 3\n    Py_XDECREF(py_import);\n    #endif\n    Py_XDECREF(empty_list);\n    Py_XDECREF(empty_dict);\n    return module;\n}\n\n/* ImportFrom */\nstatic PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) {\n    PyObject* value = __Pyx_PyObject_GetAttrStr(module, name);\n    if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) {\n        PyErr_Format(PyExc_ImportError,\n        #if PY_MAJOR_VERSION < 3\n            \"cannot import name %.230s\", PyString_AS_STRING(name));\n        #else\n            \"cannot import name %S\", name);\n        #endif\n    }\n    return value;\n}\n\n/* PyObjectCall2Args */\nstatic CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) {\n    PyObject *args, *result = NULL;\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(function)) {\n        PyObject *args[2] = {arg1, arg2};\n        return __Pyx_PyFunction_FastCall(function, args, 2);\n    }\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(function)) {\n        PyObject *args[2] = {arg1, arg2};\n        return __Pyx_PyCFunction_FastCall(function, args, 2);\n    }\n    #endif\n    args = PyTuple_New(2);\n    if (unlikely(!args)) goto done;\n    Py_INCREF(arg1);\n    PyTuple_SET_ITEM(args, 0, arg1);\n    Py_INCREF(arg2);\n    PyTuple_SET_ITEM(args, 1, arg2);\n    Py_INCREF(function);\n    result = 
__Pyx_PyObject_Call(function, args, NULL);\n    Py_DECREF(args);\n    Py_DECREF(function);\ndone:\n    return result;\n}\n\n/* HasAttr */\nstatic CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) {\n    PyObject *r;\n    if (unlikely(!__Pyx_PyBaseString_Check(n))) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"hasattr(): attribute name must be string\");\n        return -1;\n    }\n    r = __Pyx_GetAttr(o, n);\n    if (unlikely(!r)) {\n        PyErr_Clear();\n        return 0;\n    } else {\n        Py_DECREF(r);\n        return 1;\n    }\n}\n\n/* PyObject_GenericGetAttrNoDict */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) {\n    PyErr_Format(PyExc_AttributeError,\n#if PY_MAJOR_VERSION >= 3\n                 \"'%.50s' object has no attribute '%U'\",\n                 tp->tp_name, attr_name);\n#else\n                 \"'%.50s' object has no attribute '%.400s'\",\n                 tp->tp_name, PyString_AS_STRING(attr_name));\n#endif\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) {\n    PyObject *descr;\n    PyTypeObject *tp = Py_TYPE(obj);\n    if (unlikely(!PyString_Check(attr_name))) {\n        return PyObject_GenericGetAttr(obj, attr_name);\n    }\n    assert(!tp->tp_dictoffset);\n    descr = _PyType_Lookup(tp, attr_name);\n    if (unlikely(!descr)) {\n        return __Pyx_RaiseGenericGetAttributeError(tp, attr_name);\n    }\n    Py_INCREF(descr);\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS)))\n    #endif\n    {\n        descrgetfunc f = Py_TYPE(descr)->tp_descr_get;\n        if (unlikely(f)) {\n            PyObject *res = f(descr, obj, (PyObject *)tp);\n            Py_DECREF(descr);\n            return res;\n        }\n    }\n    return descr;\n}\n#endif\n\n/* 
PyObject_GenericGetAttr */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) {\n    if (unlikely(Py_TYPE(obj)->tp_dictoffset)) {\n        return PyObject_GenericGetAttr(obj, attr_name);\n    }\n    return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name);\n}\n#endif\n\n/* SetupReduce */\nstatic int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {\n  int ret;\n  PyObject *name_attr;\n  name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2);\n  if (likely(name_attr)) {\n      ret = PyObject_RichCompareBool(name_attr, name, Py_EQ);\n  } else {\n      ret = -1;\n  }\n  if (unlikely(ret < 0)) {\n      PyErr_Clear();\n      ret = 0;\n  }\n  Py_XDECREF(name_attr);\n  return ret;\n}\nstatic int __Pyx_setup_reduce(PyObject* type_obj) {\n    int ret = 0;\n    PyObject *object_reduce = NULL;\n    PyObject *object_reduce_ex = NULL;\n    PyObject *reduce = NULL;\n    PyObject *reduce_ex = NULL;\n    PyObject *reduce_cython = NULL;\n    PyObject *setstate = NULL;\n    PyObject *setstate_cython = NULL;\n#if CYTHON_USE_PYTYPE_LOOKUP\n    if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto GOOD;\n#else\n    if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto GOOD;\n#endif\n#if CYTHON_USE_PYTYPE_LOOKUP\n    object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD;\n#else\n    object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD;\n#endif\n    reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto BAD;\n    if (reduce_ex == object_reduce_ex) {\n#if CYTHON_USE_PYTYPE_LOOKUP\n        object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD;\n#else\n        object_reduce = 
__Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD;\n#endif\n        reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto BAD;\n        if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) {\n            reduce_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_cython); if (unlikely(!reduce_cython)) goto BAD;\n            ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto BAD;\n            ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto BAD;\n            setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate);\n            if (!setstate) PyErr_Clear();\n            if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) {\n                setstate_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate_cython); if (unlikely(!setstate_cython)) goto BAD;\n                ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto BAD;\n                ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto BAD;\n            }\n            PyType_Modified((PyTypeObject*)type_obj);\n        }\n    }\n    goto GOOD;\nBAD:\n    if (!PyErr_Occurred())\n        PyErr_Format(PyExc_RuntimeError, \"Unable to initialize pickling for %s\", ((PyTypeObject*)type_obj)->tp_name);\n    ret = -1;\nGOOD:\n#if !CYTHON_USE_PYTYPE_LOOKUP\n    Py_XDECREF(object_reduce);\n    Py_XDECREF(object_reduce_ex);\n#endif\n    Py_XDECREF(reduce);\n    Py_XDECREF(reduce_ex);\n    Py_XDECREF(reduce_cython);\n    Py_XDECREF(setstate);\n    Py_XDECREF(setstate_cython);\n    return ret;\n}\n\n/* CalculateMetaclass */\nstatic PyObject 
*__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) {\n    Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases);\n    for (i=0; i < nbases; i++) {\n        PyTypeObject *tmptype;\n        PyObject *tmp = PyTuple_GET_ITEM(bases, i);\n        tmptype = Py_TYPE(tmp);\n#if PY_MAJOR_VERSION < 3\n        if (tmptype == &PyClass_Type)\n            continue;\n#endif\n        if (!metaclass) {\n            metaclass = tmptype;\n            continue;\n        }\n        if (PyType_IsSubtype(metaclass, tmptype))\n            continue;\n        if (PyType_IsSubtype(tmptype, metaclass)) {\n            metaclass = tmptype;\n            continue;\n        }\n        PyErr_SetString(PyExc_TypeError,\n                        \"metaclass conflict: \"\n                        \"the metaclass of a derived class \"\n                        \"must be a (non-strict) subclass \"\n                        \"of the metaclasses of all its bases\");\n        return NULL;\n    }\n    if (!metaclass) {\n#if PY_MAJOR_VERSION < 3\n        metaclass = &PyClass_Type;\n#else\n        metaclass = &PyType_Type;\n#endif\n    }\n    Py_INCREF((PyObject*) metaclass);\n    return (PyObject*) metaclass;\n}\n\n/* FetchCommonType */\nstatic PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) {\n    PyObject* fake_module;\n    PyTypeObject* cached_type = NULL;\n    fake_module = PyImport_AddModule((char*) \"_cython_\" CYTHON_ABI);\n    if (!fake_module) return NULL;\n    Py_INCREF(fake_module);\n    cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name);\n    if (cached_type) {\n        if (!PyType_Check((PyObject*)cached_type)) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s is not a type object\",\n                type->tp_name);\n            goto bad;\n        }\n        if (cached_type->tp_basicsize != type->tp_basicsize) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s has the wrong 
size, try recompiling\",\n                type->tp_name);\n            goto bad;\n        }\n    } else {\n        if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad;\n        PyErr_Clear();\n        if (PyType_Ready(type) < 0) goto bad;\n        if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0)\n            goto bad;\n        Py_INCREF(type);\n        cached_type = type;\n    }\ndone:\n    Py_DECREF(fake_module);\n    return cached_type;\nbad:\n    Py_XDECREF(cached_type);\n    cached_type = NULL;\n    goto done;\n}\n\n/* CythonFunction */\n#include <structmember.h>\nstatic PyObject *\n__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)\n{\n    if (unlikely(op->func_doc == NULL)) {\n        if (op->func.m_ml->ml_doc) {\n#if PY_MAJOR_VERSION >= 3\n            op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc);\n#else\n            op->func_doc = PyString_FromString(op->func.m_ml->ml_doc);\n#endif\n            if (unlikely(op->func_doc == NULL))\n                return NULL;\n        } else {\n            Py_INCREF(Py_None);\n            return Py_None;\n        }\n    }\n    Py_INCREF(op->func_doc);\n    return op->func_doc;\n}\nstatic int\n__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp = op->func_doc;\n    if (value == NULL) {\n        value = Py_None;\n    }\n    Py_INCREF(value);\n    op->func_doc = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    if (unlikely(op->func_name == NULL)) {\n#if PY_MAJOR_VERSION >= 3\n        op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name);\n#else\n        op->func_name = PyString_InternFromString(op->func.m_ml->ml_name);\n#endif\n        if (unlikely(op->func_name == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_name);\n    return 
op->func_name;\n}\nstatic int\n__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value)))\n#else\n    if (unlikely(value == NULL || !PyString_Check(value)))\n#endif\n    {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__name__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_name;\n    Py_INCREF(value);\n    op->func_name = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(op->func_qualname);\n    return op->func_qualname;\n}\nstatic int\n__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value)))\n#else\n    if (unlikely(value == NULL || !PyString_Check(value)))\n#endif\n    {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__qualname__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_qualname;\n    Py_INCREF(value);\n    op->func_qualname = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure)\n{\n    PyObject *self;\n    self = m->func_closure;\n    if (self == NULL)\n        self = Py_None;\n    Py_INCREF(self);\n    return self;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    if (unlikely(op->func_dict == NULL)) {\n        op->func_dict = PyDict_New();\n        if (unlikely(op->func_dict == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_dict);\n    return op->func_dict;\n}\nstatic int\n__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, 
CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n    if (unlikely(value == NULL)) {\n        PyErr_SetString(PyExc_TypeError,\n               \"function's dictionary may not be deleted\");\n        return -1;\n    }\n    if (unlikely(!PyDict_Check(value))) {\n        PyErr_SetString(PyExc_TypeError,\n               \"setting function's dictionary to a non-dict\");\n        return -1;\n    }\n    tmp = op->func_dict;\n    Py_INCREF(value);\n    op->func_dict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(op->func_globals);\n    return op->func_globals;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(Py_None);\n    return Py_None;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    PyObject* result = (op->func_code) ? 
op->func_code : Py_None;\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) {\n    int result = 0;\n    PyObject *res = op->defaults_getter((PyObject *) op);\n    if (unlikely(!res))\n        return -1;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    op->defaults_tuple = PyTuple_GET_ITEM(res, 0);\n    Py_INCREF(op->defaults_tuple);\n    op->defaults_kwdict = PyTuple_GET_ITEM(res, 1);\n    Py_INCREF(op->defaults_kwdict);\n    #else\n    op->defaults_tuple = PySequence_ITEM(res, 0);\n    if (unlikely(!op->defaults_tuple)) result = -1;\n    else {\n        op->defaults_kwdict = PySequence_ITEM(res, 1);\n        if (unlikely(!op->defaults_kwdict)) result = -1;\n    }\n    #endif\n    Py_DECREF(res);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None && !PyTuple_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__defaults__ must be set to a tuple object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_tuple;\n    op->defaults_tuple = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->defaults_tuple;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_tuple;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None 
&& !PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__kwdefaults__ must be set to a dict object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_kwdict;\n    op->defaults_kwdict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->defaults_kwdict;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_kwdict;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value || value == Py_None) {\n        value = NULL;\n    } else if (!PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__annotations__ must be set to a dict object\");\n        return -1;\n    }\n    Py_XINCREF(value);\n    tmp = op->func_annotations;\n    op->func_annotations = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->func_annotations;\n    if (unlikely(!result)) {\n        result = PyDict_New();\n        if (unlikely(!result)) return NULL;\n        op->func_annotations = result;\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic PyGetSetDef __pyx_CyFunction_getsets[] = {\n    {(char *) \"func_doc\", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) \"__doc__\",  (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) \"func_name\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    
{(char *) \"__name__\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    {(char *) \"__qualname__\", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0},\n    {(char *) \"__self__\", (getter)__Pyx_CyFunction_get_self, 0, 0, 0},\n    {(char *) \"func_dict\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"__dict__\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"func_globals\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"__globals__\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"func_closure\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"__closure__\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"func_code\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"__code__\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"func_defaults\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__defaults__\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__kwdefaults__\", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0},\n    {(char *) \"__annotations__\", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0},\n    {0, 0, 0, 0, 0}\n};\nstatic PyMemberDef __pyx_CyFunction_members[] = {\n    {(char *) \"__module__\", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0},\n    {0, 0, 0,  0, 0}\n};\nstatic PyObject *\n__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromString(m->func.m_ml->ml_name);\n#else\n    return PyString_FromString(m->func.m_ml->ml_name);\n#endif\n}\nstatic PyMethodDef __pyx_CyFunction_methods[] = {\n    
{\"__reduce__\", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0},\n    {0, 0, 0, 0}\n};\n#if PY_VERSION_HEX < 0x030500A0\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist)\n#else\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist)\n#endif\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags, PyObject* qualname,\n                                      PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {\n    __pyx_CyFunctionObject *op = PyObject_GC_New(__pyx_CyFunctionObject, type);\n    if (op == NULL)\n        return NULL;\n    op->flags = flags;\n    __Pyx_CyFunction_weakreflist(op) = NULL;\n    op->func.m_ml = ml;\n    op->func.m_self = (PyObject *) op;\n    Py_XINCREF(closure);\n    op->func_closure = closure;\n    Py_XINCREF(module);\n    op->func.m_module = module;\n    op->func_dict = NULL;\n    op->func_name = NULL;\n    Py_INCREF(qualname);\n    op->func_qualname = qualname;\n    op->func_doc = NULL;\n    op->func_classobj = NULL;\n    op->func_globals = globals;\n    Py_INCREF(op->func_globals);\n    Py_XINCREF(code);\n    op->func_code = code;\n    op->defaults_pyobjects = 0;\n    op->defaults = NULL;\n    op->defaults_tuple = NULL;\n    op->defaults_kwdict = NULL;\n    op->defaults_getter = NULL;\n    op->func_annotations = NULL;\n    PyObject_GC_Track(op);\n    return (PyObject *) op;\n}\nstatic int\n__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m)\n{\n    Py_CLEAR(m->func_closure);\n    Py_CLEAR(m->func.m_module);\n    Py_CLEAR(m->func_dict);\n    Py_CLEAR(m->func_name);\n    Py_CLEAR(m->func_qualname);\n    Py_CLEAR(m->func_doc);\n    Py_CLEAR(m->func_globals);\n    Py_CLEAR(m->func_code);\n    Py_CLEAR(m->func_classobj);\n    Py_CLEAR(m->defaults_tuple);\n    Py_CLEAR(m->defaults_kwdict);\n    Py_CLEAR(m->func_annotations);\n    if (m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n    
    for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_XDECREF(pydefaults[i]);\n        PyObject_Free(m->defaults);\n        m->defaults = NULL;\n    }\n    return 0;\n}\nstatic void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)\n{\n    if (__Pyx_CyFunction_weakreflist(m) != NULL)\n        PyObject_ClearWeakRefs((PyObject *) m);\n    __Pyx_CyFunction_clear(m);\n    PyObject_GC_Del(m);\n}\nstatic void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)\n{\n    PyObject_GC_UnTrack(m);\n    __Pyx__CyFunction_dealloc(m);\n}\nstatic int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)\n{\n    Py_VISIT(m->func_closure);\n    Py_VISIT(m->func.m_module);\n    Py_VISIT(m->func_dict);\n    Py_VISIT(m->func_name);\n    Py_VISIT(m->func_qualname);\n    Py_VISIT(m->func_doc);\n    Py_VISIT(m->func_globals);\n    Py_VISIT(m->func_code);\n    Py_VISIT(m->func_classobj);\n    Py_VISIT(m->defaults_tuple);\n    Py_VISIT(m->defaults_kwdict);\n    if (m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n        for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_VISIT(pydefaults[i]);\n    }\n    return 0;\n}\nstatic PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type)\n{\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) {\n        Py_INCREF(func);\n        return func;\n    }\n    if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) {\n        if (type == NULL)\n            type = (PyObject *)(Py_TYPE(obj));\n        return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type)));\n    }\n    if (obj == Py_None)\n        obj = NULL;\n    return __Pyx_PyMethod_New(func, obj, type);\n}\nstatic PyObject*\n__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromFormat(\"<cyfunction %U at %p>\",\n                                
op->func_qualname, (void *)op);\n#else\n    return PyString_FromFormat(\"<cyfunction %s at %p>\",\n                               PyString_AsString(op->func_qualname), (void *)op);\n#endif\n}\nstatic PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {\n    PyCFunctionObject* f = (PyCFunctionObject*)func;\n    PyCFunction meth = f->m_ml->ml_meth;\n    Py_ssize_t size;\n    switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {\n    case METH_VARARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0))\n            return (*meth)(self, arg);\n        break;\n    case METH_VARARGS | METH_KEYWORDS:\n        return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);\n    case METH_NOARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 0))\n                return (*meth)(self, NULL);\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes no arguments (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    case METH_O:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 1)) {\n                PyObject *result, *arg0;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                arg0 = PyTuple_GET_ITEM(arg, 0);\n                #else\n                arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;\n                #endif\n                result = (*meth)(self, arg0);\n                #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)\n                Py_DECREF(arg0);\n                #endif\n                return result;\n            }\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes exactly one argument (%\" 
CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    default:\n        PyErr_SetString(PyExc_SystemError, \"Bad call flags in \"\n                        \"__Pyx_CyFunction_Call. METH_OLDARGS is no \"\n                        \"longer supported!\");\n        return NULL;\n    }\n    PyErr_Format(PyExc_TypeError, \"%.200s() takes no keyword arguments\",\n                 f->m_ml->ml_name);\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);\n}\nstatic PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) {\n    PyObject *result;\n    __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func;\n    if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) {\n        Py_ssize_t argc;\n        PyObject *new_args;\n        PyObject *self;\n        argc = PyTuple_GET_SIZE(args);\n        new_args = PyTuple_GetSlice(args, 1, argc);\n        if (unlikely(!new_args))\n            return NULL;\n        self = PyTuple_GetItem(args, 0);\n        if (unlikely(!self)) {\n            Py_DECREF(new_args);\n            return NULL;\n        }\n        result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw);\n        Py_DECREF(new_args);\n    } else {\n        result = __Pyx_CyFunction_Call(func, args, kw);\n    }\n    return result;\n}\nstatic PyTypeObject __pyx_CyFunctionType_type = {\n    PyVarObject_HEAD_INIT(0, 0)\n    \"cython_function_or_method\",\n    sizeof(__pyx_CyFunctionObject),\n    0,\n    (destructor) __Pyx_CyFunction_dealloc,\n    0,\n    0,\n    0,\n#if PY_MAJOR_VERSION < 3\n    0,\n#else\n    0,\n#endif\n    (reprfunc) __Pyx_CyFunction_repr,\n    0,\n    0,\n    0,\n    0,\n    __Pyx_CyFunction_CallAsMethod,\n    0,\n    0,\n    0,\n    
0,\n    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,\n    0,\n    (traverseproc) __Pyx_CyFunction_traverse,\n    (inquiry) __Pyx_CyFunction_clear,\n    0,\n#if PY_VERSION_HEX < 0x030500A0\n    offsetof(__pyx_CyFunctionObject, func_weakreflist),\n#else\n    offsetof(PyCFunctionObject, m_weakreflist),\n#endif\n    0,\n    0,\n    __pyx_CyFunction_methods,\n    __pyx_CyFunction_members,\n    __pyx_CyFunction_getsets,\n    0,\n    0,\n    __Pyx_CyFunction_descr_get,\n    0,\n    offsetof(__pyx_CyFunctionObject, func_dict),\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n#if PY_VERSION_HEX >= 0x030400a1\n    0,\n#endif\n#if PY_VERSION_HEX >= 0x030800b1\n    0,\n#endif\n#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000\n    0,\n#endif\n};\nstatic int __pyx_CyFunction_init(void) {\n    __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);\n    if (unlikely(__pyx_CyFunctionType == NULL)) {\n        return -1;\n    }\n    return 0;\n}\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults = PyObject_Malloc(size);\n    if (unlikely(!m->defaults))\n        return PyErr_NoMemory();\n    memset(m->defaults, 0, size);\n    m->defaults_pyobjects = pyobjects;\n    return m->defaults;\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_tuple = tuple;\n    Py_INCREF(tuple);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_kwdict = dict;\n    Py_INCREF(dict);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) 
func;\n    m->func_annotations = dict;\n    Py_INCREF(dict);\n}\n\n/* Py3ClassCreate */\nstatic PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name,\n                                           PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) {\n    PyObject *ns;\n    if (metaclass) {\n        PyObject *prep = __Pyx_PyObject_GetAttrStr(metaclass, __pyx_n_s_prepare);\n        if (prep) {\n            PyObject *pargs = PyTuple_Pack(2, name, bases);\n            if (unlikely(!pargs)) {\n                Py_DECREF(prep);\n                return NULL;\n            }\n            ns = PyObject_Call(prep, pargs, mkw);\n            Py_DECREF(prep);\n            Py_DECREF(pargs);\n        } else {\n            if (unlikely(!PyErr_ExceptionMatches(PyExc_AttributeError)))\n                return NULL;\n            PyErr_Clear();\n            ns = PyDict_New();\n        }\n    } else {\n        ns = PyDict_New();\n    }\n    if (unlikely(!ns))\n        return NULL;\n    if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad;\n    if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad;\n    if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad;\n    return ns;\nbad:\n    Py_DECREF(ns);\n    return NULL;\n}\nstatic PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases,\n                                      PyObject *dict, PyObject *mkw,\n                                      int calculate_metaclass, int allow_py2_metaclass) {\n    PyObject *result, *margs;\n    PyObject *owned_metaclass = NULL;\n    if (allow_py2_metaclass) {\n        owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass);\n        if (owned_metaclass) {\n            metaclass = owned_metaclass;\n        } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) {\n            PyErr_Clear();\n        } else {\n            return NULL;\n        }\n    }\n  
  if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) {\n        metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases);\n        Py_XDECREF(owned_metaclass);\n        if (unlikely(!metaclass))\n            return NULL;\n        owned_metaclass = metaclass;\n    }\n    margs = PyTuple_Pack(3, name, bases, dict);\n    if (unlikely(!margs)) {\n        result = NULL;\n    } else {\n        result = PyObject_Call(metaclass, margs, mkw);\n        Py_DECREF(margs);\n    }\n    Py_XDECREF(owned_metaclass);\n    return result;\n}\n\n/* Globals */\nstatic PyObject* __Pyx_Globals(void) {\n    Py_ssize_t i;\n    PyObject *names;\n    PyObject *globals = __pyx_d;\n    Py_INCREF(globals);\n    names = PyObject_Dir(__pyx_m);\n    if (!names)\n        goto bad;\n    for (i = PyList_GET_SIZE(names)-1; i >= 0; i--) {\n#if CYTHON_COMPILING_IN_PYPY\n        PyObject* name = PySequence_ITEM(names, i);\n        if (!name)\n            goto bad;\n#else\n        PyObject* name = PyList_GET_ITEM(names, i);\n#endif\n        if (!PyDict_Contains(globals, name)) {\n            PyObject* value = __Pyx_GetAttr(__pyx_m, name);\n            if (!value) {\n#if CYTHON_COMPILING_IN_PYPY\n                Py_DECREF(name);\n#endif\n                goto bad;\n            }\n            if (PyDict_SetItem(globals, name, value) < 0) {\n#if CYTHON_COMPILING_IN_PYPY\n                Py_DECREF(name);\n#endif\n                Py_DECREF(value);\n                goto bad;\n            }\n        }\n#if CYTHON_COMPILING_IN_PYPY\n        Py_DECREF(name);\n#endif\n    }\n    Py_DECREF(names);\n    return globals;\nbad:\n    Py_XDECREF(names);\n    Py_XDECREF(globals);\n    return NULL;\n}\n\n/* CLineInTraceback */\n#ifndef CYTHON_CLINE_IN_TRACEBACK\nstatic int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) {\n    PyObject *use_cline;\n    PyObject *ptype, *pvalue, *ptraceback;\n#if CYTHON_COMPILING_IN_CPYTHON\n    PyObject **cython_runtime_dict;\n#endif\n    if 
(unlikely(!__pyx_cython_runtime)) {\n        return c_line;\n    }\n    __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);\n#if CYTHON_COMPILING_IN_CPYTHON\n    cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime);\n    if (likely(cython_runtime_dict)) {\n        __PYX_PY_DICT_LOOKUP_IF_MODIFIED(\n            use_cline, *cython_runtime_dict,\n            __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback))\n    } else\n#endif\n    {\n      PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback);\n      if (use_cline_obj) {\n        use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True;\n        Py_DECREF(use_cline_obj);\n      } else {\n        PyErr_Clear();\n        use_cline = NULL;\n      }\n    }\n    if (!use_cline) {\n        c_line = 0;\n        PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False);\n    }\n    else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) {\n        c_line = 0;\n    }\n    __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);\n    return c_line;\n}\n#endif\n\n/* CodeObjectCache */\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {\n    int start = 0, mid = 0, end = count - 1;\n    if (end >= 0 && code_line > entries[end].code_line) {\n        return count;\n    }\n    while (start < end) {\n        mid = start + (end - start) / 2;\n        if (code_line < entries[mid].code_line) {\n            end = mid;\n        } else if (code_line > entries[mid].code_line) {\n             start = mid + 1;\n        } else {\n            return mid;\n        }\n    }\n    if (code_line <= entries[mid].code_line) {\n        return mid;\n    } else {\n        return mid + 1;\n    }\n}\nstatic PyCodeObject *__pyx_find_code_object(int code_line) {\n    PyCodeObject* code_object;\n    int pos;\n    if (unlikely(!code_line) || 
unlikely(!__pyx_code_cache.entries)) {\n        return NULL;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {\n        return NULL;\n    }\n    code_object = __pyx_code_cache.entries[pos].code_object;\n    Py_INCREF(code_object);\n    return code_object;\n}\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {\n    int pos, i;\n    __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;\n    if (unlikely(!code_line)) {\n        return;\n    }\n    if (unlikely(!entries)) {\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (likely(entries)) {\n            __pyx_code_cache.entries = entries;\n            __pyx_code_cache.max_count = 64;\n            __pyx_code_cache.count = 1;\n            entries[0].code_line = code_line;\n            entries[0].code_object = code_object;\n            Py_INCREF(code_object);\n        }\n        return;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {\n        PyCodeObject* tmp = entries[pos].code_object;\n        entries[pos].code_object = code_object;\n        Py_DECREF(tmp);\n        return;\n    }\n    if (__pyx_code_cache.count == __pyx_code_cache.max_count) {\n        int new_max = __pyx_code_cache.max_count + 64;\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(\n            __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (unlikely(!entries)) {\n            return;\n        }\n        __pyx_code_cache.entries = entries;\n        __pyx_code_cache.max_count = new_max;\n    }\n    for (i=__pyx_code_cache.count; i>pos; i--) {\n        
entries[i] = entries[i-1];\n    }\n    entries[pos].code_line = code_line;\n    entries[pos].code_object = code_object;\n    __pyx_code_cache.count++;\n    Py_INCREF(code_object);\n}\n\n/* AddTraceback */\n#include \"compile.h\"\n#include \"frameobject.h\"\n#include \"traceback.h\"\nstatic PyCodeObject* __Pyx_CreateCodeObjectForTraceback(\n            const char *funcname, int c_line,\n            int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyObject *py_srcfile = 0;\n    PyObject *py_funcname = 0;\n    #if PY_MAJOR_VERSION < 3\n    py_srcfile = PyString_FromString(filename);\n    #else\n    py_srcfile = PyUnicode_FromString(filename);\n    #endif\n    if (!py_srcfile) goto bad;\n    if (c_line) {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #else\n        py_funcname = PyUnicode_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #endif\n    }\n    else {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromString(funcname);\n        #else\n        py_funcname = PyUnicode_FromString(funcname);\n        #endif\n    }\n    if (!py_funcname) goto bad;\n    py_code = __Pyx_PyCode_New(\n        0,\n        0,\n        0,\n        0,\n        0,\n        __pyx_empty_bytes, /*PyObject *code,*/\n        __pyx_empty_tuple, /*PyObject *consts,*/\n        __pyx_empty_tuple, /*PyObject *names,*/\n        __pyx_empty_tuple, /*PyObject *varnames,*/\n        __pyx_empty_tuple, /*PyObject *freevars,*/\n        __pyx_empty_tuple, /*PyObject *cellvars,*/\n        py_srcfile,   /*PyObject *filename,*/\n        py_funcname,  /*PyObject *name,*/\n        py_line,\n        __pyx_empty_bytes  /*PyObject *lnotab*/\n    );\n    Py_DECREF(py_srcfile);\n    Py_DECREF(py_funcname);\n    return py_code;\nbad:\n    Py_XDECREF(py_srcfile);\n    Py_XDECREF(py_funcname);\n    return NULL;\n}\nstatic void __Pyx_AddTraceback(const char 
*funcname, int c_line,\n                               int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyFrameObject *py_frame = 0;\n    PyThreadState *tstate = __Pyx_PyThreadState_Current;\n    if (c_line) {\n        c_line = __Pyx_CLineForTraceback(tstate, c_line);\n    }\n    py_code = __pyx_find_code_object(c_line ? -c_line : py_line);\n    if (!py_code) {\n        py_code = __Pyx_CreateCodeObjectForTraceback(\n            funcname, c_line, py_line, filename);\n        if (!py_code) goto bad;\n        __pyx_insert_code_object(c_line ? -c_line : py_line, py_code);\n    }\n    py_frame = PyFrame_New(\n        tstate,            /*PyThreadState *tstate,*/\n        py_code,           /*PyCodeObject *code,*/\n        __pyx_d,    /*PyObject *globals,*/\n        0                  /*PyObject *locals*/\n    );\n    if (!py_frame) goto bad;\n    __Pyx_PyFrame_SetLineNumber(py_frame, py_line);\n    PyTraceBack_Here(py_frame);\nbad:\n    Py_XDECREF(py_code);\n    Py_XDECREF(py_frame);\n}\n\n/* CIntFromPyVerify */\n#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)\n#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)\n#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\\\n    {\\\n        func_type value = func_value;\\\n        if (sizeof(target_type) < sizeof(func_type)) {\\\n            if (unlikely(value != (func_type) (target_type) value)) {\\\n                func_type zero = 0;\\\n                if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\\\n                    return (target_type) -1;\\\n                if (is_unsigned && unlikely(value < zero))\\\n                    goto raise_neg_overflow;\\\n                else\\\n                    goto raise_overflow;\\\n            }\\\n        }\\\n        return (target_type) 
value;\\\n    }\n\n/* CIntToPy */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {\n    const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(long) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(long) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(long) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(long),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {\n    const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch 
(Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {\n                            return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {\n                            return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {\n                            return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | 
(int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned 
long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, 
-(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && 
!PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (int) -1;\n        }\n    } else {\n        int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (int) -1;\n        val = __Pyx_PyInt_As_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to int\");\n    return (int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to int\");\n    return (int) -1;\n}\n\n/* CIntFromPy */\nstatic CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *x) {\n    const size_t neg_one = (size_t) ((size_t) 0 - (size_t) 1), const_zero = (size_t) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(size_t) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(size_t, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (size_t) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch 
(Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(size_t, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 2 * PyLong_SHIFT) {\n                            return (size_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 3 * PyLong_SHIFT) {\n                            return (size_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 4 * PyLong_SHIFT) {\n                            return (size_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << 
PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (size_t) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(size_t, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(size_t,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(size_t) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n         
           if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) ((((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) ((((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(size_t) - 1 
> 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            return (size_t) ((((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            
\"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            size_t val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (size_t) -1;\n        }\n    } else {\n        size_t val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (size_t) -1;\n        val = __Pyx_PyInt_As_size_t(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to size_t\");\n    return (size_t) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to size_t\");\n    return (size_t) -1;\n}\n\n/* CIntFromPy */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {\n    const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(long) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return 
(long) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {\n                            return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {\n                            return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned 
long)digits[0])))\n                        } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {\n                            return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (long) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(long) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(long,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) 
(((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n        
                }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(long) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && 
!defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            long val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (long) -1;\n        }\n    } else {\n        long val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (long) -1;\n        val = __Pyx_PyInt_As_long(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to long\");\n    return (long) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to long\");\n    return (long) -1;\n}\n\n/* CIntToPy */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum____pyx_t_6region_RegionType(enum __pyx_t_6region_RegionType value) {\n    const enum __pyx_t_6region_RegionType neg_one = (enum __pyx_t_6region_RegionType) ((enum __pyx_t_6region_RegionType) 0 - (enum __pyx_t_6region_RegionType) 1), const_zero = (enum __pyx_t_6region_RegionType) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(enum __pyx_t_6region_RegionType) < 
sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(enum __pyx_t_6region_RegionType) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum __pyx_t_6region_RegionType) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(enum __pyx_t_6region_RegionType) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum __pyx_t_6region_RegionType) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(enum __pyx_t_6region_RegionType),\n                                     little, !is_unsigned);\n    }\n}\n\n/* FastTypeChecks */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {\n    while (a) {\n        a = a->tp_base;\n        if (a == b)\n            return 1;\n    }\n    return b == &PyBaseObject_Type;\n}\nstatic CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {\n    PyObject *mro;\n    if (a == b) return 1;\n    mro = a->tp_mro;\n    if (likely(mro)) {\n        Py_ssize_t i, n;\n        n = PyTuple_GET_SIZE(mro);\n        for (i = 0; i < n; i++) {\n            if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)\n                return 1;\n        }\n        return 0;\n    }\n    return __Pyx_InBases(a, b);\n}\n#if PY_MAJOR_VERSION == 2\nstatic int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {\n    PyObject *exception, *value, *tb;\n    int res;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n  
  __Pyx_ErrFetch(&exception, &value, &tb);\n    res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;\n    if (unlikely(res == -1)) {\n        PyErr_WriteUnraisable(err);\n        res = 0;\n    }\n    if (!res) {\n        res = PyObject_IsSubclass(err, exc_type2);\n        if (unlikely(res == -1)) {\n            PyErr_WriteUnraisable(err);\n            res = 0;\n        }\n    }\n    __Pyx_ErrRestore(exception, value, tb);\n    return res;\n}\n#else\nstatic CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {\n    int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;\n    if (!res) {\n        res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);\n    }\n    return res;\n}\n#endif\nstatic int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {\n    Py_ssize_t i, n;\n    assert(PyExceptionClass_Check(exc_type));\n    n = PyTuple_GET_SIZE(tuple);\n#if PY_MAJOR_VERSION >= 3\n    for (i=0; i<n; i++) {\n        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;\n    }\n#endif\n    for (i=0; i<n; i++) {\n        PyObject *t = PyTuple_GET_ITEM(tuple, i);\n        #if PY_MAJOR_VERSION < 3\n        if (likely(exc_type == t)) return 1;\n        #endif\n        if (likely(PyExceptionClass_Check(t))) {\n            if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;\n        } else {\n        }\n    }\n    return 0;\n}\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {\n    if (likely(err == exc_type)) return 1;\n    if (likely(PyExceptionClass_Check(err))) {\n        if (likely(PyExceptionClass_Check(exc_type))) {\n            return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);\n        } else if (likely(PyTuple_Check(exc_type))) {\n            return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);\n        } else {\n        }\n    }\n    
return PyErr_GivenExceptionMatches(err, exc_type);\n}\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {\n    assert(PyExceptionClass_Check(exc_type1));\n    assert(PyExceptionClass_Check(exc_type2));\n    if (likely(err == exc_type1 || err == exc_type2)) return 1;\n    if (likely(PyExceptionClass_Check(err))) {\n        return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);\n    }\n    return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));\n}\n#endif\n\n/* CheckBinaryVersion */\nstatic int __Pyx_check_binary_version(void) {\n    char ctversion[4], rtversion[4];\n    PyOS_snprintf(ctversion, 4, \"%d.%d\", PY_MAJOR_VERSION, PY_MINOR_VERSION);\n    PyOS_snprintf(rtversion, 4, \"%s\", Py_GetVersion());\n    if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {\n        char message[200];\n        PyOS_snprintf(message, sizeof(message),\n                      \"compiletime version %s of module '%.100s' \"\n                      \"does not match runtime version %s\",\n                      ctversion, __Pyx_MODULE_NAME, rtversion);\n        return PyErr_WarnEx(NULL, message, 1);\n    }\n    return 0;\n}\n\n/* InitStrings */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {\n    while (t->p) {\n        #if PY_MAJOR_VERSION < 3\n        if (t->is_unicode) {\n            *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);\n        } else if (t->intern) {\n            *t->p = PyString_InternFromString(t->s);\n        } else {\n            *t->p = PyString_FromStringAndSize(t->s, t->n - 1);\n        }\n        #else\n        if (t->is_unicode | t->is_str) {\n            if (t->intern) {\n                *t->p = PyUnicode_InternFromString(t->s);\n            } else if (t->encoding) {\n                *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);\n            } else {\n                *t->p = 
PyUnicode_FromStringAndSize(t->s, t->n - 1);\n            }\n        } else {\n            *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);\n        }\n        #endif\n        if (!*t->p)\n            return -1;\n        if (PyObject_Hash(*t->p) == -1)\n            return -1;\n        ++t;\n    }\n    return 0;\n}\n\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {\n    return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));\n}\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {\n    Py_ssize_t ignore;\n    return __Pyx_PyObject_AsStringAndSize(o, &ignore);\n}\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\n#if !CYTHON_PEP393_ENABLED\nstatic const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n    char* defenc_c;\n    PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);\n    if (!defenc) return NULL;\n    defenc_c = PyBytes_AS_STRING(defenc);\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n    {\n        char* end = defenc_c + PyBytes_GET_SIZE(defenc);\n        char* c;\n        for (c = defenc_c; c < end; c++) {\n            if ((unsigned char) (*c) >= 128) {\n                PyUnicode_AsASCIIString(o);\n                return NULL;\n            }\n        }\n    }\n#endif\n    *length = PyBytes_GET_SIZE(defenc);\n    return defenc_c;\n}\n#else\nstatic CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n    if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n    if (likely(PyUnicode_IS_ASCII(o))) {\n        *length = PyUnicode_GET_LENGTH(o);\n        return PyUnicode_AsUTF8(o);\n    } else {\n        PyUnicode_AsASCIIString(o);\n        return NULL;\n    }\n#else\n    return PyUnicode_AsUTF8AndSize(o, length);\n#endif\n}\n#endif\n#endif\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t 
*length) {\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\n    if (\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n            __Pyx_sys_getdefaultencoding_not_ascii &&\n#endif\n            PyUnicode_Check(o)) {\n        return __Pyx_PyUnicode_AsStringAndSize(o, length);\n    } else\n#endif\n#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))\n    if (PyByteArray_Check(o)) {\n        *length = PyByteArray_GET_SIZE(o);\n        return PyByteArray_AS_STRING(o);\n    } else\n#endif\n    {\n        char* result;\n        int r = PyBytes_AsStringAndSize(o, &result, length);\n        if (unlikely(r < 0)) {\n            return NULL;\n        } else {\n            return result;\n        }\n    }\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {\n   int is_true = x == Py_True;\n   if (is_true | (x == Py_False) | (x == Py_None)) return is_true;\n   else return PyObject_IsTrue(x);\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {\n    int retval;\n    if (unlikely(!x)) return -1;\n    retval = __Pyx_PyObject_IsTrue(x);\n    Py_DECREF(x);\n    return retval;\n}\nstatic PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {\n#if PY_MAJOR_VERSION >= 3\n    if (PyLong_Check(result)) {\n        if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,\n                \"__int__ returned non-int (type %.200s).  
\"\n                \"The ability to return an instance of a strict subclass of int \"\n                \"is deprecated, and may be removed in a future version of Python.\",\n                Py_TYPE(result)->tp_name)) {\n            Py_DECREF(result);\n            return NULL;\n        }\n        return result;\n    }\n#endif\n    PyErr_Format(PyExc_TypeError,\n                 \"__%.4s__ returned non-%.4s (type %.200s)\",\n                 type_name, type_name, Py_TYPE(result)->tp_name);\n    Py_DECREF(result);\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {\n#if CYTHON_USE_TYPE_SLOTS\n  PyNumberMethods *m;\n#endif\n  const char *name = NULL;\n  PyObject *res = NULL;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_Check(x) || PyLong_Check(x)))\n#else\n  if (likely(PyLong_Check(x)))\n#endif\n    return __Pyx_NewRef(x);\n#if CYTHON_USE_TYPE_SLOTS\n  m = Py_TYPE(x)->tp_as_number;\n  #if PY_MAJOR_VERSION < 3\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = m->nb_int(x);\n  }\n  else if (m && m->nb_long) {\n    name = \"long\";\n    res = m->nb_long(x);\n  }\n  #else\n  if (likely(m && m->nb_int)) {\n    name = \"int\";\n    res = m->nb_int(x);\n  }\n  #endif\n#else\n  if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {\n    res = PyNumber_Int(x);\n  }\n#endif\n  if (likely(res)) {\n#if PY_MAJOR_VERSION < 3\n    if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {\n#else\n    if (unlikely(!PyLong_CheckExact(res))) {\n#endif\n        return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);\n    }\n  }\n  else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_TypeError,\n                    \"an integer is required\");\n  }\n  return res;\n}\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {\n  Py_ssize_t ival;\n  PyObject *x;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_CheckExact(b))) {\n    if (sizeof(Py_ssize_t) >= sizeof(long))\n        return PyInt_AS_LONG(b);\n    else\n        
return PyInt_AsSsize_t(b);\n  }\n#endif\n  if (likely(PyLong_CheckExact(b))) {\n    #if CYTHON_USE_PYLONG_INTERNALS\n    const digit* digits = ((PyLongObject*)b)->ob_digit;\n    const Py_ssize_t size = Py_SIZE(b);\n    if (likely(__Pyx_sst_abs(size) <= 1)) {\n        ival = likely(size) ? digits[0] : 0;\n        if (size == -1) ival = -ival;\n        return ival;\n    } else {\n      switch (size) {\n         case 2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n      }\n    }\n    #endif\n    return PyLong_AsSsize_t(b);\n  }\n  x = PyNumber_Index(b);\n  
if (!x) return -1;\n  ival = PyInt_AsSsize_t(x);\n  Py_DECREF(x);\n  return ival;\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {\n  return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {\n    return PyInt_FromSize_t(ival);\n}\n\n\n#endif /* Py_PYTHON_H */\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/region.pyx",
    "content": "\n# distutils: sources = src/region.c\n# distutils: include_dirs = src/\n\nfrom libc.stdlib cimport malloc, free\nfrom libc.stdio cimport sprintf\nfrom libc.string cimport strlen\n\ncimport c_region\n\ncpdef enum RegionType:\n    EMTPY\n    SPECIAL\n    RECTANGEL\n    POLYGON\n    MASK\n\ncdef class RegionBounds:\n    cdef c_region.region_bounds* _c_region_bounds\n\n    def __cinit__(self):\n        self._c_region_bounds = <c_region.region_bounds*>malloc(\n                sizeof(c_region.region_bounds))\n        if not self._c_region_bounds:\n            self._c_region_bounds = NULL\n            raise MemoryError()\n\n    def __init__(self, top, bottom, left, right):\n        self.set(top, bottom, left, right)\n\n    def __dealloc__(self):\n        if self._c_region_bounds is not NULL:\n            free(self._c_region_bounds)\n            self._c_region_bounds = NULL\n\n    def __str__(self):\n        return \"top: {:.3f} bottom: {:.3f} left: {:.3f} reight: {:.3f}\".format(\n                self._c_region_bounds.top,\n                self._c_region_bounds.bottom,\n                self._c_region_bounds.left,\n                self._c_region_bounds.right)\n\n    def get(self):\n        return (self._c_region_bounds.top,\n                self._c_region_bounds.bottom,\n                self._c_region_bounds.left,\n                self._c_region_bounds.right)\n\n    def set(self, top, bottom, left, right):\n        self._c_region_bounds.top = top\n        self._c_region_bounds.bottom = bottom\n        self._c_region_bounds.left = left\n        self._c_region_bounds.right = right\n\ncdef class Rectangle:\n    cdef c_region.region_rectangle* _c_region_rectangle\n\n    def __cinit__(self):\n        self._c_region_rectangle = <c_region.region_rectangle*>malloc(\n                sizeof(c_region.region_rectangle))\n        if not self._c_region_rectangle:\n            self._c_region_rectangle = NULL\n            raise MemoryError()\n\n    def __init__(self, x, 
y, width, height):\n        self.set(x, y, width, height)\n\n    def __dealloc__(self):\n        if self._c_region_rectangle is not NULL:\n            free(self._c_region_rectangle)\n            self._c_region_rectangle = NULL\n\n    def __str__(self):\n        return \"x: {:.3f} y: {:.3f} width: {:.3f} height: {:.3f}\".format(\n                self._c_region_rectangle.x,\n                self._c_region_rectangle.y,\n                self._c_region_rectangle.width,\n                self._c_region_rectangle.height)\n\n    def set(self, x, y, width, height):\n        self._c_region_rectangle.x = x\n        self._c_region_rectangle.y = y\n        self._c_region_rectangle.width = width\n        self._c_region_rectangle.height = height\n\n    def get(self):\n        \"\"\"\n        return:\n            (x, y, width, height)\n        \"\"\"\n        return (self._c_region_rectangle.x,\n                self._c_region_rectangle.y,\n                self._c_region_rectangle.width,\n                self._c_region_rectangle.height)\n\ncdef class Polygon:\n    cdef c_region.region_polygon* _c_region_polygon\n\n    def __cinit__(self, points):\n        \"\"\"\n        args:\n            points: flat sequence of coordinates\n            points = (1, 1, 10, 1, 10, 10, 1, 10)\n        \"\"\"\n        num = len(points) // 2\n        self._c_region_polygon = <c_region.region_polygon*>malloc(\n                sizeof(c_region.region_polygon))\n        if not self._c_region_polygon:\n            self._c_region_polygon = NULL\n            raise MemoryError()\n        self._c_region_polygon.count = num\n        # initialize y so __dealloc__ does not free a garbage pointer\n        # if the first malloc below fails\n        self._c_region_polygon.y = NULL\n        self._c_region_polygon.x = <float*>malloc(sizeof(float) * num)\n        if not self._c_region_polygon.x:\n            raise MemoryError()\n        self._c_region_polygon.y = <float*>malloc(sizeof(float) * num)\n        if not self._c_region_polygon.y:\n            raise MemoryError()\n\n        for i in range(num):\n            self._c_region_polygon.x[i] = points[i*2]\n            
self._c_region_polygon.y[i] = points[i*2+1]\n\n    def __dealloc__(self):\n        if self._c_region_polygon is not NULL:\n            if self._c_region_polygon.x is not NULL:\n                free(self._c_region_polygon.x)\n                self._c_region_polygon.x = NULL\n            if self._c_region_polygon.y is not NULL:\n                free(self._c_region_polygon.y)\n                self._c_region_polygon.y = NULL\n            free(self._c_region_polygon)\n            self._c_region_polygon = NULL\n\n    def __str__(self):\n        ret = \"\"\n        for i in range(self._c_region_polygon.count-1):\n            ret += \"({:.3f} {:.3f}) \".format(self._c_region_polygon.x[i],\n                    self._c_region_polygon.y[i])\n        # print the final point explicitly; the loop above stops at count-2\n        last = self._c_region_polygon.count-1\n        ret += \"({:.3f} {:.3f})\".format(self._c_region_polygon.x[last],\n                self._c_region_polygon.y[last])\n        return ret\n\ndef vot_overlap(polygon1, polygon2, bounds=None):\n    \"\"\" Compute the overlap between two polygons.\n    Args:\n        polygon1: polygon as a flat tuple of points\n        polygon2: polygon as a flat tuple of points\n        bounds: tuple of (top, bottom, left, right) or tuple of (width, height)\n    Return:\n        overlap: overlap between the two polygons\n    \"\"\"\n    if len(polygon1) == 1 or len(polygon2) == 1:\n        return float(\"nan\")\n\n    if len(polygon1) == 4:\n        polygon1_ = Polygon([polygon1[0], polygon1[1],\n                             polygon1[0]+polygon1[2], polygon1[1],\n                             polygon1[0]+polygon1[2], polygon1[1]+polygon1[3],\n                             polygon1[0], polygon1[1]+polygon1[3]])\n    else:\n        polygon1_ = Polygon(polygon1)\n\n    if len(polygon2) == 4:\n        polygon2_ = Polygon([polygon2[0], polygon2[1],\n                             polygon2[0]+polygon2[2], polygon2[1],\n                             polygon2[0]+polygon2[2], polygon2[1]+polygon2[3],\n                             polygon2[0], polygon2[1]+polygon2[3]])\n    else:\n        polygon2_ = 
Polygon(polygon2)\n\n    if bounds is not None and len(bounds) == 4:\n        pno_bounds = RegionBounds(bounds[0], bounds[1], bounds[2], bounds[3])\n    elif bounds is not None and len(bounds) == 2:\n        pno_bounds = RegionBounds(0, bounds[1], 0, bounds[0])\n    else:\n        pno_bounds = RegionBounds(-float(\"inf\"), float(\"inf\"),\n                                  -float(\"inf\"), float(\"inf\"))\n    cdef float only1 = 0\n    cdef float only2 = 0\n    cdef c_region.region_polygon* c_polygon1 = polygon1_._c_region_polygon\n    cdef c_region.region_polygon* c_polygon2 = polygon2_._c_region_polygon\n    cdef c_region.region_bounds no_bounds = pno_bounds._c_region_bounds[0] # dereference\n    return c_region.compute_polygon_overlap(c_polygon1,\n                                            c_polygon2,\n                                            &only1,\n                                            &only2,\n                                            no_bounds)\n\ndef vot_overlap_traj(polygons1, polygons2, bounds=None):\n    \"\"\" Compute overlaps between two trajectories, frame by frame.\n    Args:\n        polygons1: list of polygons\n        polygons2: list of polygons\n        bounds: tuple of (top, bottom, left, right) or tuple of (width, height)\n    Return:\n        overlaps: overlaps between all pairs of polygons\n    \"\"\"\n    assert len(polygons1) == len(polygons2)\n    overlaps = []\n    for i in range(len(polygons1)):\n        overlap = vot_overlap(polygons1[i], polygons2[i], bounds=bounds)\n        overlaps.append(overlap)\n    return overlaps\n\n\ndef vot_float2str(template, float value):\n    \"\"\"\n    Args:\n        template: C-style format string such as \"%.3f\"\n        value: float value\n    \"\"\"\n    cdef bytes ptemplate = template.encode()\n    cdef const char* ctemplate = ptemplate\n    cdef char* output = <char*>malloc(sizeof(char) * 100)\n    if not output:\n        raise MemoryError()\n    sprintf(output, ctemplate, value)\n    try:\n        ret = 
output[:strlen(output)].decode()\n    finally:\n        free(output)\n    return ret\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/setup.py",
    "content": "from distutils.core import setup\nfrom distutils.extension import Extension\nfrom Cython.Build import cythonize\n\nsetup(\n    ext_modules = cythonize([Extension(\"region\", [\"region.pyx\", \"src/region.c\"])]),\n)\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/src/buffer.h",
    "content": "\n#ifndef __STRING_BUFFER_H\n#define __STRING_BUFFER_H\n\n// Enable MinGW secure API for _snprintf_s\n#define MINGW_HAS_SECURE_API 1\n\n#ifdef _MSC_VER\n#define __INLINE __inline\n#else\n#define __INLINE inline\n#endif\n\n#include <string.h>\n#include <stdlib.h>\n#include <stdarg.h>\n\ntypedef struct string_buffer {\n\tchar* buffer;\n\tint position;\n\tint size;\n} string_buffer;\n\ntypedef struct string_list {\n\tchar** buffer;\n\tint position;\n\tint size;\n} string_list;\n\n#define BUFFER_INCREMENT_STEP 4096\n\nstatic __INLINE string_buffer* buffer_create(int L) {\n\tstring_buffer* B = (string_buffer*) malloc(sizeof(string_buffer));\n\tB->size = L;\n\tB->buffer = (char*) malloc(sizeof(char) * B->size);\n\tB->position = 0;\n\treturn B;\n}\n\nstatic __INLINE void buffer_reset(string_buffer* B) {\n\tB->position = 0;\n}\n\nstatic __INLINE void buffer_destroy(string_buffer** B) {\n\tif (!(*B)) return;\n\tif ((*B)->buffer) {\n\t\tfree((*B)->buffer);\n\t\t(*B)->buffer = NULL;\n\t}\n\tfree((*B));\n\t(*B) = NULL;\n}\n\nstatic __INLINE char* buffer_extract(const string_buffer* B) {\n\tchar *S = (char*) malloc(sizeof(char) * (B->position + 1));\n\tmemcpy(S, B->buffer, B->position);\n\tS[B->position] = '\\0';\n\treturn S;\n}\n\nstatic __INLINE int buffer_size(const string_buffer* B) {\n\treturn B->position;\n}\n\nstatic __INLINE void buffer_push(string_buffer* B, char C) {\n\tint required = 1;\n\tif (required > B->size - B->position) {\n\t\tB->size = B->position + BUFFER_INCREMENT_STEP;\n\t\tB->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size);\n\t}\n\tB->buffer[B->position] = C;\n\tB->position += required;\n}\n\nstatic __INLINE void buffer_append(string_buffer* B, const char *format, ...) 
{\n\n\tint required;\n\tva_list args;\n\n#if defined(__OS2__) || defined(__WINDOWS__) || defined(WIN32) || defined(_MSC_VER)\n\n\tva_start(args, format);\n\trequired = _vscprintf(format, args) + 1;\n\tva_end(args);\n\tif (required >= B->size - B->position) {\n\t\tB->size = B->position + required + 1;\n\t\tB->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size);\n\t}\n\tva_start(args, format);\n\trequired = _vsnprintf_s(&(B->buffer[B->position]), B->size - B->position, _TRUNCATE, format, args);\n\tva_end(args);\n\tB->position += required;\n\n#else\n\tva_start(args, format);\n\trequired = vsnprintf(&(B->buffer[B->position]), B->size - B->position, format, args);\n\tva_end(args);\n\tif (required >= B->size - B->position) {\n\t\tB->size = B->position + required + 1;\n\t\tB->buffer = (char*) realloc(B->buffer, sizeof(char) * B->size);\n\t\tva_start(args, format);\n\t\trequired = vsnprintf(&(B->buffer[B->position]), B->size - B->position, format, args);\n\t\tva_end(args);\n\t}\n\tB->position += required;\n#endif\n\n}\n\nstatic __INLINE string_list* list_create(int L) {\n\tstring_list* B = (string_list*) malloc(sizeof(string_list));\n\tB->size = L;\n\tB->buffer = (char**) malloc(sizeof(char*) * B->size);\n\tmemset(B->buffer, 0, sizeof(char*) * B->size);\n\tB->position = 0;\n\treturn B;\n}\n\nstatic __INLINE void list_reset(string_list* B) {\n\tint i;\n\tfor (i = 0; i < B->position; i++) {\n\t\tif (B->buffer[i]) free(B->buffer[i]);\n\t\tB->buffer[i] = NULL;\n\t}\n\tB->position = 0;\n}\n\nstatic __INLINE void list_destroy(string_list **B) {\n\tint i;\n\n\tif (!(*B)) return;\n\n\tfor (i = 0; i < (*B)->position; i++) {\n\t\tif ((*B)->buffer[i]) free((*B)->buffer[i]); (*B)->buffer[i] = NULL;\n\t}\n\n\tif ((*B)->buffer) {\n\t\tfree((*B)->buffer); (*B)->buffer = NULL;\n\t}\n\n\tfree((*B));\n\t(*B) = NULL;\n}\n\nstatic __INLINE char* list_get(const string_list *B, int I) {\n\tif (I < 0 || I >= B->position) {\n\t\treturn NULL;\n\t} else {\n\t\tif (!B->buffer[I]) 
{\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\tchar *S;\n\t\t\tint length = strlen(B->buffer[I]);\n\t\t\tS = (char*) malloc(sizeof(char) * (length + 1));\n\t\t\tmemcpy(S, B->buffer[I], length + 1);\n\t\t\treturn S;\n\t\t}\n\t}\n}\n\nstatic __INLINE int list_size(const string_list *B) {\n\treturn B->position;\n}\n\nstatic __INLINE void list_append(string_list *B, char* S) {\n\tint required = 1;\n\tint length = strlen(S);\n\tif (required > B->size - B->position) {\n\t\tB->size = B->position + 16;\n\t\tB->buffer = (char**) realloc(B->buffer, sizeof(char*) * B->size);\n\t}\n\tB->buffer[B->position] = (char*) malloc(sizeof(char) * (length + 1));\n\tmemcpy(B->buffer[B->position], S, length + 1);\n\tB->position += required;\n}\n\n// This version of the append does not copy the string but simply takes the control of its allocation\nstatic __INLINE void list_append_direct(string_list *B, char* S) {\n\tint required = 1;\n\t// int length = strlen(S);\n\tif (required > B->size - B->position) {\n\t\tB->size = B->position + 16;\n\t\tB->buffer = (char**) realloc(B->buffer, sizeof(char*) * B->size);\n\t}\n\tB->buffer[B->position] = S;\n\tB->position += required;\n}\n\n\n#endif\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/src/region.c",
    "content": "\n#include <stdio.h>\n#include <float.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n#include <assert.h>\n#include <ctype.h>\n\n#include \"region.h\"\n#include \"buffer.h\"\n\n#if defined(__OS2__) || defined(__WINDOWS__) || defined(WIN32) || defined(_MSC_VER)\n#ifndef isnan\n#define isnan(x) _isnan(x)\n#endif\n#ifndef isinf\n#define isinf(x) (!_finite(x))\n#endif\n#ifndef inline\n#define inline _inline\n#endif\n#endif\n\n/* Visual Studio 2013 was first to add C99 INFINITY and NAN */\n#if defined (_MSC_VER) && _MSC_VER < 1800\n#define INFINITY (DBL_MAX+DBL_MAX)\n#define NAN (INFINITY-INFINITY)\n#define round(fp) (int)((fp) >= 0 ? (fp) + 0.5 : (fp) - 0.5)\n#endif\n\n#define PRINT_BOUNDS(B) printf(\"[left: %.2f, top: %.2f, right: %.2f, bottom: %.2f]\\n\", B.left, B.top, B.right, B.bottom)\n\nconst region_bounds region_no_bounds = { -FLT_MAX, FLT_MAX, -FLT_MAX, FLT_MAX };\n\nint __flags = 0;\n\nint region_set_flags(int mask) {\n\n\t__flags |= mask;\n\n\treturn __flags;\n\n}\n\nint region_clear_flags(int mask) {\n\n\t__flags &= ~mask;\n\n\treturn __flags;\n\n}\n\nint __is_valid_sequence(float* sequence, int len) {\n\tint i;\n\n\tfor (i = 0; i < len; i++) {\n\t\tif (isnan(sequence[i])) return 0;\n\t}\n\n\treturn 1;\n}\n\n\n#define MAX_URI_SCHEME 16\n\nconst char* __parse_uri_prefix(const char* buffer, region_type* type) {\n\n\tint i = 0;\n\n\t*type = EMPTY;\n\n\tfor (; i < MAX_URI_SCHEME; i++) {\n\t\tif ((buffer[i] >= 'a' && buffer[i] <= 'z') || buffer[i] == '+' || buffer[i] == '.' 
|| buffer[i] == '-') continue;\n\n\t\tif (buffer[i] == ':') {\n\t\t\tif (strncmp(buffer, \"rect\", i - 1) == 0)\n\t\t\t\t*type = RECTANGLE;\n\t\t\telse if (strncmp(buffer, \"poly\", i - 1) == 0)\n\t\t\t\t*type = POLYGON;\n\t\t\telse if (strncmp(buffer, \"mask\", i - 1) == 0)\n\t\t\t\t*type = MASK;\n\t\t\telse if (strncmp(buffer, \"special\", i - 1) == 0)\n\t\t\t\t*type = SPECIAL;\n\t\t\treturn &(buffer[i + 1]);\n\t\t}\n\n\t\treturn buffer;\n\t}\n\n\treturn buffer;\n\n}\n\nregion_container* __create_region(region_type type) {\n\n\tregion_container* reg = (region_container*) malloc(sizeof(region_container));\n\n\treg->type = type;\n\n\treturn reg;\n\n}\n\nstatic inline const char* _str_find(const char* in, const char delimiter) {\n\n\tint i = 0;\n\twhile (in[i] && in[i] != delimiter) {\n\t\ti++;\n\t}\n\n\treturn (in[i] == delimiter) ? &(in[i]) + 1 : NULL;\n\n}\n\nint _parse_sequence(const char* buffer, float** data) {\n\n\tint i;\n\n\tfloat* numbers = (float*) malloc(sizeof(float) * (strlen(buffer) / 2));\n\n\tconst char* pch = buffer;\n\tfor (i = 0; ; i++) {\n\n\t\tif (pch) {\n#if defined (_MSC_VER)\n\t\t\tif (tolower(pch[0]) == 'n' && tolower(pch[1]) == 'a' && tolower(pch[2]) == 'n') {\n\t\t\t\tnumbers[i] = NAN;\n\t\t\t} else {\n\t\t\t\tnumbers[i] = (float) atof(pch);\n\t\t\t}\n#else\n\t\t\tnumbers[i] = (float) atof(pch);\n#endif\n\t\t} else\n\t\t\tbreak;\n\n\t\tpch = _str_find(pch, ',');\n\t}\n\n\tif (i > 0) {\n\t\tint j;\n\t\t*data = (float*) malloc(sizeof(float) * i);\n\t\tfor (j = 0; j < i; j++) { (*data)[j] = numbers[j]; }\n\t\tfree(numbers);\n\t} else {\n\t\t*data = NULL;\n\t\tfree(numbers);\n\t}\n\n\treturn i;\n}\n\nint region_parse(const char* buffer, region_container** region) {\n\n\tfloat* data = NULL;\n\tconst char* strdata = NULL;\n\tint num;\n\n\tregion_type prefix_type;\n\n\t// const char* tmp = buffer;\n\n\t(*region) = NULL;\n\n\tif (!buffer || !buffer[0]) {\n\t\treturn 1;\n\t}\n\n\tstrdata = __parse_uri_prefix(buffer, &prefix_type);\n\n\tnum = 
_parse_sequence(strdata, &data);\n\n\t// If at least one of the elements is NaN, then the region cannot be parsed\n\t// We return special region with a default code.\n\tif (!__is_valid_sequence(data, num) || num == 0) {\n\t\t// Preserve legacy support: if four values are given and the fourth one is a number\n\t\t// then this number is taken as a code.\n\t\tif (num == 4 && !isnan(data[3])) {\n\t\t\t(*region) = region_create_special(-(int) data[3]);\n\t\t} else {\n\t\t\t(*region) = region_create_special(TRAX_DEFAULT_CODE);\n\t\t}\n\t\tfree(data);\n\t\treturn 1;\n\t}\n\n\tif (prefix_type == EMPTY && num > 0) {\n\t\tif (num == 1)\n\t\t\tprefix_type = SPECIAL;\n\t\telse if (num == 4)\n\t\t\tprefix_type = RECTANGLE;\n\t\telse if (num >= 6 && num % 2 == 0)\n\t\t\tprefix_type = POLYGON;\n\t}\n\n\tswitch (prefix_type) {\n\tcase SPECIAL: {\n\t\tassert(num == 1);\n\t\t(*region) = (region_container*) malloc(sizeof(region_container));\n\t\t(*region)->type = SPECIAL;\n\t\t(*region)->data.special = (int) data[0];\n\t\tfree(data);\n\t\treturn 1;\n\n\t}\n\tcase RECTANGLE: {\n\t\tassert(num == 4);\n\t\t(*region) = (region_container*) malloc(sizeof(region_container));\n\t\t(*region)->type = RECTANGLE;\n\n\t\t(*region)->data.rectangle.x = data[0];\n\t\t(*region)->data.rectangle.y = data[1];\n\t\t(*region)->data.rectangle.width = data[2];\n\t\t(*region)->data.rectangle.height = data[3];\n\n\t\tfree(data);\n\t\treturn 1;\n\n\t}\n\tcase POLYGON: {\n\t\tint j;\n\n\t\tassert(num >= 6 && num % 2 == 0);\n\n\t\t(*region) = (region_container*) malloc(sizeof(region_container));\n\t\t(*region)->type = POLYGON;\n\n\t\t(*region)->data.polygon.count = num / 2;\n\t\t(*region)->data.polygon.x = (float*) malloc(sizeof(float) * (*region)->data.polygon.count);\n\t\t(*region)->data.polygon.y = (float*) malloc(sizeof(float) * (*region)->data.polygon.count);\n\n\t\tfor (j = 0; j < (*region)->data.polygon.count; j++) {\n\t\t\t(*region)->data.polygon.x[j] = data[j * 2];\n\t\t\t(*region)->data.polygon.y[j] = 
data[j * 2 + 1];\n\t\t}\n\n\t\tfree(data);\n\t\treturn 1;\n\tcase EMPTY:\n\t\treturn 1;\n\tcase MASK:\n\t\treturn 1;\n\t}\n\t\t/*\t    case MASK: {\n\n\t\t\t    \tint i;\n\t\t\t    \tint position;\n\t\t\t    \tint value;\n\n\t\t\t    \tassert(num > 4);\n\n\t\t\t\t\t(*region) = (region_container*) malloc(sizeof(region_container));\n\t\t\t\t\t(*region)->type = MASK;\n\n\t\t\t    \t(*region)->data.mask.x = (int) data[0];\n\t\t\t    \t(*region)->data.mask.y = (int) data[1];\n\t\t\t    \t(*region)->data.mask.width = (int) data[2];\n\t\t\t    \t(*region)->data.mask.height = (int) data[3];\n\n\t\t\t    \t(*region)->data.mask.data = (char*) malloc(sizeof(char) * (*region)->data.mask.width * (*region)->data.mask.height);\n\n\t\t\t    \tvalue = 0;\n\t\t\t    \tposition = 0;\n\n\t\t\t    \tfor (i = 4; i < num; i++) {\n\n\t\t\t    \t\tint count =\n\n\n\n\n\t\t\t    \t}\n\n\n\t\t\t    }*/\n\t}\n\n\tif (data) free(data);\n\n\treturn 0;\n}\n\nchar* region_string(region_container* region) {\n\n\tint i;\n\tchar* result = NULL;\n\tstring_buffer *buffer;\n\n\tif (!region) return NULL;\n\n\tbuffer = buffer_create(32);\n\n\tif (region->type == SPECIAL) {\n\n\t\tbuffer_append(buffer, \"%d\", region->data.special);\n\n\t} else if (region->type == RECTANGLE) {\n\n\t\tbuffer_append(buffer, \"%.4f,%.4f,%.4f,%.4f\",\n\t\t              region->data.rectangle.x, region->data.rectangle.y,\n\t\t              region->data.rectangle.width, region->data.rectangle.height);\n\n\t} else if (region->type == POLYGON) {\n\n\t\tfor (i = 0; i < region->data.polygon.count; i++) {\n\t\t\tbuffer_append(buffer, (i == 0 ? 
\"%.4f,%.4f\" : \",%.4f,%.4f\"), region->data.polygon.x[i], region->data.polygon.y[i]);\n\t\t}\n\t}\n\n\tif (buffer_size(buffer) > 0)\n\t\tresult = buffer_extract(buffer);\n\tbuffer_destroy(&buffer);\n\n\treturn result;\n}\n\nvoid region_print(FILE* out, region_container* region) {\n\n\tchar* buffer = region_string(region);\n\n\tif (buffer) {\n\t\tfputs(buffer, out);\n\t\tfree(buffer);\n\t}\n\n}\n\nregion_container* region_convert(const region_container* region, region_type type) {\n\n\tregion_container* reg = NULL;\n\tswitch (type) {\n\n\tcase RECTANGLE: {\n\n\t\treg = (region_container*) malloc(sizeof(region_container));\n\t\treg->type = type;\n\n\t\tswitch (region->type) {\n\t\tcase RECTANGLE:\n\t\t\treg->data.rectangle = region->data.rectangle;\n\t\t\tbreak;\n\t\tcase POLYGON: {\n\n\t\t\t// Use -FLT_MAX (not FLT_MIN, the smallest positive float) as the\n\t\t\t// initial extreme so polygons with negative coordinates are handled.\n\t\t\tfloat top = FLT_MAX;\n\t\t\tfloat bottom = -FLT_MAX;\n\t\t\tfloat left = FLT_MAX;\n\t\t\tfloat right = -FLT_MAX;\n\t\t\tint i;\n\n\t\t\tfor (i = 0; i < region->data.polygon.count; i++) {\n\t\t\t\ttop = MIN(top, region->data.polygon.y[i]);\n\t\t\t\tbottom = MAX(bottom, region->data.polygon.y[i]);\n\t\t\t\tleft = MIN(left, region->data.polygon.x[i]);\n\t\t\t\tright = MAX(right, region->data.polygon.x[i]);\n\t\t\t}\n\n\t\t\treg->data.rectangle.x = left;\n\t\t\treg->data.rectangle.y = top;\n\t\t\treg->data.rectangle.width = right - left;\n\t\t\treg->data.rectangle.height = bottom - top;\n\t\t\tbreak;\n\t\t}\n\t\tcase SPECIAL: {\n\t\t\tfree(reg); reg = NULL;\n\t\t\tbreak;\n\t\t}\n\t\tdefault: {\n\t\t\tfree(reg); reg = NULL;\n\t\t\tbreak;\n\t\t}\n\t\t}\n\t\tbreak;\n\t}\n\n\tcase POLYGON: {\n\n\t\treg = (region_container*) malloc(sizeof(region_container));\n\t\treg->type = type;\n\n\t\tswitch (region->type) {\n\t\tcase RECTANGLE: {\n\n\t\t\treg->data.polygon.count = 4;\n\n\t\t\treg->data.polygon.x = (float *) malloc(sizeof(float) * reg->data.polygon.count);\n\t\t\treg->data.polygon.y = (float *) malloc(sizeof(float) * reg->data.polygon.count);\n\n\t\t\tif (__flags & 
REGION_LEGACY_RASTERIZATION) {\n\n\t\t\t\treg->data.polygon.x[0] = region->data.rectangle.x;\n\t\t\t\treg->data.polygon.x[1] = region->data.rectangle.x + region->data.rectangle.width;\n\t\t\t\treg->data.polygon.x[2] = region->data.rectangle.x + region->data.rectangle.width;\n\t\t\t\treg->data.polygon.x[3] = region->data.rectangle.x;\n\n\t\t\t\treg->data.polygon.y[0] = region->data.rectangle.y;\n\t\t\t\treg->data.polygon.y[1] = region->data.rectangle.y;\n\t\t\t\treg->data.polygon.y[2] = region->data.rectangle.y + region->data.rectangle.height;\n\t\t\t\treg->data.polygon.y[3] = region->data.rectangle.y + region->data.rectangle.height;\n\n\t\t\t} else {\n\n\t\t\t\treg->data.polygon.x[0] = region->data.rectangle.x;\n\t\t\t\treg->data.polygon.x[1] = region->data.rectangle.x + region->data.rectangle.width - 1;\n\t\t\t\treg->data.polygon.x[2] = region->data.rectangle.x + region->data.rectangle.width - 1;\n\t\t\t\treg->data.polygon.x[3] = region->data.rectangle.x;\n\n\t\t\t\treg->data.polygon.y[0] = region->data.rectangle.y;\n\t\t\t\treg->data.polygon.y[1] = region->data.rectangle.y;\n\t\t\t\treg->data.polygon.y[2] = region->data.rectangle.y + region->data.rectangle.height - 1;\n\t\t\t\treg->data.polygon.y[3] = region->data.rectangle.y + region->data.rectangle.height - 1;\n\n\t\t\t}\n\n\t\t\tbreak;\n\t\t}\n\t\tcase POLYGON: {\n\n\t\t\treg->data.polygon.count = region->data.polygon.count;\n\n\t\t\treg->data.polygon.x = (float *) malloc(sizeof(float) * region->data.polygon.count);\n\t\t\treg->data.polygon.y = (float *) malloc(sizeof(float) * region->data.polygon.count);\n\n\t\t\tmemcpy(reg->data.polygon.x, region->data.polygon.x, sizeof(float) * region->data.polygon.count);\n\t\t\tmemcpy(reg->data.polygon.y, region->data.polygon.y, sizeof(float) * region->data.polygon.count);\n\n\t\t\tbreak;\n\t\t}\n\t\tcase SPECIAL: {\n\t\t\tfree(reg); reg = NULL;\n\t\t\tbreak;\n\t\t}\n\t\tdefault: {\n\t\t\tfree(reg); reg = NULL;\n\t\t\tbreak;\n\t\t}\n\t\t}\n\t\tbreak;\n\n\t\tcase SPECIAL: 
{\n\t\t\tif (region->type == SPECIAL)\n\t\t\t\t// If source is also code then just copy the value\n\t\t\t\treg = region_create_special(region->data.special);\n\t\t\telse\n\t\t\t\t// All types are converted to default region\n\t\t\t\treg = region_create_special(TRAX_DEFAULT_CODE);\n\t\t\tbreak;\n\t\t}\n\n\t\tdefault:\n\t\t\tbreak;\n\n\t\t}\n\n\t}\n\n\treturn reg;\n\n}\n\nvoid region_release(region_container** region) {\n\n\tswitch ((*region)->type) {\n\tcase RECTANGLE:\n\t\tbreak;\n\tcase POLYGON:\n\t\tfree((*region)->data.polygon.x);\n\t\tfree((*region)->data.polygon.y);\n\t\t(*region)->data.polygon.count = 0;\n\t\tbreak;\n\tcase SPECIAL: {\n\t\tbreak;\n\t}\n\tcase MASK:\n\t\tbreak;\n\tcase EMPTY:\n\t\tbreak;\n\t}\n\n\tfree(*region);\n\n\t*region = NULL;\n\n}\n\nregion_container* region_create_special(int code) {\n\n\tregion_container* reg = __create_region(SPECIAL);\n\n\treg->data.special = code;\n\n\treturn reg;\n\n}\n\nregion_container* region_create_rectangle(float x, float y, float width, float height) {\n\n\tregion_container* reg = __create_region(RECTANGLE);\n\n\treg->data.rectangle.width = width;\n\treg->data.rectangle.height = height;\n\treg->data.rectangle.x = x;\n\treg->data.rectangle.y = y;\n\n\treturn reg;\n\n}\n\nregion_container* region_create_polygon(int count) {\n\n\tassert(count > 0);\n\n\t{\n\n\t\tregion_container* reg = __create_region(POLYGON);\n\n\t\treg->data.polygon.count = count;\n\t\treg->data.polygon.x = (float *) malloc(sizeof(float) * count);\n\t\treg->data.polygon.y = (float *) malloc(sizeof(float) * count);\n\n\t\treturn reg;\n\n\t}\n}\n\n#define MAX_MASK 10000\n\nvoid free_polygon(region_polygon* polygon) {\n\n\tfree(polygon->x);\n\tfree(polygon->y);\n\n\tpolygon->x = NULL;\n\tpolygon->y = NULL;\n\n\tpolygon->count = 0;\n\n}\n\nregion_polygon* allocate_polygon(int count) {\n\n\tregion_polygon* polygon = (region_polygon*) malloc(sizeof(region_polygon));\n\n\tpolygon->count = count;\n\n\tpolygon->x = (float*) malloc(sizeof(float) * 
count);\n\tpolygon->y = (float*) malloc(sizeof(float) * count);\n\n\tmemset(polygon->x, 0, sizeof(float) * count);\n\tmemset(polygon->y, 0, sizeof(float) * count);\n\n\treturn polygon;\n}\n\nregion_polygon* clone_polygon(const region_polygon* polygon) {\n\n\tregion_polygon* clone = allocate_polygon(polygon->count);\n\n\tmemcpy(clone->x, polygon->x, sizeof(float) * polygon->count);\n\tmemcpy(clone->y, polygon->y, sizeof(float) * polygon->count);\n\n\treturn clone;\n}\n\nregion_polygon* offset_polygon(const region_polygon* polygon, float x, float y) {\n\n\tint i;\n\tregion_polygon* clone = clone_polygon(polygon);\n\n\tfor (i = 0; i < clone->count; i++) {\n\t\tclone->x[i] += x;\n\t\tclone->y[i] += y;\n\t}\n\n\treturn clone;\n}\n\nregion_polygon* round_polygon(const region_polygon* polygon) {\n\n\tint i;\n\tregion_polygon* clone = clone_polygon(polygon);\n\n\tfor (i = 0; i < clone->count; i++) {\n\t\tclone->x[i] = round(clone->x[i]);\n\t\tclone->y[i] = round(clone->y[i]);\n\t}\n\n\treturn clone;\n}\n\nint point_in_polygon(const region_polygon* polygon, float x, float y) {\n\tint i, j, c = 0;\n\tfor (i = 0, j = polygon->count - 1; i < polygon->count; j = i++) {\n\t\tif ( ((polygon->y[i] > y) != (polygon->y[j] > y)) &&\n\t\t        (x < (polygon->x[j] - polygon->x[i]) * (y - polygon->y[i]) / (polygon->y[j] - polygon->y[i]) + polygon->x[i]) )\n\t\t\tc = !c;\n\t}\n\treturn c;\n}\n\nvoid print_polygon(const region_polygon* polygon) {\n\n\tint i;\n\tprintf(\"%d:\", polygon->count);\n\n\tfor (i = 0; i < polygon->count; i++) {\n\t\tprintf(\" (%f, %f)\", polygon->x[i], polygon->y[i]);\n\t}\n\n\tprintf(\"\\n\");\n\n}\n\nregion_bounds compute_bounds(const region_polygon* polygon) {\n\n\tint i;\n\tregion_bounds bounds;\n\tbounds.top = FLT_MAX;\n\tbounds.bottom = -FLT_MAX;\n\tbounds.left = FLT_MAX;\n\tbounds.right = -FLT_MAX;\n\n\tfor (i = 0; i < polygon->count; i++) {\n\t\tbounds.top = MIN(bounds.top, polygon->y[i]);\n\t\tbounds.bottom = MAX(bounds.bottom, 
polygon->y[i]);\n\t\tbounds.left = MIN(bounds.left, polygon->x[i]);\n\t\tbounds.right = MAX(bounds.right, polygon->x[i]);\n\t}\n\n\treturn bounds;\n\n}\n\nregion_bounds bounds_round(region_bounds bounds) {\n\n\tbounds.top = floor(bounds.top);\n\tbounds.bottom = ceil(bounds.bottom);\n\tbounds.left = floor(bounds.left);\n\tbounds.right = ceil(bounds.right);\n\n\treturn bounds;\n\n}\n\nregion_bounds bounds_intersection(region_bounds a, region_bounds b) {\n\n\tregion_bounds result;\n\n\tresult.top = MAX(a.top, b.top);\n\tresult.bottom = MIN(a.bottom, b.bottom);\n\tresult.left = MAX(a.left, b.left);\n\tresult.right = MIN(a.right, b.right);\n\n\treturn result;\n\n}\n\nregion_bounds bounds_union(region_bounds a, region_bounds b) {\n\n\tregion_bounds result;\n\n\tresult.top = MIN(a.top, b.top);\n\tresult.bottom = MAX(a.bottom, b.bottom);\n\tresult.left = MIN(a.left, b.left);\n\tresult.right = MAX(a.right, b.right);\n\n\treturn result;\n\n}\n\nfloat bounds_overlap(region_bounds a, region_bounds b) {\n\n\tregion_bounds rintersection = bounds_intersection(a, b);\n\tfloat intersection = (rintersection.right - rintersection.left) * (rintersection.bottom - rintersection.top);\n\n\treturn MAX(0, intersection / (((a.right - a.left) * (a.bottom - a.top)) + ((b.right - b.left) * (b.bottom - b.top)) - intersection));\n\n}\n\nregion_bounds region_create_bounds(float left, float top, float right, float bottom) {\n\n\tregion_bounds result;\n\n\tresult.top = top;\n\tresult.bottom = bottom;\n\tresult.left = left;\n\tresult.right = right;\n\n\treturn result;\n}\n\nregion_bounds region_compute_bounds(const region_container* region) {\n\n\tregion_bounds bounds;\n\tswitch (region->type) {\n\tcase RECTANGLE:\n\t\tif (__flags & REGION_LEGACY_RASTERIZATION) {\n\t\t\tbounds = region_create_bounds(region->data.rectangle.x,\n\t\t\t                              region->data.rectangle.y,\n\t\t\t                              region->data.rectangle.x + region->data.rectangle.width,\n\t\t\t              
                region->data.rectangle.y + region->data.rectangle.height);\n\t\t} else {\n\t\t\tbounds = region_create_bounds(region->data.rectangle.x,\n\t\t\t                              region->data.rectangle.y,\n\t\t\t                              region->data.rectangle.x + region->data.rectangle.width - 1,\n\t\t\t                              region->data.rectangle.y + region->data.rectangle.height - 1);\n\t\t}\n\t\tbreak;\n\tcase POLYGON: {\n\t\tbounds = compute_bounds(&(region->data.polygon));\n\t\tbreak;\n\t}\n\tdefault: {\n\t\tbounds = region_no_bounds;\n\t\tbreak;\n\t}\n\t}\n\n\treturn bounds;\n\n}\n\nint rasterize_polygon(const region_polygon* polygon_input, char* mask, int width, int height) {\n\n\tint nodes, pixelY, i, j, swap;\n\tint sum = 0;\n\tregion_polygon* polygon = (region_polygon*) polygon_input;\n\n\tint* nodeX = (int*) malloc(sizeof(int) * polygon->count);\n\n\tif (mask) memset(mask, 0, width * height * sizeof(char));\n\n\tif (__flags & REGION_LEGACY_RASTERIZATION) {\n\n\t\t/*  Loop through the rows of the image. */\n\t\tfor (pixelY = 0; pixelY < height; pixelY++) {\n\n\t\t\t/*  Build a list of nodes. */\n\t\t\tnodes = 0;\n\t\t\tj = polygon->count - 1;\n\n\t\t\tfor (i = 0; i < polygon->count; i++) {\n\t\t\t\tif (((polygon->y[i] < (double) pixelY) && (polygon->y[j] >= (double) pixelY)) ||\n\t\t\t\t        ((polygon->y[j] < (double) pixelY) && (polygon->y[i] >= (double) pixelY))) {\n\t\t\t\t\tnodeX[nodes++] = (int) (polygon->x[i] + (pixelY - polygon->y[i]) /\n\t\t\t\t\t                        (polygon->y[j] - polygon->y[i]) * (polygon->x[j] - polygon->x[i]));\n\t\t\t\t}\n\t\t\t\tj = i;\n\t\t\t}\n\n\t\t\t/* Sort the nodes, via a simple “Bubble” sort. 
*/\n\t\t\ti = 0;\n\t\t\twhile (i < nodes - 1) {\n\t\t\t\tif (nodeX[i] > nodeX[i + 1]) {\n\t\t\t\t\tswap = nodeX[i];\n\t\t\t\t\tnodeX[i] = nodeX[i + 1];\n\t\t\t\t\tnodeX[i + 1] = swap;\n\t\t\t\t\tif (i) i--;\n\t\t\t\t} else {\n\t\t\t\t\ti++;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*  Fill the pixels between node pairs. */\n\t\t\tfor (i = 0; i < nodes; i += 2) {\n\t\t\t\tif (nodeX[i] >= width) break;\n\t\t\t\tif (nodeX[i + 1] > 0 ) {\n\t\t\t\t\tif (nodeX[i] < 0 ) nodeX[i] = 0;\n\t\t\t\t\tif (nodeX[i + 1] > width) nodeX[i + 1] = width - 1;\n\t\t\t\t\tfor (j = nodeX[i]; j < nodeX[i + 1]; j++) {\n\t\t\t\t\t\tif (mask) mask[pixelY * width + j] = 1;\n\t\t\t\t\t\tsum++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t} else {\n\n\t\tpolygon = round_polygon(polygon_input);\n\n\t\t/*  Loop through the rows of the image. */\n\t\tfor (pixelY = 0; pixelY < height; pixelY++) {\n\n\t\t\t/*  Build a list of nodes. */\n\t\t\tnodes = 0;\n\t\t\tj = polygon->count - 1;\n\n\t\t\tfor (i = 0; i < polygon->count; i++) {\n\t\t\t\tif ((((int)polygon->y[i] <= pixelY) && ((int)polygon->y[j] > pixelY)) ||\n\t\t\t\t        (((int)polygon->y[j] <= pixelY) && ((int)polygon->y[i] > pixelY)) ||\n\t\t\t\t        (((int)polygon->y[i] < pixelY) && ((int)polygon->y[j] >= pixelY)) ||\n\t\t\t\t        (((int)polygon->y[j] < pixelY) && ((int)polygon->y[i] >= pixelY)) ||\n\t\t\t\t        (((int)polygon->y[i] == (int)polygon->y[j]) && ((int)polygon->y[i] == pixelY))) {\n\t\t\t\t\tdouble r = (polygon->y[j] - polygon->y[i]);\n\t\t\t\t\tdouble k = (polygon->x[j] - polygon->x[i]);\n\t\t\t\t\tif (r != 0)\n\t\t\t\t\t\tnodeX[nodes++] = (int) ((double) polygon->x[i] + (double) (pixelY - polygon->y[i]) / r * k);\n\t\t\t\t}\n\t\t\t\tj = i;\n\t\t\t}\n\t\t\t/* Sort the nodes, via a simple “Bubble” sort. 
*/\n\t\t\ti = 0;\n\t\t\twhile (i < nodes - 1) {\n\t\t\t\tif (nodeX[i] > nodeX[i + 1]) {\n\t\t\t\t\tswap = nodeX[i];\n\t\t\t\t\tnodeX[i] = nodeX[i + 1];\n\t\t\t\t\tnodeX[i + 1] = swap;\n\t\t\t\t\tif (i) i--;\n\t\t\t\t} else {\n\t\t\t\t\ti++;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*  Fill the pixels between node pairs. */\n\t\t\ti = 0;\n\t\t\twhile (i < nodes - 1) {\n\t\t\t\t// If a point is in the line then we get two identical values\n\t\t\t\t// Ignore the first\n\t\t\t\tif (nodeX[i] == nodeX[i + 1]) {\n\t\t\t\t\ti++;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (nodeX[i] >= width) break;\n\t\t\t\tif (nodeX[i + 1] >= 0) {\n\t\t\t\t\tif (nodeX[i] < 0) nodeX[i] = 0;\n\t\t\t\t\tif (nodeX[i + 1] >= width) nodeX[i + 1] = width - 1;\n\t\t\t\t\tfor (j = nodeX[i]; j <= nodeX[i + 1]; j++) {\n\t\t\t\t\t\tif (mask) mask[pixelY * width + j] = 1;\n\t\t\t\t\t\tsum++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ti += 2;\n\n\t\t\t}\n\t\t}\n\n\t\tfree_polygon(polygon);\n\n\t}\n\n\tfree(nodeX);\n\n\treturn sum;\n}\n\nfloat compute_polygon_overlap(const region_polygon* p1, const region_polygon* p2, float *only1, float *only2, region_bounds bounds) {\n\n\tint i;\n\tint vol_1 = 0;\n\tint vol_2 = 0;\n\tint mask_1 = 0;\n\tint mask_2 = 0;\n\tint mask_intersect = 0;\n\tchar* mask1 = NULL;\n\tchar* mask2 = NULL;\n\tdouble a1, a2;\n\tfloat x, y;\n\tint width, height;\n\tregion_polygon *op1, *op2;\n\tregion_bounds b1, b2;\n\n\tif (__flags & REGION_LEGACY_RASTERIZATION) {\n\t\tb1 = bounds_intersection(compute_bounds(p1), bounds);\n\t\tb2 = bounds_intersection(compute_bounds(p2), bounds);\n\t} else {\n\t\tb1 = bounds_intersection(bounds_round(compute_bounds(p1)), bounds);\n\t\tb2 = bounds_intersection(bounds_round(compute_bounds(p2)), bounds);\n\t}\n\n\tx = MIN(b1.left, b2.left);\n\ty = MIN(b1.top, b2.top);\n\n\twidth = (int) (MAX(b1.right, b2.right) - x) + 1;\n\theight = (int) (MAX(b1.bottom, b2.bottom) - y) + 1;\n\n\t// Fixing crashes due to overflowed regions, a simple check if the ratio\n\t// between the two 
bounding boxes is simply too big and the overlap would\n\t// be 0 anyway.\n\n\ta1 = (b1.right - b1.left) * (b1.bottom - b1.top);\n\ta2 = (b2.right - b2.left) * (b2.bottom - b2.top);\n\n\tif (a1 / a2 < 1e-10 || a2 / a1 < 1e-10 || width < 1 || height < 1) {\n\n\t\tif (only1)\n\t\t\t(*only1) = 0;\n\n\t\tif (only2)\n\t\t\t(*only2) = 0;\n\n\t\treturn 0;\n\n\t}\n\n\tif (bounds_overlap(b1, b2) == 0) {\n\n\t\tif (only1 || only2) {\n\t\t\tvol_1 = rasterize_polygon(p1, NULL, b1.right - b1.left + 1, b1.bottom - b1.top + 1);\n\t\t\tvol_2 = rasterize_polygon(p2, NULL, b2.right - b2.left + 1, b2.bottom - b2.top + 1);\n\n\t\t\tif (only1)\n\t\t\t\t(*only1) = (float) vol_1 / (float) (vol_1 + vol_2);\n\n\t\t\tif (only2)\n\t\t\t\t(*only2) = (float) vol_2 / (float) (vol_1 + vol_2);\n\t\t}\n\n\t\treturn 0;\n\n\t}\n\n\tmask1 = (char*) malloc(sizeof(char) * width * height);\n\tmask2 = (char*) malloc(sizeof(char) * width * height);\n\n\top1 = offset_polygon(p1, -x, -y);\n\top2 = offset_polygon(p2, -x, -y);\n\n\trasterize_polygon(op1, mask1, width, height);\n\trasterize_polygon(op2, mask2, width, height);\n\n\tfor (i = 0; i < width * height; i++) {\n\t\tif (mask1[i]) vol_1++;\n\t\tif (mask2[i]) vol_2++;\n\t\tif (mask1[i] && mask2[i]) mask_intersect++;\n\t\telse if (mask1[i]) mask_1++;\n\t\telse if (mask2[i]) mask_2++;\n\t}\n\n\tfree_polygon(op1);\n\tfree_polygon(op2);\n\n\tfree(mask1);\n\tfree(mask2);\n\n\tif (only1)\n\t\t(*only1) = (float) mask_1 / (float) (mask_1 + mask_2 + mask_intersect);\n\n\tif (only2)\n\t\t(*only2) = (float) mask_2 / (float) (mask_1 + mask_2 + mask_intersect);\n\n\treturn (float) mask_intersect / (float) (mask_1 + mask_2 + mask_intersect);\n\n}\n\n#define COPY_POLYGON(TP, P) { P.count = TP->data.polygon.count; P.x = TP->data.polygon.x; P.y = TP->data.polygon.y; }\n\nregion_overlap region_compute_overlap(const region_container* ra, const region_container* rb, region_bounds bounds) {\n\n\tregion_container* ta = (region_container *) ra;\n\tregion_container* tb = 
(region_container *) rb;\n\tregion_overlap overlap;\n\toverlap.overlap = 0;\n\toverlap.only1 = 0;\n\toverlap.only2 = 0;\n\n\tif (ra->type == RECTANGLE)\n\t\tta = region_convert(ra, POLYGON);\n\n\tif (rb->type == RECTANGLE)\n\t\ttb = region_convert(rb, POLYGON);\n\n\tif (ta->type == POLYGON && tb->type == POLYGON) {\n\n\t\tregion_polygon p1, p2;\n\n\t\tCOPY_POLYGON(ta, p1);\n\t\tCOPY_POLYGON(tb, p2);\n\n\t\toverlap.overlap = compute_polygon_overlap(&p1, &p2, &(overlap.only1), &(overlap.only2), bounds);\n\n\t}\n\n\tif (ta != ra)\n\t\tregion_release(&ta);\n\n\tif (tb != rb)\n\t\tregion_release(&tb);\n\n\treturn overlap;\n\n}\n\nint region_contains_point(region_container* r, float x, float y) {\n\t\n\tif (r->type == RECTANGLE) {\n\t\tif (x >= (r->data.rectangle).x && x <= ((r->data.rectangle).width + (r->data.rectangle).x) &&\n\t\t\ty >= (r->data.rectangle).y && y <= ((r->data.rectangle).height + (r->data.rectangle).y))\n            return 1;\n        return 0;\n\t}\n\n\tif (r->type == POLYGON)\n\t\treturn point_in_polygon(&(r->data.polygon), x, y);\n\n\treturn 0;\n\n}\n\nvoid region_get_mask(region_container* r, char* mask, int width, int height) {\n\n\tregion_container* t = r;\n\n\tif (r->type == RECTANGLE)\n\t\tt = region_convert(r, POLYGON);\n\n\trasterize_polygon(&(t->data.polygon), mask, width, height);\n\n\tif (t != r)\n\t\tregion_release(&t);\n\n}\n\nvoid region_get_mask_offset(region_container* r, char* mask, int x, int y, int width, int height) {\n\n\tregion_container* t = r;\n\tregion_polygon *p;\n\n\tif (r->type == RECTANGLE)\n\t\tt = region_convert(r, POLYGON);\n\n\tp = offset_polygon(&(t->data.polygon), -x, -y);\n\n\trasterize_polygon(p, mask, width, height);\n\n\tfree_polygon(p);\n\n\tif (t != r)\n\t\tregion_release(&t);\n\n}\n\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/src/region.h",
    "content": "/* -*- Mode: C; indent-tabs-mode: nil; c-basic-offset: 4; tab-width: 4 -*- */\n\n#ifndef _REGION_H_\n#define _REGION_H_\n\n#ifdef TRAX_STATIC_DEFINE\n#  define __TRAX_EXPORT\n#else\n#  ifndef __TRAX_EXPORT\n#    if defined(_MSC_VER)\n#      ifdef trax_EXPORTS\n         /* We are building this library */\n#        define __TRAX_EXPORT __declspec(dllexport)\n#      else\n         /* We are using this library */\n#        define __TRAX_EXPORT __declspec(dllimport)\n#      endif\n#    elif defined(__GNUC__)\n#      ifdef trax_EXPORTS\n         /* We are building this library */\n#        define __TRAX_EXPORT __attribute__((visibility(\"default\")))\n#      else\n         /* We are using this library */\n#        define __TRAX_EXPORT __attribute__((visibility(\"default\")))\n#      endif\n#    endif\n#  endif\n#endif\n\n#ifndef MAX\n#define MAX(a,b) (((a) > (b)) ? (a) : (b))\n#endif\n\n#ifndef MIN\n#define MIN(a,b) (((a) < (b)) ? (a) : (b))\n#endif\n\n#define TRAX_DEFAULT_CODE 0\n\n#define REGION_LEGACY_RASTERIZATION 1\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\ntypedef enum region_type {EMPTY, SPECIAL, RECTANGLE, POLYGON, MASK} region_type;\n\ntypedef struct region_bounds {\n\n\tfloat top;\n\tfloat bottom;\n\tfloat left;\n\tfloat right;\n\n} region_bounds;\n\ntypedef struct region_polygon {\n\n\tint count;\n\n\tfloat* x;\n\tfloat* y;\n\n} region_polygon;\n\ntypedef struct region_mask {\n\n    int x;\n    int y;\n\n    int width;\n    int height;\n\n    char* data;\n\n} region_mask;\n\ntypedef struct region_rectangle {\n\n    float x;\n    float y;\n    float width;\n    float height;\n\n} region_rectangle;\n\ntypedef struct region_container {\n    enum region_type type;\n    union {\n        region_rectangle rectangle;\n        region_polygon polygon;\n        region_mask mask;\n        int special;\n    } data;\n} region_container;\n\ntypedef struct region_overlap {\n\n\tfloat overlap;    \n    float only1;\n    float only2;\n\n} 
region_overlap;\n\nextern const region_bounds region_no_bounds; \n\n__TRAX_EXPORT int region_set_flags(int mask);\n\n__TRAX_EXPORT int region_clear_flags(int mask);\n\n__TRAX_EXPORT region_overlap region_compute_overlap(const region_container* ra, const region_container* rb, region_bounds bounds);\n\n__TRAX_EXPORT float compute_polygon_overlap(const region_polygon* p1, const region_polygon* p2, float *only1, float *only2, region_bounds bounds);\n\n__TRAX_EXPORT region_bounds region_create_bounds(float left, float top, float right, float bottom);\n\n__TRAX_EXPORT region_bounds region_compute_bounds(const region_container* region);\n\n__TRAX_EXPORT int region_parse(const char* buffer, region_container** region);\n\n__TRAX_EXPORT char* region_string(region_container* region);\n\n__TRAX_EXPORT void region_print(FILE* out, region_container* region);\n\n__TRAX_EXPORT region_container* region_convert(const region_container* region, region_type type);\n\n__TRAX_EXPORT void region_release(region_container** region);\n\n__TRAX_EXPORT region_container* region_create_special(int code);\n\n__TRAX_EXPORT region_container* region_create_rectangle(float x, float y, float width, float height);\n\n__TRAX_EXPORT region_container* region_create_polygon(int count);\n\n__TRAX_EXPORT int region_contains_point(region_container* r, float x, float y);\n\n__TRAX_EXPORT void region_get_mask(region_container* r, char* mask, int width, int height);\n\n__TRAX_EXPORT void region_get_mask_offset(region_container* r, char* mask, int x, int y, int width, int height);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/utils/statistics.py",
    "content": "import numpy as np\nfrom . import region\n\ndef calculate_failures(trajectory):\n    \"\"\" Calculate number of failures\n    Args:\n        trajectory: list of bbox\n    Returns:\n        num_failures: number of failures\n        failures: failures point in trajectory, start with 0\n    \"\"\"\n    failures = [i for i, x in zip(range(len(trajectory)), trajectory)\n            if len(x) == 1 and x[0] == 2]\n    num_failures = len(failures)\n    return num_failures, failures\n\ndef calculate_accuracy(pred_trajectory, gt_trajectory,\n        burnin=0, ignore_unknown=True, bound=None):\n    \"\"\"Calculate accuracy score as average overlap over the entire sequence\n    Args:\n        pred_trajectory: list of bbox\n        gt_trajectory: list of bbox\n        burnin: number of frames that have to be ignored after the failure\n        ignore_unknown: ignore frames where the overlap is unknown\n        bound: bounding region\n    Return:\n        acc: average overlap\n        overlaps: per frame overlaps\n    \"\"\"\n    pred_trajectory_ = pred_trajectory\n    if not ignore_unknown:\n        unknown = [len(x)==1 and x[0] == 0 for x in pred_trajectory]\n    \n    if burnin > 0:\n        pred_trajectory_ = pred_trajectory[:]\n        mask = [len(x)==1 and x[0] == 1 for x in pred_trajectory]\n        for i in range(len(mask)):\n            if mask[i]:\n                for j in range(burnin):\n                    if i + j < len(mask):\n                        pred_trajectory_[i+j] = [0]\n    min_len = min(len(pred_trajectory_), len(gt_trajectory))\n    overlaps = region.vot_overlap_traj(pred_trajectory_[:min_len],\n            gt_trajectory[:min_len], bound)\n\n    if not ignore_unknown:\n        # zero out the overlap on frames where the prediction is unknown\n        overlaps = [0 if u else o for o, u in zip(overlaps, unknown)]\n\n    acc = 0\n    if len(overlaps) > 0:\n        acc = np.nanmean(overlaps)\n    return acc, overlaps\n\n# def caculate_expected_overlap(pred_trajectorys, gt_trajectorys, skip_init, traj_length=None,\n#         weights=None, 
tags=['all']):\n#     \"\"\" Caculate expected overlap\n#     Args:\n#         pred_trajectory: list of bbox\n#         gt_trajectory: list of bbox\n#         traj_length: a list of sequence length for which the overlap should be evaluated\n#         weights: a list of per-sequence weights that indicate how much does each sequence\n#                 contribute to the estimate\n#         tags:  set list of tags for which to perform calculation\n#     \"\"\"\n#     overlaps = [calculate_accuracy(pred, gt)[1]\n#             for pred, gt in zip(pred_trajectorys, gt_trajectorys)]\n#     failures = [calculate_accuracy(pred, gt)[1]\n#             for pred, gt in zip(pred_trajectorys, gt_trajectorys)]\n# \n#     if traj_length is None:\n#         traj_length = range(1, max([len(x) for x in gt_trajectorys])+1)\n#     traj_length = list(set(traj_length))\n\n#@jit(nopython=True)\ndef overlap_ratio(rect1, rect2):\n    '''Compute overlap ratio between two rects\n    Args\n        rect:2d array of N x [x,y,w,h]\n    Return:\n        iou\n    '''\n    # if rect1.ndim==1:\n    #     rect1 = rect1[np.newaxis, :]\n    # if rect2.ndim==1:\n    #     rect2 = rect2[np.newaxis, :]\n    left = np.maximum(rect1[:,0], rect2[:,0])\n    right = np.minimum(rect1[:,0]+rect1[:,2], rect2[:,0]+rect2[:,2])\n    top = np.maximum(rect1[:,1], rect2[:,1])\n    bottom = np.minimum(rect1[:,1]+rect1[:,3], rect2[:,1]+rect2[:,3])\n\n    intersect = np.maximum(0,right - left) * np.maximum(0,bottom - top)\n    union = rect1[:,2]*rect1[:,3] + rect2[:,2]*rect2[:,3] - intersect\n    iou = intersect / union\n    iou = np.maximum(np.minimum(1, iou), 0)\n    return iou\n\n#@jit(nopython=True)\ndef success_overlap(gt_bb, result_bb, n_frame):\n    thresholds_overlap = np.arange(0, 1.05, 0.05)\n    success = np.zeros(len(thresholds_overlap))\n    iou = np.ones(len(gt_bb)) * (-1)\n    mask = np.sum(gt_bb > 0, axis=1) == 4\n    iou[mask] = overlap_ratio(gt_bb[mask], result_bb[mask])\n    for i in 
range(len(thresholds_overlap)):\n        success[i] = np.sum(iou > thresholds_overlap[i]) / float(n_frame)\n    return success\n\n#@jit(nopython=True)\ndef success_error(gt_center, result_center, thresholds, n_frame):\n    # n_frame = len(gt_center)\n    success = np.zeros(len(thresholds))\n    dist = np.ones(len(gt_center)) * (-1)\n    mask = np.sum(gt_center > 0, axis=1) == 2\n    dist[mask] = np.sqrt(np.sum(\n        np.power(gt_center[mask] - result_center[mask], 2), axis=1))\n    for i in range(len(thresholds)):\n        success[i] = np.sum(dist <= thresholds[i]) / float(n_frame)\n    return success\n\n#@jit(nopython=True)\ndef determine_thresholds(scores, resolution=100):\n    \"\"\"\n    Args:\n        scores: 1d array of score\n    \"\"\"\n    scores = np.sort(scores[np.logical_not(np.isnan(scores))])\n    delta = np.floor(len(scores) / (resolution - 2))\n    idxs = np.floor(np.linspace(delta-1, len(scores)-delta, resolution-2)+0.5).astype(np.int32)\n    thresholds = np.zeros((resolution))\n    thresholds[0] = - np.inf\n    thresholds[-1] = np.inf\n    thresholds[1:-1] = scores[idxs]\n    return thresholds\n\n#@jit(nopython=True)\ndef calculate_f1(overlaps, score, bound, thresholds, N):\n    overlaps = np.array(overlaps)\n    overlaps[np.isnan(overlaps)] = 0\n    score = np.array(score)\n    score[np.isnan(score)] = 0\n    precision = np.zeros(len(thresholds))\n    recall = np.zeros(len(thresholds))\n    for i, th in enumerate(thresholds):\n        if th == - np.inf:\n            idx = score > 0\n        else:\n            idx = score >= th\n        if np.sum(idx) == 0:\n            precision[i] = 1\n            recall[i] = 0\n        else:\n            precision[i] = np.mean(overlaps[idx])\n            recall[i] = np.sum(overlaps[idx]) / N\n    f1 = 2 * precision * recall / (precision + recall)\n    return f1, precision, recall\n\n#@jit(nopython=True)\ndef calculate_expected_overlap(fragments, fweights):\n    max_len = fragments.shape[1]\n    
expected_overlaps = np.zeros((max_len), np.float32)\n    expected_overlaps[0] = 1\n\n    # TODO Speed Up \n    for i in range(1, max_len):\n        mask = np.logical_not(np.isnan(fragments[:, i]))\n        if np.any(mask):\n            fragment = fragments[mask, 1:i+1]\n            seq_mean = np.sum(fragment, 1) / fragment.shape[1]\n            expected_overlaps[i] = np.sum(seq_mean *\n                fweights[mask]) / np.sum(fweights[mask])\n    return expected_overlaps\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/visualization/__init__.py",
    "content": "from .draw_f1 import draw_f1\nfrom .draw_success_precision import draw_success_precision\nfrom .draw_eao import draw_eao\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/visualization/draw_eao.py",
    "content": "import matplotlib.pyplot as plt\nimport numpy as np\nimport pickle\n\nfrom matplotlib import rc\nfrom .draw_utils import COLOR, MARKER_STYLE\n\nrc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})\nrc('text', usetex=True)\n\ndef draw_eao(result):\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='polar')\n    angles = np.linspace(0, 2*np.pi, 8, endpoint=True)\n\n    attr2value = []\n    for i, (tracker_name, ret) in enumerate(result.items()):\n        value = list(ret.values())\n        attr2value.append(value)\n        value.append(value[0])\n    attr2value = np.array(attr2value)\n    max_value = np.max(attr2value, axis=0)\n    min_value = np.min(attr2value, axis=0)\n    for i, (tracker_name, ret) in enumerate(result.items()):\n        value = list(ret.values())\n        value.append(value[0])\n        value = np.array(value)\n        value *= (1 / max_value)\n        plt.plot(angles, value, linestyle='-', color=COLOR[i], marker=MARKER_STYLE[i],\n                label=tracker_name, linewidth=1.5, markersize=6)\n\n    attrs = [\"Overall\", \"Camera motion\",\n             \"Illumination change\",\"Motion Change\",\n             \"Size change\",\"Occlusion\",\n             \"Unassigned\"]\n    attr_value = []\n    for attr, maxv, minv in zip(attrs, max_value, min_value):\n        attr_value.append(attr + \"\\n({:.3f},{:.3f})\".format(minv, maxv))\n    ax.set_thetagrids(angles[:-1] * 180/np.pi, attr_value)\n    ax.spines['polar'].set_visible(False)\n    ax.legend(loc='upper center', bbox_to_anchor=(0.5,-0.07), frameon=False, ncol=5)\n    ax.grid(b=False)\n    ax.set_ylim(0, 1.18)\n    ax.set_yticks([])\n    plt.show()\n\nif __name__ == '__main__':\n    result = pickle.load(open(\"../../result.pkl\", 'rb'))\n    draw_eao(result)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/visualization/draw_f1.py",
    "content": "import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom matplotlib import rc\nfrom .draw_utils import COLOR, LINE_STYLE\n\nrc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})\nrc('text', usetex=True)\n\ndef draw_f1(result, bold_name=None):\n    # drawing f1 contour\n    fig, ax = plt.subplots()\n    for f1 in np.arange(0.1, 1, 0.1):\n        recall = np.arange(f1, 1+0.01, 0.01)\n        precision = f1 * recall / (2 * recall - f1)\n        ax.plot(recall, precision, color=[0,1,0], linestyle='-', linewidth=0.5)\n        ax.plot(precision, recall, color=[0,1,0], linestyle='-', linewidth=0.5)\n    ax.grid(b=True)\n    ax.set_aspect(1)\n    plt.xlabel('Recall')\n    plt.ylabel('Precision')\n    plt.axis([0, 1, 0, 1])\n    plt.title(r'\\textbf{VOT2018-LT Precision vs Recall}')\n\n    # draw result line\n    all_precision = {}\n    all_recall = {}\n    best_f1 = {}\n    best_idx = {}\n    for tracker_name, ret in result.items():\n        precision = np.mean(list(ret['precision'].values()), axis=0)\n        recall = np.mean(list(ret['recall'].values()), axis=0)\n        f1 = 2 * precision * recall / (precision + recall)\n        max_idx = np.argmax(f1)\n        all_precision[tracker_name] = precision\n        all_recall[tracker_name] = recall\n        best_f1[tracker_name] = f1[max_idx]\n        best_idx[tracker_name] = max_idx\n\n    for idx, (tracker_name, best_f1) in \\\n            enumerate(sorted(best_f1.items(), key=lambda x:x[1], reverse=True)):\n        if tracker_name == bold_name:\n            label = r\"\\textbf{[%.3f] Ours}\" % (best_f1)\n        else:\n            label = \"[%.3f] \" % (best_f1) + tracker_name\n        recall = all_recall[tracker_name][:-1]\n        precision = all_precision[tracker_name][:-1]\n        ax.plot(recall, precision, color=COLOR[idx], linestyle='-',\n                label=label)\n        f1_idx = best_idx[tracker_name]\n        ax.plot(recall[f1_idx], precision[f1_idx], color=[0,0,0], marker='o',\n  
              markerfacecolor=COLOR[idx], markersize=5)\n    ax.legend(loc='lower right', labelspacing=0.2)\n    plt.xticks(np.arange(0, 1+0.1, 0.1))\n    plt.yticks(np.arange(0, 1+0.1, 0.1))\n    plt.show()\n\nif __name__ == '__main__':\n    draw_f1(None)\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/visualization/draw_success_precision.py",
    "content": "import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom .draw_utils import COLOR, LINE_STYLE\n\ndef draw_success_precision(success_ret, name, videos, attr, precision_ret=None,\n        norm_precision_ret=None, bold_name=None, axis=[0, 1]):\n    # success plot\n    fig, ax = plt.subplots()\n    ax.grid(b=True)\n    ax.set_aspect(1)\n    plt.xlabel('Overlap threshold')\n    plt.ylabel('Success rate')\n    if attr == 'ALL':\n        plt.title(r'\\textbf{Success plots of OPE on %s}' % (name))\n    else:\n        plt.title(r'\\textbf{Success plots of OPE - %s}' % (attr))\n    plt.axis([0, 1]+axis)\n    success = {}\n    thresholds = np.arange(0, 1.05, 0.05)\n    for tracker_name in success_ret.keys():\n        value = [v for k, v in success_ret[tracker_name].items() if k in videos]\n        success[tracker_name] = np.mean(value)\n    for idx, (tracker_name, auc) in  \\\n            enumerate(sorted(success.items(), key=lambda x:x[1], reverse=True)):\n        if tracker_name == bold_name:\n            label = r\"\\textbf{[%.3f] %s}\" % (auc, tracker_name)\n        else:\n            label = \"[%.3f] \" % (auc) + tracker_name\n        value = [v for k, v in success_ret[tracker_name].items() if k in videos]\n        plt.plot(thresholds, np.mean(value, axis=0),\n                color=COLOR[idx], linestyle=LINE_STYLE[idx],label=label, linewidth=2)\n    ax.legend(loc='lower left', labelspacing=0.2)\n    ax.autoscale(enable=True, axis='both', tight=True)\n    xmin, xmax, ymin, ymax = plt.axis()\n    ax.autoscale(enable=False)\n    ymax += 0.03\n    ymin = 0\n    plt.axis([xmin, xmax, ymin, ymax])\n    plt.xticks(np.arange(xmin, xmax+0.01, 0.1))\n    plt.yticks(np.arange(ymin, ymax, 0.1))\n    ax.set_aspect((xmax - xmin)/(ymax-ymin))\n    plt.show()\n\n    if precision_ret:\n        # norm precision plot\n        fig, ax = plt.subplots()\n        ax.grid(b=True)\n        ax.set_aspect(50)\n        plt.xlabel('Location error threshold')\n        
plt.ylabel('Precision')\n        if attr == 'ALL':\n            plt.title(r'\\textbf{Precision plots of OPE on %s}' % (name))\n        else:\n            plt.title(r'\\textbf{Precision plots of OPE - %s}' % (attr))\n        plt.axis([0, 50]+axis)\n        precision = {}\n        thresholds = np.arange(0, 51, 1)\n        for tracker_name in precision_ret.keys():\n            value = [v for k, v in precision_ret[tracker_name].items() if k in videos]\n            precision[tracker_name] = np.mean(value, axis=0)[20]\n        for idx, (tracker_name, pre) in \\\n                enumerate(sorted(precision.items(), key=lambda x:x[1], reverse=True)):\n            if tracker_name == bold_name:\n                label = r\"\\textbf{[%.3f] %s}\" % (pre, tracker_name)\n            else:\n                label = \"[%.3f] \" % (pre) + tracker_name\n            value = [v for k, v in precision_ret[tracker_name].items() if k in videos]\n            plt.plot(thresholds, np.mean(value, axis=0),\n                    color=COLOR[idx], linestyle=LINE_STYLE[idx],label=label, linewidth=2)\n        ax.legend(loc='lower right', labelspacing=0.2)\n        ax.autoscale(enable=True, axis='both', tight=True)\n        xmin, xmax, ymin, ymax = plt.axis()\n        ax.autoscale(enable=False)\n        ymax += 0.03\n        ymin = 0\n        plt.axis([xmin, xmax, ymin, ymax])\n        plt.xticks(np.arange(xmin, xmax+0.01, 5))\n        plt.yticks(np.arange(ymin, ymax, 0.1))\n        ax.set_aspect((xmax - xmin)/(ymax-ymin))\n        plt.show()\n\n    # norm precision plot\n    if norm_precision_ret:\n        fig, ax = plt.subplots()\n        ax.grid(b=True)\n        plt.xlabel('Location error threshold')\n        plt.ylabel('Precision')\n        if attr == 'ALL':\n            plt.title(r'\\textbf{Normalized Precision plots of OPE on %s}' % (name))\n        else:\n            plt.title(r'\\textbf{Normalized Precision plots of OPE - %s}' % (attr))\n        norm_precision = {}\n        thresholds = 
np.arange(0, 51, 1) / 100\n        for tracker_name in norm_precision_ret.keys():\n            value = [v for k, v in norm_precision_ret[tracker_name].items() if k in videos]\n            norm_precision[tracker_name] = np.mean(value, axis=0)[20]\n        for idx, (tracker_name, pre) in \\\n                enumerate(sorted(norm_precision.items(), key=lambda x:x[1], reverse=True)):\n            if tracker_name == bold_name:\n                label = r\"\\textbf{[%.3f] %s}\" % (pre, tracker_name)\n            else:\n                label = \"[%.3f] \" % (pre) + tracker_name\n            value = [v for k, v in norm_precision_ret[tracker_name].items() if k in videos]\n            plt.plot(thresholds, np.mean(value, axis=0),\n                    color=COLOR[idx], linestyle=LINE_STYLE[idx],label=label, linewidth=2)\n        ax.legend(loc='lower right', labelspacing=0.2)\n        ax.autoscale(enable=True, axis='both', tight=True)\n        xmin, xmax, ymin, ymax = plt.axis()\n        ax.autoscale(enable=False)\n        ymax += 0.03\n        ymin = 0\n        plt.axis([xmin, xmax, ymin, ymax])\n        plt.xticks(np.arange(xmin, xmax+0.01, 0.05))\n        plt.yticks(np.arange(ymin, ymax, 0.1))\n        ax.set_aspect((xmax - xmin)/(ymax-ymin))\n        plt.show()\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/pysot/visualization/draw_utils.py",
    "content": "\nCOLOR = ((1, 0, 0),\n         (0, 1, 0),\n         (1, 0, 1),\n         (1, 1, 0),\n         (0  , 162/255, 232/255),\n         (0.5, 0.5, 0.5),\n         (0, 0, 1),\n         (0, 1, 1),\n         (136/255, 0  , 21/255),\n         (255/255, 127/255, 39/255),\n         (0, 0, 0))\n\nLINE_STYLE = ['-', '--', ':', '-', '--', ':', '-', '--', ':', '-']\n\nMARKER_STYLE = ['o', 'v', '<', '*', 'D', 'x', '.', 'x', '<', '.']\n"
  },
  {
    "path": "tracker/sot/lib/eval_toolkit/requirements.txt",
    "content": "tqdm\nnumpy\nopencv-python\ncolorama\nnumba\n"
  },
  {
    "path": "tracker/sot/lib/models/__init__.py",
    "content": "from .cfnet import CFNet\nfrom .siamfc import SiamFC\nfrom .connect import box_tower, AdjustLayer, AlignHead, Corr_Up, MultiDiCorr, OceanCorr\nfrom .backbones import ResNet50, ResNet22W\nfrom .modules import MultiFeatureBase\n\n"
  },
  {
    "path": "tracker/sot/lib/models/backbones.py",
    "content": "# -----------------------------------------------------------------------------\n# Copyright (c) Microsoft\n# Licensed under the MIT License.\n# Written by Zhipeng Zhang (zhangzhipeng2017@ia.ac.cn)\n# ------------------------------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nfrom .modules import Bottleneck, ResNet_plus2, Bottleneck_BIG_CI, ResNet\n\neps = 1e-5\n# ---------------------\n# For Ocean and Ocean+\n# ---------------------\nclass ResNet50(nn.Module):\n    def __init__(self, used_layers=[2, 3, 4], online=False):\n        super(ResNet50, self).__init__()\n        self.features = ResNet_plus2(Bottleneck, [3, 4, 6, 3], used_layers=used_layers, online=online)\n\n    def forward(self, x, online=False):\n        if not online:\n            x_stages, x = self.features(x, online=online)\n            return x_stages, x\n        else:\n            x = self.features(x, online=online)\n            return x\n\n# ---------------------\n# For SiamDW\n# ---------------------\nclass ResNet22W(nn.Module):\n    \"\"\"\n    ResNet22W: doubles the channels of the 3*3 conv layers in the residual block\n    \"\"\"\n    def __init__(self):\n        super(ResNet22W, self).__init__()\n        self.features = ResNet(Bottleneck_BIG_CI, [3, 4], [True, False], [False, True], firstchannels=64, channels=[64, 128])\n        self.feature_size = 512\n\n    def forward(self, x):\n        x = self.features(x)\n\n        return x\n\n\nif __name__ == '__main__':\n    net = ResNet50().cuda()\n    print(net)\n\n    params = list(net.parameters())\n    k = 0\n    for i in params:\n        l = 1\n        for j in i.size():\n            l *= j\n        k = k + l\n    print(\"total params: \" + str(k/1e6) + \"M\")\n\n    search = torch.rand(1, 3, 255, 255).cuda()\n    # forward returns (stage features, final feature) when online=False\n    _, out = net(search)\n    print(out.size())\n"
  },
  {
    "path": "tracker/sot/lib/models/cfnet.py",
    "content": "import torch\r\nimport torch.nn as nn\r\nfrom .backbones import ResNet22W\r\n\r\ndef complex_mul(x, z):\r\n    out_real = x[..., 0] * z[..., 0] - x[..., 1] * z[..., 1]\r\n    out_imag = x[..., 0] * z[..., 1] + x[..., 1] * z[..., 0]\r\n    return torch.stack((out_real, out_imag), -1)\r\n\r\n\r\ndef complex_mulconj(x, z):\r\n    out_real = x[..., 0] * z[..., 0] + x[..., 1] * z[..., 1]\r\n    out_imag = x[..., 1] * z[..., 0] - x[..., 0] * z[..., 1]\r\n    return torch.stack((out_real, out_imag), -1)\r\n\r\nclass CFNet(nn.Module):\r\n    def __init__(self, config, **kwargs):\r\n        super(CFNet, self).__init__()\r\n        self.features = None\r\n        self.connect_model = None\r\n        self.zf = None  # for online tracking\r\n        if kwargs['base'] is None:\r\n            self.features = ResNet22W()\r\n        else:\r\n            self.features = kwargs['base']\r\n        self.config = config\r\n        self.model_alphaf = 0\r\n        self.model_zf = 0\r\n\r\n    def feature_extractor(self, x):\r\n        return self.features(x)\r\n\r\n    def track(self, x):\r\n        xf = self.feature_extractor(x)\r\n        score = self.connect_model(self.zf, xf)\r\n        return score\r\n\r\n    def forward(self, x):\r\n        x = self.feature_extractor(x) * self.config.cos_window\r\n        xf = torch.rfft(x, signal_ndim=2)\r\n        kxzf = torch.sum(complex_mulconj(xf, self.model_zf), dim=1, keepdim=True)\r\n        response = torch.irfft(complex_mul(kxzf, self.model_alphaf), signal_ndim=2)\r\n        return response\r\n\r\n    def update(self, z, lr=0):\r\n        z = self.feature_extractor(z) * self.config.inner_window\r\n        zf = torch.rfft(z, signal_ndim=2)\r\n        kzzf = torch.sum(torch.sum(zf ** 2, dim=4, keepdim=True), dim=1, keepdim=True)\r\n        alphaf = self.config.yf / (kzzf + self.config.lambda0)\r\n        self.model_alphaf = (1 - lr) * self.model_alphaf + lr * 
alphaf.data\r\n        self.model_zf = (1 - lr) * self.model_zf + lr * zf.data\r\n\r\n\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/models/connect.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Corr_Up(nn.Module):\n    \"\"\"\n    SiamFC head\n    \"\"\"\n    def __init__(self):\n        super(Corr_Up, self).__init__()\n\n    def _conv2d_group(self, x, kernel):\n        batch = x.size()[0]\n        pk = kernel.view(-1, x.size()[1], kernel.size()[2], kernel.size()[3])\n        px = x.view(1, -1, x.size()[2], x.size()[3])\n        po = F.conv2d(px, pk, groups=batch)\n        po = po.view(batch, -1, po.size()[2], po.size()[3])\n        return po\n\n    def forward(self, z_f, x_f):\n        if not self.training:\n            return 0.1 * F.conv2d(x_f, z_f)\n        else:\n            return 0.1 * self._conv2d_group(x_f, z_f)\n\n\ndef xcorr_depthwise(x, kernel):\n    \"\"\"depthwise cross correlation\n    \"\"\"\n    batch = kernel.size(0)\n    channel = kernel.size(1)\n    x = x.view(1, batch*channel, x.size(2), x.size(3))\n    kernel = kernel.view(batch*channel, 1, kernel.size(2), kernel.size(3))\n    out = F.conv2d(x, kernel, groups=batch*channel)\n    out = out.view(batch, channel, out.size(2), out.size(3))\n    return out\n\n\nclass DepthwiseXCorr(nn.Module):\n    def __init__(self, in_channels, hidden, out_channels, kernel_size=3, hidden_kernel_size=5):\n        super(DepthwiseXCorr, self).__init__()\n        self.conv_kernel = nn.Sequential(\n                nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),\n                nn.BatchNorm2d(hidden),\n                nn.ReLU(inplace=True),\n                )\n        self.conv_search = nn.Sequential(\n                nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),\n                nn.BatchNorm2d(hidden),\n                nn.ReLU(inplace=True),\n                )\n        self.head = nn.Sequential(\n                nn.Conv2d(hidden, hidden, kernel_size=1, bias=False),\n                nn.BatchNorm2d(hidden),\n                nn.ReLU(inplace=True),\n                
nn.Conv2d(hidden, out_channels, kernel_size=1)\n                )\n\n    def forward(self, kernel, search):\n        kernel = self.conv_kernel(kernel)\n        search = self.conv_search(search)\n        feature = xcorr_depthwise(search, kernel)\n        out = self.head(feature)\n        return out\n\n\nclass MultiDiCorr(nn.Module):\n    \"\"\"\n    For tensorRT version\n    \"\"\"\n    def __init__(self, inchannels=512, outchannels=256):\n        super(MultiDiCorr, self).__init__()\n        self.cls_encode = matrix(in_channels=inchannels, out_channels=outchannels)\n        self.reg_encode = matrix(in_channels=inchannels, out_channels=outchannels)\n\n\n    def forward(self, search, kernal):\n        \"\"\"\n        :param search:\n        :param kernal:\n        :return:  for tensor2trt\n        \"\"\"\n        cls_z0, cls_z1, cls_z2, cls_x0, cls_x1, cls_x2 = self.cls_encode(kernal, search)  # [z11, z12, z13]\n        reg_z0, reg_z1, reg_z2, reg_x0, reg_x1, reg_x2 = self.reg_encode(kernal, search)  # [x11, x12, x13]\n\n        return cls_z0, cls_z1, cls_z2, cls_x0, cls_x1, cls_x2, reg_z0, reg_z1, reg_z2, reg_x0, reg_x1, reg_x2\n\nclass OceanCorr(nn.Module):\n    \"\"\"\n    For tensorRT version\n    \"\"\"\n    def __init__(self, inchannels=512):\n        super(OceanCorr, self).__init__()\n\n        self.cls_dw = GroupDW(in_channels=inchannels)\n        self.reg_dw = GroupDW(in_channels=inchannels)\n\n    def forward(self, cls_z, cls_x, reg_z, reg_x):\n        cls_dw = self.cls_dw(cls_z, cls_x)\n        reg_dw = self.reg_dw(reg_z, reg_x)\n\n        return cls_dw, reg_dw\n\n\nclass AdjustLayer(nn.Module):\n    def __init__(self, in_channels, out_channels):\n        super(AdjustLayer, self).__init__()\n        self.downsample = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),\n            nn.BatchNorm2d(out_channels),\n            )\n\n    def forward(self, x, crop=False):\n        x_ori = self.downsample(x)\n        if 
x_ori.size(3) < 20 and crop:\n            l = 4\n            r = -4\n            xf = x_ori[:, :, l:r, l:r]\n        else:\n            # feature map is large enough (or crop disabled): keep it uncropped\n            xf = x_ori\n\n        if not crop:\n            return x_ori\n        else:\n            return x_ori, xf\n\n# --------------------\n# Ocean module\n# --------------------\nclass matrix(nn.Module):\n    \"\"\"\n    encode backbone feature\n    \"\"\"\n    def __init__(self, in_channels, out_channels):\n        super(matrix, self).__init__()\n\n        # same size (11)\n        self.matrix11_k = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, kernel_size=3, bias=False),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n        self.matrix11_s = nn.Sequential(\n            nn.Conv2d(in_channels, out_channels, kernel_size=3, bias=False),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n\n        # h/2, w\n        self.matrix12_k = nn.Sequential(\n            nn.Conv2d(out_channels, out_channels, kernel_size=3, bias=False, dilation=(2, 1)),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n        self.matrix12_s = nn.Sequential(\n            nn.Conv2d(out_channels, out_channels, kernel_size=3, bias=False, dilation=(2, 1)),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n\n        # w/2, h\n        self.matrix21_k = nn.Sequential(\n            nn.Conv2d(out_channels, out_channels, kernel_size=3, bias=False, dilation=(1, 2)),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n        self.matrix21_s = nn.Sequential(\n            nn.Conv2d(out_channels, out_channels, kernel_size=3, bias=False, dilation=(1, 2)),\n            nn.BatchNorm2d(out_channels),\n            nn.ReLU(inplace=True),\n        )\n\n    def forward(self, z, x):\n        z11 = self.matrix11_k(z)\n        x11 = self.matrix11_s(x)\n\n        z12 = self.matrix12_k(z)\n        x12 = 
self.matrix12_s(x)\n\n        z21 = self.matrix21_k(z)\n        x21 = self.matrix21_s(x)\n\n        return [z11, z12, z21], [x11, x12, x21]\n\n\nclass AdaptiveConv(nn.Module):\n    \"\"\" Adaptive Conv is built based on Deformable Conv\n    with precomputed offsets which derived from anchors\"\"\"\n\n    def __init__(self, in_channels, out_channels):\n        super(AdaptiveConv, self).__init__()\n        self.conv = DeformConv(in_channels, out_channels, 3, padding=1)\n\n    def forward(self, x, offset):\n        N, _, H, W = x.shape\n        assert offset is not None\n        assert H * W == offset.shape[1]\n        # reshape [N, NA, 18] to (N, 18, H, W)\n        offset = offset.permute(0, 2, 1).reshape(N, -1, H, W)\n        x = self.conv(x, offset)\n\n        return x\n\n\nclass AlignHead(nn.Module):\n    # align features and classification score\n\n    def __init__(self, in_channels, feat_channels):\n        super(AlignHead, self).__init__()\n\n        self.rpn_conv = AdaptiveConv(in_channels, feat_channels)\n        self.rpn_cls = nn.Conv2d(feat_channels, 1, 1)\n        self.relu = nn.ReLU(inplace=True)\n\n    def forward(self, x, offset):\n        x = self.relu(self.rpn_conv(x, offset))\n        cls_score = self.rpn_cls(x)\n        return cls_score\n\n\n\nclass GroupDW(nn.Module):\n    \"\"\"\n    encode backbone feature\n    \"\"\"\n    def __init__(self, in_channels=256):\n        super(GroupDW, self).__init__()\n        self.weight = nn.Parameter(torch.ones(3))\n\n    def forward(self, z, x):\n        z11, z12, z21 = z\n        x11, x12, x21 = x\n\n        re11 = xcorr_depthwise(x11, z11)\n        re12 = xcorr_depthwise(x12, z12)\n        re21 = xcorr_depthwise(x21, z21)\n        re = [re11, re12, re21]\n        \n        # weight\n        weight = F.softmax(self.weight, 0)\n\n        s = 0\n        for i in range(3):\n            s += weight[i] * re[i]\n\n        return s\n\n\nclass SingleDW(nn.Module):\n    \"\"\"\n    encode backbone feature\n    
\"\"\"\n\n    def __init__(self, in_channels=256):\n        super(SingleDW, self).__init__()\n\n    def forward(self, z, x):\n\n        s = xcorr_depthwise(x, z)\n\n        return s\n\n\nclass box_tower(nn.Module):\n    \"\"\"\n    box tower for FCOS reg\n    \"\"\"\n    def __init__(self, inchannels=512, outchannels=256, towernum=1):\n        super(box_tower, self).__init__()\n        tower = []\n        cls_tower = []\n        # encode backbone\n        self.cls_encode = matrix(in_channels=inchannels, out_channels=outchannels)\n        self.reg_encode = matrix(in_channels=inchannels, out_channels=outchannels)\n        self.cls_dw = GroupDW(in_channels=inchannels)\n        self.reg_dw = GroupDW(in_channels=inchannels)\n\n        # box pred head\n        for i in range(towernum):\n            if i == 0:\n                tower.append(nn.Conv2d(outchannels, outchannels, kernel_size=3, stride=1, padding=1))\n            else:\n                tower.append(nn.Conv2d(outchannels, outchannels, kernel_size=3, stride=1, padding=1))\n\n            tower.append(nn.BatchNorm2d(outchannels))\n            tower.append(nn.ReLU())\n\n        # cls tower\n        for i in range(towernum):\n            if i == 0:\n                cls_tower.append(nn.Conv2d(outchannels, outchannels, kernel_size=3, stride=1, padding=1))\n            else:\n                cls_tower.append(nn.Conv2d(outchannels, outchannels, kernel_size=3, stride=1, padding=1))\n\n            cls_tower.append(nn.BatchNorm2d(outchannels))\n            cls_tower.append(nn.ReLU())\n\n        self.add_module('bbox_tower', nn.Sequential(*tower))\n        self.add_module('cls_tower', nn.Sequential(*cls_tower))\n\n\n        # reg head\n        self.bbox_pred = nn.Conv2d(outchannels, 4, kernel_size=3, stride=1, padding=1)\n        self.cls_pred = nn.Conv2d(outchannels, 1, kernel_size=3, stride=1, padding=1)\n\n        # adjust scale\n        self.adjust = nn.Parameter(0.1 * torch.ones(1))\n        self.bias = 
nn.Parameter(torch.ones(1, 4, 1, 1))  # registered parameter; device follows the module\n\n    def forward(self, search, kernal, update=None):\n        # encode first\n        if update is None:\n            cls_z, cls_x = self.cls_encode(kernal, search)   # [z11, z12, z13]\n        else:\n            cls_z, cls_x = self.cls_encode(update, search)  # [z11, z12, z13]\n\n        reg_z, reg_x = self.reg_encode(kernal, search)  # [x11, x12, x13]\n\n        # cls and reg DW\n        cls_dw = self.cls_dw(cls_z, cls_x)\n        reg_dw = self.reg_dw(reg_z, reg_x)\n        x_reg = self.bbox_tower(reg_dw)\n        x = self.adjust * self.bbox_pred(x_reg) + self.bias\n        x = torch.exp(x)\n\n        # cls tower\n        c = self.cls_tower(cls_dw)\n        cls = 0.1 * self.cls_pred(c)\n\n        return x, cls, cls_dw, x_reg\n\n"
  },
  {
    "path": "tracker/sot/lib/models/modules.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom collections import OrderedDict\nfrom functools import partial\nimport collections\nimport re\n\neps = 1e-5\n\n# -------------\n# Single Layer\n# -------------\ndef conv3x3(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution with padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                     padding=1, bias=False)\n\ndef conv3x3NP(in_planes, out_planes, stride=1):\n    \"\"\"3x3 convolution without padding\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, bias=False)\n\n\n\ndef conv1x1(in_planes, out_planes, stride=1):\n    \"\"\"1x1 convolution\"\"\"\n    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)\n\n\ndef down(in_planes, out_planes):\n    \"\"\"downsampling at the output layer\"\"\"\n    return nn.Sequential(nn.Conv2d(in_planes, out_planes, kernel_size=1),\n            nn.BatchNorm2d(out_planes))\n\ndef down_spatial(in_planes, out_planes):\n    \"\"\"downsampling 21*21 to 5*5 (21-5)//4+1=5\"\"\"\n    return nn.Sequential(nn.Conv2d(in_planes, out_planes, kernel_size=5, stride=4),\n            nn.BatchNorm2d(out_planes))\n\n\n# -------------------------------\n# Several Kinds Bottleneck Blocks\n# -------------------------------\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n        padding = 2 - stride\n        if downsample is not None and dilation > 1:\n            dilation = dilation // 2\n            padding = dilation\n\n        assert stride == 1 or dilation == 1, \\\n            \"stride and dilation must have one equals to zero at least\"\n\n        if dilation > 1:\n            
padding = dilation\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=padding, bias=False, dilation=dilation)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * 4)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n\n        out = self.relu(out)\n\n        return out\n\nclass Bottleneck_BIG_CI(nn.Module):\n    \"\"\"\n    Bottleneck with center crop layer, double channels in 3*3 conv layer in shortcut branch\n    \"\"\"\n    expansion = 4\n\n    def __init__(self, inplanes, planes, last_relu, stride=1, downsample=None, dilation=1):\n        super(Bottleneck_BIG_CI, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n\n        padding = 1\n        if abs(dilation - 2) < eps: padding = 2\n        if abs(dilation - 3) < eps: padding = 3\n\n        self.conv2 = nn.Conv2d(planes, planes*2, kernel_size=3, stride=stride, padding=padding, bias=False, dilation=dilation)\n        self.bn2 = nn.BatchNorm2d(planes*2)\n        self.conv3 = nn.Conv2d(planes*2, planes * self.expansion, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * self.expansion)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n        self.last_relu = last_relu\n\n    def forward(self, x):\n        
residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n\n        if self.last_relu:  # output features skip the final ReLU when last_relu is False\n            out = self.relu(out)\n\n        out = self.center_crop(out)  # in-layer crop\n\n        return out\n\n    def center_crop(self, x):\n        \"\"\"\n        center crop layer. crop [1:-1] to eliminate padding influence.\n        Crops 1 element from each spatial border.\n        input x can be a Variable or Tensor\n        \"\"\"\n        return x[:, :, 1:-1, 1:-1].contiguous()\n\n# ---------------------\n# Modified ResNet\n# ---------------------\nclass ResNet_plus2(nn.Module):\n    def __init__(self, block, layers, used_layers, online=False):\n        self.inplanes = 64\n        super(ResNet_plus2, self).__init__()\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0,  # 3\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n\n        self.feature_size = 128 * block.expansion\n        self.used_layers = used_layers\n        self.layer3_use = 3 in used_layers\n        self.layer4_use = 4 in used_layers\n\n        if self.layer3_use:\n            if online:\n                self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2, update=True)\n                self.layeronline = self._make_layer(block, 256, layers[2], stride=2)\n            else:\n                self.layer3 = 
self._make_layer(block, 256, layers[2], stride=1, dilation=2)\n\n            self.feature_size = (256 + 128) * block.expansion\n        else:\n            self.layer3 = lambda x: x  # identity\n\n        if self.layer4_use:\n            self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4)  # 7x7, 3x3\n            self.feature_size = 512 * block.expansion\n        else:\n            self.layer4 = lambda x: x  # identity\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def _make_layer(self, block, planes, blocks, stride=1, dilation=1, update=False):\n        downsample = None\n        dd = dilation\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            if stride == 1 and dilation == 1:\n                downsample = nn.Sequential(\n                    nn.Conv2d(self.inplanes, planes * block.expansion,\n                              kernel_size=1, stride=stride, bias=False),\n                    nn.BatchNorm2d(planes * block.expansion),\n                )\n            else:\n                if dilation > 1:\n                    dd = dilation // 2\n                    padding = dd\n                else:\n                    dd = 1\n                    padding = 0\n                downsample = nn.Sequential(\n                    nn.Conv2d(self.inplanes, planes * block.expansion,\n                              kernel_size=3, stride=stride, bias=False,\n                              padding=padding, dilation=dd),\n                    nn.BatchNorm2d(planes * block.expansion),\n                )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride=stride,\n                            
downsample=downsample, dilation=dilation))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes, dilation=dilation))\n\n        if update: self.inplanes = int(self.inplanes / 2)  # for online\n        return nn.Sequential(*layers)\n\n    def forward(self, x, online=False):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x_ = self.relu(x)\n        x = self.maxpool(x_)\n\n        p1 = self.layer1(x)\n        p2 = self.layer2(p1)\n\n        if online: return self.layeronline(p2)\n        p3 = self.layer3(p2)\n\n        return [x_, p1, p2], p3\n\n\nclass ResNet(nn.Module):\n    \"\"\"\n    ResNet with 22 layer utilized in CVPR2019 paper.\n    Usage: ResNet(Bottleneck_CI, [3, 4], [True, False], [False, True], 64, [64, 128])\n    \"\"\"\n\n    def __init__(self, block, layers, last_relus, s2p_flags, firstchannels=64, channels=[64, 128], dilation=1):\n        self.inplanes = firstchannels\n        self.stage_len = len(layers)\n        super(ResNet, self).__init__()\n        self.conv1 = nn.Conv2d(3, firstchannels, kernel_size=7, stride=2, padding=3, bias=False)\n        self.bn1 = nn.BatchNorm2d(firstchannels)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)\n\n        # stage2\n        if s2p_flags[0]:\n            self.layer1 = self._make_layer(block, channels[0], layers[0], stride2pool=True, last_relu=last_relus[0])\n        else:\n            self.layer1 = self._make_layer(block, channels[0], layers[0], last_relu=last_relus[0])\n\n        # stage3\n        if s2p_flags[1]:\n            self.layer2 = self._make_layer(block, channels[1], layers[1], stride2pool=True, last_relu=last_relus[1], dilation=dilation)\n        else:\n            self.layer2 = self._make_layer(block, channels[1], layers[1], last_relu=last_relus[1], dilation=dilation)\n\n        for m in self.modules():\n            if isinstance(m, 
nn.Conv2d):\n                nn.init.kaiming_normal_(m.weight, mode='fan_out')\n            elif isinstance(m, nn.BatchNorm2d):\n                nn.init.constant_(m.weight, 1)\n                nn.init.constant_(m.bias, 0)\n\n    def _make_layer(self, block, planes, blocks, last_relu, stride=1, stride2pool=False, dilation=1):\n        \"\"\"\n        :param block: residual block class\n        :param planes: base channel count of the stage\n        :param blocks: number of blocks in the stage\n        :param last_relu: whether the final block applies ReLU to its output\n        :param stride: stride of the first block\n        :param stride2pool: translate (3,2) conv to (3, 1)conv + (2, 2)pool\n        :return: nn.Sequential containing the stage\n        \"\"\"\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, last_relu=True, stride=stride, downsample=downsample, dilation=dilation))\n        if stride2pool:\n            layers.append(self.maxpool)\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            if i == blocks - 1:\n                layers.append(block(self.inplanes, planes, last_relu=last_relu, dilation=dilation))\n            else:\n                layers.append(block(self.inplanes, planes, last_relu=True, dilation=dilation))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)     # stride = 2\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.center_crop7(x)\n        x = self.maxpool(x)   # stride = 4\n\n        x = self.layer1(x)\n        x = self.layer2(x)    # stride = 8\n\n        return x\n\n    def center_crop7(self, x):\n        \"\"\"\n        Center crop layer for stage1 of resnet. 
(7*7)\n        input x can be a Variable or Tensor\n        \"\"\"\n\n        return x[:, :, 2:-2, 2:-2].contiguous()\n\n# ----------------------\n# Modules used by ATOM\n# ----------------------\nclass FeatureBase:\n    \"\"\"Base feature class.\n    args:\n        fparams: Feature specific parameters.\n        pool_stride: Amount of average pooling to apply to downsample the feature map.\n        output_size: Alternatively, specify the output size of the feature map. Adaptive average pooling will be applied.\n        normalize_power: The power exponent for the normalization. None means no normalization (default).\n        use_for_color: Use this feature for color images.\n        use_for_gray: Use this feature for grayscale images.\n    \"\"\"\n    def __init__(self, fparams = None, pool_stride = None, output_size = None, normalize_power = None, use_for_color = True, use_for_gray = True):\n        self.fparams = fparams\n        self.pool_stride = 1 if pool_stride is None else pool_stride\n        self.output_size = output_size\n        self.normalize_power = normalize_power\n        self.use_for_color = use_for_color\n        self.use_for_gray = use_for_gray\n\n    def initialize(self):\n        pass\n\n    def free_memory(self):\n        pass\n\n    def dim(self):\n        raise NotImplementedError\n\n    def stride(self):\n        raise NotImplementedError\n\n    def size(self, im_sz):\n        if self.output_size is None:\n            return im_sz // self.stride()\n        if isinstance(im_sz, torch.Tensor):\n            return torch.Tensor([self.output_size[0], self.output_size[1]])\n        return self.output_size\n\n    def extract(self, im):\n        \"\"\"Performs feature extraction.\"\"\"\n        raise NotImplementedError\n\n    def get_feature(self, im: torch.Tensor):\n        \"\"\"Get the feature. 
Generally, call this function.\n        args:\n            im: image patch as a torch.Tensor.\n        \"\"\"\n\n        # Return empty tensor if it should not be used\n        is_color = im.shape[1] == 3\n        if is_color and not self.use_for_color or not is_color and not self.use_for_gray:\n            return torch.Tensor([])\n\n        # Extract feature\n        feat = self.extract(im)\n\n        # Pool/downsample\n        if self.output_size is not None:\n            feat = F.adaptive_avg_pool2d(feat, self.output_size)\n        elif self.pool_stride != 1:\n            feat = F.avg_pool2d(feat, self.pool_stride, self.pool_stride)\n\n        # Normalize\n        if self.normalize_power is not None:\n            feat /= (torch.sum(feat.abs().view(feat.shape[0],1,1,-1)**self.normalize_power, dim=3, keepdim=True) /\n                     (feat.shape[1]*feat.shape[2]*feat.shape[3]) + 1e-10)**(1/self.normalize_power)\n\n        return feat\n\n\nclass MultiFeatureBase(FeatureBase):\n    \"\"\"Base class for features potentially having multiple feature blocks as output (like CNNs).\n    See FeatureBase for more info.\n    NOTE: size() relies on a TensorList container (pytracking-style) being in scope.\n    \"\"\"\n    def size(self, im_sz):\n        if self.output_size is None:\n            return TensorList([im_sz // s for s in self.stride()])\n        if isinstance(im_sz, torch.Tensor):\n            return TensorList([im_sz // s if sz is None else torch.Tensor([sz[0], sz[1]]) for sz, s in zip(self.output_size, self.stride())])\n\n    def get_feature(self, im: torch.Tensor):\n        \"\"\"Get the feature. 
Generally, call this function.\n        args:\n            im: image patch as a torch.Tensor.\n        \"\"\"\n\n        # Return empty tensor if it should not be used\n        is_color = im.shape[1] == 3\n        if is_color and not self.use_for_color or not is_color and not self.use_for_gray:\n            return torch.Tensor([])\n\n        feat_list = self.extract(im)\n\n        output_sz = [None]*len(feat_list) if self.output_size is None else self.output_size\n\n        # Pool/downsample\n        for i, (sz, s) in enumerate(zip(output_sz, self.pool_stride)):\n            if sz is not None:\n                feat_list[i] = F.adaptive_avg_pool2d(feat_list[i], sz)\n            elif s != 1:\n                feat_list[i] = F.avg_pool2d(feat_list[i], s, s)\n\n        # Normalize\n        if self.normalize_power is not None:\n            for feat in feat_list:\n                feat /= (torch.sum(feat.abs().view(feat.shape[0],1,1,-1)**self.normalize_power, dim=3, keepdim=True) /\n                         (feat.shape[1]*feat.shape[2]*feat.shape[3]) + 1e-10)**(1/self.normalize_power)\n\n        return feat_list\n\n\n# -----------------\n# Efficient net\n# -----------------\n#  --------------------------------------\n# Efficient net -- lighter and stronger\n# -------------------------------------\nclass Identity(nn.Module):\n    def __init__(self, ):\n        super(Identity, self).__init__()\n\n    def forward(self, input):\n        return input\n\nclass Conv2dStaticSamePadding(nn.Conv2d):\n    \"\"\" 2D Convolutions like TensorFlow, for a fixed image size\"\"\"\n\n    def __init__(self, in_channels, out_channels, kernel_size, image_size=None, **kwargs):\n        super().__init__(in_channels, out_channels, kernel_size, **kwargs)\n        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2\n\n        # Calculate padding based on image size and save it\n        assert image_size is not None\n        ih, iw = image_size if type(image_size) == list 
else [image_size, image_size]\n        kh, kw = self.weight.size()[-2:]\n        sh, sw = self.stride\n        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)\n        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)\n        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)\n        if pad_h > 0 or pad_w > 0:\n            self.static_padding = nn.ZeroPad2d((pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2))\n        else:\n            self.static_padding = Identity()\n\n    def forward(self, x):\n        x = self.static_padding(x)\n        x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)\n        return x\n\n\ndef get_same_padding_conv2d(image_size=None):\n    \"\"\" Chooses static padding if you have specified an image size, and dynamic padding otherwise.\n        Static padding is necessary for ONNX exporting of models. \"\"\"\n    if image_size is None:\n        return Conv2dDynamicSamePadding\n    else:\n        return partial(Conv2dStaticSamePadding, image_size=image_size)\n\n\nclass Conv2dDynamicSamePadding(nn.Conv2d):\n    \"\"\" 2D Convolutions like TensorFlow, for a dynamic image size \"\"\"\n\n    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1, groups=1, bias=True):\n        super().__init__(in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias)\n        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2\n\n    def forward(self, x):\n        ih, iw = x.size()[-2:]\n        kh, kw = self.weight.size()[-2:]\n        sh, sw = self.stride\n        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)\n        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)\n        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)\n        if pad_h > 0 or pad_w > 0:\n            x = F.pad(x, [pad_w // 2, pad_w - pad_w 
// 2, pad_h // 2, pad_h - pad_h // 2])\n        return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)\n\ndef round_filters(filters, global_params):\n    \"\"\" Calculate and round number of filters based on depth multiplier. \"\"\"\n    multiplier = global_params.width_coefficient\n    if not multiplier:\n        return filters\n    divisor = global_params.depth_divisor\n    min_depth = global_params.min_depth\n    filters *= multiplier\n    min_depth = min_depth or divisor\n    new_filters = max(min_depth, int(filters + divisor / 2) // divisor * divisor)\n    if new_filters < 0.9 * filters:  # prevent rounding by more than 10%\n        new_filters += divisor\n    return int(new_filters)\n\ndef round_repeats(repeats, global_params):\n    \"\"\" Round number of block repeats based on depth multiplier. \"\"\"\n    multiplier = global_params.depth_coefficient\n    if not multiplier:\n        return repeats\n    return int(math.ceil(multiplier * repeats))\n\nclass SwishImplementation(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, i):\n        result = i * torch.sigmoid(i)\n        ctx.save_for_backward(i)\n        return result\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        i = ctx.saved_tensors[0]\n        sigmoid_i = torch.sigmoid(i)\n        return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))\n\n\nclass MemoryEfficientSwish(nn.Module):\n    def forward(self, x):\n        return SwishImplementation.apply(x)\n\n\nclass Swish(nn.Module):\n    \"\"\" Standard (non memory-efficient) swish, used by set_swish for export \"\"\"\n    def forward(self, x):\n        return x * torch.sigmoid(x)\n\ndef drop_connect(inputs, p, training):\n    \"\"\" Drop connect. 
\"\"\"\n    if not training: return inputs\n    batch_size = inputs.shape[0]\n    keep_prob = 1 - p\n    random_tensor = keep_prob\n    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device)\n    binary_tensor = torch.floor(random_tensor)\n    output = inputs / keep_prob * binary_tensor\n    return output\n\n\n\nclass MBConvBlock(nn.Module):\n    \"\"\"\n    Mobile Inverted Residual Bottleneck Block\n    Args:\n        block_args (namedtuple): BlockArgs, see above\n        global_params (namedtuple): GlobalParam, see above\n    Attributes:\n        has_se (bool): Whether the block contains a Squeeze and Excitation layer.\n    \"\"\"\n\n    def __init__(self, block_args, global_params):\n        super().__init__()\n        self._block_args = block_args\n        self._bn_mom = 1 - global_params.batch_norm_momentum\n        self._bn_eps = global_params.batch_norm_epsilon\n        self.has_se = (self._block_args.se_ratio is not None) and (0 < self._block_args.se_ratio <= 1)\n        self.id_skip = block_args.id_skip  # skip connection and drop connect\n\n        # Get static or dynamic convolution depending on image size\n        Conv2d = get_same_padding_conv2d(image_size=global_params.image_size)\n\n        # Expansion phase\n        inp = self._block_args.input_filters  # number of input channels\n        oup = self._block_args.input_filters * self._block_args.expand_ratio  # number of output channels\n        if self._block_args.expand_ratio != 1:\n            self._expand_conv = Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False)\n            self._bn0 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)\n\n        # Depthwise convolution phase\n        k = self._block_args.kernel_size\n        s = self._block_args.stride\n        self._depthwise_conv = Conv2d(\n            in_channels=oup, out_channels=oup, groups=oup,  # groups makes it depthwise\n            kernel_size=k, stride=s, 
bias=False)\n        self._bn1 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)\n\n        # Squeeze and Excitation layer, if desired\n        if self.has_se:\n            num_squeezed_channels = max(1, int(self._block_args.input_filters * self._block_args.se_ratio))\n            self._se_reduce = Conv2d(in_channels=oup, out_channels=num_squeezed_channels, kernel_size=1)\n            self._se_expand = Conv2d(in_channels=num_squeezed_channels, out_channels=oup, kernel_size=1)\n\n        # Output phase\n        final_oup = self._block_args.output_filters\n        self._project_conv = Conv2d(in_channels=oup, out_channels=final_oup, kernel_size=1, bias=False)\n        self._bn2 = nn.BatchNorm2d(num_features=final_oup, momentum=self._bn_mom, eps=self._bn_eps)\n        self._swish = MemoryEfficientSwish()\n\n    def forward(self, inputs, drop_connect_rate=None):\n        \"\"\"\n        :param inputs: input tensor\n        :param drop_connect_rate: drop connect rate (float, between 0 and 1)\n        :return: output of block\n        \"\"\"\n\n        # Expansion and Depthwise Convolution\n        x = inputs\n        if self._block_args.expand_ratio != 1:\n            x = self._swish(self._bn0(self._expand_conv(inputs)))\n        x = self._swish(self._bn1(self._depthwise_conv(x)))\n\n        # Squeeze and Excitation\n        if self.has_se:\n            x_squeezed = F.adaptive_avg_pool2d(x, 1)\n            x_squeezed = self._se_expand(self._swish(self._se_reduce(x_squeezed)))\n            x = torch.sigmoid(x_squeezed) * x\n\n        x = self._bn2(self._project_conv(x))\n\n        # Skip connection and drop connect\n        input_filters, output_filters = self._block_args.input_filters, self._block_args.output_filters\n        if self.id_skip and self._block_args.stride == 1 and input_filters == output_filters:\n            if drop_connect_rate:\n                x = drop_connect(x, p=drop_connect_rate, training=self.training)\n            x = x + 
inputs  # skip connection\n        return x\n\n    def set_swish(self, memory_efficient=True):\n        \"\"\"Sets swish function as memory efficient (for training) or standard (for export)\"\"\"\n        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()\n\n\nclass BlockDecoder(object):\n    \"\"\" Block Decoder for readability, straight from the official TensorFlow repository \"\"\"\n\n    @staticmethod\n    def _decode_block_string(block_string):\n        \"\"\" Gets a block through a string notation of arguments. \"\"\"\n        assert isinstance(block_string, str)\n\n        ops = block_string.split('_')\n        options = {}\n        for op in ops:\n            splits = re.split(r'(\\d.*)', op)\n            if len(splits) >= 2:\n                key, value = splits[:2]\n                options[key] = value\n\n        # Check stride\n        assert (('s' in options and len(options['s']) == 1) or\n                (len(options['s']) == 2 and options['s'][0] == options['s'][1]))\n\n        return BlockArgs(\n            kernel_size=int(options['k']),\n            num_repeat=int(options['r']),\n            input_filters=int(options['i']),\n            output_filters=int(options['o']),\n            expand_ratio=int(options['e']),\n            id_skip=('noskip' not in block_string),\n            se_ratio=float(options['se']) if 'se' in options else None,\n            stride=[int(options['s'][0])])\n\n    @staticmethod\n    def _encode_block_string(block):\n        \"\"\"Encodes a block to a string.\"\"\"\n        args = [\n            'r%d' % block.num_repeat,\n            'k%d' % block.kernel_size,\n            's%d%d' % (block.stride[0], block.stride[0]),  # stride is stored as a single-element list\n            'e%s' % block.expand_ratio,\n            'i%d' % block.input_filters,\n            'o%d' % block.output_filters\n        ]\n        if block.se_ratio is not None and 0 < block.se_ratio <= 1:\n            args.append('se%s' % block.se_ratio)\n        if block.id_skip is False:\n            
args.append('noskip')\n        return '_'.join(args)\n\n    @staticmethod\n    def decode(string_list):\n        \"\"\"\n        Decodes a list of string notations to specify blocks inside the network.\n        :param string_list: a list of strings, each string is a notation of block\n        :return: a list of BlockArgs namedtuples of block args\n        \"\"\"\n        assert isinstance(string_list, list)\n        blocks_args = []\n        for block_string in string_list:\n            blocks_args.append(BlockDecoder._decode_block_string(block_string))\n        return blocks_args\n\n    @staticmethod\n    def encode(blocks_args):\n        \"\"\"\n        Encodes a list of BlockArgs to a list of strings.\n        :param blocks_args: a list of BlockArgs namedtuples of block args\n        :return: a list of strings, each string is a notation of block\n        \"\"\"\n        block_strings = []\n        for block in blocks_args:\n            block_strings.append(BlockDecoder._encode_block_string(block))\n        return block_strings\n\n\n\n"
  },
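`Conv2dDynamicSamePadding` in `tracker/sot/lib/models/modules.py` computes TensorFlow-style SAME padding on the fly from the input size, kernel, stride, and dilation. A minimal pure-Python sketch of that arithmetic (the helper name `same_padding` is ours, not part of the repo):

```python
import math

def same_padding(ih, iw, kh, kw, sh, sw, dh=1, dw=1):
    """TF-style SAME padding as (left, right, top, bottom), mirroring
    the pad_h/pad_w computation in Conv2dDynamicSamePadding.forward."""
    oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
    pad_h = max((oh - 1) * sh + (kh - 1) * dh + 1 - ih, 0)
    pad_w = max((ow - 1) * sw + (kw - 1) * dw + 1 - iw, 0)
    # odd amounts put the extra pixel on the right/bottom, as F.pad is called
    return (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2)

# a 3x3 stride-2 kernel on a 5x5 input: output is ceil(5/2)=3,
# so 2 pixels of padding are needed per axis, split 1/1 per side
print(same_padding(5, 5, 3, 3, 2, 2))  # -> (1, 1, 1, 1)
```

`Conv2dStaticSamePadding` bakes the same numbers into a fixed `nn.ZeroPad2d` once the image size is known, which is what makes the model ONNX-exportable.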
  {
    "path": "tracker/sot/lib/models/online/__init__.py",
    "content": ""
  },
  {
    "path": "tracker/sot/lib/models/online/backbone/__init__.py",
    "content": "from .resnet import *\nfrom .resnet18_vggm import *\n"
  },
  {
    "path": "tracker/sot/lib/models/online/backbone/resnet.py",
    "content": "import math\nimport torch.nn as nn\nfrom collections import OrderedDict\nimport torch.utils.model_zoo as model_zoo\nfrom torchvision.models.resnet import BasicBlock, Bottleneck, model_urls\n\n\nclass ResNet(nn.Module):\n    \"\"\" ResNet network module. Allows extracting specific feature blocks.\"\"\"\n    def __init__(self, block, layers, output_layers, num_classes=1000, inplanes=64):\n        self.inplanes = inplanes\n        super(ResNet, self).__init__()\n        self.output_layers = output_layers\n        self.conv1 = nn.Conv2d(3, inplanes, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(inplanes)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, inplanes, layers[0])\n        self.layer2 = self._make_layer(block, inplanes*2, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, inplanes*4, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, inplanes*8, layers[3], stride=2)\n        # self.avgpool = nn.AvgPool2d(7, stride=1)\n        self.avgpool = nn.AdaptiveAvgPool2d((1,1))\n        self.fc = nn.Linear(inplanes*8 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. 
/ n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def _add_output_and_check(self, name, x, outputs, output_layers):\n        if name in output_layers:\n            outputs[name] = x\n        return len(output_layers) == len(outputs)\n\n    def forward(self, x, output_layers=None):\n        \"\"\" Forward pass with input x. 
The output_layers specify the feature blocks which must be returned \"\"\"\n        outputs = OrderedDict()\n\n        if output_layers is None:\n            output_layers = self.output_layers\n\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n\n        if self._add_output_and_check('conv1', x, outputs, output_layers):\n            return outputs\n\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n\n        if self._add_output_and_check('layer1', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer2(x)\n\n        if self._add_output_and_check('layer2', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer3(x)\n\n        if self._add_output_and_check('layer3', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer4(x)\n\n        if self._add_output_and_check('layer4', x, outputs, output_layers):\n            return outputs\n\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc(x)\n\n        if self._add_output_and_check('fc', x, outputs, output_layers):\n            return outputs\n\n        if len(output_layers) == 1 and output_layers[0] == 'default':\n            return x\n\n        raise ValueError('output_layer is wrong.')\n\n\ndef resnet18(output_layers=None, pretrained=False):\n    \"\"\"Constructs a ResNet-18 model.\n    \"\"\"\n\n    if output_layers is None:\n        output_layers = ['default']\n    else:\n        for l in output_layers:\n            if l not in ['conv1', 'layer1', 'layer2', 'layer3', 'layer4', 'fc']:\n                raise ValueError('Unknown layer: {}'.format(l))\n\n    model = ResNet(BasicBlock, [2, 2, 2, 2], output_layers)\n\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))\n    return model\n\n\ndef resnet50(output_layers=None, pretrained=False):\n    \"\"\"Constructs a ResNet-50 model.\n    \"\"\"\n\n    if output_layers is None:\n       
 output_layers = ['default']\n    else:\n        for l in output_layers:\n            if l not in ['conv1', 'layer1', 'layer2', 'layer3', 'layer4', 'fc']:\n                raise ValueError('Unknown layer: {}'.format(l))\n\n    model = ResNet(Bottleneck, [3, 4, 6, 3], output_layers)\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))\n    return model\n"
  },
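`ResNet.forward` in `tracker/sot/lib/models/online/backbone/resnet.py` threads every stage through `_add_output_and_check` so it can return as soon as all requested feature blocks are collected. A toy stand-in for that early-exit pattern (the "stage computation" here is fake, just doubling an integer):

```python
from collections import OrderedDict

def _add_output_and_check(name, x, outputs, output_layers):
    """Same helper as in ResNet: stash x if requested; return True once
    every requested layer has been collected."""
    if name in output_layers:
        outputs[name] = x
    return len(output_layers) == len(outputs)

def forward(x, output_layers):
    """Toy stand-in for ResNet.forward: each 'stage' doubles x."""
    outputs = OrderedDict()
    for name in ['conv1', 'layer1', 'layer2', 'layer3', 'layer4']:
        x = x * 2  # pretend this is the stage computation
        if _add_output_and_check(name, x, outputs, output_layers):
            return outputs  # later stages are never run
    return outputs

feats = forward(1, ['layer2'])  # collects only 'layer2' -> 8; layer3/4 are skipped
```

The payoff is that asking only for `conv1` or `layer2` avoids paying for the deeper (and most expensive) stages.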
  {
    "path": "tracker/sot/lib/models/online/backbone/resnet18_vggm.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nfrom collections import OrderedDict\nfrom torchvision.models.resnet import BasicBlock\n\n\nclass SpatialCrossMapLRN(nn.Module):\n    def __init__(self, local_size=1, alpha=1.0, beta=0.75, k=1, ACROSS_CHANNELS=True):\n        super(SpatialCrossMapLRN, self).__init__()\n        self.ACROSS_CHANNELS = ACROSS_CHANNELS\n        if ACROSS_CHANNELS:\n            self.average=nn.AvgPool3d(kernel_size=(local_size, 1, 1),\n                    stride=1,\n                    padding=(int((local_size-1.0)/2), 0, 0))\n        else:\n            self.average=nn.AvgPool2d(kernel_size=local_size,\n                    stride=1,\n                    padding=int((local_size-1.0)/2))\n        self.alpha = alpha\n        self.beta = beta\n        self.k = k\n\n    def forward(self, x):\n        if self.ACROSS_CHANNELS:\n            div = x.pow(2).unsqueeze(1)\n            div = self.average(div).squeeze(1)\n            div = div.mul(self.alpha).add(self.k).pow(self.beta)\n        else:\n            div = x.pow(2)\n            div = self.average(div)\n            div = div.mul(self.alpha).add(self.k).pow(self.beta)\n        x = x.div(div)\n        return x\n\n\nclass ResNetVGGm1(nn.Module):\n    \"\"\" ResNet network module, with vgg-m conv1 layer \"\"\"\n    def __init__(self, block, layers, output_layers, num_classes=1000):\n        self.inplanes = 64\n        super(ResNetVGGm1, self).__init__()\n        self.output_layers = output_layers\n        self.vggmconv1 = nn.Conv2d(3,96,(7, 7),(2, 2), padding=3)\n        self.vgglrn = SpatialCrossMapLRN(5, 0.0005, 0.75, 2)\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = 
self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n        # self.avgpool = nn.AvgPool2d(7, stride=1)\n        self.avgpool = nn.AdaptiveAvgPool2d((1,1))\n        self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n\n    def _add_output_and_check(self, name, x, outputs, output_layers):\n        if name in output_layers:\n            outputs[name] = x\n        return len(output_layers) == len(outputs)\n\n\n    def forward(self, x, output_layers=None):\n        outputs = OrderedDict()\n\n        if output_layers is None:\n            output_layers = self.output_layers\n\n        if 'vggconv1' in output_layers:\n            c1 = self.vgglrn(self.relu(self.vggmconv1(x)))\n            if self._add_output_and_check('vggconv1', c1, outputs, output_layers):\n                return outputs\n\n        x = 
self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n\n        if self._add_output_and_check('conv1', x, outputs, output_layers):\n            return outputs\n\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n\n        if self._add_output_and_check('layer1', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer2(x)\n\n        if self._add_output_and_check('layer2', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer3(x)\n\n        if self._add_output_and_check('layer3', x, outputs, output_layers):\n            return outputs\n\n        x = self.layer4(x)\n\n        if self._add_output_and_check('layer4', x, outputs, output_layers):\n            return outputs\n\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc(x)\n\n        if self._add_output_and_check('fc', x, outputs, output_layers):\n            return outputs\n\n        if len(output_layers) == 1 and output_layers[0] == 'default':\n            return x\n\n        raise ValueError('output_layer is wrong.')\n\n\ndef resnet18_vggmconv1(output_layers=None, path=None):\n    \"\"\"Constructs a ResNet-18 model with first-layer VGGm features.\n    \"\"\"\n\n    if output_layers is None:\n        output_layers = ['default']\n    else:\n        for l in output_layers:\n            if l not in ['vggconv1', 'conv1', 'layer1', 'layer2', 'layer3', 'layer4', 'fc']:\n                raise ValueError('Unknown layer: {}'.format(l))\n\n    model = ResNetVGGm1(BasicBlock, [2, 2, 2, 2], output_layers)\n\n    if path is not None:\n        model.load_state_dict(torch.load(path), strict=False)\n    return model"
  },
  {
    "path": "tracker/sot/lib/models/online/bbreg/__init__.py",
    "content": "from .iou_net import IoUNet\n"
  },
  {
    "path": "tracker/sot/lib/models/online/bbreg/iou_net.py",
    "content": "import torch.nn as nn\nimport torch\n\nfrom models.online.layers.blocks import LinearBlock\nfrom models.online.external.PreciseRoIPooling.pytorch.prroi_pool import PrRoIPool2D\n\n\ndef conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1):\n    return nn.Sequential(\n            nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,\n                      padding=padding, dilation=dilation, bias=True),\n            nn.BatchNorm2d(out_planes),\n            nn.ReLU(inplace=True))\n\n\ndef valid_roi(roi: torch.Tensor, image_size: torch.Tensor):\n    valid = all(0 <= roi[:, 1]) and all(0 <= roi[:, 2]) and all(roi[:, 3] <= image_size[0]-1) and \\\n            all(roi[:, 4] <= image_size[1]-1)\n    return valid\n\n\nclass IoUNet(nn.Module):\n    \"\"\" Network module for IoU prediction. Refer to the paper for an illustration of the architecture.\"\"\"\n    def __init__(self, input_dim=(128,256), pred_input_dim=(256,256), pred_inter_dim=(256,256)):\n        super().__init__()\n        # _r for reference, _t for test\n        self.conv3_1r = conv(input_dim[0], 128, kernel_size=3, stride=1)\n        self.conv3_1t = conv(input_dim[0], 256, kernel_size=3, stride=1)\n\n        self.conv3_2t = conv(256, pred_input_dim[0], kernel_size=3, stride=1)\n\n        self.prroi_pool3r = PrRoIPool2D(3, 3, 1/8)\n        self.prroi_pool3t = PrRoIPool2D(5, 5, 1/8)\n\n        self.fc3_1r = conv(128, 256, kernel_size=3, stride=1, padding=0)\n\n        self.conv4_1r = conv(input_dim[1], 256, kernel_size=3, stride=1)\n        self.conv4_1t = conv(input_dim[1], 256, kernel_size=3, stride=1)\n\n        self.conv4_2t = conv(256, pred_input_dim[1], kernel_size=3, stride=1)\n\n        self.prroi_pool4r = PrRoIPool2D(1, 1, 1/16)\n        self.prroi_pool4t = PrRoIPool2D(3, 3, 1 / 16)\n\n        self.fc34_3r = conv(256 + 256, pred_input_dim[0], kernel_size=1, stride=1, padding=0)\n        self.fc34_4r = conv(256 + 256, pred_input_dim[1], 
kernel_size=1, stride=1, padding=0)\n\n        self.fc3_rt = LinearBlock(pred_input_dim[0], pred_inter_dim[0], 5)\n        self.fc4_rt = LinearBlock(pred_input_dim[1], pred_inter_dim[1], 3)\n\n        self.iou_predictor = nn.Linear(pred_inter_dim[0]+pred_inter_dim[1], 1, bias=True)\n\n        # Init weights\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d) or isinstance(m, nn.Linear):\n                nn.init.kaiming_normal_(m.weight.data, mode='fan_in')\n                if m.bias is not None:\n                    m.bias.data.zero_()\n\n    def forward(self, feat1, feat2, bb1, proposals2):\n        assert feat1[0].dim() == 5, 'Expect 5-dimensional feat1'\n\n        num_test_images = feat2[0].shape[0]\n        batch_size = feat2[0].shape[1]\n\n        # Extract first train sample\n        feat1 = [f[0,...] for f in feat1]\n        bb1 = bb1[0,...]\n\n        # Get modulation vector\n        filter = self.get_filter(feat1, bb1)\n\n        feat2 = [f.view(batch_size * num_test_images, f.shape[2], f.shape[3], f.shape[4]) for f in feat2]\n        iou_feat = self.get_iou_feat(feat2)\n\n        filter = [f.view(1, batch_size, -1).repeat(num_test_images, 1, 1).view(batch_size*num_test_images, -1) for f in filter]\n\n        proposals2 = proposals2.view(batch_size*num_test_images, -1, 4)\n        pred_iou = self.predict_iou(filter, iou_feat, proposals2)\n        return pred_iou.view(num_test_images, batch_size, -1)\n\n    def predict_iou(self, filter, feat2, proposals):\n        fc34_3_r, fc34_4_r = filter\n        c3_t, c4_t = feat2\n\n        batch_size = c3_t.size()[0]\n\n        # Modulation\n        c3_t_att = c3_t * fc34_3_r.view(batch_size, -1, 1, 1)\n        c4_t_att = c4_t * fc34_4_r.view(batch_size, -1, 1, 1)\n\n        # Add batch_index to rois\n        batch_index = torch.Tensor([x for x in range(batch_size)]).view(batch_size, 1).to(c3_t.device)\n\n        # Push the different rois for the same image 
along the batch dimension\n        num_proposals_per_batch = proposals.shape[1]\n\n        # input proposals2 is in format xywh, convert it to x0y0x1y1 format\n        proposals_xyxy = torch.cat((proposals[:, :, 0:2], proposals[:, :, 0:2] + proposals[:, :, 2:4]), dim=2)\n\n        # Add batch index\n        roi2 = torch.cat((batch_index.view(batch_size, -1, 1).expand(-1, num_proposals_per_batch, -1),\n                          proposals_xyxy), dim=2)\n        roi2 = roi2.view(-1, 5).to(proposals_xyxy.device)\n\n        roi3t = self.prroi_pool3t(c3_t_att, roi2)\n        roi4t = self.prroi_pool4t(c4_t_att, roi2)\n\n        fc3_rt = self.fc3_rt(roi3t)\n        fc4_rt = self.fc4_rt(roi4t)\n\n        fc34_rt_cat = torch.cat((fc3_rt, fc4_rt), dim=1)\n\n        iou_pred = self.iou_predictor(fc34_rt_cat).view(batch_size, num_proposals_per_batch)\n\n        return iou_pred\n\n    def get_filter(self, feat1, bb1):\n        feat3_r, feat4_r = feat1\n\n        c3_r = self.conv3_1r(feat3_r)\n\n        # Add batch_index to rois\n        batch_size = bb1.size()[0]\n        batch_index = torch.Tensor([x for x in range(batch_size)]).view(batch_size, 1).to(bb1.device)\n\n        # input bb is in format xywh, convert it to x0y0x1y1 format\n        bb1 = bb1.clone()\n        bb1[:, 2:4] = bb1[:, 0:2] + bb1[:, 2:4]\n        roi1 = torch.cat((batch_index, bb1), dim=1)\n\n        roi3r = self.prroi_pool3r(c3_r, roi1)\n\n        c4_r = self.conv4_1r(feat4_r)\n        roi4r = self.prroi_pool4r(c4_r, roi1)\n\n        fc3_r = self.fc3_1r(roi3r)\n\n        # Concatenate from block 3 and 4\n        fc34_r = torch.cat((fc3_r, roi4r), dim=1)\n\n        fc34_3_r = self.fc34_3r(fc34_r)\n        fc34_4_r = self.fc34_4r(fc34_r)\n\n        return fc34_3_r, fc34_4_r\n\n    def get_iou_feat(self, feat2):\n        feat3_t, feat4_t = feat2\n        c3_t = self.conv3_2t(self.conv3_1t(feat3_t))\n        c4_t = self.conv4_2t(self.conv4_1t(feat4_t))\n\n        return c3_t, c4_t\n"
  },
  {
    "path": "tracker/sot/lib/models/online/classifier/__init__.py",
    "content": "from .linear_filter import LinearFilter\n"
  },
  {
    "path": "tracker/sot/lib/models/online/classifier/features.py",
    "content": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torchvision.models.resnet import BasicBlock, Bottleneck\nfrom models.online.layers.normalization import InstanceL2Norm\nfrom models.online.layers.transform import InterpCat\n\n\ndef residual_basic_block(feature_dim=256, num_blocks=1, l2norm=True, final_conv=False, norm_scale=1.0, out_dim=None,\n                         interp_cat=False):\n    \"\"\"Construct a network block based on the BasicBlock used in ResNet 18 and 34.\"\"\"\n    if out_dim is None:\n        out_dim = feature_dim\n    feat_layers = []\n    if interp_cat:\n        feat_layers.append(InterpCat())\n    for i in range(num_blocks):\n        odim = feature_dim if i < num_blocks - 1 + int(final_conv) else out_dim\n        feat_layers.append(BasicBlock(feature_dim, odim))\n    if final_conv:\n        feat_layers.append(nn.Conv2d(feature_dim, out_dim, kernel_size=3, padding=1, bias=False))\n    if l2norm:\n        feat_layers.append(InstanceL2Norm(scale=norm_scale))\n    return nn.Sequential(*feat_layers)\n\n\ndef residual_basic_block_pool(feature_dim=256, num_blocks=1, l2norm=True, final_conv=False, norm_scale=1.0, out_dim=None,\n                              pool=True):\n    \"\"\"Construct a network block based on the BasicBlock used in ResNet.\"\"\"\n    if out_dim is None:\n        out_dim = feature_dim\n    feat_layers = []\n    for i in range(num_blocks):\n        odim = feature_dim if i < num_blocks - 1 + int(final_conv) else out_dim\n        feat_layers.append(BasicBlock(feature_dim, odim))\n    if final_conv:\n        feat_layers.append(nn.Conv2d(feature_dim, out_dim, kernel_size=3, padding=1, bias=False))\n    if pool:\n        feat_layers.append(nn.MaxPool2d(kernel_size=3, stride=2, padding=1))\n    if l2norm:\n        feat_layers.append(InstanceL2Norm(scale=norm_scale))\n\n    return nn.Sequential(*feat_layers)\n\n\ndef residual_bottleneck(feature_dim=256, num_blocks=1, l2norm=True, final_conv=False, 
norm_scale=1.0, out_dim=None,\n                        interp_cat=False):\n    \"\"\"Construct a network block based on the Bottleneck block used in ResNet.\"\"\"\n    if out_dim is None:\n        out_dim = feature_dim\n    feat_layers = []\n    if interp_cat:\n        feat_layers.append(InterpCat())\n    for i in range(num_blocks):\n        planes = feature_dim if i < num_blocks - 1 + int(final_conv) else out_dim // 4\n        feat_layers.append(Bottleneck(4*feature_dim, planes))\n    if final_conv:\n        feat_layers.append(nn.Conv2d(4*feature_dim, out_dim, kernel_size=3, padding=1, bias=False))\n    if l2norm:\n        feat_layers.append(InstanceL2Norm(scale=norm_scale))\n    return nn.Sequential(*feat_layers)"
  },
  {
    "path": "tracker/sot/lib/models/online/classifier/initializer.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\n# import sys\n# sys.path.append('../../../../lib')\n#\n\nfrom models.online.layers.blocks import conv_block\nfrom models.online.external.PreciseRoIPooling.pytorch.prroi_pool import PrRoIPool2D\nimport math\n\n\nclass FilterPool(nn.Module):\n    \"\"\"Pool the target region in a feature map.\n    args:\n        filter_size:  Size of the filter.\n        feature_stride:  Input feature stride.\n        pool_square:  Do a square pooling instead of pooling the exact target region.\"\"\"\n\n    def __init__(self, filter_size=1, feature_stride=16, pool_square=False):\n        super().__init__()\n        self.prroi_pool = PrRoIPool2D(filter_size, filter_size, 1/feature_stride)\n        self.pool_square = pool_square\n\n    def forward(self, feat, bb):\n        \"\"\"Pool the regions in bb.\n        args:\n            feat:  Input feature maps. Dims (num_samples, feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (num_samples, 4).\n        returns:\n            pooled_feat:  Pooled features. 
Dims (num_samples, feat_dim, wH, wW).\"\"\"\n\n        # Add batch_index to rois\n        bb = bb.view(-1,4)\n        num_images_total = bb.shape[0]\n        batch_index = torch.arange(num_images_total, dtype=torch.float32).view(-1, 1).to(bb.device)\n\n        # input bb is in format xywh, convert it to x0y0x1y1 format\n        pool_bb = bb.clone()\n\n        if self.pool_square:\n            bb_sz = pool_bb[:, 2:4].prod(dim=1, keepdim=True).sqrt()\n            pool_bb[:, :2] += pool_bb[:, 2:]/2 - bb_sz/2\n            pool_bb[:, 2:] = bb_sz\n\n        pool_bb[:, 2:4] = pool_bb[:, 0:2] + pool_bb[:, 2:4]\n        roi1 = torch.cat((batch_index, pool_bb), dim=1)\n\n        return self.prroi_pool(feat, roi1)\n\n\n\nclass FilterInitializer(nn.Module):\n    \"\"\"Initializes a target classification filter by applying a number of conv layers before and after pooling the target region.\n    args:\n        filter_size:  Size of the filter.\n        feature_dim:  Input feature dimensionality.\n        feature_stride:  Input feature stride.\n        pool_square:  Do a square pooling instead of pooling the exact target region.\n        filter_norm:  Normalize the output filter with its size in the end.\n        num_filter_pre_convs:  Conv layers before pooling.\n        num_filter_post_convs:  Conv layers after pooling.\"\"\"\n\n    def __init__(self, filter_size=1, feature_dim=256, feature_stride=16, pool_square=False, filter_norm=True,\n                 num_filter_pre_convs=1, num_filter_post_convs=0):\n        super().__init__()\n\n        self.filter_pool = FilterPool(filter_size=filter_size, feature_stride=feature_stride, pool_square=pool_square)\n        self.filter_norm = filter_norm\n\n        # Make pre conv\n        pre_conv_layers = []\n        for i in range(num_filter_pre_convs):\n            pre_conv_layers.append(conv_block(feature_dim, feature_dim, kernel_size=3, padding=1))\n        self.filter_pre_layers = nn.Sequential(*pre_conv_layers) if pre_conv_layers 
else None\n\n        # Make post conv\n        post_conv_layers = []\n        for i in range(num_filter_post_convs):\n            post_conv_layers.append(conv_block(feature_dim, feature_dim, kernel_size=1, padding=0))\n        post_conv_layers.append(nn.Conv2d(feature_dim, feature_dim, kernel_size=1, padding=0))\n        self.filter_post_layers = nn.Sequential(*post_conv_layers)\n\n        # Init weights\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n\n    def forward(self, feat, bb):\n        \"\"\"Runs the initializer module.\n        Note that [] denotes an optional dimension.\n        args:\n            feat:  Input feature maps. Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (images_in_sequence, [sequences], 4).\n        returns:\n            weights:  The output weights. 
Dims (sequences, feat_dim, wH, wW).\"\"\"\n\n        num_images = bb.shape[0] if bb.dim() == 3 else 1\n\n        if self.filter_pre_layers is not None:\n            feat = self.filter_pre_layers(feat.view(-1, feat.shape[-3], feat.shape[-2], feat.shape[-1]))\n\n        feat_post = self.filter_pool(feat, bb)\n        weights = self.filter_post_layers(feat_post)\n\n        if num_images > 1:\n            weights = torch.mean(weights.view(num_images, -1, weights.shape[-3], weights.shape[-2], weights.shape[-1]), dim=0)\n\n        if self.filter_norm:\n            weights = weights / (weights.shape[1] * weights.shape[2] * weights.shape[3])\n\n        return weights\n\n\nclass FilterInitializerLinear(nn.Module):\n    \"\"\"Initializes a target classification filter by applying a linear conv layer and then pooling the target region.\n    args:\n        filter_size:  Size of the filter.\n        feature_dim:  Input feature dimensionality.\n        feature_stride:  Input feature stride.\n        pool_square:  Do a square pooling instead of pooling the exact target region.\n        filter_norm:  Normalize the output filter with its size in the end.\n        conv_ksz:  Kernel size of the conv layer before pooling.\"\"\"\n\n    def __init__(self, filter_size=1, feature_dim=256, feature_stride=16, pool_square=False, filter_norm=True,\n                 conv_ksz=3):\n        super().__init__()\n\n        self.filter_conv = nn.Conv2d(feature_dim, feature_dim, kernel_size=conv_ksz, padding=conv_ksz // 2)\n        self.filter_pool = FilterPool(filter_size=filter_size, feature_stride=feature_stride, pool_square=pool_square)\n        self.filter_norm = filter_norm\n\n        # Init weights\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. 
/ n))\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n\n    def forward(self, feat, bb):\n        \"\"\"Runs the initializer module.\n        Note that [] denotes an optional dimension.\n        args:\n            feat:  Input feature maps. Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (images_in_sequence, [sequences], 4).\n        returns:\n            weights:  The output weights. Dims (sequences, feat_dim, wH, wW).\"\"\"\n\n        num_images = feat.shape[0]\n\n        feat = self.filter_conv(feat.view(-1, feat.shape[-3], feat.shape[-2], feat.shape[-1]))\n\n        weights = self.filter_pool(feat, bb)\n\n        # If multiple input images, compute the initial filter as the average filter.\n        if num_images > 1:\n            weights = torch.mean(weights.view(num_images, -1, weights.shape[-3], weights.shape[-2], weights.shape[-1]), dim=0)\n\n        if self.filter_norm:\n            weights = weights / (weights.shape[1] * weights.shape[2] * weights.shape[3])\n\n        return weights\n\n\n\nclass FilterInitializerZero(nn.Module):\n    \"\"\"Initializes a target classification filter with zeros.\n    args:\n        filter_size:  Size of the filter.\n        feature_dim:  Input feature dimensionality.\"\"\"\n\n    def __init__(self, filter_size=1, feature_dim=256):\n        super().__init__()\n\n        self.filter_size = (feature_dim, filter_size, filter_size)\n\n    def forward(self, feat, bb):\n        \"\"\"Runs the initializer module.\n        Note that [] denotes an optional dimension.\n        args:\n            feat:  Input feature maps. Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. 
Dims (images_in_sequence, [sequences], 4).\n        returns:\n            weights:  The output weights. Dims (sequences, feat_dim, wH, wW).\"\"\"\n\n        num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n\n        return feat.new_zeros(num_sequences, self.filter_size[0], self.filter_size[1], self.filter_size[2])\n\n\nclass FilterInitializerSiamese(nn.Module):\n    \"\"\"Initializes a target classification filter by only pooling the target region (similar to Siamese trackers).\n    args:\n        filter_size:  Size of the filter.\n        feature_stride:  Input feature stride.\n        pool_square:  Do a square pooling instead of pooling the exact target region.\n        filter_norm:  Normalize the output filter with its size in the end.\"\"\"\n\n    def __init__(self, filter_size=1, feature_stride=16, pool_square=False, filter_norm=True):\n        super().__init__()\n\n        self.filter_pool = FilterPool(filter_size=filter_size, feature_stride=feature_stride, pool_square=pool_square)\n        self.filter_norm = filter_norm\n\n        # Init weights\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n\n    def forward(self, feat, bb):\n        \"\"\"Runs the initializer module.\n        Note that [] denotes an optional dimension.\n        args:\n            feat:  Input feature maps. Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (images_in_sequence, [sequences], 4).\n        returns:\n            weights:  The output weights. 
Dims (sequences, feat_dim, wH, wW).\"\"\"\n\n        num_images = feat.shape[0]\n\n        feat = feat.view(-1, feat.shape[-3], feat.shape[-2], feat.shape[-1])\n        weights = self.filter_pool(feat, bb)\n\n        if num_images > 1:\n            weights = torch.mean(weights.view(num_images, -1, weights.shape[-3], weights.shape[-2], weights.shape[-1]), dim=0)\n\n        if self.filter_norm:\n            weights = weights / (weights.shape[1] * weights.shape[2] * weights.shape[3])\n\n        return weights\n"
  },
  {
    "path": "tracker/sot/lib/models/online/classifier/linear_filter.py",
    "content": "import torch.nn as nn\nimport torch\nimport models.online.layers.filter as filter_layer\nimport math\n\n\n\nclass LinearFilter(nn.Module):\n    \"\"\"Target classification filter module.\n    args:\n        filter_size:  Size of filter (int).\n        filter_initializer:  Filter initializer module.\n        filter_optimizer:  Filter optimizer module.\n        feature_extractor:  Feature extractor module applied to the input backbone features.\"\"\"\n\n    def __init__(self, filter_size, filter_initializer, filter_optimizer=None, feature_extractor=None):\n        super().__init__()\n\n        self.filter_size = filter_size\n\n        # Modules\n        self.filter_initializer = filter_initializer\n        self.filter_optimizer = filter_optimizer\n        self.feature_extractor = feature_extractor\n\n        # Init weights\n        for m in self.feature_extractor.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def forward(self, train_feat, test_feat, train_bb, *args, **kwargs):\n        \"\"\"Learns a target classification filter based on the train samples and returns the resulting classification\n        scores on the test samples.\n        The forward function is ONLY used for training. Call the individual functions during tracking.\n        args:\n            train_feat:  Backbone features for the train samples (4 or 5 dims).\n            test_feat:  Backbone features for the test samples (4 or 5 dims).\n            train_bb:  Target boxes (x,y,w,h) for the train samples in image coordinates. 
Dims (images, sequences, 4).\n            *args, **kwargs:  These are passed to the optimizer module.\n        returns:\n            test_scores:  Classification scores on the test samples.\"\"\"\n\n        assert train_bb.dim() == 3\n\n        num_sequences = train_bb.shape[1]\n\n        if train_feat.dim() == 5:\n            train_feat = train_feat.view(-1, *train_feat.shape[-3:])\n        if test_feat.dim() == 5:\n            test_feat = test_feat.view(-1, *test_feat.shape[-3:])\n\n        # Extract features\n        train_feat = self.extract_classification_feat(train_feat, num_sequences)\n        test_feat = self.extract_classification_feat(test_feat, num_sequences)\n\n        # Train filter\n        filter, filter_iter, losses = self.get_filter(train_feat, train_bb, *args, **kwargs)\n\n        # Classify samples using all return filters\n        test_scores = [self.classify(f, test_feat) for f in filter_iter]\n\n        return test_scores\n\n    def extract_classification_feat(self, feat, num_sequences=None):\n        \"\"\"Extract classification features based on the input backbone features.\"\"\"\n        if self.feature_extractor is None:\n            return feat\n        if num_sequences is None:\n            return self.feature_extractor(feat)\n\n        output = self.feature_extractor(feat)\n        return output.view(-1, num_sequences, *output.shape[-3:])\n\n    def classify(self, weights, feat):\n        \"\"\"Run classifier (filter) on the features (feat).\"\"\"\n\n        scores = filter_layer.apply_filter(feat, weights)\n\n        return scores\n\n    def get_filter(self, feat, bb, *args, **kwargs):\n        \"\"\"Outputs the learned filter based on the input features (feat) and target boxes (bb) by running the\n        filter initializer and optimizer. Note that [] denotes an optional dimension.\n        args:\n            feat:  Input feature maps. 
Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (images_in_sequence, [sequences], 4).\n            *args, **kwargs:  These are passed to the optimizer module.\n        returns:\n            weights:  The final optimized weights. Dims (sequences, feat_dim, wH, wW).\n            weight_iterates:  The weights computed in each iteration (including initial input and final output).\n            losses:  Train losses.\"\"\"\n        \n        weights = self.filter_initializer(feat, bb)\n\n        if self.filter_optimizer is not None:\n            weights, weights_iter, losses = self.filter_optimizer(weights, feat=feat, bb=bb, *args, **kwargs)\n        else:\n            weights_iter = [weights]\n            losses = None\n\n        return weights, weights_iter, losses\n"
  },
  {
    "path": "tracker/sot/lib/models/online/classifier/optimizer.py",
    "content": "import torch.nn as nn\nimport torch\nimport torch.nn.functional as F\nimport models.online.layers.filter as filter_layer\nimport models.online.layers.activation as activation\nfrom models.online.layers.distance import DistanceMap\nimport math\n\n\n\nclass ONLINESteepestDescentGN(nn.Module):\n    \"\"\"Optimizer module for ONLINE.\n    It unrolls the steepest descent with Gauss-Newton iterations to optimize the target filter.\n    Moreover it learns parameters in the loss itself, as described in the ONLINE paper.\n    args:\n        num_iter:  Number of default optimization iterations.\n        feat_stride:  The stride of the input feature.\n        init_step_length:  Initial scaling of the step length (which is then learned).\n        init_filter_reg:  Initial filter regularization weight (which is then learned).\n        init_gauss_sigma:  The standard deviation to use for the initialization of the label function.\n        num_dist_bins:  Number of distance bins used for learning the loss label, mask and weight.\n        bin_displacement:  The displacement of the bins (level of discretization).\n        mask_init_factor:  Parameter controlling the initialization of the target mask.\n        score_act:  Type of score activation (target mask computation) to use. The default 'relu' is what is described in the paper.\n        act_param:  Parameter for the score_act.\n        min_filter_reg:  Enforce a minimum value on the regularization (helps stability sometimes).\n        mask_act:  What activation to do on the output of the mask computation ('sigmoid' or 'linear').\n        detach_length:  Detach the filter every n-th iteration. Default is to never detach, i.e. 
'Inf'.\"\"\"\n\n    def __init__(self, num_iter=1, feat_stride=16, init_step_length=1.0,\n                 init_filter_reg=1e-2, init_gauss_sigma=1.0, num_dist_bins=5, bin_displacement=1.0, mask_init_factor=4.0,\n                 score_act='relu', act_param=None, min_filter_reg=1e-3, mask_act='sigmoid',\n                 detach_length=float('Inf')):\n        super().__init__()\n\n        self.num_iter = num_iter\n        self.feat_stride = feat_stride\n        self.log_step_length = nn.Parameter(math.log(init_step_length) * torch.ones(1))\n        self.filter_reg = nn.Parameter(init_filter_reg * torch.ones(1))\n        self.distance_map = DistanceMap(num_dist_bins, bin_displacement)\n        self.min_filter_reg = min_filter_reg\n        self.detach_length = detach_length\n\n        # Distance coordinates\n        d = torch.arange(num_dist_bins, dtype=torch.float32).view(1,-1,1,1) * bin_displacement\n        if init_gauss_sigma == 0:\n            init_gauss = torch.zeros_like(d)\n            init_gauss[0,0,0,0] = 1\n        else:\n            init_gauss = torch.exp(-1/2 * (d / init_gauss_sigma)**2)\n\n        # Module that predicts the target label function (y in the paper)\n        self.label_map_predictor = nn.Conv2d(num_dist_bins, 1, kernel_size=1, bias=False)\n        self.label_map_predictor.weight.data = init_gauss - init_gauss.min()\n\n        # Module that predicts the target mask (m in the paper)\n        mask_layers = [nn.Conv2d(num_dist_bins, 1, kernel_size=1, bias=False)]\n        if mask_act == 'sigmoid':\n            mask_layers.append(nn.Sigmoid())\n            init_bias = 0.0\n        elif mask_act == 'linear':\n            init_bias = 0.5\n        else:\n            raise ValueError('Unknown activation')\n        self.target_mask_predictor = nn.Sequential(*mask_layers)\n        self.target_mask_predictor[0].weight.data = mask_init_factor * torch.tanh(2.0 - d) + init_bias\n\n        # Module that predicts the residual weights (v in the paper)\n       
 self.spatial_weight_predictor = nn.Conv2d(num_dist_bins, 1, kernel_size=1, bias=False)\n        self.spatial_weight_predictor.weight.data.fill_(1.0)\n\n        # The score activation and its derivative\n        if score_act == 'bentpar':\n            self.score_activation = activation.BentIdentPar(act_param)\n            self.score_activation_deriv = activation.BentIdentParDeriv(act_param)\n        elif score_act == 'relu':\n            self.score_activation = activation.LeakyReluPar()\n            self.score_activation_deriv = activation.LeakyReluParDeriv()\n        else:\n            raise ValueError('Unknown score activation')\n\n\n    def forward(self, weights, feat, bb, sample_weight=None, num_iter=None, compute_losses=True):\n        \"\"\"Runs the optimizer module.\n        Note that [] denotes an optional dimension.\n        args:\n            weights:  Initial weights. Dims (sequences, feat_dim, wH, wW).\n            feat:  Input feature maps. Dims (images_in_sequence, [sequences], feat_dim, H, W).\n            bb:  Target bounding boxes (x, y, w, h) in the image coords. Dims (images_in_sequence, [sequences], 4).\n            sample_weight:  Optional weight for each sample. 
Dims: (images_in_sequence, [sequences]).\n            num_iter:  Number of iterations to run.\n            compute_losses:  Whether to compute the (train) loss in each iteration.\n        returns:\n            weights:  The final optimized weights.\n            weight_iterates:  The weights computed in each iteration (including initial input and final output).\n            losses:  Train losses.\"\"\"\n\n        # Sizes\n        num_iter = self.num_iter if num_iter is None else num_iter\n        num_images = feat.shape[0]\n        num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n        filter_sz = (weights.shape[-2], weights.shape[-1])\n        output_sz = (feat.shape[-2] + (weights.shape[-2] + 1) % 2, feat.shape[-1] + (weights.shape[-1] + 1) % 2)\n\n        # Get learnable scalars\n        step_length_factor = torch.exp(self.log_step_length)\n        reg_weight = (self.filter_reg*self.filter_reg).clamp(min=self.min_filter_reg**2)\n\n        # Compute distance map\n        dmap_offset = (torch.Tensor(filter_sz).to(bb.device) % 2) / 2.0\n        center = ((bb[..., :2] + bb[..., 2:] / 2) / self.feat_stride).view(-1, 2).flip((1,)) - dmap_offset\n        dist_map = self.distance_map(center, output_sz)\n\n        # Compute label map masks and weight\n        label_map = self.label_map_predictor(dist_map).view(num_images, num_sequences, *dist_map.shape[-2:])\n        target_mask = self.target_mask_predictor(dist_map).view(num_images, num_sequences, *dist_map.shape[-2:])\n        spatial_weight = self.spatial_weight_predictor(dist_map).view(num_images, num_sequences, *dist_map.shape[-2:])\n\n        # Get total sample weights\n        if sample_weight is None:\n            sample_weight = math.sqrt(1.0 / num_images) * spatial_weight\n        elif isinstance(sample_weight, torch.Tensor):\n            sample_weight = sample_weight.sqrt().view(num_images, num_sequences, 1, 1) * spatial_weight\n\n        weight_iterates = [weights]\n        losses = []\n\n        for 
i in range(num_iter):\n            if i > 0 and i % self.detach_length == 0:\n                weights = weights.detach()\n\n            # Compute residuals\n            scores = filter_layer.apply_filter(feat, weights)\n            scores_act = self.score_activation(scores, target_mask)\n            score_mask = self.score_activation_deriv(scores, target_mask)\n            residuals = sample_weight * (scores_act - label_map)\n\n            if compute_losses:\n                losses.append(((residuals**2).sum() + reg_weight * (weights**2).sum())/num_sequences)\n\n            # Compute gradient\n            residuals_mapped = score_mask * (sample_weight * residuals)\n            weights_grad = filter_layer.apply_feat_transpose(feat, residuals_mapped, filter_sz, training=self.training) + \\\n                          reg_weight * weights\n\n            # Map the gradient with the Jacobian\n            scores_grad = filter_layer.apply_filter(feat, weights_grad)\n            scores_grad = sample_weight * (score_mask * scores_grad)\n\n            # Compute optimal step length\n            alpha_num = (weights_grad * weights_grad).view(num_sequences, -1).sum(dim=1)\n            alpha_den = ((scores_grad * scores_grad).view(num_images, num_sequences, -1).sum(dim=(0,2)) + reg_weight * alpha_num).clamp(1e-8)\n            alpha = alpha_num / alpha_den\n\n            # Update filter\n            weights = weights - (step_length_factor * alpha.view(-1, 1, 1, 1)) * weights_grad\n\n            # Add the weight iterate\n            weight_iterates.append(weights)\n\n        if compute_losses:\n            scores = filter_layer.apply_filter(feat, weights)\n            scores = self.score_activation(scores, target_mask)\n            losses.append((((sample_weight * (scores - label_map))**2).sum() + reg_weight * (weights**2).sum())/num_sequences)\n\n        return weights, weight_iterates, losses\n"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/__init__.py",
    "content": ""
  },
  {
    "path": "tracker/sot/lib/models/online/layers/activation.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass MLU(nn.Module):\n    r\"\"\"MLU activation\n    \"\"\"\n    def __init__(self, min_val, inplace=False):\n        super().__init__()\n        self.min_val = min_val\n        self.inplace = inplace\n\n    def forward(self, input):\n        return F.elu(F.leaky_relu(input, 1/self.min_val, inplace=self.inplace), self.min_val, inplace=self.inplace)\n\n\nclass LeakyReluPar(nn.Module):\n    r\"\"\"LeakyRelu parametric activation\n    \"\"\"\n\n    def forward(self, x, a):\n        return (1.0 - a)/2.0 * torch.abs(x) + (1.0 + a)/2.0 * x\n\nclass LeakyReluParDeriv(nn.Module):\n    r\"\"\"Derivative of the LeakyRelu parametric activation, wrt x.\n    \"\"\"\n\n    def forward(self, x, a):\n        return (1.0 - a)/2.0 * torch.sign(x.detach()) + (1.0 + a)/2.0\n\n\nclass BentIdentPar(nn.Module):\n    r\"\"\"BentIdent parametric activation\n    \"\"\"\n    def __init__(self, b=1.0):\n        super().__init__()\n        self.b = b\n\n    def forward(self, x, a):\n        return (1.0 - a)/2.0 * (torch.sqrt(x*x + 4.0*self.b*self.b) - 2.0*self.b) + (1.0 + a)/2.0 * x\n\n\nclass BentIdentParDeriv(nn.Module):\n    r\"\"\"BentIdent parametric activation deriv\n    \"\"\"\n    def __init__(self, b=1.0):\n        super().__init__()\n        self.b = b\n\n    def forward(self, x, a):\n        return (1.0 - a)/2.0 * (x / torch.sqrt(x*x + 4.0*self.b*self.b)) + (1.0 + a)/2.0\n\n"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/blocks.py",
    "content": "from torch import nn\n\n\ndef conv_block(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1, bias=True,\n               batch_norm=True, relu=True):\n    layers = [nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,\n                        padding=padding, dilation=dilation, bias=bias)]\n    if batch_norm:\n        layers.append(nn.BatchNorm2d(out_planes))\n    if relu:\n        layers.append(nn.ReLU(inplace=True))\n    return nn.Sequential(*layers)\n\n\nclass LinearBlock(nn.Module):\n    def __init__(self, in_planes, out_planes, input_sz, bias=True, batch_norm=True, relu=True):\n        super().__init__()\n        self.linear = nn.Linear(in_planes*input_sz*input_sz, out_planes, bias=bias)\n        self.bn = nn.BatchNorm2d(out_planes) if batch_norm else None\n        self.relu = nn.ReLU(inplace=True) if relu else None\n\n    def forward(self, x):\n        x = self.linear(x.view(x.shape[0], -1))\n        if self.bn is not None:\n            x = self.bn(x.view(x.shape[0], x.shape[1], 1, 1))\n        if self.relu is not None:\n            x = self.relu(x)\n        return x.view(x.shape[0], -1)"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/distance.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass DistanceMap(nn.Module):\n    \"\"\"Generate a distance map from a origin center location.\n    args:\n        num_bins:  Number of bins in the map.\n        bin_displacement:  Displacement of the bins.\n    \"\"\"\n    def __init__(self, num_bins, bin_displacement=1.0):\n        super().__init__()\n        self.num_bins = num_bins\n        self.bin_displacement = bin_displacement\n\n    def forward(self, center, output_sz):\n        \"\"\"Create the distance map.\n        args:\n            center: Torch tensor with (y,x) center position. Dims (batch, 2)\n            output_sz: Size of output distance map. 2-dimensional tuple.\"\"\"\n\n        center = center.view(-1,2)\n\n        bin_centers = torch.arange(self.num_bins, dtype=torch.float32, device=center.device).view(1, -1, 1, 1)\n\n        k0 = torch.arange(output_sz[0], dtype=torch.float32, device=center.device).view(1,1,-1,1)\n        k1 = torch.arange(output_sz[1], dtype=torch.float32, device=center.device).view(1,1,1,-1)\n\n        d0 = k0 - center[:,0].view(-1,1,1,1)\n        d1 = k1 - center[:,1].view(-1,1,1,1)\n\n        dist = torch.sqrt(d0*d0 + d1*d1)\n        bin_diff = dist / self.bin_displacement - bin_centers\n\n        bin_val = torch.cat((F.relu(1.0 - torch.abs(bin_diff[:,:-1,:,:]), inplace=True),\n                             (1.0 + bin_diff[:,-1:,:,:]).clamp(0, 1)), dim=1)\n\n        return bin_val\n\n\n"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/filter.py",
    "content": "import torch\nimport torch.nn.functional as F\n\n\ndef apply_filter(feat, filter):\n    \"\"\"Applies the filter on the input features (feat).\n    args:\n        feat: These are the input features. Must have dimensions (images_in_sequence, sequences, feat_dim, H, W)\n        filter: The filter to apply. Must have dimensions (sequences, feat_dim, fH, fW) or (sequences, filters, feat_dim, fH, fW)\n    output:\n        scores: Output of filtering. Dimensions (images_in_sequence, sequences, yH, yW) or (images_in_sequence, sequences, filters, yH, yW)\n    \"\"\"\n\n    multiple_filters = (filter.dim() == 5)\n\n    padding = (filter.shape[-2] // 2, filter.shape[-1] // 2)\n\n    num_images = feat.shape[0]\n    num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n\n    if multiple_filters:\n        scores = F.conv2d(feat.view(num_images, -1, feat.shape[-2], feat.shape[-1]), filter.view(-1, *filter.shape[-3:]),\n                          padding=padding, groups=num_sequences)\n\n        return scores.view(num_images, num_sequences, -1, scores.shape[-2], scores.shape[-1])\n\n    scores = F.conv2d(feat.view(num_images, -1, feat.shape[-2], feat.shape[-1]), filter,\n                      padding=padding, groups=num_sequences)\n\n    return scores.view(num_images, num_sequences, scores.shape[-2], scores.shape[-1])\n\n\ndef apply_feat_transpose(feat, input, filter_ksz, training=True):\n    \"\"\"Applies the transposed operation off apply_filter w.r.t. filter itself. Can be used to compute the filter gradient.\n    args:\n        feat: These are the input features. Must have dimensions (images_in_sequence, sequences, feat_dim, H, W)\n        input: Input activation (e.g. residuals). Must have dimensions (images_in_sequence, sequences, yH, yW) or\n                (images_in_sequence, sequences, filters, yH, yW)\n        training: Choose the faster implementation whether training or not.\n    output:\n        Output of transposed operation. 
Dimensions (sequences, feat_dim, fH, fW)\n    \"\"\"\n\n    if training or input.dim() == 5:\n        return _apply_feat_transpose_v3(feat, input, filter_ksz)\n    return _apply_feat_transpose_v2(feat, input, filter_ksz)\n\n\ndef _apply_feat_transpose_v1(feat, input, filter_ksz):\n    \"\"\"This one is slow as hell!!!!\"\"\"\n\n    num_images = feat.shape[0]\n    num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n    feat_sz = (feat.shape[-2], feat.shape[-1])\n    if isinstance(filter_ksz, int):\n        filter_ksz = (filter_ksz, filter_ksz)\n\n    # trans_pad = sz + padding - filter_ksz\n    trans_pad = [sz + ksz//2 - ksz for sz, ksz in zip(feat_sz, filter_ksz)]\n\n    filter_grad = F.conv_transpose2d(input.flip((2, 3)).view(1, -1, input.shape[-2], input.shape[-1]),\n                                     feat.view(-1, feat.shape[-3], feat.shape[-2], feat.shape[-1]),\n                                     padding=trans_pad, groups=num_images * num_sequences)\n\n    return filter_grad.view(num_images, num_sequences, -1, filter_grad.shape[-2], filter_grad.shape[-1]).sum(dim=0)\n\n\ndef _apply_feat_transpose_v2(feat, input, filter_ksz):\n    \"\"\"Fast forward and slow backward\"\"\"\n\n    multiple_filters = (input.dim() == 5)\n\n    num_images = feat.shape[0]\n    num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n    num_filters = input.shape[2] if multiple_filters else 1\n    if isinstance(filter_ksz, int):\n        filter_ksz = (filter_ksz, filter_ksz)\n\n    trans_pad = [(ksz-1)//2 for ksz in filter_ksz]\n\n    if multiple_filters:\n        filter_grad = F.conv2d(input.view(-1, num_filters, input.shape[-2], input.shape[-1]).permute(1,0,2,3),\n                               feat.view(-1, 1, feat.shape[-2], feat.shape[-1]),\n                               padding=trans_pad, groups=num_images * num_sequences)\n\n        if num_images == 1:\n            return filter_grad.view(num_filters, num_sequences, -1, filter_grad.shape[-2], 
filter_grad.shape[-1]).flip((3,4)).permute(1,0,2,3,4)\n        return filter_grad.view(num_filters, num_images, num_sequences, -1, filter_grad.shape[-2], filter_grad.shape[-1]).sum(dim=1).flip((3,4)).permute(1,0,2,3,4)\n\n    filter_grad = F.conv2d(input.view(1, -1, input.shape[-2], input.shape[-1]),\n                                     feat.view(-1, 1, feat.shape[-2], feat.shape[-1]),\n                                     padding=trans_pad, groups=num_images * num_sequences)\n\n    return filter_grad.view(num_images, num_sequences, -1, filter_grad.shape[-2], filter_grad.shape[-1]).sum(dim=0).flip((2,3))\n\n\ndef _apply_feat_transpose_v3(feat, input, filter_ksz):\n    \"\"\"Slow forward fast backward\"\"\"\n\n    multiple_filters = (input.dim() == 5)\n\n    num_images = feat.shape[0]\n    num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n    num_filters = input.shape[2] if multiple_filters else 1\n    if isinstance(filter_ksz, int):\n        filter_ksz = (filter_ksz, filter_ksz)\n\n    trans_pad = [ksz//2 for  ksz in filter_ksz]\n\n    filter_grad = F.conv2d(feat.view(-1, feat.shape[-3], feat.shape[-2], feat.shape[-1]).permute(1,0,2,3),\n                           input.view(-1, 1, input.shape[-2], input.shape[-1]),\n                           padding=trans_pad, groups=num_images * num_sequences)\n\n    if multiple_filters:\n        if num_images == 1:\n            return filter_grad.view(-1, num_sequences, num_filters, filter_grad.shape[-2], filter_grad.shape[-1]).permute(1,2,0,3,4)\n        return filter_grad.view(-1, num_images, num_sequences, num_filters, filter_grad.shape[-2], filter_grad.shape[-1]).sum(dim=1).permute(1,2,0,3,4)\n\n    if num_images == 1:\n        return filter_grad.permute(1,0,2,3)\n    return filter_grad.view(-1, num_images, num_sequences, filter_grad.shape[-2], filter_grad.shape[-1]).sum(dim=1).permute(1,0,2,3)\n\n\ndef _apply_feat_transpose_v4(feat, input, filter_ksz):\n    \"\"\"Slow forward fast backward\"\"\"\n\n    num_images = 
feat.shape[0]\n    num_sequences = feat.shape[1] if feat.dim() == 5 else 1\n    if isinstance(filter_ksz, int):\n        filter_ksz = (filter_ksz, filter_ksz)\n\n    trans_pad = [ksz//2 for  ksz in filter_ksz]\n\n    filter_grad = F.conv2d(feat.permute(2,1,0,3,4).reshape(feat.shape[-3], -1, feat.shape[-2], feat.shape[-1]),\n                           input.permute(1,0,2,3),\n                           padding=trans_pad, groups=num_sequences)\n\n    return filter_grad.permute(1,0,2,3)\n\n\n\ndef filter_gradient(feat, filter, label=None, training=True):\n    \"\"\"Computes gradient of the filter when applied on the input features and ground truth label.\n    args:\n        feat: These are the input features. Must have dimensions (images_in_sequence, sequences, feat_dim, H, W)\n        filter: The filter to apply. Must have dimensions (sequences, feat_dim, fH, fW)\n        label: Ground truth label in the L2 loss. Dimensions (images_in_sequence, sequences, yH, yW)\n    output:\n        filter_gradient: Dimensions same as input filter (sequences, feat_dim, fH, fW)\n    \"\"\"\n\n    residuals = apply_filter(feat, filter)\n    if label is not None:\n        residuals = residuals - label\n    filter_ksz = (filter.shape[-2], filter.shape[-1])\n    return apply_feat_transpose(feat, residuals, filter_ksz, training=training)\n"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/normalization.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass InstanceL2Norm(nn.Module):\n    \"\"\"Instance L2 normalization.\n    \"\"\"\n    def __init__(self, size_average=True, eps=1e-5, scale=1.0):\n        super().__init__()\n        self.size_average = size_average\n        self.eps = eps\n        self.scale = scale\n\n    def forward(self, input):\n        if self.size_average:\n            return input * (self.scale * ((input.shape[1] * input.shape[2] * input.shape[3]) / (\n                        torch.sum((input * input).view(input.shape[0], 1, 1, -1), dim=3, keepdim=True) + self.eps)).sqrt())\n        else:\n            return input * (self.scale / (torch.sum((input * input).view(input.shape[0], 1, 1, -1), dim=3, keepdim=True) + self.eps).sqrt())\n\n"
  },
  {
    "path": "tracker/sot/lib/models/online/layers/transform.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom collections import OrderedDict\n\n\ndef interpolate(x, sz):\n    \"\"\"Interpolate 4D tensor x to size sz.\"\"\"\n    sz = sz.tolist() if torch.is_tensor(sz) else sz\n    return F.interpolate(x, sz, mode='bilinear', align_corners=False) if x.shape[-2:] != sz else x\n\n\nclass InterpCat(nn.Module):\n    \"\"\"Interpolate and concatenate features of different resolutions.\"\"\"\n\n    def forward(self, input):\n        if isinstance(input, (dict, OrderedDict)):\n            input = list(input.values())\n\n        output_shape = None\n        for x in input:\n            if output_shape is None or output_shape[0] > x.shape[-2]:\n                output_shape = x.shape[-2:]\n\n        return torch.cat([interpolate(x, output_shape) for x in input], dim=-3)\n"
  },
  {
    "path": "tracker/sot/lib/models/siamfc.py",
    "content": "import pdb\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nfrom .connect import Corr_Up\r\n\r\nclass SiamFC(nn.Module):\r\n    def __init__(self, config,  **kwargs):\r\n        super(SiamFC, self).__init__()\r\n        self.features = None\r\n        self.connect_model = Corr_Up()\r\n        self.zf = None  # for online tracking\r\n        if kwargs['base'] is None:\r\n            self.features = ResNet22W()\r\n        else:\r\n            self.features = kwargs['base'] \r\n        self.config = config\r\n        self.model_alphaf = 0\r\n        self.zf = None \r\n        self.features.eval()\r\n\r\n    def feature_extractor(self, x):\r\n        return self.features(x)\r\n\r\n    def forward(self, x):\r\n        xf = self.feature_extractor(x) * self.config.cos_window\r\n        zf = self.zf \r\n        response = self.connect_model(zf, xf)\r\n        return response\r\n    \r\n    def update(self, z, lr=0):\r\n        zf = self.feature_extractor(z).detach() \r\n        _, _, ts, ts = zf.shape\r\n\r\n        bg = ts//2-int(ts//(2*(self.config.padding+1)))\r\n        ed = ts//2+int(ts//(2*(self.config.padding+1)))\r\n        zf = zf[:,:,bg:ed, bg:ed]\r\n\r\n        if self.zf is None:\r\n            self.zf =  zf\r\n        else:\r\n            self.zf = (1 - lr) * self.zf + lr * zf\r\n\r\n\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/online/__init__.py",
    "content": "from .tensorlist import TensorList\nfrom .tensordict import TensorDict\nfrom .loading import load_network\n"
  },
  {
    "path": "tracker/sot/lib/online/augmentation.py",
    "content": "import numpy as np\nimport math\nimport torch\nimport torch.nn.functional as F\nimport cv2 as cv\nfrom .preprocessing import numpy_to_torch, torch_to_numpy\n\n\nclass Transform:\n    \"\"\"Base data augmentation transform class.\"\"\"\n\n    def __init__(self, output_sz = None, shift = None):\n        self.output_sz = output_sz\n        self.shift = (0,0) if shift is None else shift\n\n    def __call__(self, image):\n        raise NotImplementedError\n\n    def crop_to_output(self, image):\n        if isinstance(image, torch.Tensor):\n            imsz = image.shape[2:]\n            if self.output_sz is None:\n                pad_h = 0\n                pad_w = 0\n            else:\n                pad_h = (self.output_sz[0] - imsz[0]) / 2\n                pad_w = (self.output_sz[1] - imsz[1]) / 2\n\n            pad_left = math.floor(pad_w) + self.shift[1]\n            pad_right = math.ceil(pad_w) - self.shift[1]\n            pad_top = math.floor(pad_h) + self.shift[0]\n            pad_bottom = math.ceil(pad_h) - self.shift[0]\n\n            return F.pad(image, (pad_left, pad_right, pad_top, pad_bottom), 'replicate')\n        else:\n            raise NotImplementedError\n\nclass Identity(Transform):\n    \"\"\"Identity transformation.\"\"\"\n    def __call__(self, image):\n        return self.crop_to_output(image)\n\nclass FlipHorizontal(Transform):\n    \"\"\"Flip along horizontal axis.\"\"\"\n    def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            return self.crop_to_output(image.flip((3,)))\n        else:\n            return np.fliplr(image)\n\nclass FlipVertical(Transform):\n    \"\"\"Flip along vertical axis.\"\"\"\n    def __call__(self, image: torch.Tensor):\n        if isinstance(image, torch.Tensor):\n            return self.crop_to_output(image.flip((2,)))\n        else:\n            return np.flipud(image)\n\nclass Translation(Transform):\n    \"\"\"Translate.\"\"\"\n    def __init__(self, translation, 
output_sz = None, shift = None):\n        super().__init__(output_sz, shift)\n        self.shift = (self.shift[0] + translation[0], self.shift[1] + translation[1])\n\n    def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            return self.crop_to_output(image)\n        else:\n            raise NotImplementedError\n\nclass Scale(Transform):\n    \"\"\"Scale.\"\"\"\n    def __init__(self, scale_factor, output_sz = None, shift = None):\n        super().__init__(output_sz, shift)\n        self.scale_factor = scale_factor\n\n    def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            # Calculate new size. Ensure that it is even so that crop/pad becomes easier\n            h_orig, w_orig = image.shape[2:]\n\n            if h_orig != w_orig:\n                raise NotImplementedError\n\n            h_new = round(h_orig /self.scale_factor)\n            h_new += (h_new - h_orig) % 2\n            w_new = round(w_orig /self.scale_factor)\n            w_new += (w_new - w_orig) % 2\n\n            image_resized = F.interpolate(image, [h_new, w_new], mode='bilinear')\n\n            return self.crop_to_output(image_resized)\n        else:\n            raise NotImplementedError\n\n\nclass Affine(Transform):\n    \"\"\"Affine transformation.\"\"\"\n    def __init__(self, transform_matrix, output_sz = None, shift = None):\n        super().__init__(output_sz, shift)\n        self.transform_matrix = transform_matrix\n\n    def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            return self.crop_to_output(numpy_to_torch(self(torch_to_numpy(image))))\n        else:\n            return cv.warpAffine(image, self.transform_matrix, image.shape[1::-1], borderMode=cv.BORDER_REPLICATE)\n\n\nclass Rotate(Transform):\n    \"\"\"Rotate with given angle.\"\"\"\n    def __init__(self, angle, output_sz = None, shift = None):\n        super().__init__(output_sz, shift)\n        self.angle = math.pi * angle/180\n\n   
 def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            return self.crop_to_output(numpy_to_torch(self(torch_to_numpy(image))))\n        else:\n            c = (np.expand_dims(np.array(image.shape[:2]),1)-1)/2\n            R = np.array([[math.cos(self.angle), math.sin(self.angle)],\n                          [-math.sin(self.angle), math.cos(self.angle)]])\n            H =np.concatenate([R, c - R @ c], 1)\n            return cv.warpAffine(image, H, image.shape[1::-1], borderMode=cv.BORDER_REPLICATE)\n\n\nclass Blur(Transform):\n    \"\"\"Blur with given sigma (can be axis dependent).\"\"\"\n    def __init__(self, sigma, output_sz = None, shift = None):\n        super().__init__(output_sz, shift)\n        if isinstance(sigma, (float, int)):\n            sigma = (sigma, sigma)\n        self.sigma = sigma\n        self.filter_size = [math.ceil(2*s) for s in self.sigma]\n        x_coord = [torch.arange(-sz, sz+1, dtype=torch.float32) for sz in self.filter_size]\n        self.filter = [torch.exp(-(x**2)/(2*s**2)) for x, s in zip(x_coord, self.sigma)]\n        self.filter[0] = self.filter[0].view(1,1,-1,1) / self.filter[0].sum()\n        self.filter[1] = self.filter[1].view(1,1,1,-1) / self.filter[1].sum()\n\n    def __call__(self, image):\n        if isinstance(image, torch.Tensor):\n            sz = image.shape[2:]\n            im1 = F.conv2d(image.view(-1, 1, sz[0], sz[1]), self.filter[0], padding=(self.filter_size[0],0))\n            return self.crop_to_output(F.conv2d(im1, self.filter[1], padding=(0, self.filter_size[1])).view(1,-1,sz[0],sz[1]))\n        else:\n            raise NotImplementedError\n"
  },
  {
    "path": "tracker/sot/lib/online/base_actor.py",
    "content": "from online import TensorDict\nimport torch.nn as nn\n\n\nclass BaseActor:\n    \"\"\" Base class for actor. The actor class handles the passing of the data through the network\n    and calculation the loss\"\"\"\n    def __init__(self, net, objective):\n        \"\"\"\n        args:\n            net - The network to train\n            objective - The loss function\n        \"\"\"\n        self.net = net\n        self.objective = objective\n\n    def __call__(self, data: TensorDict):\n        \"\"\" Called in each training iteration. Should pass in input data through the network, calculate the loss, and\n        return the training stats for the input data\n        args:\n            data - A TensorDict containing all the necessary data blocks.\n\n        returns:\n            loss    - loss for the input data\n            stats   - a dict containing detailed losses\n        \"\"\"\n        raise NotImplementedError\n\n    def to(self, device):\n        \"\"\" Move the network to device\n        args:\n            device - device to use. 
'cpu' or 'cuda'\n        \"\"\"\n        self.net.to(device)\n\n    def train(self, mode=True):\n        \"\"\" Set whether the network is in train mode.\n        args:\n            mode (True) - Bool specifying whether in training mode.\n        \"\"\"\n        self.net.train(mode)\n\n\n        # fix backbone again\n        # fix the first three blocks\n        print('======> fix backbone again <=======')\n        for param in self.net.feature_extractor.parameters():\n            param.requires_grad = False\n        for m in self.net.feature_extractor.modules():\n            if isinstance(m, nn.BatchNorm2d):\n                m.eval()\n\n        for layer in ['layeronline']:\n            for param in getattr(self.net.feature_extractor.features.features, layer).parameters():\n                param.requires_grad = True\n            for m in getattr(self.net.feature_extractor.features.features, layer).modules():\n                if isinstance(m, nn.BatchNorm2d):\n                    m.train()\n\n        print('double check trainable')\n        self.check_trainable(self.net)\n\n\n\n    def eval(self):\n        \"\"\" Set network to eval mode\"\"\"\n        self.train(False)\n\n    def check_trainable(self, model):\n        \"\"\"\n        print trainable params info\n        \"\"\"\n        trainable_params = [p for p in model.parameters() if p.requires_grad]\n        print('trainable params:')\n        for name, param in model.named_parameters():\n            if param.requires_grad:\n                print(name)\n\n        assert len(trainable_params) > 0, 'no trainable parameters'\n"
  },
  {
    "path": "tracker/sot/lib/online/base_trainer.py",
    "content": "import os\nimport glob\nimport torch\nimport traceback\n# from ltr.admin import loading, multigpu\n\n\nclass BaseTrainer:\n    \"\"\"Base trainer class. Contains functions for training and saving/loading chackpoints.\n    Trainer classes should inherit from this one and overload the train_epoch function.\"\"\"\n\n    def __init__(self, actor, loaders, optimizer, settings, lr_scheduler=None):\n        \"\"\"\n        args:\n            actor - The actor for training the network\n            loaders - list of dataset loaders, e.g. [train_loader, val_loader]. In each epoch, the trainer runs one\n                        epoch for each loader.\n            optimizer - The optimizer used for training, e.g. Adam\n            settings - Training settings\n            lr_scheduler - Learning rate scheduler\n        \"\"\"\n        self.actor = actor\n        self.optimizer = optimizer\n        self.lr_scheduler = lr_scheduler\n        self.loaders = loaders\n\n        # self.update_settings(settings)\n\n        self.epoch = 0\n        self.stats = {}\n\n        self.device = getattr(settings, 'device', None)\n        if self.device is None:\n            self.device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\n        self.actor.to(self.device)\n\n        self.settings = settings\n\n    # def update_settings(self, settings=None):\n    #     \"\"\"Updates the trainer settings. 
Must be called to update internal settings.\"\"\"\n    #     if settings is not None:\n    #         self.settings = settings\n    #\n    #     if self.settings.env.workspace_dir is not None:\n    #         self.settings.env.workspace_dir = os.path.expanduser(self.settings.env.workspace_dir)\n    #         self._checkpoint_dir = os.path.join(self.settings.env.workspace_dir, 'checkpoints')\n    #         if not os.path.exists(self._checkpoint_dir):\n    #             os.makedirs(self._checkpoint_dir)\n    #     else:\n    #         self._checkpoint_dir = None\n\n\n    def train(self, max_epochs, load_latest=False, fail_safe=True):\n        \"\"\"Do training for the given number of epochs.\n        args:\n            max_epochs - Max number of training epochs.\n            load_latest - Bool indicating whether to resume from latest epoch.\n            fail_safe - Bool indicating whether training should automatically restart in case of a crash.\n        \"\"\"\n\n        epoch = -1\n        num_tries = 10\n        # for i in range(num_tries):\n            # try:\n        # if load_latest:\n        #     self.load_checkpoint()\n\n        for epoch in range(self.epoch+1, max_epochs+1):\n            self.epoch = epoch\n\n            self.train_epoch()\n\n            if self.lr_scheduler is not None:\n                self.lr_scheduler.step()\n\n            print('save epoch {:d}'.format(epoch))\n            self.save_model(self.actor.net, epoch, self.optimizer, self.settings.ONLINE.TRAIN.MODEL, self.settings, isbest=False)\n\n\n            # if self._checkpoint_dir:\n            #     self.save_checkpoint()\n            # except:\n            #     print('Training crashed at epoch {}'.format(epoch))\n            #     if fail_safe:\n            #         self.epoch -= 1\n            #         load_latest = True\n            #         print('Traceback for the error!')\n            #         print(traceback.format_exc())\n            #         print('Restarting training 
from last epoch ...')\n    #     else:\n    #         raise\n\n        print('Finished training!')\n\n\n    def train_epoch(self):\n        raise NotImplementedError\n\n    def save_model(self, model, epoch, optimizer, model_name, cfg, isbest=False):\n        \"\"\"\n        Save the model state as a checkpoint.\n        \"\"\"\n        if not os.path.exists(cfg.CHECKPOINT_DIR):\n            os.makedirs(cfg.CHECKPOINT_DIR)\n\n        if epoch > 0:\n            self.save_checkpoint({\n                'epoch': epoch + 1,\n                'arch': model_name,\n                'state_dict': model.state_dict(),\n                'optimizer': optimizer.state_dict()\n            }, isbest, cfg.CHECKPOINT_DIR, 'checkpoint_e%d.pth' % (epoch + 1))\n        else:\n            print('epoch {} <= 0, checkpoint not saved'.format(epoch))\n\n    def save_checkpoint(self, states, is_best, output_dir, filename='checkpoint.pth.tar'):\n        \"\"\"\n        Save a checkpoint; additionally save the state_dict separately if it is the best so far.\n        \"\"\"\n        torch.save(states, os.path.join(output_dir, filename))\n        if is_best and 'state_dict' in states:\n            torch.save(states['state_dict'],\n                       os.path.join(output_dir, 'model_best.pth'))\n\n\n    # def save_checkpoint(self):\n    #     \"\"\"Saves a checkpoint of the network and other variables.\"\"\"\n    #\n    #     net = self.actor.net.module if multigpu.is_multi_gpu(self.actor.net) else self.actor.net\n    #\n    #     actor_type = type(self.actor).__name__\n    #     net_type = type(net).__name__\n    #     state = {\n    #         'epoch': self.epoch,\n    #         'actor_type': actor_type,\n    #         'net_type': net_type,\n    #         'net': net.state_dict(),\n    #         'net_info': getattr(net, 'info', None),\n    #         'constructor': getattr(net, 'constructor', None),\n    #         'optimizer': self.optimizer.state_dict(),\n    #         'stats': self.stats,\n    #         'settings': self.settings\n    #     }\n    #\n    #\n    #     directory = 
'{}/{}'.format(self._checkpoint_dir, self.settings.project_path)\n    #     if not os.path.exists(directory):\n    #         os.makedirs(directory)\n    #\n    #     file_path = '{}/{}_ep{:04d}.pth.tar'.format(directory, net_type, self.epoch)\n    #     torch.save(state, file_path)\n\n\n    # def load_checkpoint(self, checkpoint = None, fields = None, ignore_fields = None, load_constructor = False):\n    #     \"\"\"Loads a network checkpoint file.\n    #\n    #     Can be called in three different ways:\n    #         load_checkpoint():\n    #             Loads the latest epoch from the workspace. Use this to continue training.\n    #         load_checkpoint(epoch_num):\n    #             Loads the network at the given epoch number (int).\n    #         load_checkpoint(path_to_checkpoint):\n    #             Loads the file from the given absolute path (str).\n    #     \"\"\"\n    #\n    #     net = self.actor.net.module if multigpu.is_multi_gpu(self.actor.net) else self.actor.net\n    #\n    #     actor_type = type(self.actor).__name__\n    #     net_type = type(net).__name__\n    #\n    #     if checkpoint is None:\n    #         # Load most recent checkpoint\n    #         checkpoint_list = sorted(glob.glob('{}/{}/{}_ep*.pth.tar'.format(self._checkpoint_dir,\n    #                                                                          self.settings.project_path, net_type)))\n    #         if checkpoint_list:\n    #             checkpoint_path = checkpoint_list[-1]\n    #         else:\n    #             print('No matching checkpoint file found')\n    #             return\n    #     elif isinstance(checkpoint, int):\n    #         # Checkpoint is the epoch number\n    #         checkpoint_path = '{}/{}/{}_ep{:04d}.pth.tar'.format(self._checkpoint_dir, self.settings.project_path,\n    #                                                              net_type, checkpoint)\n    #     elif isinstance(checkpoint, str):\n    #         # checkpoint is the path\n    #    
     if os.path.isdir(checkpoint):\n    #             checkpoint_list = sorted(glob.glob('{}/*_ep*.pth.tar'.format(checkpoint)))\n    #             if checkpoint_list:\n    #                 checkpoint_path = checkpoint_list[-1]\n    #             else:\n    #                 raise Exception('No checkpoint found')\n    #         else:\n    #             checkpoint_path = os.path.expanduser(checkpoint)\n    #     else:\n    #         raise TypeError\n    #\n    #     # Load network\n    #     checkpoint_dict = loading.torch_load_legacy(checkpoint_path)\n    #\n    #     assert net_type == checkpoint_dict['net_type'], 'Network is not of correct type.'\n    #\n    #     if fields is None:\n    #         fields = checkpoint_dict.keys()\n    #     if ignore_fields is None:\n    #         ignore_fields = ['settings']\n    #\n    #         # Never load the scheduler. It exists in older checkpoints.\n    #     ignore_fields.extend(['lr_scheduler', 'constructor', 'net_type', 'actor_type', 'net_info'])\n    #\n    #     # Load all fields\n    #     for key in fields:\n    #         if key in ignore_fields:\n    #             continue\n    #         if key == 'net':\n    #             net.load_state_dict(checkpoint_dict[key])\n    #         elif key == 'optimizer':\n    #             self.optimizer.load_state_dict(checkpoint_dict[key])\n    #         else:\n    #             setattr(self, key, checkpoint_dict[key])\n    #\n    #     # Set the net info\n    #     if load_constructor and 'constructor' in checkpoint_dict and checkpoint_dict['constructor'] is not None:\n    #         net.constructor = checkpoint_dict['constructor']\n    #     if 'net_info' in checkpoint_dict and checkpoint_dict['net_info'] is not None:\n    #         net.info = checkpoint_dict['net_info']\n    #\n    #     # Update the epoch in lr scheduler\n    #     if 'epoch' in fields:\n    #         self.lr_scheduler.last_epoch = self.epoch\n    #\n    #     return True\n"
  },
  {
    "path": "tracker/sot/lib/online/complex.py",
    "content": "import torch\nfrom online.tensorlist import tensor_operation\n\n\ndef is_complex(a: torch.Tensor) -> bool:\n    return a.dim() >= 4 and a.shape[-1] == 2\n\n\ndef is_real(a: torch.Tensor) -> bool:\n    return not is_complex(a)\n\n\n@tensor_operation\ndef mult(a: torch.Tensor, b: torch.Tensor):\n    \"\"\"Pointwise complex multiplication of complex tensors.\"\"\"\n\n    if is_real(a):\n        if a.dim() >= b.dim():\n            raise ValueError('Incorrect dimensions.')\n        # a is real\n        return mult_real_cplx(a, b)\n    if is_real(b):\n        if b.dim() >= a.dim():\n            raise ValueError('Incorrect dimensions.')\n        # b is real\n        return mult_real_cplx(b, a)\n\n    # Both complex\n    c = mult_real_cplx(a[..., 0], b)\n    c[..., 0] -= a[..., 1] * b[..., 1]\n    c[..., 1] += a[..., 1] * b[..., 0]\n    return c\n\n\n@tensor_operation\ndef mult_conj(a: torch.Tensor, b: torch.Tensor):\n    \"\"\"Pointwise complex multiplication of complex tensors, with conjugate on b: a*conj(b).\"\"\"\n\n    if is_real(a):\n        if a.dim() >= b.dim():\n            raise ValueError('Incorrect dimensions.')\n        # a is real\n        return mult_real_cplx(a, conj(b))\n    if is_real(b):\n        if b.dim() >= a.dim():\n            raise ValueError('Incorrect dimensions.')\n        # b is real\n        return mult_real_cplx(b, a)\n\n    # Both complex\n    c = mult_real_cplx(b[...,0], a)\n    c[..., 0] += a[..., 1] * b[..., 1]\n    c[..., 1] -= a[..., 0] * b[..., 1]\n    return c\n\n\n@tensor_operation\ndef mult_real_cplx(a: torch.Tensor, b: torch.Tensor):\n    \"\"\"Pointwise complex multiplication of real tensor a with complex tensor b.\"\"\"\n\n    if is_real(b):\n        raise ValueError('Last dimension must have length 2.')\n\n    return a.unsqueeze(-1) * b\n\n\n@tensor_operation\ndef div(a: torch.Tensor, b: torch.Tensor):\n    \"\"\"Pointwise complex division of complex tensors.\"\"\"\n\n    if is_real(b):\n        if b.dim() >= 
a.dim():\n            raise ValueError('Incorrect dimensions.')\n        # b is real\n        return div_cplx_real(a, b)\n\n    return div_cplx_real(mult_conj(a, b), abs_sqr(b))\n\n\n@tensor_operation\ndef div_cplx_real(a: torch.Tensor, b: torch.Tensor):\n    \"\"\"Pointwise complex division of complex tensor a with real tensor b.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    return a / b.unsqueeze(-1)\n\n\n@tensor_operation\ndef abs_sqr(a: torch.Tensor):\n    \"\"\"Squared absolute value.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    return torch.sum(a*a, -1)\n\n\n@tensor_operation\ndef abs(a: torch.Tensor):\n    \"\"\"Absolute value.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    return torch.sqrt(abs_sqr(a))\n\n\n@tensor_operation\ndef conj(a: torch.Tensor):\n    \"\"\"Complex conjugate.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    # return a * torch.Tensor([1, -1], device=a.device)\n    return complex(a[...,0], -a[...,1])\n\n\n@tensor_operation\ndef real(a: torch.Tensor):\n    \"\"\"Real part.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    return a[..., 0]\n\n\n@tensor_operation\ndef imag(a: torch.Tensor):\n    \"\"\"Imaginary part.\"\"\"\n\n    if is_real(a):\n        raise ValueError('Last dimension must have length 2.')\n\n    return a[..., 1]\n\n\n@tensor_operation\ndef complex(a: torch.Tensor, b: torch.Tensor = None):\n    \"\"\"Create complex tensor from real and imaginary part.\"\"\"\n\n    if b is None:\n        b = a.new_zeros(a.shape)\n    elif a is None:\n        a = b.new_zeros(b.shape)\n\n    return torch.cat((a.unsqueeze(-1), b.unsqueeze(-1)), -1)\n\n\n@tensor_operation\ndef mtimes(a: torch.Tensor, b: torch.Tensor, conj_a=False, conj_b=False):\n    \"\"\"Complex matrix multiplication of 
complex tensors.\n    The dimensions (-3, -2) are matrix multiplied. -1 is the complex dimension.\"\"\"\n\n    if is_real(a):\n        if a.dim() >= b.dim():\n            raise ValueError('Incorrect dimensions.')\n        return mtimes_real_complex(a, b, conj_b=conj_b)\n    if is_real(b):\n        if b.dim() >= a.dim():\n            raise ValueError('Incorrect dimensions.')\n        return mtimes_complex_real(a, b, conj_a=conj_a)\n\n    if not conj_a and not conj_b:\n        return complex(torch.matmul(a[..., 0], b[..., 0]) - torch.matmul(a[..., 1], b[..., 1]),\n                       torch.matmul(a[..., 0], b[..., 1]) + torch.matmul(a[..., 1], b[..., 0]))\n    if conj_a and not conj_b:\n        return complex(torch.matmul(a[..., 0], b[..., 0]) + torch.matmul(a[..., 1], b[..., 1]),\n                       torch.matmul(a[..., 0], b[..., 1]) - torch.matmul(a[..., 1], b[..., 0]))\n    if not conj_a and conj_b:\n        return complex(torch.matmul(a[..., 0], b[..., 0]) + torch.matmul(a[..., 1], b[..., 1]),\n                       torch.matmul(a[..., 1], b[..., 0]) - torch.matmul(a[..., 0], b[..., 1]))\n    if conj_a and conj_b:\n        return complex(torch.matmul(a[..., 0], b[..., 0]) - torch.matmul(a[..., 1], b[..., 1]),\n                       -torch.matmul(a[..., 0], b[..., 1]) - torch.matmul(a[..., 1], b[..., 0]))\n\n\n@tensor_operation\ndef mtimes_real_complex(a: torch.Tensor, b: torch.Tensor, conj_b=False):\n    if is_real(b):\n        raise ValueError('Incorrect dimensions.')\n\n    if not conj_b:\n        return complex(torch.matmul(a, b[..., 0]), torch.matmul(a, b[..., 1]))\n    if conj_b:\n        return complex(torch.matmul(a, b[..., 0]), -torch.matmul(a, b[..., 1]))\n\n\n@tensor_operation\ndef mtimes_complex_real(a: torch.Tensor, b: torch.Tensor, conj_a=False):\n    if is_real(a):\n        raise ValueError('Incorrect dimensions.')\n\n    if not conj_a:\n        return complex(torch.matmul(a[..., 0], b), torch.matmul(a[..., 1], b))\n    if conj_a:\n        
return complex(torch.matmul(a[..., 0], b), -torch.matmul(a[..., 1], b))\n\n\n@tensor_operation\ndef exp_imag(a: torch.Tensor):\n    \"\"\"Complex exponential with imaginary input: e^(i*a)\"\"\"\n\n    a = a.unsqueeze(-1)\n    return torch.cat((torch.cos(a), torch.sin(a)), -1)\n\n\n\n"
  },
  {
    "path": "tracker/sot/lib/online/dcf.py",
    "content": "import torch\nimport math\nfrom online import fourier\nfrom online import complex\nimport torch.nn.functional as F\n\n\ndef hann1d(sz: int, centered = True) -> torch.Tensor:\n    \"\"\"1D cosine window.\"\"\"\n    if centered:\n        return 0.5 * (1 - torch.cos((2 * math.pi / (sz + 2)) * torch.arange(1, sz + 1).float()))\n    w = 0.5 * (1 + torch.cos((2 * math.pi / (sz + 2)) * torch.arange(0, sz//2 + 1).float()))\n    return torch.cat([w, w[1:sz-sz//2].flip((0,))])\n\n\ndef hann2d(sz: torch.Tensor, centered = True) -> torch.Tensor:\n    \"\"\"2D cosine window.\"\"\"\n\n    # try:\n    return hann1d(sz[0].item(), centered).reshape(1, 1, -1, 1) * hann1d(sz[1].item(), centered).reshape(1, 1, 1, -1)\n    # except:\n    #     # zzp: bug in my code and don't know why\n    #     return hann1d(sz[0][0].item(), centered).reshape(1, 1, -1, 1) * hann1d(sz[0][1].item(), centered).reshape(1, 1, 1, -1)\n\n\ndef hann2d_clipped(sz: torch.Tensor, effective_sz: torch.Tensor, centered = True) -> torch.Tensor:\n    \"\"\"1D clipped cosine window.\"\"\"\n\n    # Ensure that the difference is even\n    effective_sz += (effective_sz - sz) % 2\n    effective_window = hann1d(effective_sz[0].item(), True).reshape(1, 1, -1, 1) * hann1d(effective_sz[1].item(), True).reshape(1, 1, 1, -1)\n\n    pad = (sz - effective_sz) / 2\n\n    window = F.pad(effective_window, (pad[1].item(), pad[1].item(), pad[0].item(), pad[0].item()), 'replicate')\n\n    if centered:\n        return window\n    else:\n        mid = (sz / 2).int()\n        window_shift_lr = torch.cat((window[:, :, :, mid[1]:], window[:, :, :, :mid[1]]), 3)\n        return torch.cat((window_shift_lr[:, :, mid[0]:, :], window_shift_lr[:, :, :mid[0], :]), 2)\n\n\ndef gauss_fourier(sz: int, sigma: float, half: bool = False) -> torch.Tensor:\n    if half:\n        k = torch.arange(0, int(sz/2+1))\n    else:\n        k = torch.arange(-int((sz-1)/2), int(sz/2+1))\n    return (math.sqrt(2*math.pi) * sigma / sz) * torch.exp(-2 * 
(math.pi * sigma * k.float() / sz)**2)\n\n\ndef gauss_spatial(sz, sigma, center=0, end_pad=0):\n    k = torch.arange(-(sz-1)/2, (sz+1)/2+end_pad)\n    return torch.exp(-1.0/(2*sigma**2) * (k - center)**2)\n\n\ndef label_function(sz: torch.Tensor, sigma: torch.Tensor):\n    return gauss_fourier(sz[0].item(), sigma[0].item()).reshape(1, 1, -1, 1) * gauss_fourier(sz[1].item(), sigma[1].item(), True).reshape(1, 1, 1, -1)\n\ndef label_function_spatial(sz: torch.Tensor, sigma: torch.Tensor, center: torch.Tensor = torch.zeros(2), end_pad: torch.Tensor = torch.zeros(2)):\n    \"\"\"The origin is in the middle of the image.\"\"\"\n    # zzp\n    # try:\n    return gauss_spatial(sz[0].item(), sigma[0].item(), center[0], end_pad[0].item()).reshape(1, 1, -1, 1) * \\\n           gauss_spatial(sz[1].item(), sigma[1].item(), center[1], end_pad[1].item()).reshape(1, 1, 1, -1)\n    # except:\n    #     return gauss_spatial(sz[0][0].item(), sigma[0].item(), center[0], end_pad[0].item()).reshape(1, 1, -1, 1) * \\\n    #            gauss_spatial(sz[0][1].item(), sigma[1].item(), center[1], end_pad[1].item()).reshape(1, 1, 1, -1)\n\n\ndef cubic_spline_fourier(f, a):\n    \"\"\"The continuous Fourier transform of a cubic spline kernel.\"\"\"\n\n    bf = (6*(1 - torch.cos(2 * math.pi * f)) + 3*a*(1 - torch.cos(4 * math.pi * f))\n           - (6 + 8*a)*math.pi*f*torch.sin(2 * math.pi * f) - 2*a*math.pi*f*torch.sin(4 * math.pi * f)) \\\n         / (4 * math.pi**4 * f**4)\n\n    bf[f == 0] = 1\n\n    return bf\n\n\ndef get_interp_fourier(sz: torch.Tensor, method='ideal', bicubic_param=0.5, centering=True, windowing=False, device='cpu'):\n\n    ky, kx = fourier.get_frequency_coord(sz)\n\n    if method=='ideal':\n        interp_y = torch.ones(ky.shape) / sz[0]\n        interp_x = torch.ones(kx.shape) / sz[1]\n    elif method=='bicubic':\n        interp_y = cubic_spline_fourier(ky / sz[0], bicubic_param) / sz[0]\n        interp_x = cubic_spline_fourier(kx / sz[1], bicubic_param) / sz[1]\n    
else:\n        raise ValueError('Unknown method.')\n\n    if centering:\n        interp_y = complex.mult(interp_y, complex.exp_imag((-math.pi/sz[0]) * ky))\n        interp_x = complex.mult(interp_x, complex.exp_imag((-math.pi/sz[1]) * kx))\n\n    if windowing:\n        raise NotImplementedError\n\n    return interp_y.to(device), interp_x.to(device)\n\n\ndef interpolate_dft(a: torch.Tensor, interp_fs) -> torch.Tensor:\n\n    if isinstance(interp_fs, torch.Tensor):\n        return complex.mult(a, interp_fs)\n    if isinstance(interp_fs, (tuple, list)):\n        return complex.mult(complex.mult(a, interp_fs[0]), interp_fs[1])\n    raise ValueError('\"interp_fs\" must be tensor or tuple of tensors.')\n\n\ndef get_reg_filter(sz: torch.Tensor, target_sz: torch.Tensor, params):\n    \"\"\"Computes regularization filter in CCOT and ECO.\"\"\"\n\n    if not params.use_reg_window:\n        return params.reg_window_min * torch.ones(1,1,1,1)\n\n    if getattr(params, 'reg_window_square', False):\n        target_sz = target_sz.prod().sqrt() * torch.ones(2)\n\n    # Normalization factor\n    reg_scale = 0.5 * target_sz\n\n    # Construct grid\n    if getattr(params, 'reg_window_centered', True):\n        wrg = torch.arange(-int((sz[0]-1)/2), int(sz[0]/2+1), dtype=torch.float32).view(1,1,-1,1)\n        wcg = torch.arange(-int((sz[1]-1)/2), int(sz[1]/2+1), dtype=torch.float32).view(1,1,1,-1)\n    else:\n        wrg = torch.cat([torch.arange(0, int(sz[0]/2+1), dtype=torch.float32),\n                         torch.arange(-int((sz[0] - 1) / 2), 0, dtype=torch.float32)]).view(1,1,-1,1)\n        wcg = torch.cat([torch.arange(0, int(sz[1]/2+1), dtype=torch.float32),\n                         torch.arange(-int((sz[1] - 1) / 2), 0, dtype=torch.float32)]).view(1,1,1,-1)\n\n    # Construct regularization window\n    reg_window = (params.reg_window_edge - params.reg_window_min) * \\\n                 (torch.abs(wrg/reg_scale[0])**params.reg_window_power +\n                  
torch.abs(wcg/reg_scale[1])**params.reg_window_power) + params.reg_window_min\n\n    # Compute DFT and enforce sparsity\n    reg_window_dft = torch.rfft(reg_window, 2) / sz.prod()\n    reg_window_dft_abs = complex.abs(reg_window_dft)\n    reg_window_dft[reg_window_dft_abs < params.reg_sparsity_threshold * reg_window_dft_abs.max(), :] = 0\n\n    # Do the inverse transform to correct for the window minimum\n    reg_window_sparse = torch.irfft(reg_window_dft, 2, signal_sizes=sz.long().tolist())\n    reg_window_dft[0,0,0,0,0] += params.reg_window_min - sz.prod() * reg_window_sparse.min()\n    reg_window_dft = complex.real(fourier.rfftshift2(reg_window_dft))\n\n    # Remove zeros\n    max_inds,_ = reg_window_dft.nonzero().max(dim=0)\n    mid_ind = int((reg_window_dft.shape[2]-1)/2)\n    top = max_inds[-2].item() + 1\n    bottom = 2*mid_ind - max_inds[-2].item()\n    right = max_inds[-1].item() + 1\n    reg_window_dft = reg_window_dft[..., bottom:top, :right]\n    if reg_window_dft.shape[-1] > 1:\n        reg_window_dft = torch.cat([reg_window_dft[..., 1:].flip((2, 3)), reg_window_dft], -1)\n\n    return reg_window_dft\n\n\ndef max2d(a: torch.Tensor) -> (torch.Tensor, torch.Tensor):\n    \"\"\"Computes maximum and argmax in the last two dimensions.\"\"\"\n\n    max_val_row, argmax_row = torch.max(a, dim=-2)\n    max_val, argmax_col = torch.max(max_val_row, dim=-1)\n    argmax_row = argmax_row.view(argmax_col.numel(),-1)[torch.arange(argmax_col.numel()), argmax_col.view(-1)]\n    argmax_row = argmax_row.reshape(argmax_col.shape)\n    argmax = torch.cat((argmax_row.unsqueeze(-1), argmax_col.unsqueeze(-1)), -1)\n    return max_val, argmax\n"
  },
  {
    "path": "tracker/sot/lib/online/extractor.py",
    "content": "import torch\nfrom .preprocessing import sample_patch\nfrom online import TensorList, load_network\n\nclass ExtractorBase:\n    \"\"\"Base feature extractor class.\n    args:\n        features: List of features.\n    \"\"\"\n    def __init__(self, features):\n        self.features = features\n\n    def initialize(self, siam_net):\n        for f in self.features:\n            f.initialize(siam_net)\n\n    def free_memory(self):\n        for f in self.features:\n            f.free_memory()\n\n\nclass SingleResolutionExtractor(ExtractorBase):\n    \"\"\"Single resolution feature extractor.\n    args:\n        features: List of features.\n    \"\"\"\n    def __init__(self, features):\n        super().__init__(features)\n\n        self.feature_stride = self.features[0].stride()\n        if isinstance(self.feature_stride, (list, TensorList)):\n            self.feature_stride = self.feature_stride[0]\n\n    def stride(self):\n        return self.feature_stride\n\n    def size(self, input_sz):\n        return input_sz // self.stride()\n\n    def extract(self, im, pos, scales, image_sz):\n        if isinstance(scales, (int, float)):\n            scales = [scales]\n\n        # Get image patches\n        im_patches = torch.cat([sample_patch(im, pos, s*image_sz, image_sz) for s in scales])\n\n        # Compute features\n        feature_map = torch.cat(TensorList([f.get_feature(im_patches) for f in self.features]).unroll(), dim=1)\n\n        return feature_map\n\n\nclass MultiResolutionExtractor(ExtractorBase):\n    \"\"\"Multi-resolution feature extractor.\n    args:\n        features: List of features.\n    \"\"\"\n    def __init__(self, features):\n        super().__init__(features)\n        self.is_color = None\n\n    def stride(self):\n        return torch.Tensor(TensorList([f.stride() for f in self.features if self._return_feature(f)]).unroll())\n\n    def size(self, input_sz):\n        return TensorList([f.size(input_sz) for f in self.features if 
self._return_feature(f)]).unroll()\n\n    def dim(self):\n        return TensorList([f.dim() for f in self.features if self._return_feature(f)]).unroll()\n\n    def get_fparams(self, name: str = None):\n        if name is None:\n            return [f.fparams for f in self.features if self._return_feature(f)]\n        return TensorList([getattr(f.fparams, name) for f in self.features if self._return_feature(f)]).unroll()\n\n    def get_attribute(self, name: str, ignore_missing: bool = False):\n        if ignore_missing:\n            return TensorList([getattr(f, name) for f in self.features if self._return_feature(f) and hasattr(f, name)])\n        else:\n            return TensorList([getattr(f, name, None) for f in self.features if self._return_feature(f)])\n\n    def get_unique_attribute(self, name: str):\n        feat = None\n        for f in self.features:\n            if self._return_feature(f) and hasattr(f, name):\n                if feat is not None:\n                    raise RuntimeError('The attribute was not unique.')\n                feat = f\n        if feat is None:\n            raise RuntimeError('The attribute did not exist')\n        return getattr(feat, name)\n\n    def _return_feature(self, f):\n        return self.is_color is None or self.is_color and f.use_for_color or not self.is_color and f.use_for_gray\n\n    def set_is_color(self, is_color: bool):\n        self.is_color = is_color\n\n    def extract(self, im, pos, scales, image_sz):\n        \"\"\"Extract features.\n        args:\n            im: Image.\n            pos: Center position for extraction.\n            scales: Image scales to extract features from.\n            image_sz: Size to resize the image samples to before extraction.\n        \"\"\"\n        if isinstance(scales, (int, float)):\n            scales = [scales]\n\n        # Get image patches\n        im_patches = torch.cat([sample_patch(im, pos, s*image_sz, image_sz) for s in scales])\n\n        # Compute features\n       
 feature_map = TensorList([f.get_feature(im_patches) for f in self.features]).unroll()\n\n        return feature_map\n\n    def extract_transformed(self, im, pos, scale, image_sz, transforms):\n        \"\"\"Extract features from a set of transformed image samples.\n        args:\n            im: Image.\n            pos: Center position for extraction.\n            scale: Image scale to extract features from.\n            image_sz: Size to resize the image samples to before extraction.\n            transforms: A set of image transforms to apply.\n        \"\"\"\n\n        # Get image patch\n        im_patch = sample_patch(im, pos, scale*image_sz, image_sz)\n\n        # Apply transforms\n        im_patches = torch.cat([T(im_patch) for T in transforms])\n\n        # Compute features\n        feature_map = TensorList([f.get_feature(im_patches) for f in self.features]).unroll()\n\n        return feature_map\n"
  },
  {
    "path": "tracker/sot/lib/online/fourier.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom online import TensorList, complex\nfrom online.tensorlist import tensor_operation\n\n\n@tensor_operation\ndef rfftshift2(a: torch.Tensor):\n    h = a.shape[2] + 2\n    return torch.cat((a[:,:,(h-1)//2:,...], a[:,:,:h//2,...]), 2)\n\n\n@tensor_operation\ndef irfftshift2(a: torch.Tensor):\n    mid = int((a.shape[2]-1)/2)\n    return torch.cat((a[:,:,mid:,...], a[:,:,:mid,...]), 2)\n\n\n@tensor_operation\ndef cfft2(a):\n    \"\"\"Do FFT and center the low frequency component.\n    Always produces odd (full) output sizes.\"\"\"\n\n    return rfftshift2(torch.rfft(a, 2))\n\n\n@tensor_operation\ndef cifft2(a, signal_sizes=None):\n    \"\"\"Do inverse FFT corresponding to cfft2.\"\"\"\n\n    return torch.irfft(irfftshift2(a), 2, signal_sizes=signal_sizes)\n\n\n@tensor_operation\ndef sample_fs(a: torch.Tensor, grid_sz: torch.Tensor = None, rescale = True):\n    \"\"\"Samples the Fourier series.\"\"\"\n\n    # Size of the fourier series\n    sz = torch.Tensor([a.shape[2], 2*a.shape[3]-1]).float()\n\n    # Default grid\n    if grid_sz is None or sz[0] == grid_sz[0] and sz[1] == grid_sz[1]:\n        if rescale:\n            return sz.prod().item() * cifft2(a)\n        return cifft2(a)\n\n    if sz[0] > grid_sz[0] or sz[1] > grid_sz[1]:\n        raise ValueError(\"Only grid sizes that are smaller than the Fourier series size are supported.\")\n\n    tot_pad = (grid_sz - sz).tolist()\n    is_even = [s.item() % 2 == 0 for s in sz]\n\n    # Compute paddings\n    pad_top = int((tot_pad[0]+1)/2) if is_even[0] else int(tot_pad[0]/2)\n    pad_bottom = int(tot_pad[0] - pad_top)\n    pad_right = int((tot_pad[1]+1)/2)\n\n    if rescale:\n        return grid_sz.prod().item() * cifft2(F.pad(a, (0, 0, 0, pad_right, pad_top, pad_bottom)), signal_sizes=grid_sz.long().tolist())\n    else:\n        return cifft2(F.pad(a, (0, 0, 0, pad_right, pad_top, pad_bottom)), signal_sizes=grid_sz.long().tolist())\n\n\ndef 
get_frequency_coord(sz, add_complex_dim = False, device='cpu'):\n    \"\"\"Frequency coordinates.\"\"\"\n\n    ky = torch.arange(-int((sz[0]-1)/2), int(sz[0]/2+1), dtype=torch.float32, device=device).view(1,1,-1,1)\n    kx = torch.arange(0, int(sz[1]/2+1), dtype=torch.float32, device=device).view(1,1,1,-1)\n\n    if add_complex_dim:\n        ky = ky.unsqueeze(-1)\n        kx = kx.unsqueeze(-1)\n\n    return ky, kx\n\n\n@tensor_operation\ndef shift_fs(a: torch.Tensor, shift: torch.Tensor):\n    \"\"\"Shift a sample a in the Fourier domain.\n    Params:\n        a : The Fourier coefficients of the sample.\n        shift : The shift to be performed, normalized to the range [-pi, pi].\"\"\"\n\n    if a.dim() != 5:\n        raise ValueError('a must be the Fourier coefficients, a 5-dimensional tensor.')\n\n    if shift[0] == 0 and shift[1] == 0:\n        return a\n\n    ky, kx = get_frequency_coord((a.shape[2], 2*a.shape[3]-1), device=a.device)\n\n    return complex.mult(complex.mult(a, complex.exp_imag(shift[0].item()*ky)), complex.exp_imag(shift[1].item()*kx))\n\n\ndef sum_fs(a: TensorList) -> torch.Tensor:\n    \"\"\"Sum a list of Fourier series expansions.\"\"\"\n\n    s = None\n    mid = None\n\n    for e in sorted(a, key=lambda elem: elem.shape[-3], reverse=True):\n        if s is None:\n            s = e.clone()\n            mid = int((s.shape[-3] - 1) / 2)\n        else:\n            # Compute coordinates\n            top = mid - int((e.shape[-3] - 1) / 2)\n            bottom = mid + int(e.shape[-3] / 2) + 1\n            right = e.shape[-2]\n\n            # Add the data\n            s[..., top:bottom, :right, :] += e\n\n    return s\n\n\ndef sum_fs12(a: TensorList) -> torch.Tensor:\n    \"\"\"Sum a list of Fourier series expansions.\"\"\"\n\n    s = None\n    mid = None\n\n    for e in sorted(a, key=lambda elem: elem.shape[0], reverse=True):\n        if s is None:\n            s = e.clone()\n            mid = int((s.shape[0] - 1) / 2)\n        else:\n            # 
Compute coordinates\n            top = mid - int((e.shape[0] - 1) / 2)\n            bottom = mid + int(e.shape[0] / 2) + 1\n            right = e.shape[1]\n\n            # Add the data\n            s[top:bottom, :right, ...] += e\n\n    return s\n\n\n@tensor_operation\ndef inner_prod_fs(a: torch.Tensor, b: torch.Tensor):\n    if complex.is_complex(a) and complex.is_complex(b):\n        return 2 * (a.reshape(-1) @ b.reshape(-1)) - a[:, :, :, 0, :].reshape(-1) @ b[:, :, :, 0, :].reshape(-1)\n    elif complex.is_real(a) and complex.is_real(b):\n        return 2 * (a.reshape(-1) @ b.reshape(-1)) - a[:, :, :, 0].reshape(-1) @ b[:, :, :, 0].reshape(-1)\n    else:\n        raise NotImplementedError('Not implemented for mixed real and complex.')"
  },
  {
    "path": "tracker/sot/lib/online/loading.py",
    "content": "import torch\nimport os\nimport sys\nfrom pathlib import Path\nimport importlib\nfrom online.model_constructor import NetConstructor\n\ndef check_keys(model, pretrained_state_dict):\n    ckpt_keys = set(pretrained_state_dict.keys())\n    model_keys = set(model.state_dict().keys())\n    used_pretrained_keys = model_keys & ckpt_keys\n    unused_pretrained_keys = ckpt_keys - model_keys\n    missing_keys = model_keys - ckpt_keys\n\n    print('missing keys:{}'.format(missing_keys))\n\n    print('=========================================')\n    # clean it to no batch_tracked key words\n    unused_pretrained_keys = [k for k in unused_pretrained_keys if 'num_batches_tracked' not in k]\n\n    print('unused checkpoint keys:{}'.format(unused_pretrained_keys))\n    # print('used keys:{}'.format(used_pretrained_keys))\n    assert len(used_pretrained_keys) > 0, 'load NONE from pretrained checkpoint'\n    return True\n\ndef load_pretrain(model, pretrained_dict):\n\n    device = torch.cuda.current_device()\n\n    check_keys(model, pretrained_dict)\n    model.load_state_dict(pretrained_dict, strict=False)\n    return model\n\n\ndef load_network(ckpt_path=None, constructor_fun_name='online_resnet18', constructor_module='lib.models.online.bbreg.online'):\n\n        # Load network\n        checkpoint_dict = torch.load(ckpt_path) # key: net\n\n        # get model structure from constructor\n        net_constr = NetConstructor(fun_name=constructor_fun_name, fun_module=constructor_module)\n        # Legacy networks before refactoring\n\n        net = net_constr.get()\n\n        net = load_pretrain(net, checkpoint_dict['net'])\n\n        return net\n\n\ndef load_weights(net, path, strict=True):\n    checkpoint_dict = torch.load(path)\n    weight_dict = checkpoint_dict['net']\n    net.load_state_dict(weight_dict, strict=strict)\n    return net\n\n\ndef torch_load_legacy(path):\n    \"\"\"Load network with legacy environment.\"\"\"\n\n    # Setup legacy env (for older 
networks)\n    _setup_legacy_env()\n\n    # Load network\n    checkpoint_dict = torch.load(path)\n\n    # Cleanup legacy\n    _cleanup_legacy_env()\n\n    return checkpoint_dict\n\n\ndef _setup_legacy_env():\n    importlib.import_module('ltr')\n    sys.modules['dlframework'] = sys.modules['ltr']\n    sys.modules['dlframework.common'] = sys.modules['ltr']\n    for m in ('model_constructor', 'stats', 'settings', 'local'):\n        importlib.import_module('ltr.admin.'+m)\n        sys.modules['dlframework.common.utils.'+m] = sys.modules['ltr.admin.'+m]\n\n\ndef _cleanup_legacy_env():\n    del_modules = []\n    for m in sys.modules.keys():\n        if m.startswith('dlframework'):\n            del_modules.append(m)\n    for m in del_modules:\n        del sys.modules[m]\n"
  },
  {
    "path": "tracker/sot/lib/online/ltr_trainer.py",
    "content": "import os\nfrom collections import OrderedDict\nfrom .base_trainer import BaseTrainer\nfrom utils.utils import AverageMeter, StatValue\nfrom utils.utils import TensorboardWriter\nimport torch\nimport time\n\n\nclass LTRTrainer(BaseTrainer):\n    def __init__(self, actor, loaders, optimizer, settings, lr_scheduler=None):\n        \"\"\"\n        args:\n            actor - The actor for training the network\n            loaders - list of dataset loaders, e.g. [train_loader, val_loader]. In each epoch, the trainer runs one\n                        epoch for each loader.\n            optimizer - The optimizer used for training, e.g. Adam\n            settings - Training settings\n            lr_scheduler - Learning rate scheduler\n        \"\"\"\n        super().__init__(actor, loaders, optimizer, settings, lr_scheduler)\n\n        self._set_default_settings()\n\n        # Initialize statistics variables\n        self.stats = OrderedDict({loader.name: None for loader in self.loaders})\n\n        # Initialize tensorboard\n        tensorboard_writer_dir = os.path.join('logs')\n        self.tensorboard_writer = TensorboardWriter(tensorboard_writer_dir, [l.name for l in loaders])\n\n    def _set_default_settings(self):\n        # Dict of all default values\n        default = {'print_interval': 10,\n                   'print_stats': None,\n                   'description': ''}\n\n        # for param, default_value in default.items():\n        #     if getattr(self.settings, param, None) is None:\n        #         setattr(self.settings, param, default_value)\n\n    def cycle_dataset(self, loader):\n        \"\"\"Do a cycle of training or validation.\"\"\"\n\n        self.actor.train(loader.training)\n        # torch.set_grad_enabled(loader.training)  # zzp\n\n        self._init_timing()\n\n        for i, data in enumerate(loader, 1):\n            # get inputs\n            data = data.to(self.device)\n            data['epoch'] = self.epoch\n            
data['settings'] = self.settings\n\n            # forward pass\n            loss, stats = self.actor(data)\n\n            # backward pass and update weights\n            if loader.training:\n                self.optimizer.zero_grad()\n                loss.backward()\n                self.optimizer.step()\n\n            # update statistics\n            batch_size = data['train_images'].shape[loader.stack_dim]\n            self._update_stats(stats, batch_size, loader)\n\n            # print statistics\n            self._print_stats(i, loader, batch_size)\n\n    def train_epoch(self):\n        \"\"\"Do one epoch for each loader.\"\"\"\n        for loader in self.loaders:\n            if self.epoch % loader.epoch_interval == 0:\n                self.cycle_dataset(loader)\n\n        self._stats_new_epoch()\n        self._write_tensorboard()\n\n    def _init_timing(self):\n        self.num_frames = 0\n        self.start_time = time.time()\n        self.prev_time = self.start_time\n\n    def _update_stats(self, new_stats: OrderedDict, batch_size, loader):\n        # Initialize stats if not initialized yet\n        if loader.name not in self.stats.keys() or self.stats[loader.name] is None:\n            self.stats[loader.name] = OrderedDict({name: AverageMeter() for name in new_stats.keys()})\n\n        for name, val in new_stats.items():\n            if name not in self.stats[loader.name].keys():\n                self.stats[loader.name][name] = AverageMeter()\n            self.stats[loader.name][name].update(val, batch_size)\n\n    def _print_stats(self, i, loader, batch_size):\n        self.num_frames += batch_size\n        current_time = time.time()\n        batch_fps = batch_size / (current_time - self.prev_time)\n        average_fps = self.num_frames / (current_time - self.start_time)\n        self.prev_time = current_time\n        if i % 10 == 0 or i == loader.__len__():\n            print_str = '[%s: %d, %d / %d] ' % (loader.name, self.epoch, i, loader.__len__())\n   
         print_str += 'FPS: %.1f (%.1f)  ,  ' % (average_fps, batch_fps)\n            for name, val in self.stats[loader.name].items():\n                if hasattr(val, 'avg'):\n                    print_str += '%s: %.5f  ,  ' % (name, val.avg)\n            print(print_str[:-5])\n\n    def _stats_new_epoch(self):\n        # Record learning rate\n        for loader in self.loaders:\n            if loader.training:\n                lr_list = self.lr_scheduler.get_lr()\n                for i, lr in enumerate(lr_list):\n                    var_name = 'LearningRate/group{}'.format(i)\n                    if var_name not in self.stats[loader.name].keys():\n                        self.stats[loader.name][var_name] = StatValue()\n                    self.stats[loader.name][var_name].update(lr)\n\n        for loader_stats in self.stats.values():\n            if loader_stats is None:\n                continue\n            for stat_value in loader_stats.values():\n                if hasattr(stat_value, 'new_epoch'):\n                    stat_value.new_epoch()\n\n    def _write_tensorboard(self):\n        if self.epoch == 1:\n            self.tensorboard_writer.write_info('adafree_online', 'adafree', 'Train online for Adafree')\n\n        self.tensorboard_writer.write_epoch(self.stats, self.epoch)"
  },
  {
    "path": "tracker/sot/lib/online/model_constructor.py",
    "content": "from functools import wraps\nimport importlib\n\n\ndef model_constructor(f):\n    \"\"\" Wraps the function 'f' which returns the network. An extra field 'constructor' is added to the network returned\n    by 'f'. This field contains an instance of the  'NetConstructor' class, which contains the information needed to\n    re-construct the network, such as the name of the function 'f', the function arguments etc. Thus, the network can\n    be easily constructed from a saved checkpoint by calling NetConstructor.get() function.\n    \"\"\"\n    @wraps(f)\n    def f_wrapper(*args, **kwds):\n        net_constr = NetConstructor(f.__name__, f.__module__, args, kwds)\n        output = f(*args, **kwds)\n        if isinstance(output, (tuple, list)):\n            # Assume first argument is the network\n            output[0].constructor = net_constr\n        else:\n            output.constructor = net_constr\n        return output\n    return f_wrapper\n\n\nclass NetConstructor:\n    \"\"\" Class to construct networks. Takes as input the function name (e.g. atom_resnet18), the name of the module\n    which contains the network function (e.g. ltr.models.bbreg.atom) and the arguments for the network\n    function. The class object can then be stored along with the network weights to re-construct the network.\"\"\"\n    def __init__(self, fun_name, fun_module):\n        \"\"\"\n        args:\n            fun_name - The function which returns the network\n            fun_module - the module which contains the network function\n            args - arguments which are passed to the network function\n            kwds - arguments which are passed to the network function\n        \"\"\"\n        self.fun_name = fun_name\n        self.fun_module = fun_module\n        #self.args = args\n        #self.kwds = kwds\n\n    def get(self):\n        \"\"\" Rebuild the network by calling the network function with the correct arguments. 
\"\"\"\n        net_module = importlib.import_module(self.fun_module)\n        net_fun = getattr(net_module, self.fun_name)\n        return net_fun()\n"
  },
  {
    "path": "tracker/sot/lib/online/operation.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom online.tensorlist import tensor_operation, TensorList\n\n\n@tensor_operation\ndef conv2d(input: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor = None, stride=1, padding=0, dilation=1, groups=1, mode=None):\n    \"\"\"Standard conv2d. Returns the input if weight=None.\"\"\"\n\n    if weight is None:\n        return input\n\n    ind = None\n    if mode is not None:\n        if padding != 0:\n            raise ValueError('Cannot input both padding and mode.')\n        if mode == 'same':\n            padding = (weight.shape[2]//2, weight.shape[3]//2)\n            if weight.shape[2] % 2 == 0 or weight.shape[3] % 2 == 0:\n                ind = (slice(-1) if weight.shape[2] % 2 == 0 else slice(None),\n                       slice(-1) if weight.shape[3] % 2 == 0 else slice(None))\n        elif mode == 'valid':\n            padding = (0, 0)\n        elif mode == 'full':\n            padding = (weight.shape[2]-1, weight.shape[3]-1)\n        else:\n            raise ValueError('Unknown mode for padding.')\n\n    out = F.conv2d(input, weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups)\n    if ind is None:\n        return out\n    return out[:,:,ind[0],ind[1]]\n\n\n@tensor_operation\ndef conv1x1(input: torch.Tensor, weight: torch.Tensor):\n    \"\"\"Do a convolution with a 1x1 kernel weights. Implemented with matmul, which can be faster than using conv.\"\"\"\n\n    if weight is None:\n        return input\n\n    return torch.matmul(weight.view(weight.shape[0], weight.shape[1]),\n                        input.view(input.shape[0], input.shape[1], -1)).view(input.shape[0], weight.shape[0], input.shape[2], input.shape[3])\n"
  },
  {
    "path": "tracker/sot/lib/online/optim.py",
    "content": "import torch\nimport sys\nfrom online import optimization, TensorList, operation\nimport math\n\n\nclass FactorizedConvProblem(optimization.L2Problem):\n    def __init__(self, training_samples: TensorList, y: TensorList, filter_reg: torch.Tensor, projection_reg, params, sample_weights: TensorList,\n                 projection_activation, response_activation):\n        self.training_samples = training_samples\n        self.y = y\n        self.filter_reg = filter_reg\n        self.sample_weights = sample_weights\n        self.params = params\n        self.projection_reg = projection_reg\n        self.projection_activation = projection_activation\n        self.response_activation = response_activation\n\n        self.diag_M = self.filter_reg.concat(projection_reg)\n\n    def __call__(self, x: TensorList):\n        \"\"\"\n        Compute residuals\n        :param x: [filters, projection_matrices]\n        :return: [data_terms, filter_regularizations, proj_mat_regularizations]\n        \"\"\"\n        filter = x[:len(x)//2]  # w2 in paper\n        P = x[len(x)//2:]       # w1 in paper\n\n        # Do first convolution\n        compressed_samples = operation.conv1x1(self.training_samples, P).apply(self.projection_activation)\n\n        # Do second convolution\n        residuals = operation.conv2d(compressed_samples, filter, mode='same').apply(self.response_activation)\n\n        # Compute data residuals\n        residuals = residuals - self.y\n\n        residuals = self.sample_weights.sqrt().view(-1, 1, 1, 1) * residuals\n\n        # Add regularization for projection matrix\n        residuals.extend(self.filter_reg.apply(math.sqrt) * filter)\n\n        # Add regularization for projection matrix\n        residuals.extend(self.projection_reg.apply(math.sqrt) * P)\n\n        return residuals\n\n\n    def ip_input(self, a: TensorList, b: TensorList):\n        num = len(a) // 2       # Number of filters\n        a_filter = a[:num]\n        b_filter = 
b[:num]\n        a_P = a[num:]\n        b_P = b[num:]\n\n        # Filter inner product\n        # ip_out = a_filter.reshape(-1) @ b_filter.reshape(-1)\n        ip_out = operation.conv2d(a_filter, b_filter).view(-1)\n\n        # Add projection matrix part\n        # ip_out += a_P.reshape(-1) @ b_P.reshape(-1)\n        ip_out += operation.conv2d(a_P.view(1,-1,1,1), b_P.view(1,-1,1,1)).view(-1)\n\n        # Have independent inner products for each filter\n        return ip_out.concat(ip_out.clone())\n\n    def M1(self, x: TensorList):\n        return x / self.diag_M\n\n\nclass ConvProblem(optimization.L2Problem):\n    def __init__(self, training_samples: TensorList, y: TensorList, filter_reg: torch.Tensor, sample_weights: TensorList, response_activation):\n        self.training_samples = training_samples\n        self.y = y\n        self.filter_reg = filter_reg\n        self.sample_weights = sample_weights\n        self.response_activation = response_activation\n\n    def __call__(self, x: TensorList):\n        \"\"\"\n        Compute residuals\n        :param x: [filters]\n        :return: [data_terms, filter_regularizations]\n        \"\"\"\n        # Do convolution and compute residuals\n        residuals = operation.conv2d(self.training_samples, x, mode='same').apply(self.response_activation)\n        residuals = residuals - self.y\n\n        residuals = self.sample_weights.sqrt().view(-1, 1, 1, 1) * residuals\n\n        # Add regularization for projection matrix\n        residuals.extend(self.filter_reg.apply(math.sqrt) * x)\n\n        return residuals\n\n    def ip_input(self, a: TensorList, b: TensorList):\n        # return a.reshape(-1) @ b.reshape(-1)\n        # return (a * b).sum()\n        return operation.conv2d(a, b).view(-1)\n"
  },
  {
    "path": "tracker/sot/lib/online/optimization.py",
    "content": "import torch\nimport torch.autograd\nfrom online import TensorList\n# from pytracking.utils.plotting import plot_graph\n\n\nclass L2Problem:\n    \"\"\"Base class for representing an L2 optimization problem.\"\"\"\n\n    def __call__(self, x: TensorList) -> TensorList:\n        \"\"\"Shall compute the residuals of the problem.\"\"\"\n        raise NotImplementedError\n\n    def ip_input(self, a, b):\n        \"\"\"Inner product of the input space.\"\"\"\n        return sum(a.view(-1) @ b.view(-1))\n\n    def ip_output(self, a, b):\n        \"\"\"Inner product of the output space.\"\"\"\n        return sum(a.view(-1) @ b.view(-1))\n\n    def M1(self, x):\n        \"\"\"M1 preconditioner.\"\"\"\n        return x\n\n    def M2(self, x):\n        \"\"\"M2 preconditioner.\"\"\"\n        return x\n\n\nclass MinimizationProblem:\n    \"\"\"General minimization problem.\"\"\"\n    def __call__(self, x: TensorList) -> TensorList:\n        \"\"\"Shall compute the loss.\"\"\"\n        raise NotImplementedError\n\n    def ip_input(self, a, b):\n        \"\"\"Inner product of the input space.\"\"\"\n        return sum(a.view(-1) @ b.view(-1))\n\n    def M1(self, x):\n        return x\n\n    def M2(self, x):\n        return x\n\n\n\nclass ConjugateGradientBase:\n    \"\"\"Conjugate Gradient optimizer base class. 
Implements the CG loop.\"\"\"\n\n    def __init__(self, fletcher_reeves = True, standard_alpha = True, direction_forget_factor = 0, debug = False):\n        self.fletcher_reeves = fletcher_reeves\n        self.standard_alpha = standard_alpha\n        self.direction_forget_factor = direction_forget_factor\n        self.debug = debug\n\n        # State\n        self.p = None\n        self.rho = torch.ones(1)\n        self.r_prev = None\n\n        # Right hand side\n        self.b = None\n\n    def reset_state(self):\n        self.p = None\n        self.rho = torch.ones(1)\n        self.r_prev = None\n\n\n    def run_CG(self, num_iter, x=None, eps=0.0):\n        \"\"\"Main conjugate gradient method.\n\n        args:\n            num_iter: Number of iterations.\n            x: Initial guess. Assumed zero if None.\n            eps: Stop if the residual norm gets smaller than this.\n        \"\"\"\n\n        # Apply forgetting factor\n        if self.direction_forget_factor == 0:\n            self.reset_state()\n        elif self.p is not None:\n            self.rho /= self.direction_forget_factor\n\n        if x is None:\n            r = self.b.clone()\n        else:\n            r = self.b - self.A(x)\n\n        # Norms of residuals etc for debugging\n        resvec = None\n        if self.debug:\n            normr = self.residual_norm(r)\n            resvec = torch.zeros(num_iter+1)\n            resvec[0] = normr\n\n        # Loop over iterations\n        for ii in range(num_iter):\n            # Preconditioners\n            y = self.M1(r)\n            z = self.M2(y)\n\n            rho1 = self.rho\n            self.rho = self.ip(r, z)\n\n            if self.check_zero(self.rho):\n                if self.debug:\n                    print('Stopped CG since rho = 0')\n                    if resvec is not None:\n                        resvec = resvec[:ii+1]\n                return x, resvec\n\n            if self.p is None:\n                self.p = z.clone()\n           
 else:\n                if self.fletcher_reeves:\n                    beta = self.rho / rho1\n                else:\n                    rho2 = self.ip(self.r_prev, z)\n                    beta = (self.rho - rho2) / rho1\n\n                beta = beta.clamp(0)\n                self.p = z + self.p * beta\n\n            q = self.A(self.p)\n            pq = self.ip(self.p, q)\n\n            if self.standard_alpha:\n                alpha = self.rho / pq\n            else:\n                alpha = self.ip(self.p, r) / pq\n\n            # Save old r for PR formula\n            if not self.fletcher_reeves:\n                self.r_prev = r.clone()\n\n            # Form new iterate\n            if x is None:\n                x = self.p * alpha\n            else:\n                x += self.p * alpha\n\n            if ii < num_iter - 1 or self.debug:\n                r -= q * alpha\n\n            if eps > 0.0 or self.debug:\n                normr = self.residual_norm(r)\n\n            if self.debug:\n                self.evaluate_CG_iteration(x)\n                resvec[ii+1] = normr\n\n            if eps > 0 and normr <= eps:\n                if self.debug:\n                    print('Stopped CG since norm smaller than eps')\n                break\n\n        if resvec is not None:\n            resvec = resvec[:ii+2]\n\n        return x, resvec\n\n\n    def A(self, x):\n        # Implements the left hand operation\n        raise NotImplementedError\n\n    def ip(self, a, b):\n        # Implements the inner product\n        return a.view(-1) @ b.view(-1)\n\n    def residual_norm(self, r):\n        res = self.ip(r, r).sum()\n        if isinstance(res, (TensorList, list, tuple)):\n            res = sum(res)\n        return res.sqrt()\n\n    def check_zero(self, s, eps = 0.0):\n        ss = s.abs() <= eps\n        if isinstance(ss, (TensorList, list, tuple)):\n            ss = sum(ss)\n        return ss.item() > 0\n\n    def M1(self, x):\n        # M1 preconditioner\n        
return x\n\n    def M2(self, x):\n        # M2 preconditioner\n        return x\n\n    def evaluate_CG_iteration(self, x):\n        pass\n\n\n\nclass ConjugateGradient(ConjugateGradientBase):\n    \"\"\"Conjugate Gradient optimizer, performing a single linearization of the residuals at the start.\"\"\"\n\n    def __init__(self, problem: L2Problem, variable: TensorList, cg_eps = 0.0, fletcher_reeves = True,\n                 standard_alpha = True, direction_forget_factor = 0, debug = False, plotting = False, fig_num=(10,11)):\n        super().__init__(fletcher_reeves, standard_alpha, direction_forget_factor, debug or plotting)\n\n        self.problem = problem\n        self.x = variable\n\n        self.plotting = plotting\n        self.fig_num = fig_num\n\n        self.cg_eps = cg_eps\n        self.f0 = None\n        self.g = None\n        self.dfdxt_g = None\n\n        self.residuals = torch.zeros(0)\n        self.losses = torch.zeros(0)\n\n    def clear_temp(self):\n        self.f0 = None\n        self.g = None\n        self.dfdxt_g = None\n\n\n    def run(self, num_cg_iter):\n        \"\"\"Run the optimizer with the provided number of iterations.\"\"\"\n\n        if num_cg_iter == 0:\n            return\n\n        lossvec = None\n        if self.debug:\n            lossvec = torch.zeros(2)\n\n        self.x.requires_grad_(True)\n\n        # Evaluate function at current estimate\n        self.f0 = self.problem(self.x)\n\n        # Create copy with graph detached\n        self.g = self.f0.detach()\n\n        if self.debug:\n            lossvec[0] = self.problem.ip_output(self.g, self.g)\n\n        self.g.requires_grad_(True)\n\n        # Get df/dx^t @ f0\n        self.dfdxt_g = TensorList(torch.autograd.grad(self.f0, self.x, self.g, create_graph=True))\n\n        # Get the right hand side\n        self.b = - self.dfdxt_g.detach()\n\n        # Run CG\n        delta_x, res = self.run_CG(num_cg_iter, eps=self.cg_eps)\n\n        self.x.detach_()\n        self.x += 
delta_x\n\n        if self.debug:\n            self.f0 = self.problem(self.x)\n            lossvec[-1] = self.problem.ip_output(self.f0, self.f0)\n            self.residuals = torch.cat((self.residuals, res))\n            self.losses = torch.cat((self.losses, lossvec))\n            if self.plotting:\n                plot_graph(self.losses, self.fig_num[0], title='Loss')\n                plot_graph(self.residuals, self.fig_num[1], title='CG residuals')\n\n        self.x.detach_()\n        self.clear_temp()\n\n\n    def A(self, x):\n        dfdx_x = torch.autograd.grad(self.dfdxt_g, self.g, x, retain_graph=True)\n        return TensorList(torch.autograd.grad(self.f0, self.x, dfdx_x, retain_graph=True))\n\n    def ip(self, a, b):\n        return self.problem.ip_input(a, b)\n\n    def M1(self, x):\n        return self.problem.M1(x)\n\n    def M2(self, x):\n        return self.problem.M2(x)\n\n\n\nclass GaussNewtonCG(ConjugateGradientBase):\n    \"\"\"Gauss-Newton with Conjugate Gradient optimizer.\"\"\"\n\n    def __init__(self, problem: L2Problem, variable: TensorList, cg_eps = 0.0, fletcher_reeves = True,\n                 standard_alpha = True, direction_forget_factor = 0, debug = False, analyze = False, plotting = False,\n                 fig_num=(10,11,12)):\n        super().__init__(fletcher_reeves, standard_alpha, direction_forget_factor, debug or analyze or plotting)\n\n        self.problem = problem\n        self.x = variable\n\n        self.analyze_convergence = analyze\n        self.plotting = plotting\n        self.fig_num = fig_num\n\n        self.cg_eps = cg_eps\n        self.f0 = None\n        self.g = None\n        self.dfdxt_g = None\n\n        self.residuals = torch.zeros(0)\n        self.losses = torch.zeros(0)\n        self.gradient_mags = torch.zeros(0)\n\n    def clear_temp(self):\n        self.f0 = None\n        self.g = None\n        self.dfdxt_g = None\n\n\n    def run_GN(self, *args, **kwargs):\n        return self.run(*args, **kwargs)\n\n\n   
 def run(self, num_cg_iter, num_gn_iter=None):\n        \"\"\"Run the optimizer.\n        args:\n            num_cg_iter: Number of CG iterations per GN iter. If list, then each entry specifies number of CG iterations\n                         and number of GN iterations is given by the length of the list.\n            num_gn_iter: Number of GN iterations. Shall only be given if num_cg_iter is an integer.\n        \"\"\"\n\n        if isinstance(num_cg_iter, int):\n            if num_gn_iter is None:\n                raise ValueError('Must specify number of GN iter if CG iter is constant')\n            num_cg_iter = [num_cg_iter]*num_gn_iter\n\n        num_gn_iter = len(num_cg_iter)\n        if num_gn_iter == 0:\n            return\n\n        if self.analyze_convergence:\n            self.evaluate_CG_iteration(0)\n\n        # Outer loop for running the GN iterations.\n        for cg_iter in num_cg_iter:\n            self.run_GN_iter(cg_iter)\n\n        if self.debug:\n            if not self.analyze_convergence:\n                self.f0 = self.problem(self.x)\n                loss = self.problem.ip_output(self.f0, self.f0)\n                self.losses = torch.cat((self.losses, loss.detach().cpu().view(-1)))\n\n            if self.plotting:\n                plot_graph(self.losses, self.fig_num[0], title='Loss')\n                plot_graph(self.residuals, self.fig_num[1], title='CG residuals')\n                if self.analyze_convergence:\n                    plot_graph(self.gradient_mags, self.fig_num[2], 'Gradient magnitude')\n\n\n        self.x.detach_()\n        self.clear_temp()\n\n        return self.losses, self.residuals\n\n\n    def run_GN_iter(self, num_cg_iter):\n        \"\"\"Runs a single GN iteration.\"\"\"\n\n        self.x.requires_grad_(True)\n\n        # Evaluate function at current estimate\n        self.f0 = self.problem(self.x)\n\n        # Create copy with graph detached\n        self.g = self.f0.detach()\n\n        if self.debug and not 
self.analyze_convergence:\n            loss = self.problem.ip_output(self.g, self.g)\n            self.losses = torch.cat((self.losses, loss.detach().cpu().view(-1)))\n\n        self.g.requires_grad_(True)\n\n        # Get df/dx^t @ f0\n        self.dfdxt_g = TensorList(torch.autograd.grad(self.f0, self.x, self.g, create_graph=True))\n\n        # Get the right hand side\n        self.b = - self.dfdxt_g.detach()\n\n        # Run CG\n        delta_x, res = self.run_CG(num_cg_iter, eps=self.cg_eps)\n\n        self.x.detach_()\n        self.x += delta_x\n\n        if self.debug:\n            self.residuals = torch.cat((self.residuals, res))\n\n\n    def A(self, x):\n        dfdx_x = torch.autograd.grad(self.dfdxt_g, self.g, x, retain_graph=True)\n        return TensorList(torch.autograd.grad(self.f0, self.x, dfdx_x, retain_graph=True))\n\n    def ip(self, a, b):\n        return self.problem.ip_input(a, b)\n\n    def M1(self, x):\n        return self.problem.M1(x)\n\n    def M2(self, x):\n        return self.problem.M2(x)\n\n    def evaluate_CG_iteration(self, delta_x):\n        if self.analyze_convergence:\n            x = (self.x + delta_x).detach()\n            x.requires_grad_(True)\n\n            # compute loss and gradient\n            f = self.problem(x)\n            loss = self.problem.ip_output(f, f)\n            grad = TensorList(torch.autograd.grad(loss, x))\n\n            # store in the vectors\n            self.losses = torch.cat((self.losses, loss.detach().cpu().view(-1)))\n            self.gradient_mags = torch.cat((self.gradient_mags, sum(grad.view(-1) @ grad.view(-1)).cpu().sqrt().detach().view(-1)))\n\n\nclass GradientDescentL2:\n    \"\"\"Gradient descent with momentum for L2 problems.\"\"\"\n\n    def __init__(self, problem: L2Problem, variable: TensorList, step_length: float, momentum: float = 0.0, debug = False, plotting = False, fig_num=(10,11)):\n\n        self.problem = problem\n        self.x = variable\n\n        self.step_legnth = 
step_length\n        self.momentum = momentum\n\n        self.debug = debug or plotting\n        self.plotting = plotting\n        self.fig_num = fig_num\n\n        self.losses = torch.zeros(0)\n        self.gradient_mags = torch.zeros(0)\n        self.residuals = None\n\n        self.clear_temp()\n\n\n    def clear_temp(self):\n        self.f0 = None\n        self.dir = None\n\n\n    def run(self, num_iter, dummy = None):\n\n        if num_iter == 0:\n            return\n\n        lossvec = None\n        if self.debug:\n            lossvec = torch.zeros(num_iter+1)\n            grad_mags = torch.zeros(num_iter+1)\n\n        for i in range(num_iter):\n            self.x.requires_grad_(True)\n\n            # Evaluate function at current estimate\n            self.f0 = self.problem(self.x)\n\n            # Compute loss\n            loss = self.problem.ip_output(self.f0, self.f0)\n\n            # Compute grad\n            grad = TensorList(torch.autograd.grad(loss, self.x))\n\n            # Update direction\n            if self.dir is None:\n                self.dir = grad\n            else:\n                self.dir = grad + self.momentum * self.dir\n\n            self.x.detach_()\n            self.x -= self.step_legnth * self.dir\n\n            if self.debug:\n                lossvec[i] = loss.item()\n                grad_mags[i] = sum(grad.view(-1) @ grad.view(-1)).sqrt().item()\n\n        if self.debug:\n            self.x.requires_grad_(True)\n            self.f0 = self.problem(self.x)\n            loss = self.problem.ip_output(self.f0, self.f0)\n            grad = TensorList(torch.autograd.grad(loss, self.x))\n            lossvec[-1] = self.problem.ip_output(self.f0, self.f0).item()\n            grad_mags[-1] = sum(grad.view(-1) @ grad.view(-1)).cpu().sqrt().item()\n            self.losses = torch.cat((self.losses, lossvec))\n            self.gradient_mags = torch.cat((self.gradient_mags, grad_mags))\n            if self.plotting:\n                
plot_graph(self.losses, self.fig_num[0], title='Loss')\n                plot_graph(self.gradient_mags, self.fig_num[1], title='Gradient magnitude')\n\n        self.x.detach_()\n        self.clear_temp()\n\n\n\nclass NewtonCG(ConjugateGradientBase):\n    \"\"\"Newton with Conjugate Gradient. Handels general minimization problems.\"\"\"\n\n    def __init__(self, problem: MinimizationProblem, variable: TensorList, init_hessian_reg = 0.0, hessian_reg_factor = 1.0,\n                 cg_eps = 0.0, fletcher_reeves = True, standard_alpha = True, direction_forget_factor = 0,\n                 debug = False, analyze = False, plotting = False, fig_num=(10, 11, 12)):\n        super().__init__(fletcher_reeves, standard_alpha, direction_forget_factor, debug or analyze or plotting)\n\n        self.problem = problem\n        self.x = variable\n\n        self.analyze_convergence = analyze\n        self.plotting = plotting\n        self.fig_num = fig_num\n\n        self.hessian_reg = init_hessian_reg\n        self.hessian_reg_factor = hessian_reg_factor\n        self.cg_eps = cg_eps\n        self.f0 = None\n        self.g = None\n\n        self.residuals = torch.zeros(0)\n        self.losses = torch.zeros(0)\n        self.gradient_mags = torch.zeros(0)\n\n    def clear_temp(self):\n        self.f0 = None\n        self.g = None\n\n\n    def run(self, num_cg_iter, num_newton_iter=None):\n\n        if isinstance(num_cg_iter, int):\n            if num_cg_iter == 0:\n                return\n            if num_newton_iter is None:\n                num_newton_iter = 1\n            num_cg_iter = [num_cg_iter] * num_newton_iter\n\n        num_newton_iter = len(num_cg_iter)\n        if num_newton_iter == 0:\n            return\n\n        if self.analyze_convergence:\n            self.evaluate_CG_iteration(0)\n\n        for cg_iter in num_cg_iter:\n            self.run_newton_iter(cg_iter)\n            self.hessian_reg *= self.hessian_reg_factor\n\n        if self.debug:\n            if not 
self.analyze_convergence:\n                loss = self.problem(self.x)\n                self.losses = torch.cat((self.losses, loss.detach().cpu().view(-1)))\n\n            if self.plotting:\n                plot_graph(self.losses, self.fig_num[0], title='Loss')\n                plot_graph(self.residuals, self.fig_num[1], title='CG residuals')\n                if self.analyze_convergence:\n                    plot_graph(self.gradient_mags, self.fig_num[2], 'Gradient magnitude')\n\n        self.x.detach_()\n        self.clear_temp()\n\n        return self.losses, self.residuals\n\n\n    def run_newton_iter(self, num_cg_iter):\n\n        self.x.requires_grad_(True)\n\n        # Evaluate function at current estimate\n        self.f0 = self.problem(self.x)\n\n        if self.debug and not self.analyze_convergence:\n            self.losses = torch.cat((self.losses, self.f0.detach().cpu().view(-1)))\n\n        # Gradient of loss\n        self.g = TensorList(torch.autograd.grad(self.f0, self.x, create_graph=True))\n\n        # Get the right hand side\n        self.b = - self.g.detach()\n\n        # Run CG\n        delta_x, res = self.run_CG(num_cg_iter, eps=self.cg_eps)\n\n        self.x.detach_()\n        self.x += delta_x\n\n        if self.debug:\n            self.residuals = torch.cat((self.residuals, res))\n\n\n    def A(self, x):\n        return TensorList(torch.autograd.grad(self.g, self.x, x, retain_graph=True)) + self.hessian_reg * x\n\n    def ip(self, a, b):\n        # Implements the inner product\n        return self.problem.ip_input(a, b)\n\n    def M1(self, x):\n        return self.problem.M1(x)\n\n    def M2(self, x):\n        return self.problem.M2(x)\n\n    def evaluate_CG_iteration(self, delta_x):\n        if self.analyze_convergence:\n            x = (self.x + delta_x).detach()\n            x.requires_grad_(True)\n\n            # compute loss and gradient\n            loss = self.problem(x)\n            grad = TensorList(torch.autograd.grad(loss, x))\n\n 
           # store in the vectors\n            self.losses = torch.cat((self.losses, loss.detach().cpu().view(-1)))\n            self.gradient_mags = torch.cat((self.gradient_mags, sum(grad.view(-1) @ grad.view(-1)).cpu().sqrt().detach().view(-1)))\n\n\nclass GradientDescent:\n    \"\"\"Gradient descent for general minimization problems.\"\"\"\n\n    def __init__(self, problem: MinimizationProblem, variable: TensorList, step_length: float, momentum: float = 0.0,\n                 debug = False, plotting = False, fig_num=(10,11)):\n\n        self.problem = problem\n        self.x = variable\n\n        self.step_length = step_length\n        self.momentum = momentum\n\n        self.debug = debug or plotting\n        self.plotting = plotting\n        self.fig_num = fig_num\n\n        self.losses = torch.zeros(0)\n        self.gradient_mags = torch.zeros(0)\n        self.residuals = None\n\n        self.clear_temp()\n\n\n    def clear_temp(self):\n        self.dir = None\n\n\n    def run(self, num_iter, dummy = None):\n\n        if num_iter == 0:\n            return\n\n        lossvec = None\n        if self.debug:\n            lossvec = torch.zeros(num_iter+1)\n            grad_mags = torch.zeros(num_iter+1)\n\n        for i in range(num_iter):\n            self.x.requires_grad_(True)\n\n            # Evaluate function at current estimate\n            loss = self.problem(self.x)\n\n            # Compute grad\n            grad = TensorList(torch.autograd.grad(loss, self.x))\n\n            # Update direction\n            if self.dir is None:\n                self.dir = grad\n            else:\n                self.dir = grad + self.momentum * self.dir\n\n            self.x.detach_()\n            self.x -= self.step_length * self.dir\n\n            if self.debug:\n                lossvec[i] = loss.item()\n                grad_mags[i] = sum(grad.view(-1) @ grad.view(-1)).sqrt().item()\n\n        if self.debug:\n            self.x.requires_grad_(True)\n            loss = self.problem(self.x)\n            grad = TensorList(torch.autograd.grad(loss, self.x))\n            lossvec[-1] = loss.item()\n            grad_mags[-1] = sum(grad.view(-1) @ grad.view(-1)).cpu().sqrt().item()\n            self.losses = torch.cat((self.losses, lossvec))\n            self.gradient_mags = torch.cat((self.gradient_mags, grad_mags))\n            if self.plotting:\n                plot_graph(self.losses, self.fig_num[0], title='Loss')\n                plot_graph(self.gradient_mags, self.fig_num[1], title='Gradient magnitude')\n\n        self.x.detach_()\n        self.clear_temp()"
  },
  {
    "path": "tracker/sot/lib/online/preprocessing.py",
    "content": "import torch\nimport torch.nn.functional as F\nimport numpy as np\n\n\ndef numpy_to_torch(a: np.ndarray):\n    return torch.from_numpy(a).float().permute(2, 0, 1).unsqueeze(0)\n\n\ndef torch_to_numpy(a: torch.Tensor):\n    return a.squeeze(0).permute(1,2,0).numpy()\n\n\ndef sample_patch_transformed(im, pos, scale, image_sz, transforms):\n    \"\"\"Extract transformed image samples.\n    args:\n        im: Image.\n        pos: Center position for extraction.\n        scale: Image scale to extract features from.\n        image_sz: Size to resize the image samples to before extraction.\n        transforms: A set of image transforms to apply.\n    \"\"\"\n\n    # Get image patche\n    im_patch, _ = sample_patch_online(im, pos, scale*image_sz, image_sz)\n\n    # Apply transforms\n    im_patches = torch.cat([T(im_patch) for T in transforms])\n\n    return im_patches\n\n\ndef sample_patch_multiscale(im, pos, scales, image_sz, mode: str='replicate'):\n    \"\"\"Extract image patches at multiple scales.\n    args:\n        im: Image.\n        pos: Center position for extraction.\n        scales: Image scales to extract image patches from.\n        image_sz: Size to resize the image samples to\n        mode: how to treat image borders: 'replicate' (default) or 'inside'\n    \"\"\"\n    if isinstance(scales, (int, float)):\n        scales = [scales]\n\n    # Get image patches\n    patch_iter, coord_iter = zip(*(sample_patch_online(im, pos, s*image_sz, image_sz, mode=mode) for s in scales))\n    im_patches = torch.cat(list(patch_iter))\n    patch_coords = torch.cat(list(coord_iter))\n\n    return  im_patches, patch_coords\n\ndef sample_patch(im: torch.Tensor, pos: torch.Tensor, sample_sz: torch.Tensor, output_sz: torch.Tensor = None):\n    \"\"\"Sample an image patch.\n\n    args:\n        im: Image\n        pos: center position of crop\n        sample_sz: size to crop\n        output_sz: size to resize to\n    \"\"\"\n\n    # copy and convert\n    posl = 
pos.long().clone()\n\n    # Compute pre-downsampling factor\n    if output_sz is not None:\n        resize_factor = torch.min(sample_sz.float() / output_sz.float()).item()\n        df = int(max(int(resize_factor - 0.1), 1))\n    else:\n        df = int(1)\n\n    sz = sample_sz.float() / df     # new size\n\n    # Do downsampling\n    if df > 1:\n        os = posl % df              # offset\n        posl = (posl - os) / df     # new position\n        im2 = im[..., os[0].item()::df, os[1].item()::df]   # downsample\n    else:\n        im2 = im\n\n    # compute size to crop\n    szl = torch.max(sz.round(), torch.Tensor([2])).long()\n\n    # Extract top and bottom coordinates\n    tl = posl - (szl - 1)/2\n    br = posl + szl/2\n\n    # Get image patch\n    im_patch = F.pad(im2, (-tl[1].item(), br[1].item() - im2.shape[3] + 1, -tl[0].item(), br[0].item() - im2.shape[2] + 1), 'replicate')\n\n    if output_sz is None or (im_patch.shape[-2] == output_sz[0] and im_patch.shape[-1] == output_sz[1]):\n        return im_patch\n\n    # Resample\n    im_patch = F.interpolate(im_patch, output_sz.long().tolist(), mode='bilinear')\n\n    return im_patch\n\ndef sample_patch_online(im: torch.Tensor, pos: torch.Tensor, sample_sz: torch.Tensor, output_sz: torch.Tensor = None,\n                 mode: str = 'replicate'):\n    \"\"\"Sample an image patch.\n\n    args:\n        im: Image\n        pos: center position of crop\n        sample_sz: size to crop\n        output_sz: size to resize to\n        mode: how to treat image borders: 'replicate' (default) or 'inside'\n    \"\"\"\n\n    if mode not in ['replicate', 'inside']:\n        raise ValueError('Unknown border mode \\'{}\\'.'.format(mode))\n\n    # copy and convert\n    posl = pos.long().clone()\n\n    # Get new sample size if forced inside the image\n    if mode == 'inside':\n        im_sz = torch.Tensor([im.shape[2], im.shape[3]])\n        shrink_factor = (sample_sz.float() / im_sz).max().clamp(1)\n        sample_sz = 
(sample_sz.float() / shrink_factor).long()\n\n    # Compute pre-downsampling factor\n    if output_sz is not None:\n        resize_factor = torch.min(sample_sz.float() / output_sz.float()).item()\n        df = int(max(int(resize_factor - 0.1), 1))\n    else:\n        df = int(1)\n\n    sz = sample_sz.float() / df     # new size\n\n    # Do downsampling\n    if df > 1:\n        os = posl % df              # offset\n        posl = (posl - os) / df     # new position\n        im2 = im[..., os[0].item()::df, os[1].item()::df]   # downsample\n    else:\n        im2 = im\n\n    # compute size to crop\n    szl = torch.max(sz.round(), torch.Tensor([2])).long()\n\n    # Extract top and bottom coordinates\n    tl = posl - (szl - 1)/2\n    br = posl + szl/2 + 1\n\n    # Shift the crop to inside\n    if mode == 'inside':\n        im2_sz = torch.LongTensor([im2.shape[2], im2.shape[3]])\n        shift = (-tl).clamp(0) - (br - im2_sz).clamp(0)\n        tl += shift\n        br += shift\n\n        # Get image patch\n        im_patch = im2[...,tl[0].item():br[0].item(),tl[1].item():br[1].item()]\n    else:\n        # Get image patch\n        im_patch = F.pad(im2, (-tl[1].item(), br[1].item() - im2.shape[3], -tl[0].item(), br[0].item() - im2.shape[2]), mode)\n\n    # Get image coordinates\n    patch_coord = df * torch.cat((tl, br)).view(1,4)\n\n    if output_sz is None or (im_patch.shape[-2] == output_sz[0] and im_patch.shape[-1] == output_sz[1]):\n        return im_patch.clone(), patch_coord\n\n    # Resample\n    im_patch = F.interpolate(im_patch, output_sz.long().tolist(), mode='bilinear')\n\n    return im_patch, patch_coord"
  },
  {
    "path": "tracker/sot/lib/online/tensordict.py",
    "content": "from collections import OrderedDict\nimport torch\n\n\nclass TensorDict(OrderedDict):\n    \"\"\"Container mainly used for dicts of torch tensors. Extends OrderedDict with pytorch functionality.\"\"\"\n\n    def concat(self, other):\n        \"\"\"Concatenates two dicts without copying internal data.\"\"\"\n        return TensorDict(self, **other)\n\n    def copy(self):\n        return TensorDict(super(TensorDict, self).copy())\n\n    def __getattr__(self, name):\n        if not hasattr(torch.Tensor, name):\n            raise AttributeError('\\'TensorDict\\' object has not attribute \\'{}\\''.format(name))\n\n        def apply_attr(*args, **kwargs):\n            return TensorDict({n: getattr(e, name)(*args, **kwargs) if hasattr(e, name) else e for n, e in self.items()})\n        return apply_attr\n\n    def attribute(self, attr: str, *args):\n        return TensorDict({n: getattr(e, attr, *args) for n, e in self.items()})\n\n    def apply(self, fn, *args, **kwargs):\n        return TensorDict({n: fn(e, *args, **kwargs) for n, e in self.items()})\n\n    @staticmethod\n    def _iterable(a):\n        return isinstance(a, (TensorDict, list))\n\n"
  },
  {
    "path": "tracker/sot/lib/online/tensorlist.py",
    "content": "import functools\nimport torch\n\n\nclass TensorList(list):\n    \"\"\"Container mainly used for lists of torch tensors. Extends lists with pytorch functionality.\"\"\"\n\n    def __init__(self, list_of_tensors = list()):\n        super(TensorList, self).__init__(list_of_tensors)\n\n    def __getitem__(self, item):\n        if isinstance(item, int):\n            return super(TensorList, self).__getitem__(item)\n        elif isinstance(item, (tuple, list)):\n            return TensorList([super(TensorList, self).__getitem__(i) for i in item])\n        else:\n            return TensorList(super(TensorList, self).__getitem__(item))\n\n    def __add__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 + e2 for e1, e2 in zip(self, other)])\n        return TensorList([e + other for e in self])\n\n    def __radd__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 + e1 for e1, e2 in zip(self, other)])\n        return TensorList([other + e for e in self])\n\n    def __iadd__(self, other):\n        if TensorList._iterable(other):\n            for i, e2 in enumerate(other):\n                self[i] += e2\n        else:\n            for i in range(len(self)):\n                self[i] += other\n        return self\n\n    def __sub__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 - e2 for e1, e2 in zip(self, other)])\n        return TensorList([e - other for e in self])\n\n    def __rsub__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 - e1 for e1, e2 in zip(self, other)])\n        return TensorList([other - e for e in self])\n\n    def __isub__(self, other):\n        if TensorList._iterable(other):\n            for i, e2 in enumerate(other):\n                self[i] -= e2\n        else:\n            for i in range(len(self)):\n                self[i] -= other\n        return self\n\n    def 
__mul__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 * e2 for e1, e2 in zip(self, other)])\n        return TensorList([e * other for e in self])\n\n    def __rmul__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 * e1 for e1, e2 in zip(self, other)])\n        return TensorList([other * e for e in self])\n\n    def __imul__(self, other):\n        if TensorList._iterable(other):\n            for i, e2 in enumerate(other):\n                self[i] *= e2\n        else:\n            for i in range(len(self)):\n                self[i] *= other\n        return self\n\n    def __truediv__(self, other):\n        if TensorList._iterable(other):\n            try:  # zzp\n                return TensorList([e1 / e2 for e1, e2 in zip(self, other)])\n            except:\n                return TensorList([e1 / eval(e2) for e1, e2 in zip(self, other)])\n        return TensorList([e / other for e in self])\n\n    def __rtruediv__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 / e1 for e1, e2 in zip(self, other)])\n        return TensorList([other / e for e in self])\n\n    def __itruediv__(self, other):\n        if TensorList._iterable(other):\n            for i, e2 in enumerate(other):\n                self[i] /= e2\n        else:\n            for i in range(len(self)):\n                self[i] /= other\n        return self\n\n    def __matmul__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 @ e2 for e1, e2 in zip(self, other)])\n        return TensorList([e @ other for e in self])\n\n    def __rmatmul__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 @ e1 for e1, e2 in zip(self, other)])\n        return TensorList([other @ e for e in self])\n\n    def __imatmul__(self, other):\n        if TensorList._iterable(other):\n            for i, e2 in enumerate(other):\n        
        self[i] @= e2\n        else:\n            for i in range(len(self)):\n                self[i] @= other\n        return self\n\n    def __mod__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 % e2 for e1, e2 in zip(self, other)])\n        return TensorList([e % other for e in self])\n\n    def __rmod__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e2 % e1 for e1, e2 in zip(self, other)])\n        return TensorList([other % e for e in self])\n\n    def __pos__(self):\n        return TensorList([+e for e in self])\n\n    def __neg__(self):\n        return TensorList([-e for e in self])\n\n    def __le__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 <= e2 for e1, e2 in zip(self, other)])\n        return TensorList([e <= other for e in self])\n\n    def __ge__(self, other):\n        if TensorList._iterable(other):\n            return TensorList([e1 >= e2 for e1, e2 in zip(self, other)])\n        return TensorList([e >= other for e in self])\n\n    def concat(self, other):\n        return TensorList(super(TensorList, self).__add__(other))\n\n    def copy(self):\n        return TensorList(super(TensorList, self).copy())\n\n    def unroll(self):\n        if not any(isinstance(t, TensorList) for t in self):\n            return self\n\n        new_list = TensorList()\n        for t in self:\n            if isinstance(t, TensorList):\n                new_list.extend(t.unroll())\n            else:\n                new_list.append(t)\n        return new_list\n\n    def attribute(self, attr: str, *args):\n        return TensorList([getattr(e, attr, *args) for e in self])\n\n    def apply(self, fn):\n        try:\n            return TensorList([fn(e) for e in self])\n        except:\n            return TensorList([fn(eval(e)) for e in self])\n\n    def __getattr__(self, name):\n        if not hasattr(torch.Tensor, name):\n            raise 
AttributeError('\\'TensorList\\' object has no attribute \\'{}\\''.format(name))\n\n        def apply_attr(*args, **kwargs):\n            return TensorList([getattr(e, name)(*args, **kwargs) for e in self])\n\n        return apply_attr\n\n    @staticmethod\n    def _iterable(a):\n        return isinstance(a, (TensorList, list))\n\n\n\ndef tensor_operation(op):\n    def islist(a):\n        return isinstance(a, TensorList)\n\n    @functools.wraps(op)\n    def oplist(*args, **kwargs):\n        if len(args) == 0:\n            raise ValueError('There must be at least one positional argument (i.e. an operand).')\n\n        if len(args) == 1:\n            if islist(args[0]):\n                return TensorList([op(a, **kwargs) for a in args[0]])\n        else:\n            # Multiple operands; assume at most two are TensorLists\n            if islist(args[0]) and islist(args[1]):\n                return TensorList([op(a, b, *args[2:], **kwargs) for a, b in zip(*args[:2])])\n            if islist(args[0]):\n                return TensorList([op(a, *args[1:], **kwargs) for a in args[0]])\n            if islist(args[1]):\n                return TensorList([op(args[0], b, *args[2:], **kwargs) for b in args[1]])\n\n        # None of the operands are lists\n        return op(*args, **kwargs)\n\n    return oplist\n"
  },
  {
    "path": "tracker/sot/lib/online/tracking.py",
    "content": "from .base_actor import BaseActor\n\n\nclass ONLINEActor(BaseActor):\n    \"\"\"Actor for training the ONLINE network.\"\"\"\n    def __init__(self, net, objective, loss_weight=None):\n        super().__init__(net, objective)\n        if loss_weight is None:\n            loss_weight = {'iou': 1.0, 'test_clf': 1.0}\n        self.loss_weight = loss_weight\n\n    def __call__(self, data):\n        \"\"\"\n        args:\n            data - The input data, should contain the fields 'train_images', 'test_images', 'train_anno',\n                    'test_proposals', 'proposal_iou' and 'test_label'.\n\n        returns:\n            loss    - the training loss\n            stats  -  dict containing detailed losses\n        \"\"\"\n        # Run network\n        target_scores = self.net(train_imgs=data['train_images'],\n                                           test_imgs=data['test_images'],\n                                           train_bb=data['train_anno'],\n                                           test_proposals=data['test_proposals'])\n\n        # Classification losses for the different optimization iterations\n        clf_losses_test = [self.objective['test_clf'](s, data['test_label'], data['test_anno']) for s in target_scores]\n\n        # Loss of the final filter\n        clf_loss_test = clf_losses_test[-1]\n        loss_target_classifier = self.loss_weight['test_clf'] * clf_loss_test\n\n        # Loss for the initial filter iteration\n        loss_test_init_clf = 0\n        if 'test_init_clf' in self.loss_weight.keys():\n            loss_test_init_clf = self.loss_weight['test_init_clf'] * clf_losses_test[0]\n\n        # Loss for the intermediate filter iterations\n        loss_test_iter_clf = 0\n        if 'test_iter_clf' in self.loss_weight.keys():\n            test_iter_weights = self.loss_weight['test_iter_clf']\n            if isinstance(test_iter_weights, list):\n                loss_test_iter_clf = sum([a*b for a, b in 
zip(test_iter_weights, clf_losses_test[1:-1])])\n            else:\n                loss_test_iter_clf = (test_iter_weights / (len(clf_losses_test) - 2)) * sum(clf_losses_test[1:-1])\n\n        # Total loss\n        # loss = loss_iou + loss_target_classifier + loss_test_init_clf + loss_test_iter_clf\n        loss = loss_target_classifier + loss_test_init_clf + loss_test_iter_clf\n\n        # Log stats\n        stats = {'Loss/total': loss.item(),\n                 # 'Loss/iou': loss_iou.item(),\n                 'Loss/iou': 0,  # IoU loss is currently disabled; log 0 as a placeholder\n                 'Loss/target_clf': loss_target_classifier.item()}\n        if 'test_init_clf' in self.loss_weight.keys():\n            stats['Loss/test_init_clf'] = loss_test_init_clf.item()\n        if 'test_iter_clf' in self.loss_weight.keys():\n            stats['Loss/test_iter_clf'] = loss_test_iter_clf.item()\n        stats['ClfTrain/test_loss'] = clf_loss_test.item()\n        if len(clf_losses_test) > 0:\n            stats['ClfTrain/test_init_loss'] = clf_losses_test[0].item()\n            if len(clf_losses_test) > 2:\n                stats['ClfTrain/test_iter_loss'] = sum(clf_losses_test[1:-1]).item() / (len(clf_losses_test) - 2)\n\n        return loss, stats\n"
  },
  {
    "path": "tracker/sot/lib/tracker/ocean.py",
    "content": "import os\nimport cv2\nimport yaml\nimport numpy as np\n\nimport torch\nimport torch.nn.functional as F\nfrom utils.utils import load_yaml, im_to_torch, get_subwindow_tracking, make_scale_pyramid, python2round\n\n\nclass Ocean(object):\n    def __init__(self, info):\n        super(Ocean, self).__init__()\n        self.info = info   # model and benchmark info\n        self.stride = 8\n        self.align = info.align\n        self.online = info.online\n        self.trt = info.TRT\n\n    def init(self, im, target_pos, target_sz, model, hp=None):\n        # in: whether input infrared image\n        state = dict()\n        # epoch test\n        p = OceanConfig()\n\n        state['im_h'] = im.shape[0]\n        state['im_w'] = im.shape[1]\n\n        # single test\n        if not hp and not self.info.epoch_test:\n            prefix = [x for x in ['OTB', 'VOT'] if x in self.info.dataset]\n            if len(prefix) == 0: prefix = [self.info.dataset]\n            absPath = os.path.abspath(os.path.dirname(__file__))\n            yname = 'Ocean.yaml'\n            yamlPath = os.path.join(absPath, '../../experiments/test/{0}/'.format(prefix[0]), yname)\n            cfg = load_yaml(yamlPath)\n            if self.online:\n                temp = self.info.dataset + 'ON'\n                cfg_benchmark = cfg[temp]\n            else:\n                cfg_benchmark = cfg[self.info.dataset]\n            p.update(cfg_benchmark)\n            p.renew()\n\n            if ((target_sz[0] * target_sz[1]) / float(state['im_h'] * state['im_w'])) < 0.004:\n                p.instance_size = cfg_benchmark['big_sz']\n                p.renew()\n            else:\n                p.instance_size = cfg_benchmark['small_sz']\n                p.renew()\n\n        # double check\n        # print('======= hyper-parameters: penalty_k: {}, wi: {}, lr: {}, ratio: {}, instance_sz: {}, score_sz: {} ======='.format(p.penalty_k, p.window_influence, p.lr, p.ratio, p.instance_size, 
p.score_size))\n\n        # param tune\n        if hp:\n            p.update(hp)\n            p.renew()\n\n            # for small object (from DaSiamRPN released)\n            if ((target_sz[0] * target_sz[1]) / float(state['im_h'] * state['im_w'])) < 0.004:\n                p.instance_size = hp['big_sz']\n                p.renew()\n            else:\n                p.instance_size = hp['small_sz']\n                p.renew()\n\n        if self.trt:\n            print('====> TRT version testing: only supports 255 input; hyper-params are not tuned <====')\n            p.instance_size = 255\n            p.renew()\n\n        self.grids(p)   # self.grid_to_search_x, self.grid_to_search_y\n\n        net = model\n\n        wc_z = target_sz[0] + p.context_amount * sum(target_sz)\n        hc_z = target_sz[1] + p.context_amount * sum(target_sz)\n        s_z = round(np.sqrt(wc_z * hc_z))\n\n        avg_chans = np.mean(im, axis=(0, 1))\n        z_crop, _ = get_subwindow_tracking(im, target_pos, p.exemplar_size, s_z, avg_chans)\n\n        z = z_crop.unsqueeze(0)\n        net.template(z.cuda())\n\n        if p.windowing == 'cosine':\n            window = np.outer(np.hanning(p.score_size), np.hanning(p.score_size))  # [17,17]\n        elif p.windowing == 'uniform':\n            window = np.ones((int(p.score_size), int(p.score_size)))\n\n        state['p'] = p\n        state['net'] = net\n        state['avg_chans'] = avg_chans\n        state['window'] = window\n        state['target_pos'] = target_pos\n        state['target_sz'] = target_sz\n\n        return state\n\n    def update(self, net, x_crops, target_pos, target_sz, window, scale_z, p):\n\n        if self.align:\n            cls_score, bbox_pred, cls_align = net.track(x_crops)\n\n            cls_score = F.sigmoid(cls_score).squeeze().cpu().data.numpy()\n            cls_align = F.sigmoid(cls_align).squeeze().cpu().data.numpy()\n            cls_score = p.ratio * cls_score + (1 - p.ratio) * cls_align\n\n        else:\n            cls_score, bbox_pred = net.track(x_crops)\n            cls_score = F.sigmoid(cls_score).squeeze().cpu().data.numpy()\n\n        # bbox to real prediction\n        bbox_pred = bbox_pred.squeeze().cpu().data.numpy()\n\n        pred_x1 = self.grid_to_search_x - bbox_pred[0, ...]\n        pred_y1 = self.grid_to_search_y - bbox_pred[1, ...]\n        pred_x2 = self.grid_to_search_x + bbox_pred[2, ...]\n        pred_y2 = self.grid_to_search_y + bbox_pred[3, ...]\n\n        # size penalty\n        s_c = self.change(self.sz(pred_x2-pred_x1, pred_y2-pred_y1) / (self.sz_wh(target_sz)))  # scale penalty\n        r_c = self.change((target_sz[0] / target_sz[1]) / ((pred_x2-pred_x1) / (pred_y2-pred_y1)))  # ratio penalty\n\n        penalty = np.exp(-(r_c * s_c - 1) * p.penalty_k)\n        pscore = penalty * cls_score\n\n        # window penalty\n        pscore = pscore * (1 - p.window_influence) + window * p.window_influence\n\n        if self.online_score is not None:\n            s_size = pscore.shape[0]\n            o_score = cv2.resize(self.online_score, (s_size, s_size), interpolation=cv2.INTER_CUBIC)\n            pscore = p.online_ratio * o_score + (1 - p.online_ratio) * pscore\n\n        # get max\n        r_max, c_max = np.unravel_index(pscore.argmax(), pscore.shape)\n\n        # to real size\n        pred_x1 = pred_x1[r_max, c_max]\n        pred_y1 = pred_y1[r_max, c_max]\n        pred_x2 = pred_x2[r_max, c_max]\n        pred_y2 = pred_y2[r_max, c_max]\n\n        pred_xs = (pred_x1 + pred_x2) / 2\n        pred_ys = (pred_y1 + pred_y2) / 2\n        pred_w = pred_x2 - pred_x1\n        pred_h = pred_y2 - pred_y1\n\n        diff_xs = pred_xs - p.instance_size // 2\n        diff_ys = pred_ys - p.instance_size // 2\n\n        diff_xs, diff_ys, pred_w, pred_h = diff_xs / scale_z, diff_ys / scale_z, pred_w / scale_z, pred_h / scale_z\n\n        target_sz = target_sz / scale_z\n\n        # size learning rate\n        lr = penalty[r_max, c_max] * 
cls_score[r_max, c_max] * p.lr\n\n        # size rate\n        res_xs = target_pos[0] + diff_xs\n        res_ys = target_pos[1] + diff_ys\n        res_w = pred_w * lr + (1 - lr) * target_sz[0]\n        res_h = pred_h * lr + (1 - lr) * target_sz[1]\n\n        target_pos = np.array([res_xs, res_ys])\n        target_sz = target_sz * (1 - lr) + lr * np.array([res_w, res_h])\n\n        return target_pos, target_sz, cls_score[r_max, c_max]\n\n    def track(self, state, im, online_score=None, gt=None):\n        p = state['p']\n        net = state['net']\n        avg_chans = state['avg_chans']\n        window = state['window']\n        target_pos = state['target_pos']\n        target_sz = state['target_sz']\n\n        if online_score is not None:\n            self.online_score = online_score.squeeze().cpu().data.numpy()\n        else:\n            self.online_score = None\n\n        hc_z = target_sz[1] + p.context_amount * sum(target_sz)\n        wc_z = target_sz[0] + p.context_amount * sum(target_sz)\n        s_z = np.sqrt(wc_z * hc_z)\n        scale_z = p.exemplar_size / s_z\n        d_search = (p.instance_size - p.exemplar_size) / 2  # slightly different from rpn++\n        pad = d_search / scale_z\n        s_x = s_z + 2 * pad\n\n        x_crop, _ = get_subwindow_tracking(im, target_pos, p.instance_size, python2round(s_x), avg_chans)\n        x_crop = x_crop.unsqueeze(0)\n\n        target_pos, target_sz, _ = self.update(net, x_crop.cuda(), target_pos, target_sz*scale_z, window, scale_z, p)\n\n        target_pos[0] = max(0, min(state['im_w'], target_pos[0]))\n        target_pos[1] = max(0, min(state['im_h'], target_pos[1]))\n        target_sz[0] = max(10, min(state['im_w'], target_sz[0]))\n        target_sz[1] = max(10, min(state['im_h'], target_sz[1]))\n        state['target_pos'] = target_pos\n        state['target_sz'] = target_sz\n        state['p'] = p\n\n        return state\n\n    def grids(self, p):\n        \"\"\"\n        each element of feature map on input 
search image\n        :return: H*W*2 (position for each element)\n        \"\"\"\n        sz = p.score_size\n\n        # the real shift is -param['shifts']\n        sz_x = sz // 2\n        sz_y = sz // 2\n\n        x, y = np.meshgrid(np.arange(0, sz) - np.floor(float(sz_x)),\n                           np.arange(0, sz) - np.floor(float(sz_y)))\n\n        self.grid_to_search_x = x * p.total_stride + p.instance_size // 2\n        self.grid_to_search_y = y * p.total_stride + p.instance_size // 2\n\n\n    def IOUgroup(self, pred_x1, pred_y1, pred_x2, pred_y2, gt_xyxy):\n        # overlap\n\n        x1, y1, x2, y2 = gt_xyxy\n\n        xx1 = np.maximum(pred_x1, x1)  # 17*17\n        yy1 = np.maximum(pred_y1, y1)\n        xx2 = np.minimum(pred_x2, x2)\n        yy2 = np.minimum(pred_y2, y2)\n\n        ww = np.maximum(0, xx2 - xx1)\n        hh = np.maximum(0, yy2 - yy1)\n\n        area = (x2 - x1) * (y2 - y1)\n\n        target_a = (pred_x2 - pred_x1) * (pred_y2 - pred_y1)\n\n        inter = ww * hh\n        overlap = inter / (area + target_a - inter)\n\n        return overlap\n\n    def change(self, r):\n        return np.maximum(r, 1. 
/ r)\n\n    def sz(self, w, h):\n        pad = (w + h) * 0.5\n        sz2 = (w + pad) * (h + pad)\n        return np.sqrt(sz2)\n\n    def sz_wh(self, wh):\n        pad = (wh[0] + wh[1]) * 0.5\n        sz2 = (wh[0] + pad) * (wh[1] + pad)\n        return np.sqrt(sz2)\n\n\nclass OceanConfig(object):\n    penalty_k = 0.062\n    window_influence = 0.38\n    lr = 0.765\n    windowing = 'cosine'\n    exemplar_size = 127\n    instance_size = 255\n    total_stride = 8\n    score_size = (instance_size - exemplar_size) // total_stride + 1 + 8  # for ++\n    context_amount = 0.5\n    ratio = 0.94\n\n\n    def update(self, newparam=None):\n        if newparam:\n            for key, value in newparam.items():\n                setattr(self, key, value)\n            self.renew()\n\n    def renew(self):\n        self.score_size = (self.instance_size - self.exemplar_size) // self.total_stride + 1 + 8 # for ++\n"
  },
  {
    "path": "tracker/sot/lib/tracker/oceanplus.py",
    "content": "import os\nimport cv2\nimport yaml\nimport numpy as np\n\nimport torch\nimport torch.nn.functional as F\nfrom os.path import join, exists\nfrom utils.utils import load_yaml, im_to_torch, get_subwindow_tracking, make_scale_pyramid, python2round, get_subwindow_tracking_mask\n\n\nclass OceanPlus(object):\n    def __init__(self, info):\n        super(OceanPlus, self).__init__()\n        self.info = info   # model and benchmark info\n        self.stride = 8\n\n        if info.dataset in ['DAVIS2016', 'DAVIS2017', 'YTBVOS']:\n            self.vos = True\n        else:\n            self.vos = False\n\n    def init(self, im, target_pos, target_sz, model, hp=None, online=False, mask=None, debug=False):\n        # in: whether input infrared image\n        state = dict()\n        # epoch test\n        p = AdaConfig()\n\n        self.debug = debug\n\n        state['im_h'] = im.shape[0]\n        state['im_w'] = im.shape[1]\n        self.imh = state['im_h']\n        self.imw = state['im_w']\n\n        # single test\n        # if not hp and not self.info.epoch_test:\n        if True:\n            prefix = [x for x in ['OTB', 'VOT', 'DAVIS'] if x in self.info.dataset]\n            if len(prefix) == 0: prefix = [self.info.dataset]\n            absPath = os.path.abspath(os.path.dirname(__file__))\n            yname='OceanPlus.yaml'\n            yamlPath = os.path.join(absPath, '../../experiments/test/{}/'.format(prefix[0]), yname)\n            cfg = load_yaml(yamlPath)\n         \n            if self.info.dataset not in list(cfg.keys()):\n                print('[*] unsupported benchmark, use VOT2020 hyper-parameters (not optimal)')\n                cfg_benchmark = cfg['VOT2020']\n            else:\n                cfg_benchmark = cfg[self.info.dataset]\n      \n            p.update(cfg_benchmark)\n            p.renew()\n\n            if ((target_sz[0] * target_sz[1]) / float(state['im_h'] * state['im_w'])) < 0.004:\n                p.instance_size = 
cfg_benchmark['big_sz']\n                p.renew()\n            else:\n                p.instance_size = cfg_benchmark['small_sz']\n                p.renew()\n\n        self.grids(p)   # self.grid_to_search_x, self.grid_to_search_y\n\n        net = model\n        # param tune\n        if hp:\n            p.update(hp)\n            if 'lambda_u' in hp.keys() or 'lambda_s' in hp.keys():\n                net.update_lambda(hp['lambda_u'], hp['lambda_s'])\n            if 'iter1' in hp.keys() or 'iter2' in hp.keys():\n                net.update_iter(hp['iter1'], hp['iter2'])\n\n            print('======= hyper-parameters: pk: {:.3f}, wi: {:.2f}, lr: {:.2f} ======='.format(p.penalty_k, p.window_influence, p.lr))\n        wc_z = target_sz[0] + p.context_amount * sum(target_sz)\n        hc_z = target_sz[1] + p.context_amount * sum(target_sz)\n        s_z = round(np.sqrt(wc_z * hc_z))\n\n        avg_chans = np.mean(im, axis=(0, 1))\n        z_crop, _ = get_subwindow_tracking(im, target_pos, p.exemplar_size, s_z, avg_chans)\n        mask_crop, _ = get_subwindow_tracking_mask(mask, target_pos, p.exemplar_size, s_z, out_mode=None)\n        mask_crop = (mask_crop > 0.5).astype(np.uint8)\n        mask_crop = torch.from_numpy(mask_crop)\n\n        # vis zcrop\n        # vis = 0.5 * z_crop.permute(1,2,0) + 255 *  mask_crop.unsqueeze(-1).float()\n        # cv2.imwrite('zcrop.jpg', vis.numpy())\n\n        z = z_crop.unsqueeze(0)\n        net.template(z.cuda(), mask_crop.unsqueeze(0).cuda())\n\n\n        if p.windowing == 'cosine':\n            window = np.outer(np.hanning(p.score_size), np.hanning(p.score_size))  # [17,17]\n        elif p.windowing == 'uniform':\n            window = np.ones((int(p.score_size), int(p.score_size)))  # shape must be a tuple, not (shape, dtype)\n\n        state['p'] = p\n        state['net'] = net\n        state['avg_chans'] = avg_chans\n        state['window'] = window\n        state['target_pos'] = target_pos\n        state['target_sz'] = target_sz\n\n        self.p = p\n        self.debug_on_crop 
= False\n        self.debug_on_ori = False\n        self.save_mask = False  # save all mask results\n        self.mask_ratio = False\n\n        self.update_template = True\n\n        if self.debug_on_ori or self.debug_on_crop:\n            print('Warning: debuging...')\n            print('Warning: turning off debugging mode after this process')\n            self.debug = True\n\n        return state\n\n    def update(self, net, x_crops, target_pos, target_sz, window, scale_z, p):\n\n        cls_score, bbox_pred, mask = net.track(x_crops)\n        cls_score = F.sigmoid(cls_score).squeeze().cpu().data.numpy()\n\n        # bbox to real predict\n        bbox_pred = bbox_pred.squeeze().cpu().data.numpy()\n\n        pred_x1 = self.grid_to_search_x - bbox_pred[0, ...]\n        pred_y1 = self.grid_to_search_y - bbox_pred[1, ...]\n        pred_x2 = self.grid_to_search_x + bbox_pred[2, ...]\n        pred_y2 = self.grid_to_search_y + bbox_pred[3, ...]\n\n        # size penalty\n        s_c = self.change(self.sz(pred_x2-pred_x1, pred_y2-pred_y1) / (self.sz_wh(target_sz)))  # scale penalty\n        r_c = self.change((target_sz[0] / target_sz[1]) / ((pred_x2-pred_x1) / (pred_y2-pred_y1)))  # ratio penalty\n\n        penalty = np.exp(-(r_c * s_c - 1) * p.penalty_k)\n        pscore = penalty * cls_score\n\n        # window penalty\n        if self.online_score is not None:\n            pscore_ori = pscore * (1 - p.window_influence) + window * p.window_influence\n        else:\n            pscore = pscore * (1 - p.window_influence) + window * p.window_influence\n            pscore_ori = pscore\n\n        if self.online_score is not None:\n            s_size = pscore.shape[0]\n            o_score = cv2.resize(self.online_score, (s_size, s_size), interpolation=cv2.INTER_CUBIC)\n            pscore = p.online_ratio * o_score + (1 - p.online_ratio) * pscore_ori\n        else:\n            pass\n\n        # get max\n        r_max, c_max = np.unravel_index(pscore.argmax(), 
pscore.shape)\n\n        # to real size\n        pred_x1 = pred_x1[r_max, c_max]\n        pred_y1 = pred_y1[r_max, c_max]\n        pred_x2 = pred_x2[r_max, c_max]\n        pred_y2 = pred_y2[r_max, c_max]\n\n        pred_xs = (pred_x1 + pred_x2) / 2\n        pred_ys = (pred_y1 + pred_y2) / 2\n        pred_w = pred_x2 - pred_x1\n        pred_h = pred_y2 - pred_y1\n\n        diff_xs = pred_xs - p.instance_size // 2\n        diff_ys = pred_ys - p.instance_size // 2\n\n        diff_xs, diff_ys, pred_w, pred_h = diff_xs / scale_z, diff_ys / scale_z, pred_w / scale_z, pred_h / scale_z\n\n        target_sz = target_sz / scale_z\n\n        # size learning rate\n        lr = penalty[r_max, c_max] * cls_score[r_max, c_max] * p.lr\n\n        if pscore_ori[r_max, c_max] > 0.95 and self.update_template:  # donot update for vos dataset\n            pos_in_crop = np.array([diff_xs, diff_ys]) * scale_z   \n            sz_in_crop = target_sz * scale_z\n            net.update_roi_template(pos_in_crop, sz_in_crop, pscore[r_max, c_max])  # update template\n\n        # size rate\n        res_xs = target_pos[0] + diff_xs\n        res_ys = target_pos[1] + diff_ys\n        res_w = pred_w * lr + (1 - lr) * target_sz[0]\n        res_h = pred_h * lr + (1 - lr) * target_sz[1]\n\n        target_pos = np.array([res_xs, res_ys])\n        target_sz = target_sz * (1 - lr) + lr * np.array([res_w, res_h])\n\n        if self.debug:\n            bbox = [int(target_pos[0]-target_sz[0]/2), int(target_pos[1]-target_sz[1]/2), int(target_pos[0]+target_sz[0]/2), int(target_pos[1]+target_sz[1]/2)]\n\n        # -----------------------  mask --------------------\n        mask = mask.squeeze()\n        mask = F.softmax(mask, dim=0)[1]\n        mask = mask.squeeze().cpu().data.numpy()  # [255, 255]\n        # print('---- in track0')\n        if self.debug_on_crop:\n            print('===========> debug on crop image <==========')\n            # draw on crop image\n            polygon = self.mask2box(mask, 
method='cv2poly')\n            im = x_crops.squeeze().permute(1, 2, 0).cpu().data.numpy()\n            output = self.draw_mask(mask, im, polygon=polygon, mask_ratio=0.8, draw_contour=False, object_num=1)\n            cv2.imwrite('mask.jpg', output)\n        else:\n            # print('===========> debug on original image <==========')\n            # width and height of original image patch in get_sub_window tracking\n            context_xmin, context_xmax, context_ymin, context_ymax = self.crop_info['crop_cords']\n            top_pad, left_pad, r, c = self.crop_info['pad_info']\n\n            temp_w = context_xmax - context_xmin + 1\n            temp_h = context_ymax - context_ymin + 1\n            mask_temp = cv2.resize(mask, (int(temp_h), int(temp_w)), interpolation=cv2.INTER_CUBIC)\n\n            # return mask to original image patch in get_sub_window tracking\n            empty_mask = self.crop_info['empty_mask']\n            empty_mask[int(context_ymin):int(context_ymax + 1), int(context_xmin):int(context_xmax + 1)] = mask_temp\n\n            # remove crop padding\n            mask_in_im = empty_mask[top_pad:top_pad + r, left_pad:left_pad + c]\n            \n            if self.debug_on_ori or self.debug:\n                polygon = self.mask2box(mask_in_im, method='cv2poly')\n                output = self.draw_mask(mask_in_im, self.im_ori, polygon=polygon, box=bbox, mask_ratio=0.8, draw_contour=False, object_num=1)\n                cv2.imwrite(join(self.save_dir, self.name.split('/')[-1]), output)\n            else:\n                polygon = None\n\n            # ------ test -------\n            results = dict()\n            results['target_pos'] = target_pos\n            results['target_sz'] = target_sz\n            results['cls_score'] = cls_score[r_max, c_max]\n            results['mask'] = (mask_in_im > self.p.seg_thr).astype(np.uint8)\n            results['mask_ori'] = mask_in_im\n            results['polygon'] = polygon\n\n        return results\n\n    
def track(self, state, im, online_score=None, gt=None, name=None):\n        p = state['p']\n        net = state['net']\n        avg_chans = state['avg_chans']\n        window = state['window']\n        target_pos = state['target_pos']\n        target_sz = state['target_sz']\n        self.im_ori = im.copy()\n        self.gt = gt\n\n        if online_score is not None:\n            self.online_score = online_score.squeeze().cpu().data.numpy()\n        else:\n            self.online_score = None\n\n        # debug\n        if self.debug:\n            temp = name.split('/')[-2]\n            self.name = name\n            self.save_dir = join('debug', temp)\n            if not exists(self.save_dir):\n                os.makedirs(self.save_dir)\n\n        hc_z = target_sz[1] + p.context_amount * sum(target_sz)\n        wc_z = target_sz[0] + p.context_amount * sum(target_sz)\n        s_z = np.sqrt(wc_z * hc_z)\n        scale_z = p.exemplar_size / s_z\n        d_search = (p.instance_size - p.exemplar_size) / 2  # slightly different from rpn++\n        pad = d_search / scale_z\n        s_x = s_z + 2 * pad\n        x_crop, self.crop_info = get_subwindow_tracking(im, target_pos, p.instance_size, python2round(s_x), avg_chans)\n        x_crop = x_crop.unsqueeze(0)\n\n        results = self.update(net, x_crop.cuda(), target_pos, target_sz*scale_z, window, scale_z, p)\n\n        target_pos, target_sz, cls_score, mask, mask_ori, polygon = results['target_pos'], results['target_sz'], results['cls_score'], results['mask'], results['mask_ori'], results['polygon']\n        target_pos[0] = max(0, min(state['im_w'], target_pos[0]))\n        target_pos[1] = max(0, min(state['im_h'], target_pos[1]))\n        target_sz[0] = max(10, min(state['im_w'], target_sz[0]))\n        target_sz[1] = max(10, min(state['im_h'], target_sz[1]))\n        state['target_pos'] = target_pos\n        state['target_sz'] = target_sz\n        state['cls_score'] = cls_score\n        state['mask'] = mask\n        
state['mask_ori'] = mask_ori\n        state['polygon'] = polygon\n        state['p'] = p\n\n        return state\n\n    def mask2box(self, mask, method='cv2poly'):\n        \"\"\"\n        method: cv2poly --> opencv\n                opt --> vot version\n        \"\"\"\n        mask = (mask > self.p.seg_thr).astype(np.uint8)\n        if method == 'cv2poly':\n            if cv2.__version__[-5] == '4':\n                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n            else:\n                _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n\n            cnt_area = [cv2.contourArea(cnt) for cnt in contours]\n            if len(contours) != 0 and np.max(cnt_area) > 0:\n                contour = contours[np.argmax(cnt_area)]  # use max area polygon\n                polygon = contour.reshape(-1, 2)\n                # pbox = cv2.boundingRect(polygon)  # Min Max Rectangle\n                # box_in_img = pbox\n                prbox = cv2.boxPoints(cv2.minAreaRect(polygon))  # Rotated Rectangle\n                pred_polygon = ((prbox[0][0], prbox[0][1]), (prbox[1][0], prbox[1][1]),\n                                (prbox[2][0], prbox[2][1]), (prbox[3][0], prbox[3][1]))\n\n                return pred_polygon\n            else:\n                return None\n\n        elif method == 'opt':\n            pass\n        else:\n            raise ValueError('not supported mask2box methods')\n\n    def draw_mask(self, mask, im, polygon=None, box=None, mask_ratio=0.2, draw_contour=False, object_num=1):\n        # draw mask\n        # mask: 0, 255\n        mask = mask > self.p.seg_thr\n        mask = mask.astype('uint8')\n        # COLOR\n        COLORS = np.random.randint(128, 255, size=(object_num, 3), dtype=\"uint8\")\n        COLORSIM = np.vstack([[0, 0, 0], COLORS]).astype(\"uint8\")\n        mask_draw = COLORSIM[mask]\n\n        # mask = mask * 255\n\n        where_is = (mask == 0).astype(int)\n        
where_is = np.expand_dims(where_is, axis=-1)\n        out_mask = where_is * im\n        output = ((1 - mask_ratio) * im + mask_ratio * mask_draw + mask_ratio * out_mask).astype(\"uint8\")\n        # output = ((1 - mask_ratio) * im + mask_ratio * mask).astype(\"uint8\")\n\n        if draw_contour:\n            mask = cv2.cvtColor(mask, cv2.COLOR_RGB2GRAY)\n            try:\n                contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\n\n                # remove small contours\n                areas = np.array([cv2.contourArea(c) for c in contours])\n                max_area = np.max(areas)\n                max_idx = np.argmax(areas)\n\n                minArea = max_area * 0.01\n\n                filteredContours = []\n                findhier = []\n                for id, i in enumerate(contours):\n                    area = cv2.contourArea(i)\n                    if area > minArea:\n                        filteredContours.append(i)\n                        findhier.append(hierarchy[:, id, :])\n\n                # findhier = np.array(findhier).transpose(1, 0, 2)\n                output = cv2.drawContours(output, filteredContours, -1, (255, 255, 255), 2, cv2.LINE_8)\n            except:\n                print('draw contour process fails...')\n        else:\n            pass\n\n        if polygon is not None:\n            # draw rotated box\n            polygon = np.int0(polygon)  # to int\n            output = cv2.polylines(output, [polygon.reshape((-1, 1, 2))], True, (0, 255, 255), 3)\n            # output = cv2.drawContours(output, [polygon], 0, (0, 0, 255), 3)\n\n        # draw gt\n        try:\n            gt = ((self.gt[0], self.gt[1]), (self.gt[2], self.gt[3]), (self.gt[4], self.gt[5]), (self.gt[6], self.gt[7]))\n            gt = np.int0(gt)  # to int\n            output = cv2.polylines(output, [gt.reshape((-1, 1, 2))], True, (0, 0, 255), 3)\n        except:\n            pass\n\n        if box is not None:\n            
output = cv2.rectangle(output, (box[0], box[1]), (box[2], box[3]), (0, 255, 0))\n\n        return output\n\n    def grids(self, p):\n        \"\"\"\n        each element of feature map on input search image\n        :return: H*W*2 (position for each element)\n        \"\"\"\n        sz = p.score_size\n\n        # the real shift is -param['shifts']\n        sz_x = sz // 2\n        sz_y = sz // 2\n\n        x, y = np.meshgrid(np.arange(0, sz) - np.floor(float(sz_x)),\n                           np.arange(0, sz) - np.floor(float(sz_y)))\n\n        self.grid_to_search_x = x * p.total_stride + p.instance_size // 2\n        self.grid_to_search_y = y * p.total_stride + p.instance_size // 2\n\n\n    def IOUgroup(self, pred_x1, pred_y1, pred_x2, pred_y2, gt_xyxy):\n        # overlap\n\n        x1, y1, x2, y2 = gt_xyxy\n\n        xx1 = np.maximum(pred_x1, x1)  # 17*17\n        yy1 = np.maximum(pred_y1, y1)\n        xx2 = np.minimum(pred_x2, x2)\n        yy2 = np.minimum(pred_y2, y2)\n\n        ww = np.maximum(0, xx2 - xx1)\n        hh = np.maximum(0, yy2 - yy1)\n\n        area = (x2 - x1) * (y2 - y1)\n\n        target_a = (pred_x2 - pred_x1) * (pred_y2 - pred_y1)\n\n        inter = ww * hh\n        overlap = inter / (area + target_a - inter)\n\n        return overlap\n\n    def change(self, r):\n        return np.maximum(r, 1. 
/ r)\n\n    def sz(self, w, h):\n        pad = (w + h) * 0.5\n        sz2 = (w + pad) * (h + pad)\n        return np.sqrt(sz2)\n\n    def sz_wh(self, wh):\n        pad = (wh[0] + wh[1]) * 0.5\n        sz2 = (wh[0] + pad) * (wh[1] + pad)\n        return np.sqrt(sz2)\n\n\nclass AdaConfig(object):\n    penalty_k = 0.06\n    window_influence = 0.484\n    lr = 0.644\n    windowing = 'cosine'\n    exemplar_size = 127\n    instance_size = 255\n    total_stride = 8\n    score_size = (instance_size - exemplar_size) // total_stride + 1 + 8  # for ++\n    context_amount = 0.5\n    ratio = 0.94\n    online_ratio = 0.7\n    #seg_thr = 0.84\n    seg_thr = 0.9\n    lambda_u = 0.1\n    lambda_s = 0.2\n    iter1 = 0.33\n    iter2 = 0.33\n\n\n    def update(self, newparam=None):\n        if newparam:\n            for key, value in newparam.items():\n                setattr(self, key, value)\n            self.renew()\n\n    def renew(self):\n        self.score_size = (self.instance_size - self.exemplar_size) // self.total_stride + 1 + 8 # for ++\n"
  },
  {
    "path": "tracker/sot/lib/tracker/online.py",
    "content": "import os\nimport cv2\nimport yaml\nimport math\nimport time\nimport random\n\nimport torch\nimport torch.nn.functional as F\nimport numpy as np\n\nfrom utils.utils import TrackerParams, FeatureParams, load_yaml, load_pretrain\nfrom online import dcf, TensorList, augmentation, operation, fourier\nfrom online.preprocessing import numpy_to_torch, sample_patch_multiscale, sample_patch_transformed\n\nimport models.models as models\n\nseed = 1234\ntorch.manual_seed(seed)\ntorch.cuda.manual_seed(seed)\ntorch.cuda.manual_seed_all(seed)  # if you are using multi-GPU.\nnp.random.seed(seed)  # Numpy module.\nrandom.seed(seed)  # Python random module.\ntorch.backends.cudnn.benchmark = False\ntorch.backends.cudnn.deterministic = True\n\n\nclass ONLINE(object):\n    \"\"\"\n    modified from the released DiMP tracker\n    \"\"\"\n    def __init__(self, info):\n        super(ONLINE, self).__init__()\n        self.info = info   # model and benchmark info\n        self.params = TrackerParams()\n        self.load_online = True\n\n\n    # def initialize(self, image, info: dict) -> dict:\n    def init(self, bgr_image, image, siam_net, target_pos, target_sz, load_online, dataname='VOT2019', resume=None):\n        # Initialize some stuff\n        self.frame_num = 1\n        self.load_online = load_online\n\n        # --------------------------------------------\n        # Init hyper-parameter\n        # print('====> init phase: load default parameters')\n        self.p = ONLINEConfig()\n        absPath = os.path.abspath(os.path.dirname(__file__))\n        yname='ONLINE.yaml' if 'VOT' in dataname else 'ONLINE_NV.yaml'\n        yamlPath = os.path.join(absPath, '../../experiments/test/VOT/', yname)\n        cfg = load_yaml(yamlPath, subset=False)\n        self.p.update(cfg)\n        self.params_convert()\n\n        # ---------------------------------------------\n        # Init feature extractor\n    
    # ---------------------------------------------\n        self.params.net = models.__dict__['ONLINEnet50'](backbone=None).cuda()\n\n        # Initialize network\n        self.features_initialized = True\n        # ads_path = os.path.dirname(os.path.abspath(__file__))\n        resume_path = os.path.join(absPath, '../../', resume)\n        self.params.net = load_pretrain(self.params.net, resume_path, print_unuse=False)\n        self.params.net.feature_extractor = siam_net\n        self.params.net.cuda()\n        self.params.net.eval()\n\n        # ---------------------------------------------\n        # ONLINE init phase\n        # ---------------------------------------------\n        if not hasattr(self.p, 'device'):\n            self.p.device = 'cuda' if self.p.use_gpu else 'cpu'\n\n        # The ONLINE network\n        self.net = self.params.net\n\n        # Time initialization\n        tic = time.time()\n\n        # Get target position and size\n        self.pos = torch.Tensor([target_pos[1], target_pos[0]])   # height, width\n        self.target_sz = torch.Tensor([target_sz[1], target_sz[0]])\n\n        # Set sizes\n        sz = self.p.image_sample_size\n        self.img_sample_sz = torch.Tensor([sz, sz] if isinstance(sz, int) else sz)\n        self.img_support_sz = self.img_sample_sz\n\n        # Set search area\n        search_area = torch.prod(self.target_sz * self.p.search_area_scale).item()\n        self.target_scale =  math.sqrt(search_area) / self.img_sample_sz.prod().sqrt()\n\n        # Target size in base scale\n        self.base_target_sz = self.target_sz / self.target_scale\n\n        # Convert image\n        im = numpy_to_torch(image)  # image is RGB\n\n        # Setup scale factors\n        if not hasattr(self.p, 'scale_factors'):\n            self.p.scale_factors = torch.ones(1)\n        elif isinstance(self.p.scale_factors, (list, tuple)):\n            self.p.scale_factors = torch.Tensor(self.p.scale_factors)\n\n        # Setup scale bounds\n  
      self.image_sz = torch.Tensor([im.shape[2], im.shape[3]])\n        self.min_scale_factor = torch.max(10 / self.base_target_sz)\n        self.max_scale_factor = torch.min(self.image_sz / self.base_target_sz)\n\n        # Extract and transform sample\n        init_backbone_feat = self.generate_init_samples(im)\n\n        # Initialize classifier\n        self.init_classifier(init_backbone_feat)\n\n        # Initialize IoUNet\n        if getattr(self.p, 'use_iou_net', True):\n            self.init_iou_net(init_backbone_feat)\n\n        out = {'time': time.time() - tic}\n        return out\n\n\n    def track(self, bgr_im, image, siam_tracker, siam_state) -> dict:\n        self.debug_info = {}\n\n        self.frame_num += 1\n        self.debug_info['frame_num'] = self.frame_num\n\n        # Convert image\n        im = numpy_to_torch(image)\n\n        # ------- LOCALIZATION ------- #\n\n        # Extract backbone features\n        backbone_feat, sample_coords = self.extract_backbone_features(im, self.get_centered_sample_pos(),\n                                                                      self.target_scale * self.p.scale_factors,\n                                                                      self.img_sample_sz)\n        # Extract classification features\n        test_x = self.get_classification_features(backbone_feat)\n\n        # Location of sample\n        sample_pos, sample_scales = self.get_sample_location(sample_coords)\n\n        # Compute classification scores\n        scores_raw = self.classify_target(test_x)\n\n        # Localize the target\n        translation_vec, scale_ind, s, flag = self.localize_target(scores_raw, sample_scales)\n        self.siam_state = siam_tracker.track(siam_state, bgr_im, s)\n        new_pos = siam_state['target_pos'][::-1]\n\n        # new_pos = sample_pos[scale_ind,:] + translation_vec\n\n        self.debug_info['flag'] = flag\n\n        # Update position and scale\n        if flag != 'not_found':\n            if 
getattr(self.p, 'use_iou_net', True):\n                update_scale_flag = getattr(self.p, 'update_scale_when_uncertain', True) or flag != 'uncertain'\n                if getattr(self.p, 'use_classifier', True):\n                    self.update_state(new_pos)\n                self.refine_target_box(backbone_feat, sample_pos[scale_ind,:], sample_scales[scale_ind], scale_ind, update_scale_flag)\n            elif getattr(self.p, 'use_classifier', True):\n                self.update_state(new_pos, sample_scales[scale_ind])\n\n        # ------- UPDATE ------- #\n\n        update_flag = flag not in ['not_found', 'uncertain']\n        hard_negative = (flag == 'hard_negative')\n        learning_rate = getattr(self.p, 'hard_negative_learning_rate', None) if hard_negative else None\n\n        if getattr(self.p, 'update_classifier', False) and update_flag:\n            # Get train sample\n            train_x = test_x[scale_ind:scale_ind+1, ...]\n\n            # Create target_box and label for spatial sample\n            target_box = self.get_iounet_box(self.pos, self.target_sz, sample_pos[scale_ind,:], sample_scales[scale_ind])\n\n            # Update the classifier model\n            self.update_classifier(train_x, target_box, learning_rate, s[scale_ind,...])\n\n        # Set the pos of the tracker to iounet pos\n        if getattr(self.p, 'use_iou_net', True) and flag != 'not_found' and hasattr(self, 'pos_iounet'):\n            self.pos = self.pos_iounet.clone()\n\n        score_map = s[scale_ind, ...]\n        max_score = torch.max(score_map).item()\n        self.debug_info['max_score'] = max_score\n\n        return siam_state\n\n\n    def get_sample_location(self, sample_coord):\n        \"\"\"Get the location of the extracted sample.\"\"\"\n        sample_coord = sample_coord.float()\n        sample_pos = 0.5*(sample_coord[:,:2] + sample_coord[:,2:] - 1)\n        sample_scales = ((sample_coord[:,2:] - sample_coord[:,:2]) / self.img_sample_sz).prod(dim=1).sqrt()\n        
return sample_pos, sample_scales\n\n    def get_centered_sample_pos(self):\n        \"\"\"Get the center position for the new sample. Make sure the target is correctly centered.\"\"\"\n        return self.pos + ((self.feature_sz + self.kernel_size) % 2) * self.target_scale * \\\n               self.img_support_sz / (2*self.feature_sz)\n\n    def classify_target(self, sample_x: TensorList):\n        \"\"\"Classify target by applying the ONLINE filter.\"\"\"\n        with torch.no_grad():\n            scores = self.net.classifier.classify(self.target_filter, sample_x)\n        return scores\n\n    def localize_target(self, scores, sample_scales):\n        \"\"\"Run the target localization.\"\"\"\n\n        scores = scores.squeeze(1)\n\n        if getattr(self.p, 'advanced_localization', False):\n            return self.localize_advanced(scores, sample_scales)\n\n        # Get maximum\n        score_sz = torch.Tensor(list(scores.shape[-2:]))\n        score_center = (score_sz - 1)/2\n        max_score, max_disp = dcf.max2d(scores)\n        _, scale_ind = torch.max(max_score, dim=0)\n        max_disp = max_disp[scale_ind,...].float().cpu().view(-1)\n        target_disp = max_disp - score_center\n\n        # Compute translation vector and scale change factor\n        translation_vec = target_disp * (self.img_support_sz / self.feature_sz) * sample_scales[scale_ind]\n\n        return translation_vec, scale_ind, scores, None\n\n\n    def localize_advanced(self, scores, sample_scales):\n        \"\"\"Run the target advanced localization (as in ATOM).\"\"\"\n\n        sz = scores.shape[-2:]\n        score_sz = torch.Tensor(list(sz))\n        score_center = (score_sz - 1)/2\n\n        scores_hn = scores\n        if self.output_window is not None and getattr(self.p, 'perform_hn_without_windowing', False):\n            scores_hn = scores.clone()\n            scores *= self.output_window\n\n        max_score1, max_disp1 = dcf.max2d(scores)\n        _, scale_ind = 
torch.max(max_score1, dim=0)\n        sample_scale = sample_scales[scale_ind]\n        max_score1 = max_score1[scale_ind]\n        max_disp1 = max_disp1[scale_ind,...].float().cpu().view(-1)\n        target_disp1 = max_disp1 - score_center\n        translation_vec1 = target_disp1 * (self.img_support_sz / self.feature_sz) * sample_scale\n\n        if max_score1.item() < self.p.target_not_found_threshold:\n            return translation_vec1, scale_ind, scores_hn, 'not_found'\n\n        # Mask out target neighborhood\n        target_neigh_sz = self.p.target_neighborhood_scale * (self.target_sz / sample_scale) * (self.feature_sz / self.img_support_sz)\n\n        tneigh_top = max(round(max_disp1[0].item() - target_neigh_sz[0].item() / 2), 0)\n        tneigh_bottom = min(round(max_disp1[0].item() + target_neigh_sz[0].item() / 2 + 1), sz[0])\n        tneigh_left = max(round(max_disp1[1].item() - target_neigh_sz[1].item() / 2), 0)\n        tneigh_right = min(round(max_disp1[1].item() + target_neigh_sz[1].item() / 2 + 1), sz[1])\n        scores_masked = scores_hn[scale_ind:scale_ind + 1, ...].clone()\n        scores_masked[...,tneigh_top:tneigh_bottom,tneigh_left:tneigh_right] = 0\n\n        # Find new maximum\n        max_score2, max_disp2 = dcf.max2d(scores_masked)\n        max_disp2 = max_disp2.float().cpu().view(-1)\n        target_disp2 = max_disp2 - score_center\n        translation_vec2 = target_disp2 * (self.img_support_sz / self.feature_sz) * sample_scale\n\n        # Handle the different cases\n        if max_score2 > self.p.distractor_threshold * max_score1:\n            disp_norm1 = torch.sqrt(torch.sum(target_disp1**2))\n            disp_norm2 = torch.sqrt(torch.sum(target_disp2**2))\n            disp_threshold = self.p.dispalcement_scale * math.sqrt(sz[0] * sz[1]) / 2\n\n            if disp_norm2 > disp_threshold and disp_norm1 < disp_threshold:\n                return translation_vec1, scale_ind, scores_hn, 'hard_negative'\n            if disp_norm2 < 
disp_threshold and disp_norm1 > disp_threshold:\n                return translation_vec2, scale_ind, scores_hn, 'hard_negative'\n            if disp_norm2 > disp_threshold and disp_norm1 > disp_threshold:\n                return translation_vec1, scale_ind, scores_hn, 'uncertain'\n\n            # If also the distractor is close, return with highest score\n            return translation_vec1, scale_ind, scores_hn, 'uncertain'\n\n        if max_score2 > self.p.hard_negative_threshold * max_score1 and max_score2 > self.p.target_not_found_threshold:\n            return translation_vec1, scale_ind, scores_hn, 'hard_negative'\n\n        return translation_vec1, scale_ind, scores_hn, 'normal'\n\n    def extract_backbone_features(self, im: torch.Tensor, pos: torch.Tensor, scales, sz: torch.Tensor):\n        im_patches, patch_coords = sample_patch_multiscale(im, pos, scales, sz, getattr(self.p, 'border_mode', 'replicate'))\n        with torch.no_grad():\n            backbone_feat = self.net.extract_backbone(im_patches)\n        return backbone_feat, patch_coords\n\n    def get_classification_features(self, backbone_feat):\n        with torch.no_grad():\n            return self.net.extract_classification_feat(backbone_feat)\n\n    def get_iou_backbone_features(self, backbone_feat):\n        return self.net.get_backbone_bbreg_feat(backbone_feat)\n\n    def get_iou_features(self, backbone_feat):\n        with torch.no_grad():\n            return self.net.bb_regressor.get_iou_feat(self.get_iou_backbone_features(backbone_feat))\n\n    def get_iou_modulation(self, iou_backbone_feat, target_boxes):\n        with torch.no_grad():\n            return self.net.bb_regressor.get_modulation(iou_backbone_feat, target_boxes)\n\n\n    def generate_init_samples(self, im: torch.Tensor) -> TensorList:\n        \"\"\"Perform data augmentation to generate initial training samples.\"\"\"\n\n        if getattr(self.p, 'border_mode', 'replicate') == 'inside':\n            # Get new sample size if 
forced inside the image\n            im_sz = torch.Tensor([im.shape[2], im.shape[3]])\n            sample_sz = self.target_scale * self.img_sample_sz\n            shrink_factor = (sample_sz.float() / im_sz).max().clamp(1)\n            sample_sz = (sample_sz.float() / shrink_factor)\n            self.init_sample_scale = (sample_sz / self.img_sample_sz).prod().sqrt()\n            tl = self.pos - (sample_sz - 1) / 2\n            br = self.pos + sample_sz / 2 + 1\n            global_shift = - ((-tl).clamp(0) - (br - im_sz).clamp(0)) / self.init_sample_scale\n        else:\n            self.init_sample_scale = self.target_scale\n            global_shift = torch.zeros(2)\n\n        self.init_sample_pos = self.pos.round()\n\n        # Compute augmentation size\n        aug_expansion_factor = getattr(self.p, 'augmentation_expansion_factor', None)\n        aug_expansion_sz = self.img_sample_sz.clone()\n        aug_output_sz = None\n        if aug_expansion_factor is not None and aug_expansion_factor != 1:\n            aug_expansion_sz = (self.img_sample_sz * aug_expansion_factor).long()\n            aug_expansion_sz += (aug_expansion_sz - self.img_sample_sz.long()) % 2\n            aug_expansion_sz = aug_expansion_sz.float()\n            aug_output_sz = self.img_sample_sz.long().tolist()\n\n        # Random shift for each sample\n        get_rand_shift = lambda: None\n        random_shift_factor = getattr(self.p, 'random_shift_factor', 0)\n        if random_shift_factor > 0:\n            get_rand_shift = lambda: ((torch.rand(2) - 0.5) * self.img_sample_sz * random_shift_factor + global_shift).long().tolist()\n\n        # Always put identity transformation first, since it is the unaugmented sample that is always used\n        self.transforms = [augmentation.Identity(aug_output_sz, global_shift.long().tolist())]\n\n        augs = self.p.augmentation if getattr(self.p, 'use_augmentation', True) else {}\n\n        # Add all augmentations\n        if 'shift' in augs:\n           
 self.transforms.extend([augmentation.Translation(shift, aug_output_sz, global_shift.long().tolist()) for shift in augs['shift']])\n        if 'relativeshift' in augs:\n            get_absolute = lambda shift: (torch.Tensor(shift) * self.img_sample_sz/2).long().tolist()\n            self.transforms.extend([augmentation.Translation(get_absolute(shift), aug_output_sz, global_shift.long().tolist()) for shift in augs['relativeshift']])\n        if 'fliplr' in augs and augs['fliplr']:\n            self.transforms.append(augmentation.FlipHorizontal(aug_output_sz, get_rand_shift()))\n        if 'blur' in augs:\n            self.transforms.extend([augmentation.Blur(sigma, aug_output_sz, get_rand_shift()) for sigma in augs['blur']])\n        if 'scale' in augs:\n            self.transforms.extend([augmentation.Scale(scale_factor, aug_output_sz, get_rand_shift()) for scale_factor in augs['scale']])\n        if 'rotate' in augs:\n            self.transforms.extend([augmentation.Rotate(angle, aug_output_sz, get_rand_shift()) for angle in augs['rotate']])\n\n        # Extract augmented image patches\n        im_patches = sample_patch_transformed(im, self.init_sample_pos, self.init_sample_scale, aug_expansion_sz, self.transforms)\n\n        # Extract initial backbone features\n        with torch.no_grad():\n            init_backbone_feat = self.net.extract_backbone(im_patches)\n\n        return init_backbone_feat\n\n    def init_target_boxes(self):\n        \"\"\"Get the target bounding boxes for the initial augmented samples.\"\"\"\n        self.classifier_target_box = self.get_iounet_box(self.pos, self.target_sz, self.init_sample_pos, self.init_sample_scale)\n        init_target_boxes = TensorList()\n        for T in self.transforms:\n            init_target_boxes.append(self.classifier_target_box + torch.Tensor([T.shift[1], T.shift[0], 0, 0]))\n        init_target_boxes = torch.cat(init_target_boxes.view(1, 4), 0).to(self.p.device)\n        self.target_boxes = 
init_target_boxes.new_zeros(self.p.sample_memory_size, 4)\n        self.target_boxes[:init_target_boxes.shape[0],:] = init_target_boxes\n        return init_target_boxes\n\n    def init_memory(self, train_x: TensorList):\n        # Initialize first-frame spatial training samples\n        self.num_init_samples = train_x.size(0)\n        init_sample_weights = TensorList([x.new_ones(1) / x.shape[0] for x in train_x])\n\n        # Sample counters and weights for spatial\n        self.num_stored_samples = self.num_init_samples.copy()\n        self.previous_replace_ind = [None] * len(self.num_stored_samples)\n        self.sample_weights = TensorList([x.new_zeros(self.p.sample_memory_size) for x in train_x])\n        for sw, init_sw, num in zip(self.sample_weights, init_sample_weights, self.num_init_samples):\n            sw[:num] = init_sw\n\n        # Initialize memory\n        self.training_samples = TensorList(\n            [x.new_zeros(self.p.sample_memory_size, x.shape[1], x.shape[2], x.shape[3]) for x in train_x])\n\n        for ts, x in zip(self.training_samples, train_x):\n            ts[:x.shape[0],...] = x\n\n\n    def update_memory(self, sample_x: TensorList, target_box, learning_rate = None):\n        # Update weights and get replace ind\n        replace_ind = self.update_sample_weights(self.sample_weights, self.previous_replace_ind, self.num_stored_samples, self.num_init_samples, learning_rate)\n        self.previous_replace_ind = replace_ind\n\n        # Update sample and label memory\n        for train_samp, x, ind in zip(self.training_samples, sample_x, replace_ind):\n            train_samp[ind:ind+1,...] 
= x\n\n        # Update bb memory\n        self.target_boxes[replace_ind[0],:] = target_box\n\n        self.num_stored_samples += 1\n\n\n    def update_sample_weights(self, sample_weights, previous_replace_ind, num_stored_samples, num_init_samples, learning_rate = None):\n        # Update weights and get index to replace\n        replace_ind = []\n        for sw, prev_ind, num_samp, num_init in zip(sample_weights, previous_replace_ind, num_stored_samples, num_init_samples):\n            lr = learning_rate\n            if lr is None:\n                lr = self.p.learning_rate\n\n            init_samp_weight = getattr(self.p, 'init_samples_minimum_weight', None)\n            if init_samp_weight == 0:\n                init_samp_weight = None\n            s_ind = 0 if init_samp_weight is None else num_init\n\n            if num_samp == 0 or lr == 1:\n                sw[:] = 0\n                sw[0] = 1\n                r_ind = 0\n            else:\n                # Get index to replace\n                if num_samp < sw.shape[0]:\n                    r_ind = num_samp\n                else:\n                    _, r_ind = torch.min(sw[s_ind:], 0)\n                    r_ind = r_ind.item() + s_ind\n\n                # Update weights\n                if prev_ind is None:\n                    sw /= 1 - lr\n                    sw[r_ind] = lr\n                else:\n                    sw[r_ind] = sw[prev_ind] / (1 - lr)\n\n            sw /= sw.sum()\n            if init_samp_weight is not None and sw[:num_init].sum() < init_samp_weight:\n                sw /= init_samp_weight + sw[num_init:].sum()\n                sw[:num_init] = init_samp_weight / num_init\n\n            replace_ind.append(r_ind)\n\n        return replace_ind\n\n    def update_state(self, new_pos, new_scale = None):\n        # Update scale\n        if new_scale is not None:\n            self.target_scale = new_scale.clamp(self.min_scale_factor, self.max_scale_factor)\n            # self.target_sz = 
self.base_target_sz * self.target_scale  # zzp\n            self.target_sz = self.siam_state['target_sz'][::-1]\n            self.target_sz = torch.from_numpy(self.target_sz.astype(np.float32).copy())\n\n        # Update pos\n        inside_ratio = getattr(self.p, 'target_inside_ratio', 0.2)\n        inside_offset = (inside_ratio - 0.5) * self.target_sz\n        # self.pos = torch.max(torch.min(new_pos, self.image_sz - inside_offset), inside_offset)\n        self.pos = torch.max(\n            torch.min(torch.from_numpy(new_pos.astype(np.float32).copy()), self.image_sz - inside_offset),\n            inside_offset)\n\n\n    def get_iounet_box(self, pos, sz, sample_pos, sample_scale):\n        \"\"\"All inputs in original image coordinates.\n        Generates a box in the cropped image sample reference frame, in the format used by the IoUNet.\"\"\"\n        box_center = (pos - sample_pos) / sample_scale + (self.img_sample_sz - 1) / 2\n        box_sz = sz / sample_scale\n        target_ul = box_center - (box_sz - 1) / 2\n        return torch.cat([target_ul.flip((0,)), box_sz.flip((0,))])\n\n\n    def init_iou_net(self, backbone_feat):\n        # Setup IoU net and objective\n        for p in self.net.bb_regressor.parameters():\n            p.requires_grad = False\n\n        # Get target boxes for the different augmentations\n        self.classifier_target_box = self.get_iounet_box(self.pos, self.target_sz, self.init_sample_pos, self.init_sample_scale)\n        target_boxes = TensorList()\n        if self.p.iounet_augmentation:\n            for T in self.transforms:\n                if not isinstance(T, (augmentation.Identity, augmentation.Translation, augmentation.FlipHorizontal, augmentation.FlipVertical, augmentation.Blur)):\n                    break\n                target_boxes.append(self.classifier_target_box + torch.Tensor([T.shift[1], T.shift[0], 0, 0]))\n        else:\n            target_boxes.append(self.classifier_target_box + 
torch.Tensor([self.transforms[0].shift[1], self.transforms[0].shift[0], 0, 0]))\n        target_boxes = torch.cat(target_boxes.view(1,4), 0).to(self.p.device)\n\n        # Get iou features\n        iou_backbone_feat = self.get_iou_backbone_features(backbone_feat)\n\n        # Remove other augmentations such as rotation\n        iou_backbone_feat = TensorList([x[:target_boxes.shape[0],...] for x in iou_backbone_feat])\n\n        # Get modulation vector\n        self.iou_modulation = self.get_iou_modulation(iou_backbone_feat, target_boxes)\n        self.iou_modulation = TensorList([x.detach().mean(0) for x in self.iou_modulation])\n\n\n    def init_classifier(self, init_backbone_feat):\n        # Get classification features\n        x = self.get_classification_features(init_backbone_feat)\n\n        # Add the dropout augmentation here, since it requires extraction of the classification features\n        if 'dropout' in self.p.augmentation and getattr(self.p, 'use_augmentation', True):\n            num, prob = self.p.augmentation['dropout']\n            self.transforms.extend(self.transforms[:1]*num)\n            x = torch.cat([x, F.dropout2d(x[0:1,...].expand(num,-1,-1,-1), p=prob, training=True)])\n\n        # Set feature size and other related sizes\n        self.feature_sz = torch.Tensor(list(x.shape[-2:]))\n        ksz = self.net.classifier.filter_size\n        self.kernel_size = torch.Tensor([ksz, ksz] if isinstance(ksz, (int, float)) else ksz)\n        self.output_sz = self.feature_sz + (self.kernel_size + 1)%2\n\n        # Construct output window\n        self.output_window = None\n        if getattr(self.p, 'window_output', False):\n            if getattr(self.p, 'use_clipped_window', False):\n                self.output_window = dcf.hann2d_clipped(self.output_sz.long(), self.output_sz.long()*self.p.effective_search_area / self.p.search_area_scale, centered=False).to(self.p.device)\n            else:\n                self.output_window = 
dcf.hann2d(self.output_sz.long(), centered=True).to(self.p.device)\n            self.output_window = self.output_window.squeeze(0)\n\n        # Get target boxes for the different augmentations\n        target_boxes = self.init_target_boxes()\n\n        # Set number of iterations\n        plot_loss = self.p.debug > 0\n        num_iter = getattr(self.p, 'net_opt_iter', None)\n\n        # Get target filter by running the discriminative model prediction module\n        with torch.no_grad():\n            self.target_filter, _, losses = self.net.classifier.get_filter(x, target_boxes, num_iter=num_iter,\n                                                                           compute_losses=plot_loss)\n\n        # Init memory\n        if getattr(self.p, 'update_classifier', True):\n            self.init_memory(TensorList([x]))\n\n        if plot_loss:\n            if isinstance(losses, dict):\n                losses = losses['train']\n            self.losses = torch.stack(losses)\n            # if self.visdom is not None:\n            #     self.visdom.register((self.losses, torch.arange(self.losses.numel())), 'lineplot', 3, 'Training Loss')\n            # elif self.params.debug >= 3:\n            #     plot_graph(self.losses, 10, title='Training loss')\n\n\n    def update_classifier(self, train_x, target_box, learning_rate=None, scores=None):\n        # Set flags and learning rate\n        hard_negative_flag = learning_rate is not None\n        if learning_rate is None:\n            learning_rate = self.p.learning_rate\n\n        # Update the tracker memory\n        self.update_memory(TensorList([train_x]), target_box, learning_rate)\n\n        # Decide the number of iterations to run\n        num_iter = 0\n        low_score_th = getattr(self.p, 'low_score_opt_threshold', None)\n        if hard_negative_flag:\n            num_iter = getattr(self.p, 'net_opt_hn_iter', None)\n        elif low_score_th is not None and low_score_th > scores.max().item():\n            
num_iter = getattr(self.p, 'net_opt_low_iter', None)\n        elif (self.frame_num - 1) % self.p.train_skipping == 0:\n            num_iter = getattr(self.p, 'net_opt_update_iter', None)\n\n        plot_loss = self.p.debug > 0\n\n        if num_iter > 0:\n            # Get inputs for the ONLINE filter optimizer module\n            samples = self.training_samples[0][:self.num_stored_samples[0],...]\n            target_boxes = self.target_boxes[:self.num_stored_samples[0],:].clone()\n            sample_weights = self.sample_weights[0][:self.num_stored_samples[0]]\n\n            # Run the filter optimizer module\n            with torch.no_grad():\n                self.target_filter, _, losses = self.net.classifier.filter_optimizer(self.target_filter, samples, target_boxes,\n                                                                                     sample_weight=sample_weights,\n                                                                                     num_iter=num_iter,\n                                                                                     compute_losses=plot_loss)\n\n            if plot_loss:\n                if isinstance(losses, dict):\n                    losses = losses['train']\n                self.losses = torch.cat((self.losses, torch.stack(losses)))\n                # if self.visdom is not None:\n                #     self.visdom.register((self.losses, torch.arange(self.losses.numel())), 'lineplot', 3, 'Training Loss')\n                # elif self.params.debug >= 3:\n                #     plot_graph(self.losses, 10, title='Training loss')\n\n    def refine_target_box(self, backbone_feat, sample_pos, sample_scale, scale_ind, update_scale = True):\n\n        # Initial box for refinement\n        init_box = self.get_iounet_box(self.pos, self.target_sz, sample_pos, sample_scale)\n\n        # Extract features from the relevant scale\n        iou_features = self.get_iou_features(backbone_feat)\n        iou_features = 
TensorList([x[scale_ind:scale_ind+1,...] for x in iou_features])\n\n        # Generate random initial boxes\n        init_boxes = init_box.view(1,4).clone()\n        if self.p.num_init_random_boxes > 0:\n            square_box_sz = init_box[2:].prod().sqrt()\n            rand_factor = square_box_sz * torch.cat([self.p.box_jitter_pos * torch.ones(2), self.p.box_jitter_sz * torch.ones(2)])\n\n            minimal_edge_size = init_box[2:].min()/3\n            rand_bb = (torch.rand(self.p.num_init_random_boxes, 4) - 0.5) * rand_factor\n            new_sz = (init_box[2:] + rand_bb[:,2:]).clamp(minimal_edge_size)\n            new_center = (init_box[:2] + init_box[2:]/2) + rand_bb[:,:2]\n            init_boxes = torch.cat([new_center - new_sz/2, new_sz], 1)\n            init_boxes = torch.cat([init_box.view(1,4), init_boxes])\n\n        # Optimize the boxes\n        output_boxes, output_iou = self.optimize_boxes(iou_features, init_boxes)\n\n        # Remove weird boxes\n        output_boxes[:, 2:].clamp_(1)\n        aspect_ratio = output_boxes[:,2] / output_boxes[:,3]\n        keep_ind = (aspect_ratio < self.p.maximal_aspect_ratio) * (aspect_ratio > 1/self.p.maximal_aspect_ratio)\n        output_boxes = output_boxes[keep_ind,:]\n        output_iou = output_iou[keep_ind]\n\n        # If no box found\n        if output_boxes.shape[0] == 0:\n            return\n\n        # Predict box\n        k = getattr(self.p, 'iounet_k', 5)\n        topk = min(k, output_boxes.shape[0])\n        _, inds = torch.topk(output_iou, topk)\n        predicted_box = output_boxes[inds, :].mean(0)\n        predicted_iou = output_iou.view(-1, 1)[inds, :].mean(0)\n\n        # Get new position and size\n        new_pos = predicted_box[:2] + predicted_box[2:] / 2\n        new_pos = (new_pos.flip((0,)) - (self.img_sample_sz - 1) / 2) * sample_scale + sample_pos\n        new_target_sz = predicted_box[2:].flip((0,)) * sample_scale\n        new_scale = torch.sqrt(new_target_sz.prod() / 
self.base_target_sz.prod())\n\n        self.pos_iounet = new_pos.clone()\n\n        if getattr(self.p, 'use_iounet_pos_for_learning', True):\n            self.pos = new_pos.clone()\n\n        self.target_sz = new_target_sz\n\n        if update_scale:\n            self.target_scale = new_scale\n\n\n    def optimize_boxes(self, iou_features, init_boxes):\n        # Optimize iounet boxes\n        output_boxes = init_boxes.view(1, -1, 4).to(self.p.device)\n        step_length = self.p.box_refinement_step_length\n        if isinstance(step_length, (tuple, list)):\n            step_length = torch.Tensor([step_length[0], step_length[0], step_length[1], step_length[1]]).to(self.p.device).view(1,1,4)\n\n        for i_ in range(self.p.box_refinement_iter):\n            # forward pass\n            bb_init = output_boxes.clone().detach()\n            bb_init.requires_grad = True\n\n            outputs = self.net.bb_regressor.predict_iou(self.iou_modulation, iou_features, bb_init)\n\n            if isinstance(outputs, (list, tuple)):\n                outputs = outputs[0]\n\n            outputs.backward(gradient = torch.ones_like(outputs))\n\n            # Update proposal\n            output_boxes = bb_init + step_length * bb_init.grad * bb_init[:, :, 2:].repeat(1, 1, 2)\n            output_boxes.detach_()\n\n            step_length *= self.p.box_refinement_step_decay\n\n        return output_boxes.view(-1,4).cpu(), outputs.detach().view(-1).cpu()\n\n    # --------------------------\n    # New Functions from ONLINE-init\n    # --------------------------\n    def params_convert(self):\n        self.p.image_sample_size = eval(self.p.image_sample_size)\n        self.p.scale_factors = eval(self.p.scale_factors)\n        self.p.learning_rate = self.p.learning_rate\n        try:\n            self.p.random_shift_factor = eval(self.p.random_shift_factor)\n        except:\n            self.p.random_shift_factor = 0.33\n        self.p.augmentation['blur'] = 
eval(self.p.augmentation['blur'])\n        self.p.augmentation['relativeshift'] = eval(self.p.augmentation['relativeshift'])\n        self.p.augmentation['dropout'] = eval(self.p.augmentation['dropout'])\n\n\nclass ONLINEConfig(object):\n\n    def update(self, newparam=None):\n        if newparam:\n            for key, value in newparam.items():\n                setattr(self, key, value)\n"
  },
  {
    "path": "tracker/sot/lib/tracker/siamfc.py",
    "content": "# ------------------------------------------------------------------------------\r\n# Copyright (c) Microsoft\r\n# Licensed under the MIT License.\r\n# Written by Houwen Peng and  Zhipeng Zhang\r\n# Email: houwen.peng@microsoft.com\r\n# siamfc class\r\n# ------------------------------------------------------------------------------\r\nimport os\r\nimport cv2\r\nimport numpy as np\r\n\r\nfrom torch.autograd import Variable\r\nfrom lib.utils.utils import load_yaml, im_to_torch, get_subwindow_tracking, make_scale_pyramid\r\n\r\nimport pdb\r\n\r\nclass SiamFC(object):\r\n    def __init__(self, info):\r\n        super(SiamFC, self).__init__()\r\n        self.info = info   # model and benchmark info\r\n\r\n    def init(self, im, target_pos, target_sz, model, hp=None):\r\n        state = dict()\r\n        # epoch test\r\n        p = FCConfig()\r\n\r\n        # single test\r\n        if not hp and not self.info.epoch_test:\r\n            prefix = [x for x in ['OTB', 'VOT'] if x in self.info.dataset]\r\n            absPath = os.path.abspath(os.path.dirname(__file__))\r\n            yname = 'SiamDW.yaml'\r\n            yamlPath = os.path.join(absPath, '../../experiments/test/{0}/'.format(prefix[0]), yname)\r\n            cfg = load_yaml(yamlPath)\r\n\r\n            cfg_benchmark = cfg[self.info.dataset]\r\n            p.update(cfg_benchmark)\r\n            p.renew()\r\n\r\n        # param tune\r\n        if hp:\r\n            p.update(hp)\r\n            p.renew()\r\n\r\n\r\n        net = model\r\n\r\n        avg_chans = np.mean(im, axis=(0, 1))\r\n\r\n        wc_z = target_sz[0] + p.context_amount * sum(target_sz)\r\n        hc_z = target_sz[1] + p.context_amount * sum(target_sz)\r\n        s_z = round(np.sqrt(wc_z * hc_z))\r\n        scale_z = p.exemplar_size / s_z\r\n\r\n        z_crop, _ = get_subwindow_tracking(im, target_pos, p.exemplar_size, s_z, avg_chans)\r\n\r\n        d_search = (p.instance_size - p.exemplar_size) / 2\r\n        pad = d_search / 
scale_z\r\n        s_x = s_z + 2 * pad\r\n        min_s_x = 0.2 * s_x\r\n        max_s_x = 5 * s_x\r\n\r\n        s_x_serise = {'s_x': s_x, 'min_s_x': min_s_x, 'max_s_x': max_s_x}\r\n        p.update(s_x_serise)\r\n\r\n        z = Variable(z_crop.unsqueeze(0))\r\n\r\n        net.template(z.cuda())\r\n\r\n        if p.windowing == 'cosine':\r\n            window = np.outer(np.hanning(int(p.score_size) * int(p.response_up)),\r\n                              np.hanning(int(p.score_size) * int(p.response_up)))\r\n        elif p.windowing == 'uniform':\r\n            window = np.ones((int(p.score_size) * int(p.response_up), int(p.score_size) * int(p.response_up)))\r\n        window /= window.sum()\r\n\r\n        p.scales = p.scale_step ** (range(p.num_scale) - np.ceil(p.num_scale // 2))\r\n\r\n        state['p'] = p\r\n        state['net'] = net\r\n        state['avg_chans'] = avg_chans\r\n        state['window'] = window\r\n        state['target_pos'] = target_pos\r\n        state['target_sz'] = target_sz\r\n        state['im_h'] = im.shape[0]\r\n        state['im_w'] = im.shape[1]\r\n        return state\r\n\r\n    def update(self, net, s_x, x_crops, target_pos, window, p):\r\n        # refer to original SiamFC code\r\n        response_map = net.track(x_crops).squeeze().permute(1, 2, 0).cpu().data.numpy()\r\n        #pdb.set_trace()\r\n        up_size = p.response_up * response_map.shape[0]\r\n        response_map_up = cv2.resize(response_map, (up_size, up_size), interpolation=cv2.INTER_CUBIC)\r\n        temp_max = np.max(response_map_up, axis=(0, 1))\r\n        s_penaltys = np.array([p.scale_penalty, 1., p.scale_penalty])\r\n        temp_max *= s_penaltys\r\n        best_scale = np.argmax(temp_max)\r\n\r\n        response_map = response_map_up[..., best_scale]\r\n        # apply windowing\r\n        response_map = response_map *window\r\n        response_map = response_map - response_map.min()\r\n        response_map = response_map / response_map.sum()\r\n        
\r\n        response_map = (1 - p.w_influence) * response_map + p.w_influence * window\r\n        \r\n        r_max, c_max = np.unravel_index(response_map.argmax(), response_map.shape)\r\n        p_corr = [c_max, r_max]\r\n\r\n        disp_instance_final = p_corr - np.ceil(p.score_size * p.response_up / 2)\r\n        disp_instance_input = disp_instance_final * p.total_stride / p.response_up\r\n        disp_instance_frame = disp_instance_input * s_x / p.instance_size\r\n        new_target_pos = target_pos + disp_instance_frame\r\n\r\n\r\n        return new_target_pos, best_scale\r\n\r\n    def track(self, state, im):\r\n        p = state['p']\r\n        net = state['net']\r\n        avg_chans = state['avg_chans']\r\n        window = state['window']\r\n        target_pos = state['target_pos']\r\n        target_sz = state['target_sz']\r\n\r\n        scaled_instance = p.s_x * p.scales\r\n        scaled_target = [[target_sz[0] * p.scales], [target_sz[1] * p.scales]]\r\n\r\n        x_crops = make_scale_pyramid(im, target_pos, scaled_instance, p.instance_size, avg_chans)\r\n\r\n        target_pos, new_scale = self.update(net, p.s_x, x_crops.cuda(), target_pos, window, p)\r\n\r\n        # scale damping and saturation\r\n        p.s_x = max(p.min_s_x, min(p.max_s_x, (1 - p.scale_lr) * p.s_x + p.scale_lr * scaled_instance[new_scale]))\r\n\r\n        target_sz = [(1 - p.scale_lr) * target_sz[0] + p.scale_lr * scaled_target[0][0][new_scale],\r\n                     (1 - p.scale_lr) * target_sz[1] + p.scale_lr * scaled_target[1][0][new_scale]]\r\n\r\n        target_pos[0] = max(0, min(state['im_w'], target_pos[0]))\r\n        target_pos[1] = max(0, min(state['im_h'], target_pos[1]))\r\n        target_sz[0] = max(10, min(state['im_w'], target_sz[0]))\r\n        target_sz[1] = max(10, min(state['im_h'], target_sz[1]))\r\n        state['target_pos'] = target_pos\r\n        state['target_sz'] = target_sz\r\n        state['p'] = p\r\n\r\n        return state\r\n\r\n\r\nclass 
FCConfig(object):\r\n    # These are the default hyper-params for SiamFC\r\n    num_scale = 3\r\n    scale_step = 1.0375\r\n    scale_penalty = 0.9745\r\n    scale_lr = 0.590\r\n    response_up = 16\r\n\r\n    windowing = 'cosine'\r\n    w_influence = 0.35\r\n\r\n    exemplar_size = 128\r\n    instance_size = 256\r\n    score_size = 27\r\n    total_stride = 8\r\n    context_amount = 0.5\r\n\r\n    def update(self, newparam=None):\r\n        if newparam:\r\n            for key, value in newparam.items():\r\n                setattr(self, key, value)\r\n            self.renew()\r\n\r\n    def renew(self):\r\n        self.exemplar_size = self.instance_size - 128\r\n        #self.score_size = (self.instance_size - self.exemplar_size) // self.total_stride + 1\r\n        self.score_size = 27\r\n\r\n"
  },
  {
    "path": "tracker/sot/lib/utils/__init__.py",
    "content": ""
  },
  {
    "path": "tracker/sot/lib/utils/cutout.py",
    "content": "import torch\nimport numpy as np\n\n\nclass Cutout(object):\n    \"\"\"Randomly mask out one or more patches from an image.\n\n    Args:\n        n_holes (int): Number of patches to cut out of each image.\n        length (int): The length (in pixels) of each square patch.\n    \"\"\"\n    def __init__(self, n_holes, length):\n        self.n_holes = n_holes\n        self.length = length\n\n    def __call__(self, img):\n        \"\"\"\n        Args:\n            img (Tensor): Tensor image of size (C, H, W).\n        Returns:\n            Tensor: Image with n_holes of dimension length x length cut out of it.\n        \"\"\"\n        h = img.size(1)\n        w = img.size(2)\n\n        mask = np.ones((h, w), np.float32)\n\n        for n in range(self.n_holes):\n            y = np.random.randint(h//4, h-h//4)\n            x = np.random.randint(w//4, w-w//4)\n\n            y1 = np.clip(y - self.length // 2, 0, h)\n            y2 = np.clip(y + self.length // 2, 0, h)\n            x1 = np.clip(x - self.length // 2, 0, w)\n            x2 = np.clip(x + self.length // 2, 0, w)\n\n            mask[y1: y2, x1: x2] = 0.\n\n        mask = torch.from_numpy(mask)\n        mask = mask.expand_as(img)\n        img = img * mask\n\n        return img\n"
  },
  {
    "path": "tracker/sot/lib/utils/extract_tpejson_fc.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n\r\nimport os\r\nimport json\r\nimport shutil\r\nimport argparse\r\nimport numpy as np\r\nimport pdb\r\n\r\n\r\nparser = argparse.ArgumentParser(description='Analysis siamfc tune results')\r\nparser.add_argument('--path', default='./TPE_results/zp_tune', help='tune result path')\r\nparser.add_argument('--dataset', default='VOT2018', help='test dataset')\r\nparser.add_argument('--save_path', default='logs', help='log file save path')\r\n\r\n\r\ndef collect_results(args):\r\n    dirs = os.listdir(args.path)\r\n    print('[*] ===== total {} files in TPE dir'.format(len(dirs)))\r\n\r\n    count = 0\r\n    scale_penalty = []\r\n    scale_lr = []\r\n    wi = []\r\n    scale_step = []\r\n    eao = []\r\n    count = 0 # total numbers\r\n\r\n    for d in dirs:\r\n        param_path = os.path.join(args.path, d)\r\n        json_path = os.path.join(param_path, 'result.json')\r\n\r\n        if not os.path.exists(json_path):\r\n            continue\r\n\r\n        # pdb.set_trace()\r\n        try:\r\n            js = json.load(open(json_path, 'r'))\r\n        except:\r\n            continue\r\n \r\n        if not \"EAO\" in list(js.keys()):\r\n            continue\r\n        else:\r\n            count += 1\r\n            eao.append(js['EAO'])\r\n            temp = js['config']\r\n            scale_lr.append(temp[\"scale_lr\"])\r\n            wi.append(temp[\"w_influence\"])\r\n            scale_step.append(temp[\"scale_step\"])\r\n            scale_penalty.append(temp[\"scale_penalty\"])\r\n \r\n            \r\n    # find max\r\n    print('{} params group  have been tested'.format(count))\r\n    eao = np.array(eao)\r\n    max_idx = np.argmax(eao)\r\n    max_eao = eao[max_idx]\r\n    print('scale_penalty: {:.4f}, scale_lr: {:.4f}, wi: {:.4f}, scale_step: {}, eao: {}'.format(scale_penalty[max_idx], scale_lr[max_idx], wi[max_idx], scale_step[max_idx], max_eao))\r\n\r\n\r\nif __name__ == '__main__':\r\n    args = 
parser.parse_args()\r\n    collect_results(args)"
  },
  {
    "path": "tracker/sot/lib/utils/extract_tpejson_ocean.py",
    "content": "# -*- coding:utf-8 -*-\r\n# ! ./usr/bin/env python\r\n\r\nimport os\r\nimport json\r\nimport shutil\r\nimport argparse\r\nimport numpy as np\r\nimport pdb\r\n\r\n\r\nparser = argparse.ArgumentParser(description='Analysis siamfc tune results')\r\nparser.add_argument('--path', default='./TPE_results/zp_tune', help='tune result path')\r\nparser.add_argument('--dataset', default='VOT2019', help='test dataset')\r\nparser.add_argument('--save_path', default='logs', help='log file save path')\r\n\r\n\r\ndef collect_results(args):\r\n    dirs = os.listdir(args.path)\r\n    print('[*] ===== total {} files in TPE dir'.format(len(dirs)))\r\n\r\n    count = 0\r\n    penalty_k = []\r\n    scale_lr = []\r\n    wi = []\r\n    big_sz = []\r\n    small_sz = []\r\n    ratio = []\r\n    eao = []\r\n    count = 0 # total numbers\r\n\r\n    for d in dirs:\r\n        param_path = os.path.join(args.path, d)\r\n        json_path = os.path.join(param_path, 'result.json')\r\n\r\n        if not os.path.exists(json_path):\r\n            continue\r\n\r\n        # pdb.set_trace()\r\n        try:\r\n            js = json.load(open(json_path, 'r'))\r\n        except:\r\n            continue\r\n \r\n        if not \"EAO\" in list(js.keys()):\r\n            continue\r\n        else:\r\n            count += 1\r\n            # pdb.set_trace()\r\n            eao.append(js['EAO'])\r\n            temp = js['config']\r\n            scale_lr.append(temp[\"scale_lr\"])\r\n            wi.append(temp[\"window_influence\"])\r\n            penalty_k.append(temp[\"penalty_k\"])\r\n            ratio.append(temp[\"ratio\"])\r\n            small_sz.append(temp[\"small_sz\"])\r\n            big_sz.append(temp[\"big_sz\"])\r\n \r\n            \r\n    # find max\r\n    print('{} params group  have been tested'.format(count))\r\n    eao = np.array(eao)\r\n    max_idx = np.argmax(eao)\r\n    max_eao = eao[max_idx]\r\n    print('penalty_k: {:.4f}, scale_lr: {:.4f}, wi: {:.4f}, ratio: {:.4f}, small_sz: 
{}, big_sz: {:.4f}, eao: {}'.format(penalty_k[max_idx], scale_lr[max_idx], wi[max_idx], ratio[max_idx], small_sz[max_idx], big_sz[max_idx], max_eao))\r\n\r\n\r\nif __name__ == '__main__':\r\n    args = parser.parse_args()\r\n    collect_results(args)"
  },
  {
    "path": "tracker/sot/lib/utils/extract_tpelog.py",
"content": "# -*- coding:utf-8 -*-\n# ! ./usr/bin/env python\n\nimport shutil\nimport argparse\nimport numpy as np\n\n\nparser = argparse.ArgumentParser(description='Analysis siamfc tune results')\nparser.add_argument('--path', default='logs/gene_adjust_rpn.log', help='tune result path')\nparser.add_argument('--dataset', default='VOT2018', help='test dataset')\nparser.add_argument('--save_path', default='logs', help='log file save path')\n\n\ndef collect_results(args):\n    if not args.path.endswith('txt'):\n        name = args.path.split('.')[0]\n        name = name + '.txt'\n        shutil.copy(args.path, name)\n        args.path = name\n    fin = open(args.path, 'r')\n    lines = fin.readlines()\n    penalty_k = []\n    scale_lr = []\n    wi = []\n    ratio = []\n    sz = []\n    bz = []\n    eao = []\n    count = 0 # total numbers\n\n    for line in lines:\n        if not 'penalty_k:' in line:\n            pass\n        else:\n            count += 1\n            temp0, temp1, temp2, temp3, temp4, temp5, temp6 = line.split(',')\n            penalty_k.append(float(temp0.split(': ')[-1]))\n            scale_lr.append(float(temp1.split(': ')[-1]))\n            wi.append(float(temp2.split(': ')[-1]))\n            sz.append(float(temp3.split(': ')[-1]))\n            bz.append(float(temp4.split(': ')[-1]))\n            ratio.append(float(temp5.split(': ')[-1]))\n            try:\n                eao.append(float(temp6.split(': ')[-1].split('==')[0]))\n            except:\n                eao.append(float(temp6.split(': ')[-1].split('Result')[0]))\n\n    # find max\n    eao = np.array(eao)\n    max_idx = np.argmax(eao)\n    max_eao = eao[max_idx]\n    print('{} params group have been tested'.format(count))\n    print('penalty_k: {:.4f}, scale_lr: {:.4f}, wi: {:.4f}, ratio: {:.4f}, small_sz: {}, big_sz: {}, eao: {}'.format(penalty_k[max_idx], scale_lr[max_idx], wi[max_idx], ratio[max_idx], sz[max_idx], bz[max_idx], max_eao))\n\n\nif __name__ == '__main__':\n    args = parser.parse_args()\n    collect_results(args)\n"
  },
  {
    "path": "tracker/sot/lib/utils/extract_tpelog_fc.py",
    "content": "# -*- coding:utf-8 -*-\n# ! ./usr/bin/env python\n\n\nimport shutil\nimport argparse\nimport numpy as np\n\n\nparser = argparse.ArgumentParser(description='Analysis siamfc tune results')\nparser.add_argument('--path', default='logs/gene_adjust_rpn.log', help='tune result path')\nparser.add_argument('--dataset', default='VOT2018', help='test dataset')\nparser.add_argument('--save_path', default='logs', help='log file save path')\n\n\ndef collect_results(args):\n    if not args.path.endswith('txt'):\n        name = args.path.split('.')[0]\n        name = name + '.txt'\n        shutil.copy(args.path, name)\n        args.path = name\n    fin = open(args.path, 'r')\n    lines = fin.readlines()\n    scale_step = []\n    scale_lr = []\n    scale_penalty = []\n    wi = []\n    eao = []\n    count = 0 # total numbers\n\n    for line in lines:\n        if not line.startswith('scale_step'):\n            pass\n        else:\n     #       print(line)\n            count += 1\n            print(line.split(','))\n            exit()\n            temp0, temp1, temp2, temp3, temp4, temp5 = line.split(',')\n            scale_step.append(float(temp0.split(': ')[-1]))\n            scale_lr.append(float(temp1.split(': ')[-1]))\n            scale_penalty.append(float(temp2.split(': ')[-1]))\n            wi.append(float(temp3.split(': ')[-1]))\n            eao.append(float(temp4.split(': ')[-1]))\n\n    # find max\n    eao = np.array(eao)\n    max_idx = np.argmax(eao)\n    max_eao = eao[max_idx]\n    print('{} params group  have been tested'.format(count))\n    print('scale_step: {:.4f}, scale_lr: {:.4f}, scale_penalty: {:.4f}, win_influence: {}, eao: {}'.format(scale_step[max_idx], scale_lr[max_idx], scale_penalty[max_idx], wi[max_idx],  max_eao))\n\n\nif __name__ == '__main__':\n    args = parser.parse_args()\n    collect_results(args)\n"
  },
  {
    "path": "tracker/sot/lib/utils/utils.py",
    "content": "import os\nimport json\nimport glob\nimport torch\nimport logging\nimport time\nimport math\nimport torch\nimport yaml\nimport cv2\nimport random\nimport numpy as np\n\nfrom torch.optim.lr_scheduler import _LRScheduler\nfrom pathlib import Path\nfrom collections import namedtuple\nfrom shapely.geometry import Polygon, box\nfrom os.path import join, realpath, dirname, exists\n# from utils.visdom import Visdom\nfrom _collections import OrderedDict\ntry:\n    torch._utils._rebuild_tensor_v2\nexcept AttributeError:\n    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):\n        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)\n        tensor.requires_grad = requires_grad\n        tensor._backward_hooks = backward_hooks\n        return tensor\n    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2\n\n# ---------------------------------\n# vis\n# ---------------------------------\ndef visdom_draw_tracking(visdom, image, box, segmentation=None):\n    if isinstance(box, OrderedDict):\n        box = [v for k, v in box.items()]\n    else:\n        box = (box,)\n    if segmentation is None:\n        visdom.register((image, *box), 'Tracking', 1, 'Tracking')\n    else:\n        visdom.register((image, *box, segmentation), 'Tracking', 1, 'Tracking')\n\ndef _visdom_ui_handler(self, data):\n    pause_mode = False\n    if data['event_type'] == 'KeyPress':\n        if data['key'] == ' ':\n            pause_mode = not pause_mode\n\n        elif data['key'] == 'ArrowRight' and pause_mode:\n            self.step = True\n\ndef _init_visdom(visdom_info, debug=False):\n    visdom_info = {} if visdom_info is None else visdom_info\n    pause_mode = False\n    step = False\n\n    visdom = Visdom(debug, {'handler': _visdom_ui_handler, 'win_id': 'Tracking'},\n                         visdom_info=visdom_info)\n\n    # Show help\n    help_text = 'You can pause/unpause the tracker by pressing ''space'' with 
the ''Tracking'' window ' \\\n                'selected. During paused mode, you can track for one frame by pressing the right arrow key.' \\\n                'To enable/disable plotting of a data block, tick/untick the corresponding entry in ' \\\n                'block list.'\n    visdom.register(help_text, 'text', 1, 'Help')\n\n    return visdom\n\n\n# ---------------------------------\n# Functions for FC tracking tools\n# ---------------------------------\ndef load_yaml(path, subset=True):\n    file = open(path, 'r')\n    yaml_obj = yaml.load(file.read(), Loader=yaml.FullLoader)\n\n    if subset:\n        hp = yaml_obj['TEST']\n    else:\n        hp = yaml_obj\n\n    return hp\n\n\ndef to_torch(ndarray):\n    return torch.from_numpy(ndarray)\n\n\ndef im_to_torch(img):\n    img = np.transpose(img, (2, 0, 1))  # C*H*W\n    img = to_torch(img).float()\n    return img\n\n\n\ndef get_subwindow_tracking_mask(im, pos, model_sz, original_sz, out_mode='torch'):\n    \"\"\"\n    SiamFC type cropping\n    \"\"\"\n    crop_info = dict()\n\n    if isinstance(pos, float):\n        pos = [pos, pos]\n\n    sz = original_sz\n    im_sz = im.shape\n    c = (original_sz+1) / 2\n    context_xmin = round(pos[0] - c)\n    context_xmax = context_xmin + sz - 1\n    context_ymin = round(pos[1] - c)\n    context_ymax = context_ymin + sz - 1\n    left_pad = int(max(0., -context_xmin))\n    top_pad = int(max(0., -context_ymin))\n    right_pad = int(max(0., context_xmax - im_sz[1] + 1))\n    bottom_pad = int(max(0., context_ymax - im_sz[0] + 1))\n\n    context_xmin = context_xmin + left_pad\n    context_xmax = context_xmax + left_pad\n    context_ymin = context_ymin + top_pad\n    context_ymax = context_ymax + top_pad\n\n    r, c = im.shape\n    if any([top_pad, bottom_pad, left_pad, right_pad]):\n        te_im = np.zeros((r + top_pad + bottom_pad, c + left_pad + right_pad), np.uint8)\n        # for return mask\n        tete_im = np.zeros((r + top_pad + bottom_pad, c + left_pad + 
right_pad))\n\n        te_im[top_pad:top_pad + r, left_pad:left_pad + c] = im\n        if top_pad:\n            te_im[0:top_pad, left_pad:left_pad + c] = 0\n        if bottom_pad:\n            te_im[r + top_pad:, left_pad:left_pad + c] = 0\n        if left_pad:\n            te_im[:, 0:left_pad] = 0\n        if right_pad:\n            te_im[:, c + left_pad:] = 0\n        im_patch_original = te_im[int(context_ymin):int(context_ymax + 1), int(context_xmin):int(context_xmax + 1)]\n    else:\n        tete_im = np.zeros(im.shape[0:2])\n        im_patch_original = im[int(context_ymin):int(context_ymax + 1), int(context_xmin):int(context_xmax + 1)]\n\n    if not np.array_equal(model_sz, original_sz):\n        im_patch = cv2.resize(im_patch_original, (model_sz, model_sz))\n    else:\n        im_patch = im_patch_original\n\n    crop_info['crop_cords'] = [context_xmin, context_xmax, context_ymin, context_ymax]\n    crop_info['empty_mask'] = tete_im\n    crop_info['pad_info'] = [top_pad, left_pad, r, c]\n\n    if out_mode == \"torch\":\n        return im_to_torch(im_patch.copy()), crop_info\n    else:\n        return im_patch, crop_info\n\n\ndef get_subwindow_tracking(im, pos, model_sz, original_sz, avg_chans, out_mode='torch'):\n    \"\"\"\n    SiamFC type cropping\n    \"\"\"\n    crop_info = dict()\n\n    if isinstance(pos, float):\n        pos = [pos, pos]\n\n    sz = original_sz\n    im_sz = im.shape\n    c = (original_sz+1) / 2\n    context_xmin = round(pos[0] - c)\n    context_xmax = context_xmin + sz - 1\n    context_ymin = round(pos[1] - c)\n    context_ymax = context_ymin + sz - 1\n    left_pad = int(max(0., -context_xmin))\n    top_pad = int(max(0., -context_ymin))\n    right_pad = int(max(0., context_xmax - im_sz[1] + 1))\n    bottom_pad = int(max(0., context_ymax - im_sz[0] + 1))\n\n    context_xmin = context_xmin + left_pad\n    context_xmax = context_xmax + left_pad\n    context_ymin = context_ymin + top_pad\n    context_ymax = context_ymax + top_pad\n\n    r, 
c, k = im.shape\n    if any([top_pad, bottom_pad, left_pad, right_pad]):\n        te_im = np.zeros((r + top_pad + bottom_pad, c + left_pad + right_pad, k), np.uint8)\n        # for return mask\n        tete_im = np.zeros((r + top_pad + bottom_pad, c + left_pad + right_pad))\n\n        te_im[top_pad:top_pad + r, left_pad:left_pad + c, :] = im\n        if top_pad:\n            te_im[0:top_pad, left_pad:left_pad + c, :] = avg_chans\n        if bottom_pad:\n            te_im[r + top_pad:, left_pad:left_pad + c, :] = avg_chans\n        if left_pad:\n            te_im[:, 0:left_pad, :] = avg_chans\n        if right_pad:\n            te_im[:, c + left_pad:, :] = avg_chans\n        im_patch_original = te_im[int(context_ymin):int(context_ymax + 1), int(context_xmin):int(context_xmax + 1), :]\n    else:\n        tete_im = np.zeros(im.shape[0:2])\n        im_patch_original = im[int(context_ymin):int(context_ymax + 1), int(context_xmin):int(context_xmax + 1), :]\n\n    if not np.array_equal(model_sz, original_sz):\n        im_patch = cv2.resize(im_patch_original, (model_sz, model_sz))\n    else:\n        im_patch = im_patch_original\n\n    crop_info['crop_cords'] = [context_xmin, context_xmax, context_ymin, context_ymax]\n    crop_info['empty_mask'] = tete_im\n    crop_info['pad_info'] = [top_pad, left_pad, r, c]\n\n    if out_mode == \"torch\":\n        return im_to_torch(im_patch.copy()), crop_info\n    else:\n        return im_patch, crop_info\n\n\ndef make_scale_pyramid(im, pos, in_side_scaled, out_side, avg_chans):\n    \"\"\"\n    SiamFC 3/5 scale inputs\n    \"\"\"\n    in_side_scaled = [round(x) for x in in_side_scaled]\n    num_scale = len(in_side_scaled)\n    pyramid = torch.zeros(num_scale, 3, out_side, out_side)\n    max_target_side = in_side_scaled[-1]\n    min_target_side = in_side_scaled[0]\n    beta = out_side / min_target_side\n\n    search_side = round(beta * max_target_side)\n    search_region, _ = get_subwindow_tracking(im, pos, int(search_side),
int(max_target_side), avg_chans, out_mode='np')\n\n    for s, temp in enumerate(in_side_scaled):\n        target_side = round(beta * temp)\n        temp, _ = get_subwindow_tracking(search_region, (1 + search_side) / 2, out_side, target_side, avg_chans)\n        pyramid[s, :] = temp\n    return pyramid\n\n# ---------------------------------\n# Functions for RPN tracking tools\n# ---------------------------------\ndef python2round(f):\n    \"\"\"\n    use the Python 2 round behaviour in Python 3\n    \"\"\"\n    if round(f + 1) - round(f) != 1:\n        return f + abs(f) / f * 0.5\n    return round(f)\n\n\ndef generate_anchor(total_stride, scales, ratios, score_size):\n    \"\"\"\n    slightly different from the released SiamRPN-VOT18:\n    keeps the original spatial layout instead of flattening\n    \"\"\"\n    anchor_num = len(ratios) * len(scales)\n    anchor = np.zeros((anchor_num, 4),  dtype=np.float32)\n    size = total_stride * total_stride\n    count = 0\n    for ratio in ratios:\n        ws = int(np.sqrt(size / ratio))\n        hs = int(ws * ratio)\n        for scale in scales:\n            wws = ws * scale\n            hhs = hs * scale\n            anchor[count, 0] = 0\n            anchor[count, 1] = 0\n            anchor[count, 2] = wws\n            anchor[count, 3] = hhs\n            count += 1\n\n    score_size = int(score_size)\n    anchor = np.tile(anchor, score_size * score_size).reshape((-1, 4))\n\n    ori = - (score_size // 2) * total_stride\n    xx, yy = np.meshgrid([ori + total_stride * dx for dx in range(score_size)],\n                         [ori + total_stride * dy for dy in range(score_size)])\n\n    xx, yy = np.tile(xx.flatten(), (anchor_num, 1)).flatten(), \\\n             np.tile(yy.flatten(), (anchor_num, 1)).flatten()\n    anchor[:, 0], anchor[:, 1] = xx.astype(np.float32), yy.astype(np.float32)\n\n    anchor = np.reshape(anchor, (anchor_num, score_size, score_size, 4))   # e.g. [5, 17, 17, 4]\n    anchor = np.transpose(anchor, (3, 0, 1, 2))    # e.g. [4, 5, 17, 17]\n\n    return anchor   # [4, anchor_num, score_size, score_size]\n\n\n\nclass ImageNormalizer(object):\n\n    def __init__(self, mean, std, in_type='opencv', out_type='pil'):\n        \"\"\"\n        Normalize the input tensor by subtracting the mean and dividing by the std.\n        \"\"\"\n        self.mean = mean\n        self.std = std\n\n        assert in_type in (\"opencv\", \"pil\"), \"Type must be 'opencv' or 'pil'\"\n        assert out_type in (\"opencv\", \"pil\"), \"Type must be 'opencv' or 'pil'\"\n\n        if in_type == out_type:\n            self.order_trans = False\n            self.div_factor = 1.0\n        elif in_type == 'opencv' and out_type == 'pil':\n            self.order_trans = True\n            self.div_factor = 255.0\n        elif in_type == 'pil' and out_type == 'opencv':\n            self.order_trans = True\n            self.div_factor = 1.0 / 255.0\n        else:\n            raise ValueError(\"Unknown key for {} {}\".format(in_type, out_type))\n\n    def __call__(self, img_tensor):\n\n        if self.order_trans:\n            img_tensor = img_tensor[:, [2, 1, 0], :, :].contiguous()\n            img_tensor.div_(self.div_factor)\n\n        for i in range(3):\n            img_tensor[:, i, :, :].sub_(self.mean[i]).div_(self.std[i])\n\n        return img_tensor\n\n\ndef crop_with_boxes(img_tensor, x_crop_boxes, out_height, out_width, crop_inds=None, avg_channels=True,\n                    has_normed_coords=False):\n    \"\"\"Crop the image tensor by the given boxes. The output is resized to the target size.\n\n    Note: relies on CropAndResizeFunction (a RoI-Align-style crop), which must be\n    importable in this module.\n\n    Params:\n        img_tensor: torch.Tensor, in shape of [N, C, H, W]. If N > 1, the crop_inds must be specified.\n        crop_boxes: list/numpy.ndarray/torch.Tensor in shape of [K x 4].\n        out_height: int.\n        out_width: int.\n        crop_inds: list/numpy.ndarray/torch.Tensor in shape of [K]\n    Returns:\n        crop_img_tensor: torch.Tensor, in shape of [K, C, H, W]\n    \"\"\"\n\n    img_device = img_tensor.device\n\n    if isinstance(x_crop_boxes, list):\n        crop_boxes = torch.tensor(x_crop_boxes, dtype=torch.float32).to(img_device)\n    elif isinstance(x_crop_boxes, np.ndarray):\n        crop_boxes = torch.tensor(x_crop_boxes, dtype=torch.float32).to(img_device)\n    elif isinstance(x_crop_boxes, torch.Tensor):\n        # change type and device if necessary\n        crop_boxes = x_crop_boxes.clone().to(device=img_device, dtype=torch.float32)\n    else:\n        raise ValueError('Unknown type for crop_boxes {}'.format(type(x_crop_boxes)))\n\n    if len(crop_boxes.size()) == 1:\n        crop_boxes = crop_boxes.view(1, 4)\n\n    num_imgs, channels, img_height, img_width = img_tensor.size()\n    num_crops = crop_boxes.size(0)\n\n    if crop_inds is not None:\n        if isinstance(crop_inds, list) or isinstance(crop_inds, np.ndarray):\n            crop_inds = torch.tensor(crop_inds, dtype=torch.float32).to(img_device)\n        elif isinstance(crop_inds, torch.Tensor):\n            crop_inds = crop_inds.to(device=img_device, dtype=torch.float32)\n        else:\n            raise ValueError('Unknown type for crop_inds {}'.format(type(crop_inds)))\n        crop_inds = crop_inds.view(-1)\n        assert crop_inds.size(0) == crop_boxes.size(0)\n    else:\n        if num_imgs == 1:\n            crop_inds = torch.zeros(num_crops, dtype=torch.float32, device=img_device)\n        elif num_imgs == num_crops:\n            crop_inds = torch.arange(num_crops, dtype=torch.float32, device=img_device)\n        else:\n            raise ValueError('crop_inds MUST NOT be None.')\n\n    if avg_channels:\n        img_channel_avg =
img_tensor.mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)\n        img_tensor_minus_avg = img_tensor - img_channel_avg  # minus mean values\n    else:\n        img_tensor_minus_avg = img_tensor\n    crop_img_tensor = CropAndResizeFunction(out_height, out_width, has_normed=has_normed_coords)(\n        img_tensor_minus_avg, crop_boxes, crop_inds)\n\n    if avg_channels:\n        # add mean value\n        crop_img_tensor += img_channel_avg[crop_inds.long()]\n\n    return crop_img_tensor\n\n\n\n# -----------------------------------\n# Functions for benchmark and others\n# -----------------------------------\ndef load_dataset(dataset, base_path):\n    \"\"\"\n    support OTB and VOT now\n    TODO: add other datasets\n    \"\"\"\n    info = {}\n\n    if 'OTB' in dataset:\n        json_path = join(base_path, dataset+'.json')\n        info = json.load(open(json_path, 'r'))\n        for v in info.keys():\n            #path_name = info[v]['name']\n            path_name = info[v]['video_dir']\n            #info[v]['image_files'] = [join(base_path, path_name, 'img', im_f) for im_f in info[v]['image_files']]\n            info[v]['image_files'] = [join(base_path,im_f) for im_f in info[v]['img_names']]\n            info[v]['gt'] = np.array(info[v]['gt_rect']) - [1, 1, 0, 0]\n            info[v]['name'] = v\n\n    elif 'VOT' in dataset and (not 'VOT2019RGBT' in dataset) and (not 'VOT2020' in dataset):\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        list_path = join(base_path, 'list.txt')\n        with open(list_path) as f:\n            videos = [v.strip() for v in f.readlines()]\n        videos = sorted(videos)\n        for video in videos:\n            video_path = join(base_path, video)\n            image_path = join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            if len(image_files) == 0:  # VOT2018\n                image_path = join(video_path, 'color', '*.jpg')\n                
image_files = sorted(glob.glob(image_path))\n            gt_path = join(video_path, 'groundtruth.txt')\n            gt = np.loadtxt(gt_path, delimiter=',').astype(np.float64)\n            info[video] = {'image_files': image_files, 'gt': gt, 'name': video}\n    elif 'VOT2020' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        list_path = join(base_path, 'list.txt')\n        with open(list_path) as f:\n            videos = [v.strip() for v in f.readlines()]\n        videos = sorted(videos)\n        for video in videos:\n            video_path = join(base_path, video)\n            image_path = join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            if len(image_files) == 0:  # VOT2018\n                image_path = join(video_path, 'color', '*.jpg')\n                image_files = sorted(glob.glob(image_path))\n            gt_path = join(video_path, 'groundtruth.txt')\n            gt = open(gt_path, 'r').readlines()\n            info[video] = {'image_files': image_files, 'gt': gt, 'name': video}\n    elif 'RGBT234' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        json_path = join(realpath(dirname(__file__)), '../../dataset', dataset + '.json')\n        info = json.load(open(json_path, 'r'))\n        for v in info.keys():\n            path_name = info[v]['name']\n            info[v]['infrared_imgs'] = [join(base_path, path_name, 'infrared', im_f) for im_f in\n                                        info[v]['infrared_imgs']]\n            info[v]['visiable_imgs'] = [join(base_path, path_name, 'visible', im_f) for im_f in\n                                        info[v]['visiable_imgs']]\n            info[v]['infrared_gt'] = np.array(info[v]['infrared_gt'])  # 0-index\n            info[v]['visiable_gt'] = np.array(info[v]['visiable_gt'])  # 0-index\n            info[v]['name'] = v\n\n    elif 'VOT2019RGBT' in dataset:\n        
base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        list_path = join(base_path, 'list.txt')\n        with open(list_path) as f:\n            videos = [v.strip() for v in f.readlines()]\n        videos = sorted(videos)\n        for video in videos:\n            video_path = join(base_path, video)\n            in_image_path = join(video_path, 'ir', '*.jpg')\n            rgb_image_path = join(video_path, 'color', '*.jpg')\n            in_image_files = sorted(glob.glob(in_image_path))\n            rgb_image_files = sorted(glob.glob(rgb_image_path))\n\n            assert len(in_image_files) > 0, 'please check RGBT-VOT dataloader'\n            gt_path = join(video_path, 'groundtruth.txt')\n            gt = np.loadtxt(gt_path, delimiter=',').astype(np.float64)\n            info[video] = {'infrared_imgs': in_image_files, 'visiable_imgs': rgb_image_files, 'gt': gt, 'name': video}\n    elif 'VISDRONEVAL' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        seq_path = join(base_path, 'sequences')\n        anno_path = join(base_path, 'annotations')\n        attr_path = join(base_path, 'attributes')\n\n        videos = sorted(os.listdir(seq_path))\n        for video in videos:\n            video_path = join(seq_path, video)\n\n            image_path = join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            gt_path = join(anno_path, '{}.txt'.format(video))\n            gt = np.loadtxt(gt_path, delimiter=',')\n            info[video] = {'image_files': image_files, 'gt': gt, 'name': video}\n    elif 'VISDRONETEST' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        seq_path = join(base_path, 'sequences')\n        anno_path = join(base_path, 'initialization')\n\n        videos = sorted(os.listdir(seq_path))\n        for video in videos:\n            video_path = join(seq_path, video)\n\n            image_path = 
join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            gt_path = join(anno_path, '{}.txt'.format(video))\n            gt = np.loadtxt(gt_path, delimiter=',').reshape(1, 4)\n            info[video] = {'image_files': image_files, 'gt': gt, 'name': video}\n\n    elif 'GOT10KVAL' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        seq_path = base_path\n\n        videos = sorted(os.listdir(seq_path))\n        videos.remove('list.txt')\n        for video in videos:\n            video_path = join(seq_path, video)\n            image_path = join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            gt_path = join(video_path, 'groundtruth.txt')\n            gt = np.loadtxt(gt_path, delimiter=',')\n            info[video] = {'image_files': image_files, 'gt': gt, 'name': video}\n\n    elif 'GOT10K' in dataset:  # GOT10K TEST\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        seq_path = base_path\n\n        videos = sorted(os.listdir(seq_path))\n        videos.remove('list.txt')\n        for video in videos:\n            video_path = join(seq_path, video)\n            image_path = join(video_path, '*.jpg')\n            image_files = sorted(glob.glob(image_path))\n            gt_path = join(video_path, 'groundtruth.txt')\n            gt = np.loadtxt(gt_path, delimiter=',')\n            info[video] = {'image_files': image_files, 'gt': [gt], 'name': video}\n\n    elif 'LASOT' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        json_path = join(realpath(dirname(__file__)), '../../dataset', dataset + '.json')\n        jsons = json.load(open(json_path, 'r'))\n        testingvideos = list(jsons.keys())\n\n        father_videos = sorted(os.listdir(base_path))\n        for f_video in father_videos:\n            f_video_path = join(base_path, f_video)\n            son_videos 
= sorted(os.listdir(f_video_path))\n            for s_video in son_videos:\n                if s_video not in testingvideos:  # 280 testing videos\n                    continue\n\n                s_video_path = join(f_video_path, s_video)\n                # ground truth\n                gt_path = join(s_video_path, 'groundtruth.txt')\n                gt = np.loadtxt(gt_path, delimiter=',')\n                gt = gt - [1, 1, 0, 0]\n                # get img file\n                img_path = join(s_video_path, 'img', '*jpg')\n                image_files = sorted(glob.glob(img_path))\n\n                info[s_video] = {'image_files': image_files, 'gt': gt, 'name': s_video}\n    elif 'DAVIS' in dataset and 'TEST' not in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', 'DAVIS')\n        list_path = join(realpath(dirname(__file__)), '../../dataset', 'DAVIS', 'ImageSets', dataset[-4:],\n                         'val.txt')\n        with open(list_path) as f:\n            videos = [v.strip() for v in f.readlines()]\n        for video in videos:\n            info[video] = {}\n            info[video]['anno_files'] = sorted(glob.glob(join(base_path, 'Annotations/480p', video, '*.png')))\n            info[video]['image_files'] = sorted(glob.glob(join(base_path, 'JPEGImages/480p', video, '*.jpg')))\n            info[video]['name'] = video\n    elif 'YTBVOS' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', 'YTBVOS', 'valid')\n        json_path = join(realpath(dirname(__file__)), '../../dataset', 'YTBVOS', 'valid', 'meta.json')\n        meta = json.load(open(json_path, 'r'))\n        meta = meta['videos']\n        info = dict()\n        for v in meta.keys():\n            objects = meta[v]['objects']\n            frames = []\n            anno_frames = []\n            info[v] = dict()\n            for obj in objects:\n                frames += objects[obj]['frames']\n                anno_frames += 
[objects[obj]['frames'][0]]\n            frames = sorted(np.unique(frames))\n            info[v]['anno_files'] = [join(base_path, 'Annotations', v, im_f + '.png') for im_f in frames]\n            info[v]['anno_init_files'] = [join(base_path, 'Annotations', v, im_f + '.png') for im_f in anno_frames]\n            info[v]['image_files'] = [join(base_path, 'JPEGImages', v, im_f + '.jpg') for im_f in frames]\n            info[v]['name'] = v\n\n            info[v]['start_frame'] = dict()\n            info[v]['end_frame'] = dict()\n            for obj in objects:\n                start_file = objects[obj]['frames'][0]\n                end_file = objects[obj]['frames'][-1]\n                info[v]['start_frame'][obj] = frames.index(start_file)\n                info[v]['end_frame'][obj] = frames.index(end_file)\n\n    else:\n        raise ValueError(\"Dataset not supported yet; edit this function for other datasets yourself.\")\n\n    return info\n\ndef load_video_info_im_gt(dataset, video_name):\n    if 'LASOT' in dataset:\n        base_path = join(realpath(dirname(__file__)), '../../dataset', dataset)\n        json_path = join(realpath(dirname(__file__)), '../../dataset', dataset+'.json')\n        jsons = json.load(open(json_path, 'r'))\n        testingvideos = list(jsons.keys())\n\n        father_video = video_name.split('-')[0]\n\n        f_video_path = join(base_path, father_video)\n        s_video_path = join(f_video_path, video_name)\n\n        # ground truth\n        gt_path = join(s_video_path, 'groundtruth.txt')\n        gt = np.loadtxt(gt_path, delimiter=',')\n        gt = gt - [1, 1, 0, 0]\n        # get img files\n        img_path = join(s_video_path, 'img', '*jpg')\n        image_files = sorted(glob.glob(img_path))\n\n        imgs = []\n        for path in image_files:\n            imgs.append(cv2.imread(path))\n\n    else:\n        raise ValueError('only LASOT is supported now')\n\n    return imgs, gt\n\ndef check_keys(model, pretrained_state_dict,
print_unuse=True):\n    ckpt_keys = set(pretrained_state_dict.keys())\n    model_keys = set(model.state_dict().keys())\n    used_pretrained_keys = model_keys & ckpt_keys\n    unused_pretrained_keys = list(ckpt_keys - model_keys)\n    missing_keys = list(model_keys - ckpt_keys)\n\n    # remove num_batches_tracked\n    for k in sorted(missing_keys):\n        if 'num_batches_tracked' in k:\n            missing_keys.remove(k)\n\n    print('missing keys:{}'.format(missing_keys))\n    if print_unuse:\n        print('unused checkpoint keys:{}'.format(unused_pretrained_keys))\n    # print('used keys:{}'.format(used_pretrained_keys))\n    assert len(used_pretrained_keys) > 0, 'load NONE from pretrained checkpoint'\n    return True\n\ndef remove_prefix(state_dict, prefix):\n    '''\n    Old-style models store all parameter names with a shared prefix 'module.'\n    '''\n    print('remove prefix \\'{}\\''.format(prefix))\n    f = lambda x: x.split(prefix, 1)[-1] if x.startswith(prefix) else x\n    return {f(key): value for key, value in state_dict.items()}\n\ndef load_pretrain(model, pretrained_path, print_unuse=True):\n    print('load pretrained model from {}'.format(pretrained_path))\n\n    device = torch.cuda.current_device()\n    pretrained_dict = torch.load(pretrained_path, map_location=lambda storage, loc: storage.cuda(device))\n\n    if \"state_dict\" in pretrained_dict.keys():\n        pretrained_dict = remove_prefix(pretrained_dict['state_dict'], 'module.')\n        pretrained_dict = remove_prefix(pretrained_dict, 'feature_extractor.')  # remove online train\n    else:\n        pretrained_dict = remove_prefix(pretrained_dict, 'module.')  # remove multi-gpu label\n        pretrained_dict = remove_prefix(pretrained_dict, 'feature_extractor.')   # remove online train\n\n    check_keys(model, pretrained_dict, print_unuse=print_unuse)\n    model.load_state_dict(pretrained_dict, strict=False)\n    return model\n\n\ndef trans_model(model_path, save_path):\n    pretrained = torch.load(model_path, map_location=lambda storage, loc: storage)\n\n    # # for self train imagenet\n    # pretrained = remove_prefix(pretrained['state_dict'], 'module.')\n\n    save_ckpt = {}\n    for key in pretrained.keys():\n        if key.startswith('layer'):\n            key_in_new_res = 'features.features.' + key\n            save_ckpt[key_in_new_res] = pretrained[key]\n\n        else:\n            save_ckpt[key] = pretrained[key]\n\n    torch.save(save_ckpt, save_path)\n\n\nCorner = namedtuple('Corner', 'x1 y1 x2 y2')\nBBox = Corner\nCenter = namedtuple('Center', 'x y w h')\n\ndef corner2center(corner):\n    \"\"\"\n    [x1, y1, x2, y2] --> [cx, cy, w, h]\n    \"\"\"\n    if isinstance(corner, Corner):\n        x1, y1, x2, y2 = corner\n        return Center((x1 + x2) * 0.5, (y1 + y2) * 0.5, (x2 - x1), (y2 - y1))\n    else:\n        x1, y1, x2, y2 = corner[0], corner[1], corner[2], corner[3]\n        x = (x1 + x2) * 0.5\n        y = (y1 + y2) * 0.5\n        w = x2 - x1\n        h = y2 - y1\n        return x, y, w, h\n\ndef center2corner(center):\n    \"\"\"\n    [cx, cy, w, h] --> [x1, y1, x2, y2]\n    \"\"\"\n    if isinstance(center, Center):\n        x, y, w, h = center\n        return Corner(x - w * 0.5, y - h * 0.5, x + w * 0.5, y + h * 0.5)\n    else:\n        x, y, w, h = center[0], center[1], center[2], center[3]\n        x1 = x - w * 0.5\n        y1 = y - h * 0.5\n        x2 = x + w * 0.5\n        y2 = y + h * 0.5\n        return x1, y1, x2, y2\n\ndef IoU(rect1, rect2):\n    # overlap\n\n    x1, y1, x2, y2 = rect1[0], rect1[1], rect1[2], rect1[3]\n    tx1, ty1, tx2, ty2 = rect2[0], rect2[1], rect2[2], rect2[3]\n\n    xx1 = np.maximum(tx1, x1)\n    yy1 = np.maximum(ty1, y1)\n    xx2 = np.minimum(tx2, x2)\n    yy2 = np.minimum(ty2, y2)\n\n    ww = np.maximum(0, xx2 - xx1)\n    hh = np.maximum(0, yy2 - yy1)\n\n    area = (x2-x1) * (y2-y1)\n\n    target_a = (tx2-tx1) * (ty2 - ty1)\n\n    inter = ww * hh\n    overlap =
inter / (area + target_a - inter)\n\n    return overlap\n\ndef aug_apply(bbox, param, shape, inv=False, rd=False):\n    \"\"\"\n    apply augmentation\n    :param bbox: original bbox in image\n    :param param: augmentation param, shift/scale\n    :param shape: image shape, h, w, (c)\n    :param inv: inverse\n    :param rd: round bbox\n    :return: bbox(, param)\n        bbox: augmented bbox\n        param: real augmentation param\n    \"\"\"\n    if not inv:\n        center = corner2center(bbox)\n        original_center = center\n\n        real_param = {}\n        if 'scale' in param:\n            scale_x, scale_y = param['scale']\n            imh, imw = shape[:2]\n            h, w = center.h, center.w\n\n            scale_x = min(scale_x, float(imw) / w)\n            scale_y = min(scale_y, float(imh) / h)\n            center = Center(center.x, center.y, center.w * scale_x, center.h * scale_y)\n\n        bbox = center2corner(center)\n\n        if 'shift' in param:\n            tx, ty = param['shift']\n            x1, y1, x2, y2 = bbox\n            imh, imw = shape[:2]\n\n            tx = max(-x1, min(imw - 1 - x2, tx))\n            ty = max(-y1, min(imh - 1 - y2, ty))\n\n            bbox = Corner(x1 + tx, y1 + ty, x2 + tx, y2 + ty)\n\n        if rd:\n            bbox = Corner(*map(round, bbox))\n\n        current_center = corner2center(bbox)\n\n        real_param['scale'] = current_center.w / original_center.w, current_center.h / original_center.h\n        real_param['shift'] = current_center.x - original_center.x, current_center.y - original_center.y\n\n        return bbox, real_param\n    else:\n        if 'scale' in param:\n            scale_x, scale_y = param['scale']\n        else:\n            scale_x, scale_y = 1., 1.\n\n        if 'shift' in param:\n            tx, ty = param['shift']\n        else:\n            tx, ty = 0, 0\n\n        center = corner2center(bbox)\n\n        center = Center(center.x - tx, center.y - ty, center.w / scale_x, center.h / 
scale_y)\n\n        return center2corner(center)\n\n\n# others\ndef cxy_wh_2_rect(pos, sz):\n    return [float(max(float(0), pos[0]-sz[0]/2)), float(max(float(0), pos[1]-sz[1]/2)), float(sz[0]), float(sz[1])]  # 0-index\n\n\ndef get_axis_aligned_bbox(region):\n    nv = region.size\n    if nv == 8:\n        cx = np.mean(region[0::2])\n        cy = np.mean(region[1::2])\n        x1 = min(region[0::2])\n        x2 = max(region[0::2])\n        y1 = min(region[1::2])\n        y2 = max(region[1::2])\n        A1 = np.linalg.norm(region[0:2] - region[2:4]) * np.linalg.norm(region[2:4] - region[4:6])\n        A2 = (x2 - x1) * (y2 - y1)\n        s = np.sqrt(A1 / A2)\n        w = s * (x2 - x1) + 1\n        h = s * (y2 - y1) + 1\n    else:\n        x = region[0]\n        y = region[1]\n        w = region[2]\n        h = region[3]\n        cx = x+w/2\n        cy = y+h/2\n\n    return cx, cy, w, h\n\n# poly_iou and _to_polygon comes from Linghua Huang\ndef poly_iou(polys1, polys2, bound=None):\n    r\"\"\"Intersection over union of polygons.\n\n    Args:\n        polys1 (numpy.ndarray): An N x 4 numpy array, each line represent a rectangle\n            (left, top, width, height); or an N x 8 numpy array, each line represent\n            the coordinates (x1, y1, x2, y2, x3, y3, x4, y4) of 4 corners.\n        polys2 (numpy.ndarray): An N x 4 numpy array, each line represent a rectangle\n            (left, top, width, height); or an N x 8 numpy array, each line represent\n            the coordinates (x1, y1, x2, y2, x3, y3, x4, y4) of 4 corners.\n    \"\"\"\n    assert polys1.ndim in [1, 2]\n    if polys1.ndim == 1:\n        polys1 = np.array([polys1])\n        polys2 = np.array([polys2])\n    assert len(polys1) == len(polys2)\n\n    polys1 = _to_polygon(polys1)\n    polys2 = _to_polygon(polys2)\n    if bound is not None:\n        bound = box(0, 0, bound[0], bound[1])\n        polys1 = [p.intersection(bound) for p in polys1]\n        polys2 = [p.intersection(bound) for p in 
polys2]\n\n    eps = np.finfo(float).eps\n    ious = []\n    for poly1, poly2 in zip(polys1, polys2):\n        area_inter = poly1.intersection(poly2).area\n        area_union = poly1.union(poly2).area\n        ious.append(area_inter / (area_union + eps))\n    ious = np.clip(ious, 0.0, 1.0)\n\n    return ious\n\n\ndef _to_polygon(polys):\n    r\"\"\"Convert 4 or 8 dimensional array to Polygons\n\n    Args:\n        polys (numpy.ndarray): An N x 4 numpy array, each line represent a rectangle\n            (left, top, width, height); or an N x 8 numpy array, each line represent\n            the coordinates (x1, y1, x2, y2, x3, y3, x4, y4) of 4 corners.\n    \"\"\"\n\n    def to_polygon(x):\n        assert len(x) in [4, 8]\n        if len(x) == 4:\n            return box(x[0], x[1], x[0] + x[2], x[1] + x[3])\n        elif len(x) == 8:\n            return Polygon([(x[2 * i], x[2 * i + 1]) for i in range(4)])\n\n    if polys.ndim == 1:\n        return to_polygon(polys)\n    else:\n        return [to_polygon(t) for t in polys]\n\n\ndef restore_from(model, optimizer, ckpt_path):\n    print('restore from {}'.format(ckpt_path))\n    device = torch.cuda.current_device()\n    ckpt = torch.load(ckpt_path, map_location = lambda storage, loc: storage.cuda(device))\n    epoch = ckpt['epoch']\n    arch = ckpt['arch']\n    ckpt_model_dict = remove_prefix(ckpt['state_dict'], 'module.')\n    check_keys(model, ckpt_model_dict)\n    model.load_state_dict(ckpt_model_dict, strict=False)\n\n    optimizer.load_state_dict(ckpt['optimizer'])\n    return model, optimizer, epoch,  arch\n\n\ndef print_speed(i, i_time, n, logger):\n    \"\"\"print_speed(index, index_time, total_iteration)\"\"\"\n    average_time = i_time\n    remaining_time = (n - i) * average_time\n    remaining_day = math.floor(remaining_time / 86400)\n    remaining_hour = math.floor(remaining_time / 3600 - remaining_day * 24)\n    remaining_min = math.floor(remaining_time / 60 - remaining_day * 1440 - remaining_hour * 60)\n    
logger.info('Progress: %d / %d [%d%%], Speed: %.3f s/iter, ETA %d:%02d:%02d (D:H:M)\\n' % (i, n, i/n*100, average_time, remaining_day, remaining_hour, remaining_min))\n    logger.info('\\nPROGRESS: {:.2f}%\\n'.format(100 * i / n))  # for philly. let's reduce it in case others kill our job 100-25\n\n\ndef create_logger(cfg, modelFlag='OCEAN', phase='train'):\n    root_output_dir = Path(cfg.OUTPUT_DIR)\n    # set up logger\n    if not root_output_dir.exists():\n        print('=> creating {}'.format(root_output_dir))\n        root_output_dir.mkdir()\n    cfg = cfg[modelFlag]\n    model = cfg.TRAIN.MODEL\n\n    final_output_dir = root_output_dir / model\n\n    print('=> creating {}'.format(final_output_dir))\n    final_output_dir.mkdir(parents=True, exist_ok=True)\n\n    time_str = time.strftime('%Y-%m-%d-%H-%M')\n    log_file = '{}_{}_{}.log'.format(model, time_str, phase)\n    final_log_file = final_output_dir / log_file\n    head = '%(asctime)-15s %(message)s'\n    logging.basicConfig(filename=str(final_log_file),\n                        format=head)\n    logger = logging.getLogger()\n    logger.setLevel(logging.INFO)\n    console = logging.StreamHandler()\n    logging.getLogger('').addHandler(console)\n\n    tensorboard_log_dir = root_output_dir / model / (model + '_' + time_str)\n    print('=> creating {}'.format(tensorboard_log_dir))\n    tensorboard_log_dir.mkdir(parents=True, exist_ok=True)\n\n    return logger, str(final_output_dir), str(tensorboard_log_dir)\n\n\ndef save_checkpoint(states, is_best, output_dir, filename='checkpoint.pth.tar'):\n    \"\"\"\n    save checkpoint\n    \"\"\"\n    torch.save(states, os.path.join(output_dir, filename))\n    if is_best and 'state_dict' in states:\n        torch.save(states['state_dict'],\n                   os.path.join(output_dir, 'model_best.pth'))\n\n\ndef save_model(model, epoch, optimizer, model_name, cfg, isbest=False):\n    \"\"\"\n    save model\n    \"\"\"\n    if not exists(cfg.CHECKPOINT_DIR):\n        
os.makedirs(cfg.CHECKPOINT_DIR)\n\n    if epoch > 0:\n        save_checkpoint({\n            'epoch': epoch + 1,\n            'arch': model_name,\n            'state_dict': model.module.state_dict(),\n            'optimizer': optimizer.state_dict()\n        }, isbest, cfg.CHECKPOINT_DIR, 'checkpoint_e%d.pth' % (epoch + 1))\n    else:\n        print('checkpoint not saved (epoch <= 0)')\n\n\ndef extract_eaos(lines):\n    \"\"\"\n    extract info of VOT eao\n    \"\"\"\n    epochs = []\n    eaos = []\n    for line in lines:\n        print(line)\n        # if not line.startswith('[*]'):   # matlab version\n        if not line.startswith('| Ocean'):\n            continue\n        temp = line.split('|')\n        epochs.append(int(temp[1].split('_e')[-1]))\n        eaos.append(float(temp[-2]))\n    # find best epoch\n    idx = eaos.index(max(eaos))\n    epoch = epochs[idx]\n    return epoch\n\n\ndef extract_logs(logfile, prefix):\n    \"\"\"\n    extract logs for tuning, return best epoch number\n    prefix: VOT, OTB, VOTLT, VOTRGBD, VOTRGBT\n    \"\"\"\n    lines = open(logfile, 'r').readlines()\n    if prefix == 'VOT':\n        epoch = extract_eaos(lines)\n    else:\n        raise ValueError('not supported now')\n\n    return 'checkpoint_e{}.pth'.format(epoch)\n\n\n# ----------------------------\n# build lr (from SiamRPN++)\n# ---------------------------\nclass LRScheduler(_LRScheduler):\n    def __init__(self, optimizer, last_epoch=-1):\n        if 'lr_spaces' not in self.__dict__:\n            raise Exception('lr_spaces must be set in \"LRScheduler\"')\n        super(LRScheduler, self).__init__(optimizer, last_epoch)\n\n    def get_cur_lr(self):\n        return self.lr_spaces[self.last_epoch]\n\n    def get_lr(self):\n        epoch = self.last_epoch\n        return [self.lr_spaces[epoch] * pg['initial_lr'] / self.start_lr\n                for pg in self.optimizer.param_groups]\n\n    def __repr__(self):\n        return \"({}) lr spaces: \\n{}\".format(self.__class__.__name__,\n       
                                      self.lr_spaces)\n\n\nclass LogScheduler(LRScheduler):\n    def __init__(self, optimizer, start_lr=0.03, end_lr=5e-4,\n                 epochs=50, last_epoch=-1, **kwargs):\n        self.start_lr = start_lr\n        self.end_lr = end_lr\n        self.epochs = epochs\n        self.lr_spaces = np.logspace(math.log10(start_lr),\n                                     math.log10(end_lr),\n                                     epochs)\n\n        super(LogScheduler, self).__init__(optimizer, last_epoch)\n\n\nclass StepScheduler(LRScheduler):\n    def __init__(self, optimizer, start_lr=0.01, end_lr=None,\n                 step=10, mult=0.1, epochs=50, last_epoch=-1, **kwargs):\n        if end_lr is not None:\n            if start_lr is None:\n                start_lr = end_lr / (mult ** (epochs // step))\n            else:  # for warm up policy\n                mult = math.pow(end_lr/start_lr, 1. / (epochs // step))\n        self.start_lr = start_lr\n        self.lr_spaces = self.start_lr * (mult**(np.arange(epochs) // step))\n        self.mult = mult\n        self._step = step\n\n        super(StepScheduler, self).__init__(optimizer, last_epoch)\n\n\nclass MultiStepScheduler(LRScheduler):\n    def __init__(self, optimizer, start_lr=0.01, end_lr=None,\n                 steps=[10, 20, 30, 40], mult=0.5, epochs=50,\n                 last_epoch=-1, **kwargs):\n        if end_lr is not None:\n            if start_lr is None:\n                start_lr = end_lr / (mult ** (len(steps)))\n            else:\n                mult = math.pow(end_lr/start_lr, 1. 
/ len(steps))\n        self.start_lr = start_lr\n        self.lr_spaces = self._build_lr(start_lr, steps, mult, epochs)\n        self.mult = mult\n        self.steps = steps\n\n        super(MultiStepScheduler, self).__init__(optimizer, last_epoch)\n\n    def _build_lr(self, start_lr, steps, mult, epochs):\n        lr = [0] * epochs\n        lr[0] = start_lr\n        for i in range(1, epochs):\n            lr[i] = lr[i-1]\n            if i in steps:\n                lr[i] *= mult\n        return np.array(lr, dtype=np.float32)\n\n\nclass LinearStepScheduler(LRScheduler):\n    def __init__(self, optimizer, start_lr=0.01, end_lr=0.005,\n                 epochs=50, last_epoch=-1, **kwargs):\n        self.start_lr = start_lr\n        self.end_lr = end_lr\n        self.lr_spaces = np.linspace(start_lr, end_lr, epochs)\n        super(LinearStepScheduler, self).__init__(optimizer, last_epoch)\n\n\nclass CosStepScheduler(LRScheduler):\n    def __init__(self, optimizer, start_lr=0.01, end_lr=0.005,\n                 epochs=50, last_epoch=-1, **kwargs):\n        self.start_lr = start_lr\n        self.end_lr = end_lr\n        self.lr_spaces = self._build_lr(start_lr, end_lr, epochs)\n\n        super(CosStepScheduler, self).__init__(optimizer, last_epoch)\n\n    def _build_lr(self, start_lr, end_lr, epochs):\n        index = np.arange(epochs).astype(np.float32)\n        lr = end_lr + (start_lr - end_lr) * \\\n            (1. 
+ np.cos(index * np.pi / epochs)) * 0.5\n        return lr.astype(np.float32)\n\n\nclass WarmUPScheduler(LRScheduler):\n    def __init__(self, optimizer, warmup, normal, epochs=50, last_epoch=-1):\n        warmup = warmup.lr_spaces  # [::-1]\n        normal = normal.lr_spaces\n        self.lr_spaces = np.concatenate([warmup, normal])\n        self.start_lr = normal[0]\n\n        super(WarmUPScheduler, self).__init__(optimizer, last_epoch)\n\n\nLRs = {\n    'log': LogScheduler,\n    'step': StepScheduler,\n    'multi-step': MultiStepScheduler,\n    'linear': LinearStepScheduler,\n    'cos': CosStepScheduler}\n\n\ndef _build_lr_scheduler(optimizer, config, epochs=50, last_epoch=-1):\n    return LRs[config.TYPE](optimizer, last_epoch=last_epoch,\n                            epochs=epochs, **config.KWARGS)\n\n\ndef _build_warm_up_scheduler(optimizer, cfg, epochs=50, last_epoch=-1, modelFLAG='OCEAN'):\n    #cfg = cfg[modelFLAG]\n    warmup_epoch = cfg.TRAIN.WARMUP.EPOCH\n    sc1 = _build_lr_scheduler(optimizer, cfg.TRAIN.WARMUP,\n                              warmup_epoch, last_epoch)\n    sc2 = _build_lr_scheduler(optimizer, cfg.TRAIN.LR,\n                              epochs - warmup_epoch, last_epoch)\n    return WarmUPScheduler(optimizer, sc1, sc2, epochs, last_epoch)\n\n\ndef build_lr_scheduler(optimizer, cfg, epochs=50, last_epoch=-1, modelFLAG='OCEAN'):\n    cfg = cfg[modelFLAG]\n    if cfg.TRAIN.WARMUP.IFNOT:\n        return _build_warm_up_scheduler(optimizer, cfg, epochs, last_epoch)\n    else:\n        return _build_lr_scheduler(optimizer, cfg.TRAIN.LR, epochs, last_epoch)\n\n\n\n# ----------------------------------\n# Some functions for online\n# ----------------------------------\n\n## original utils/params.py\nclass TrackerParams:\n    \"\"\"Class for tracker parameters.\"\"\"\n    def free_memory(self):\n        for a in dir(self):\n            if not a.startswith('__') and hasattr(getattr(self, a), 'free_memory'):\n                getattr(self, 
a).free_memory()\n\n\nclass FeatureParams:\n    \"\"\"Class for feature specific parameters\"\"\"\n    def __init__(self, *args, **kwargs):\n        if len(args) > 0:\n            raise ValueError\n\n        for name, val in kwargs.items():\n            if isinstance(val, list):\n                setattr(self, name, TensorList(val))\n            else:\n                setattr(self, name, val)\n\n\ndef Choice(*args):\n    \"\"\"Can be used to sample random parameter values.\"\"\"\n    return random.choice(args)\n\n# ----------------------------------\n# From CFNet\n# ----------------------------------\n\n\ndef cxy_wh_2_rect1(pos, sz):\n    return np.array([pos[0]-sz[0]/2+1, pos[1]-sz[1]/2+1, sz[0], sz[1]])  # 1-index\n\n\ndef rect1_2_cxy_wh(rect):\n    return np.array([rect[0]+rect[2]/2-1, rect[1]+rect[3]/2-1]), np.array([rect[2], rect[3]])  # 0-index\n\n\ndef cxy_wh_2_bbox(cxy, wh):\n    return np.array([cxy[0]-wh[0]/2, cxy[1]-wh[1]/2, cxy[0]+wh[0]/2, cxy[1]+wh[1]/2])  # 0-index\n\n\ndef gaussian_shaped_labels(sigma, sz):\n    x, y = np.meshgrid(np.arange(1, sz[0]+1) - np.floor(float(sz[0]) / 2), np.arange(1, sz[1]+1) - np.floor(float(sz[1]) / 2))\n    d = x ** 2 + y ** 2\n    g = np.exp(-0.5 / (sigma ** 2) * d)\n    g = np.roll(g, int(-np.floor(float(sz[0]) / 2.) + 1), axis=0)\n    g = np.roll(g, int(-np.floor(float(sz[1]) / 2.) + 1), axis=1)\n    return g\n\n\ndef crop_chw(image, bbox, out_sz, padding=(0, 0, 0)):\n    a = (out_sz-1) / (bbox[2]-bbox[0])\n    b = (out_sz-1) / (bbox[3]-bbox[1])\n    c = -a * bbox[0]\n    d = -b * bbox[1]\n    # note: np.float was removed in NumPy 1.24; use the builtin float\n    mapping = np.array([[a, 0, c],\n                        [0, b, d]]).astype(float)\n    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz), borderMode=cv2.BORDER_CONSTANT, borderValue=padding)\n    return np.transpose(crop, (2, 0, 1))\n\n"
  },
  {
    "path": "tracker/sot/lib/utils/watch_tpe.sh",
    "content": "watch -n 1 python lib/utils/extract_tpelog.py --path logs/tpe_tune.log\n"
  },
  {
    "path": "tracker/sot/lib/version.py",
    "content": "# GENERATED VERSION FILE\n# TIME: Fri Dec 11 13:54:02 2020\n\n__version__ = '1.0.rc0'\nshort_version = '1.0.rc0'\n"
  },
  {
    "path": "utils/__init__.py",
    "content": "from collections import defaultdict, deque\nimport datetime\nimport time\nimport torch\n\nimport errno\nimport os\nimport pdb\nimport sys\n\nfrom . import visualize\nfrom . import box\nfrom . import meter\nfrom . import log\n\nimport numpy as np\nfrom torch import nn\nfrom torch.nn import functional as F\n\n\ndef to_numpy(tensor):\n    if torch.is_tensor(tensor):\n        return tensor.cpu().numpy()\n    elif type(tensor).__module__ != 'numpy':\n        raise ValueError(\"Cannot convert {} to numpy array\"\n                         .format(type(tensor)))\n    return tensor\n\ndef to_torch(ndarray):\n    if type(ndarray).__module__ == 'numpy':\n        return torch.from_numpy(ndarray)\n    elif not torch.is_tensor(ndarray):\n        raise ValueError(\"Cannot convert {} to torch tensor\"\n                         .format(type(ndarray)))\n    return ndarray\n\ndef im_to_numpy(img):\n    img = to_numpy(img)\n    img = np.transpose(img, (1, 2, 0)) # H*W*C\n    return img\n\ndef im_to_torch(img):\n    img = np.transpose(img, (2, 0, 1)) # C*H*W\n    img = to_torch(img).float()\n    return img\n"
  },
  {
    "path": "utils/box.py",
    "content": "###################################################################\n# File Name: box.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Wed Dec 23 16:27:15 2020\n###################################################################\n\nimport torch\nimport torchvision\nimport numpy as np\n\n\ndef xyxy2xywh(x):\n    # Convert bounding box format from [x1, y1, x2, y2] to [x, y, w, h]\n    y = x.clone() if x.dtype is torch.float32 else x.copy()\n    y[:, 0] = (x[:, 0] + x[:, 2]) / 2\n    y[:, 1] = (x[:, 1] + x[:, 3]) / 2\n    y[:, 2] = x[:, 2] - x[:, 0]\n    y[:, 3] = x[:, 3] - x[:, 1]\n    return y\n\n\ndef xywh2xyxy(x):\n    # Convert bounding box format from [x, y, w, h] to [x1, y1, x2, y2]\n    y = x.clone() if x.dtype is torch.float32 else x.copy()\n    y[:, 0] = (x[:, 0] - x[:, 2] / 2)\n    y[:, 1] = (x[:, 1] - x[:, 3] / 2)\n    y[:, 2] = (x[:, 0] + x[:, 2] / 2)\n    y[:, 3] = (x[:, 1] + x[:, 3] / 2)\n    return y\n\n\ndef tlwh2xyxy(x):\n    # Convert bounding box format from [x, y, w, h] to [x1, y1, x2, y2]\n    y = x.clone() if x.dtype is torch.float32 else x.copy()\n    y[:, 2] = (x[:, 0] + x[:, 2])\n    y[:, 3] = (x[:, 1] + x[:, 3])\n    return y\n\n\ndef tlwh_to_xywh(tlwh):\n    ret = np.asarray(tlwh).copy()\n    ret[:2] += ret[2:] / 2\n    return ret\n\n\ndef tlwh_to_xyah(tlwh):\n    \"\"\"Convert bounding box to format `(center x, center y, aspect ratio,\n    height)`, where the aspect ratio is `width / height`.\n    \"\"\"\n    ret = np.asarray(tlwh).copy()\n    ret[:2] += ret[2:] / 2\n    ret[2] /= (ret[3] + 1e-6)\n    return ret\n\n\ndef tlbr_to_tlwh(tlbr):\n    ret = np.asarray(tlbr).copy()\n    ret[2:] -= ret[:2]\n    return ret\n\n\ndef tlwh_to_tlbr(tlwh):\n    ret = np.asarray(tlwh).copy()\n    ret[2:] += ret[:2]\n    return ret\n\n\ndef scale_box(scale, coords):\n    c = coords.clone()\n    c[:, [0, 2]] = coords[:, [0, 2]] * scale[0]\n    c[:, [1, 3]] = coords[:, [1, 3]] * scale[1]\n    return 
c\n\n\ndef scale_box_letterbox_size(img_size, coords, img0_shape):\n    gain_w = float(img_size[0]) / img0_shape[1]  # gain  = old / new\n    gain_h = float(img_size[1]) / img0_shape[0]\n    gain = min(gain_w, gain_h)\n    pad_x = (img_size[0] - img0_shape[1] * gain) / 2  # width padding\n    pad_y = (img_size[1] - img0_shape[0] * gain) / 2  # height padding\n    coords[:, 0:4] *= gain\n    coords[:, [0, 2]] += pad_x\n    coords[:, [1, 3]] += pad_y\n    return coords\n\n\ndef scale_box_input_size(img_size, coords, img0_shape):\n    # Rescale x1, y1, x2, y2 from 416 to image size\n    gain_w = float(img_size[0]) / img0_shape[1]  # gain  = old / new\n    gain_h = float(img_size[1]) / img0_shape[0]\n    gain = min(gain_w, gain_h)\n    pad_x = (img_size[0] - img0_shape[1] * gain) / 2  # width padding\n    pad_y = (img_size[1] - img0_shape[0] * gain) / 2  # height padding\n    coords[:, [0, 2]] -= pad_x\n    coords[:, [1, 3]] -= pad_y\n    coords[:, 0:4] /= gain\n    return coords\n\n\ndef clip_boxes(boxes, im_shape):\n    \"\"\"\n    Clip boxes to image boundaries.\n    \"\"\"\n    boxes = np.asarray(boxes)\n    if boxes.shape[0] == 0:\n        return boxes\n    boxes = np.copy(boxes)\n    # x1 >= 0\n    boxes[:, 0::4] = np.maximum(np.minimum(boxes[:, 0::4], im_shape[1] - 1), 0)\n    # y1 >= 0\n    boxes[:, 1::4] = np.maximum(np.minimum(boxes[:, 1::4], im_shape[0] - 1), 0)\n    # x2 < im_shape[1]\n    boxes[:, 2::4] = np.maximum(np.minimum(boxes[:, 2::4], im_shape[1] - 1), 0)\n    # y2 < im_shape[0]\n    boxes[:, 3::4] = np.maximum(np.minimum(boxes[:, 3::4], im_shape[0] - 1), 0)\n    return boxes\n\n\ndef clip_box(bbox, im_shape):\n    h, w = im_shape[:2]\n    bbox = np.copy(bbox)\n    bbox[0] = max(min(bbox[0], w - 1), 0)\n    bbox[1] = max(min(bbox[1], h - 1), 0)\n    bbox[2] = max(min(bbox[2], w - 1), 0)\n    bbox[3] = max(min(bbox[3], h - 1), 0)\n\n    return bbox\n\n\ndef int_box(box):\n    # np.float/np.int were removed in NumPy 1.24; use the builtins instead\n    box = np.asarray(box, dtype=float)\n    box = np.round(box)\n    return np.asarray(box, dtype=int)\n\n\ndef remove_duplicated_box(boxes, iou_th=0.5):\n    if isinstance(boxes, np.ndarray):\n        boxes = torch.from_numpy(boxes)\n    jac = torchvision.ops.box_iou(boxes, boxes).float()\n    jac -= torch.eye(jac.shape[0])\n    keep = np.ones(len(boxes)) == 1\n    for i, b in enumerate(boxes):\n        if b[0] == -1 and b[1] == -1 and b[2] == 10 and b[3] == 10:\n            keep[i] = False\n    for r, row in enumerate(jac):\n        if keep[r]:\n            discard = torch.where(row > iou_th)\n            keep[discard] = False\n    return np.where(keep)[0]\n\n\ndef skltn2box(skltn):\n    dskltn = dict()\n    for s in skltn:\n        dskltn[s['id'][0]] = (int(s['x'][0]), int(s['y'][0]))\n    if len(dskltn) == 0:\n        return np.array(\n                [-1, -1, np.random.randint(1, 40), np.random.randint(1, 70)])\n\n    xmin = np.min([dskltn[k][0] for k in dskltn])\n    xmax = np.max([dskltn[k][0] for k in dskltn])\n    ymin = np.min([dskltn[k][1] for k in dskltn])\n    ymax = np.max([dskltn[k][1] for k in dskltn])\n    if xmin == xmax:\n        xmax += 10\n    if ymin == ymax:\n        ymax += 10\n    return np.array([xmin, ymin, xmax, ymax])\n"
  },
  {
    "path": "utils/io.py",
    "content": "import os\r\nimport os.path as osp\r\nfrom typing import Dict\r\nimport numpy as np\r\n\r\nfrom utils.log import logger\r\n\r\ndef mkdir_if_missing(d):\r\n    if not osp.exists(d):\r\n        os.makedirs(d)\r\n\r\ndef write_mots_results(filename, results, data_type='mot'):\r\n    if not filename:\r\n        return\r\n    path = os.path.dirname(filename)\r\n    if not os.path.exists(path):\r\n        os.makedirs(path)\r\n\r\n    if data_type in ('mot'):\r\n        save_format = '{frame} {id} {cid} {imh} {imw} {rle}\\n'\r\n    else:\r\n        raise ValueError(data_type)\r\n\r\n    with open(filename, 'w') as f:\r\n        for frame_id, tlwhs, rles, track_ids in results:\r\n            for rle, track_id in zip(rles, track_ids):\r\n                if track_id < 0:\r\n                    continue\r\n                rle_str = rle['counts']\r\n                imh, imw = rle['size']\r\n                line = save_format.format(frame=frame_id, id=track_id+2000, cid=2, imh=imh, imw=imw, rle=rle_str)\r\n                f.write(line)\r\n    logger.info('Save results to {}'.format(filename))\r\n\r\ndef write_mot_results(filename, results, data_type='mot'):\r\n    if not filename:\r\n        return\r\n    path = os.path.dirname(filename)\r\n    if not os.path.exists(path):\r\n        os.makedirs(path)\r\n\r\n    if data_type in ('mot', 'mcmot', 'lab'):\r\n        save_format = '{frame},{id},{x1},{y1},{w},{h},1,-1,-1,-1\\n'\r\n    elif data_type == 'kitti':\r\n        save_format = '{frame} {id} pedestrian -1 -1 -10 {x1} {y1} {x2} {y2} -1 -1 -1 -1000 -1000 -1000 -10 {score}\\n'\r\n    else:\r\n        raise ValueError(data_type)\r\n\r\n    with open(filename, 'w') as f:\r\n        for frame_id, tlwhs, track_ids in results:\r\n            if data_type == 'kitti':\r\n                frame_id -= 1\r\n            for tlwh, track_id in zip(tlwhs, track_ids):\r\n                if track_id < 0:\r\n                    continue\r\n                x1, y1, w, h = tlwh\r\n  
              x2, y2 = x1 + w, y1 + h\r\n                line = save_format.format(frame=frame_id, id=track_id, x1=x1, y1=y1, x2=x2, y2=y2, w=w, h=h)\r\n                f.write(line)\r\n    logger.info('Save results to {}'.format(filename))\r\n\r\n\r\ndef read_mot_results(filename, data_type='mot', is_gt=False, is_ignore=False):\r\n    if data_type in ('mot', 'lab'):\r\n        read_fun = _read_mot_results\r\n    else:\r\n        raise ValueError('Unknown data type: {}'.format(data_type))\r\n\r\n    return read_fun(filename, is_gt, is_ignore)\r\n\r\n\r\n\"\"\"\r\nlabels={'ped', ...\t\t\t% 1\r\n'person_on_vhcl', ...\t% 2\r\n'car', ...\t\t\t\t% 3\r\n'bicycle', ...\t\t\t% 4\r\n'mbike', ...\t\t\t% 5\r\n'non_mot_vhcl', ...\t\t% 6\r\n'static_person', ...\t% 7\r\n'distractor', ...\t\t% 8\r\n'occluder', ...\t\t\t% 9\r\n'occluder_on_grnd', ...\t\t%10\r\n'occluder_full', ...\t\t% 11\r\n'reflection', ...\t\t% 12\r\n'crowd' ...\t\t\t% 13\r\n};\r\n\"\"\"\r\n\r\n\r\ndef _read_mot_results(filename, is_gt, is_ignore):\r\n    valid_labels = {1}\r\n    ignore_labels = {2, 7, 8, 12}\r\n    results_dict = dict()\r\n    if os.path.isfile(filename):\r\n        with open(filename, 'r') as f:\r\n            for line in f.readlines():\r\n                linelist = line.split(',')\r\n                if len(linelist) < 7:\r\n                    continue\r\n                fid = int(linelist[0])\r\n                if fid < 1:\r\n                    continue\r\n                results_dict.setdefault(fid, list())\r\n\r\n                if is_gt:\r\n                    if 'MOT16-' in filename or 'MOT17-' in filename:\r\n                        label = int(float(linelist[7]))\r\n                        mark = int(float(linelist[6]))\r\n                        if mark == 0 or label not in valid_labels:\r\n                            continue\r\n                    score = 1\r\n                elif is_ignore:\r\n                    if 'MOT16-' in filename or 'MOT17-' in filename:\r\n               
         label = int(float(linelist[7]))\r\n                        vis_ratio = float(linelist[8])\r\n                        if label not in ignore_labels and vis_ratio >= 0:\r\n                            continue\r\n                    else:\r\n                        continue\r\n                    score = 1\r\n                else:\r\n                    score = float(linelist[6])\r\n\r\n                tlwh = tuple(map(float, linelist[2:6]))\r\n                target_id = int(linelist[1])\r\n\r\n                results_dict[fid].append((tlwh, target_id, score))\r\n\r\n    return results_dict\r\n\r\n\r\ndef unzip_objs(objs):\r\n    if len(objs) > 0:\r\n        tlwhs, ids, scores = zip(*objs)\r\n    else:\r\n        tlwhs, ids, scores = [], [], []\r\n    tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)\r\n\r\n    return tlwhs, ids, scores\r\n"
  },
  {
    "path": "utils/log.py",
    "content": "import logging\r\n\r\n\r\ndef get_logger(name='root'):\r\n    formatter = logging.Formatter(\r\n        # fmt='%(asctime)s [%(levelname)s]: %(filename)s(%(funcName)s:%(lineno)s) >> %(message)s')\r\n        fmt='%(asctime)s [%(levelname)s]: %(message)s', datefmt='%Y-%m-%d %H:%M:%S')\r\n\r\n    handler = logging.StreamHandler()\r\n    handler.setFormatter(formatter)\r\n\r\n    logger = logging.getLogger(name)\r\n    logger.setLevel(logging.DEBUG)\r\n    logger.addHandler(handler)\r\n    return logger\r\n\r\n\r\nlogger = get_logger('root')\r\n"
  },
  {
    "path": "utils/mask.py",
    "content": "###################################################################\n# File Name: mask.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Tue Feb  9 10:05:47 2021\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\n\nimport pdb\nimport cv2\nimport torch\nimport numpy as np\nfrom sklearn.metrics import jaccard_similarity_score\nimport pycocotools.mask as mask_utils\n\nimport matplotlib.pyplot as plt\n\ndef coords2bbox(coords, extend=2):\n    \"\"\"\n    INPUTS:\n     - coords: coordinates of pixels in the next frame\n    \"\"\"\n    center = torch.mean(coords, dim=0) # b * 2\n    center = center.view(1,2)\n    center_repeat = center.repeat(coords.size(0),1)\n\n    dis_x = torch.sqrt(torch.pow(coords[:,0] - center_repeat[:,0], 2))\n    dis_x = max(torch.mean(dis_x, dim=0).detach(),1)\n    dis_y = torch.sqrt(torch.pow(coords[:,1] - center_repeat[:,1], 2))\n    dis_y = max(torch.mean(dis_y, dim=0).detach(),1)\n\n    left = center[:,0] - dis_x*extend\n    right = center[:,0] + dis_x*extend\n    top = center[:,1] - dis_y*extend\n    bottom = center[:,1] + dis_y*extend\n\n    return (top.item(), left.item(), bottom.item(), right.item())\n\ndef mask2box(masks):\n    boxes = []\n    for mask in masks:\n        m = mask[0].nonzero().float()\n        if m.numel() > 0:\n            box = coords2bbox(m, extend=2)\n        else:\n            box = (-1,-1,10,10)\n        boxes.append(box)\n    return np.asarray(boxes)\n\n    \ndef temp_interp_mask(maskseq, T):\n    '''\n    maskseq: list of elements (RLE_mask, timestamp)\n    return list of RLE_mask, length of list is T\n    '''\n    size = maskseq[0][0]['size']\n    blank_mask = np.asfortranarray(np.zeros(size).astype(np.uint8))\n    blank_mask = mask_utils.encode(blank_mask)\n    blank_mask['counts'] = blank_mask['counts'].decode('ascii')\n    ret = 
[blank_mask,] * T\n    for m, t in maskseq:\n        ret[t] = m\n    return ret\n\ndef mask_seq_jac(sa, sb):\n    j = np.zeros((len(sa), len(sb)))\n    for ia, a in enumerate(sa):\n        for ib, b in enumerate(sb):\n            ious = [mask_utils.iou([at], [bt], [False,]) for (at, bt) in zip(a,b)]\n            tiou = np.mean(ious)\n            j[ia, ib] = tiou\n    return j\n        \n\ndef skltn2mask(skltn, size):\n    h, w = size\n    mask = np.zeros((h,w))\n    \n    dskltn = dict()\n    for s in skltn:\n        dskltn[s['id'][0]] = (int(s['x'][0]), int(s['y'][0]))\n    if len(dskltn)==0:\n        return mask\n    trunk_polygon = list()\n    for k in np.array([3,4,10,13,9])-1:\n        p = dskltn.get(k, None)\n        if not p is None:\n            trunk_polygon.append(p)\n    trunk_polygon = np.asarray(trunk_polygon, 'int32')\n    if len(trunk_polygon) > 2:\n        cv2.fillConvexPoly(mask, trunk_polygon, 1)\n\n    xmin = np.min([dskltn[k][0] for k in dskltn])\n    xmax = np.max([dskltn[k][0] for k in dskltn])\n    ymin = np.min([dskltn[k][1] for k in dskltn])\n    ymax = np.max([dskltn[k][1] for k in dskltn])\n    line_width = np.max([int(np.max([xmax-xmin, ymax-ymin, 0])/20),8])\n\n\n    skeleton = [[10, 11], [11, 12], [9,8], \n                [8,7], [10, 13], [9, 13], \n                [13, 15], [10,4], [4,5], \n                [5,6], [9,3], [3,2], [2,1]]\n    \n\n    for sk in skeleton:\n        st = dskltn.get(sk[0]-1, None)\n        ed = dskltn.get(sk[1]-1, None)\n        if st is None or ed is None:\n            continue\n        cv2.line(mask, st, ed, color=1, thickness=line_width)\n    \n    #dmask = cv2.resize(mask, (w//8, h//8), interpolation=cv2.INTER_NEAREST)\n    #pdb.set_trace()\n    \n    return mask\n\ndef pts2array(pts):\n    arr = np.zeros((15,3))\n    for s in pts:\n        arr[s['id'][0]][0] = int(s['x'][0])\n        arr[s['id'][0]][1] = int(s['y'][0])\n        arr[s['id'][0]][2] = s['score'][0]\n    return arr\n\n    \n"
  },
  {
    "path": "utils/meter.py",
    "content": "###################################################################\n# File Name: meter.py\n# Author: Zhongdao Wang\n# mail: wcd17@mails.tsinghua.edu.cn\n# Created Time: Wed Dec 23 16:35:34 2020\n###################################################################\n\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nimport time\n\n\nclass Timer(object):\n    \"\"\"A simple timer.\"\"\"\n    def __init__(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n\n        self.duration = 0.\n\n    def tic(self):\n        # using time.time instead of time.clock because time time.clock\n        # does not normalize for multithreading\n        self.start_time = time.time()\n\n    def toc(self, average=True):\n        self.diff = time.time() - self.start_time\n        self.total_time += self.diff\n        self.calls += 1\n        self.average_time = self.total_time / self.calls\n        if average:\n            self.duration = self.average_time\n        else:\n            self.duration = self.diff\n        return self.duration\n\n    def clear(self):\n        self.total_time = 0.\n        self.calls = 0\n        self.start_time = 0.\n        self.diff = 0.\n        self.average_time = 0.\n        self.duration = 0.\n\n"
  },
  {
    "path": "utils/palette.py",
    "content": "palette_str = '''0 0 0\n128 0 0\n0 128 0\n128 128 0\n0 0 128\n128 0 128\n0 128 128\n128 128 128\n64 0 0\n191 0 0\n64 128 0\n191 128 0\n64 0 128\n191 0 128\n64 128 128\n191 128 128\n0 64 0\n128 64 0\n0 191 0\n128 191 0\n0 64 128\n128 64 128\n22 22 22\n23 23 23\n24 24 24\n25 25 25\n26 26 26\n27 27 27\n28 28 28\n29 29 29\n30 30 30\n31 31 31\n32 32 32\n33 33 33\n34 34 34\n35 35 35\n36 36 36\n37 37 37\n38 38 38\n39 39 39\n40 40 40\n41 41 41\n42 42 42\n43 43 43\n44 44 44\n45 45 45\n46 46 46\n47 47 47\n48 48 48\n49 49 49\n50 50 50\n51 51 51\n52 52 52\n53 53 53\n54 54 54\n55 55 55\n56 56 56\n57 57 57\n58 58 58\n59 59 59\n60 60 60\n61 61 61\n62 62 62\n63 63 63\n64 64 64\n65 65 65\n66 66 66\n67 67 67\n68 68 68\n69 69 69\n70 70 70\n71 71 71\n72 72 72\n73 73 73\n74 74 74\n75 75 75\n76 76 76\n77 77 77\n78 78 78\n79 79 79\n80 80 80\n81 81 81\n82 82 82\n83 83 83\n84 84 84\n85 85 85\n86 86 86\n87 87 87\n88 88 88\n89 89 89\n90 90 90\n91 91 91\n92 92 92\n93 93 93\n94 94 94\n95 95 95\n96 96 96\n97 97 97\n98 98 98\n99 99 99\n100 100 100\n101 101 101\n102 102 102\n103 103 103\n104 104 104\n105 105 105\n106 106 106\n107 107 107\n108 108 108\n109 109 109\n110 110 110\n111 111 111\n112 112 112\n113 113 113\n114 114 114\n115 115 115\n116 116 116\n117 117 117\n118 118 118\n119 119 119\n120 120 120\n121 121 121\n122 122 122\n123 123 123\n124 124 124\n125 125 125\n126 126 126\n127 127 127\n128 128 128\n129 129 129\n130 130 130\n131 131 131\n132 132 132\n133 133 133\n134 134 134\n135 135 135\n136 136 136\n137 137 137\n138 138 138\n139 139 139\n140 140 140\n141 141 141\n142 142 142\n143 143 143\n144 144 144\n145 145 145\n146 146 146\n147 147 147\n148 148 148\n149 149 149\n150 150 150\n151 151 151\n152 152 152\n153 153 153\n154 154 154\n155 155 155\n156 156 156\n157 157 157\n158 158 158\n159 159 159\n160 160 160\n161 161 161\n162 162 162\n163 163 163\n164 164 164\n165 165 165\n166 166 166\n167 167 167\n168 168 168\n169 169 169\n170 170 170\n171 171 171\n172 172 172\n173 173 
173\n174 174 174\n175 175 175\n176 176 176\n177 177 177\n178 178 178\n179 179 179\n180 180 180\n181 181 181\n182 182 182\n183 183 183\n184 184 184\n185 185 185\n186 186 186\n187 187 187\n188 188 188\n189 189 189\n190 190 190\n191 191 191\n192 192 192\n193 193 193\n194 194 194\n195 195 195\n196 196 196\n197 197 197\n198 198 198\n199 199 199\n200 200 200\n201 201 201\n202 202 202\n203 203 203\n204 204 204\n205 205 205\n206 206 206\n207 207 207\n208 208 208\n209 209 209\n210 210 210\n211 211 211\n212 212 212\n213 213 213\n214 214 214\n215 215 215\n216 216 216\n217 217 217\n218 218 218\n219 219 219\n220 220 220\n221 221 221\n222 222 222\n223 223 223\n224 224 224\n225 225 225\n226 226 226\n227 227 227\n228 228 228\n229 229 229\n230 230 230\n231 231 231\n232 232 232\n233 233 233\n234 234 234\n235 235 235\n236 236 236\n237 237 237\n238 238 238\n239 239 239\n240 240 240\n241 241 241\n242 242 242\n243 243 243\n244 244 244\n245 245 245\n246 246 246\n247 247 247\n248 248 248\n249 249 249\n250 250 250\n251 251 251\n252 252 252\n253 253 253\n254 254 254\n255 255 255'''\nimport numpy as np\ntensor = np.array([[int(x) for x in line.split()] for line in palette_str.split('\\n')])\n"
  },
  {
    "path": "utils/visualize.py",
    "content": "\nimport cv2\nimport numpy as np\nimport imageio as io\nfrom matplotlib import cm\n\nimport time\nimport PIL\n\nimport pycocotools.mask as mask_utils\nfrom . import palette\n\n\ndef dump_predictions(pred, lbl_set, img, prefix):\n    '''\n    Save:\n        1. Predicted labels for evaluation\n        2. Label heatmaps for visualization\n    '''\n    lbl_set = palette.tensor.astype(np.uint8)\n    sz = img.shape[:-1]\n\n    # Upsample predicted soft label maps\n    # pred_dist = pred.copy()\n    pred_dist = cv2.resize(pred, sz[::-1])[:]\n    \n    # Argmax to get the hard label for index\n    pred_lbl = np.argmax(pred_dist, axis=-1)\n    pred_lbl = np.array(lbl_set, dtype=np.int32)[pred_lbl]      \n    mask = np.float32(pred_lbl.sum(2) > 0)[:,:,None]\n    alpha = 0.5\n    img_with_label = mask * (np.float32(img) * alpha + \\\n            np.float32(pred_lbl) * (1-alpha)) + (1-mask) * np.float32(img)\n\n    # Visualize label distribution for object 1 (debugging/analysis)\n    pred_soft = pred_dist[..., 1]\n    pred_soft = cv2.resize(pred_soft, (img.shape[1], img.shape[0]), \n            interpolation=cv2.INTER_NEAREST)\n    pred_soft = cm.jet(pred_soft)[..., :3] * 255.0\n    img_with_heatmap1 =  np.float32(img) * 0.5 + np.float32(pred_soft) * 0.5\n\n    # Save blend image for visualization\n    io.imwrite('%s_blend.jpg' % prefix, np.uint8(img_with_label))\n\n    if prefix[-4] != '.':  # Super HACK-y\n        imname2 = prefix + '_mask.png'\n    else:\n        imname2 = prefix.replace('jpg','png')\n\n    # Save predicted labels for evaluation\n    io.imwrite(imname2, np.uint8(pred_lbl))\n\n    return img_with_label, pred_lbl, img_with_heatmap1\n\n\n\ndef make_gif(video, outname='/tmp/test.gif', sz=256):\n    if hasattr(video, 'shape'):\n        video = video.cpu()\n        if video.shape[0] == 3:\n            video = video.transpose(0, 1)\n\n        video = video.numpy().transpose(0, 2, 3, 1)\n        video = (video*255).astype(np.uint8)\n        \n    
video = [cv2.resize(vv, (sz, sz)) for vv in video]\n\n    if outname is None:\n        return np.stack(video)\n\n    io.mimsave(outname, video, duration = 0.2)\n\ndef get_color(idx):\n    idx = idx * 17\n    color = ((37 * idx) % 255, (17 * idx) % 255, (29 * idx) % 255)\n    return color\n\ndef plot_tracking(image, obs, obj_ids, scores=None, frame_id=0, fps=0.):\n    im = np.ascontiguousarray(np.copy(image))\n    im_h, im_w = im.shape[:2]\n\n    text_scale = max(1, image.shape[1] / 1600.)\n    text_thickness = 2 if text_scale > 1.1 else 1\n    line_thickness = max(1, int(image.shape[1] / 150.))\n    alpha = 0.4\n\n    for i, ob in enumerate(obs): \n        obj_id = int(obj_ids[i])\n        id_text = '{}'.format(int(obj_id))\n        _line_thickness = 1 if obj_id <= 0 else line_thickness\n        color = get_color(obj_id)\n        if isinstance(ob, dict):\n            mask = mask_utils.decode(ob)\n            mask = cv2.resize(mask, (im_w, im_h), interpolation=cv2.INTER_LINEAR)\n            mask = (mask > 0.5).astype(np.uint8)[:,:,None]\n            mask_color = mask * color\n            im = (1 - mask) * im + mask * (alpha*im + (1-alpha)*mask_color)\n        elif len(ob) == 4:\n            x1, y1, w, h = ob\n            intbox = tuple(map(int, (x1, y1, x1 + w, y1 + h)))\n            cv2.rectangle(im, intbox[0:2], intbox[2:4], color=color, thickness=line_thickness)\n            cv2.putText(im, id_text, (intbox[0], intbox[1] + 30), cv2.FONT_HERSHEY_PLAIN, text_scale, (0, 0, 255),\n                        thickness=text_thickness)\n        else:\n            raise ValueError('Observation format not supported.')\n    return im\n\n\ndef vis_pose(oriImg, points):\n\n    pa = np.zeros(15)\n    pa[2] = 0\n    pa[12] = 8\n    pa[8] = 4\n    pa[4] = 0\n    pa[11] = 7\n    pa[7] = 3\n    pa[3] = 0\n    pa[0] = 1\n    pa[14] = 10\n    pa[10] = 6\n    pa[6] = 1\n    pa[13] = 9\n    pa[9] = 5\n    pa[5] = 1\n\n    colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 
0], [170, 255, 0], [85, 255, 0], [0, 255, 0],\n              [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255],\n              [170,0,255],[255,0,255]]\n    canvas = oriImg\n    stickwidth = 4\n    x = points[0, :]\n    y = points[1, :]\n\n    for n in range(len(x)):\n        pair_id = int(pa[n])\n\n        x1 = int(x[pair_id])\n        y1 = int(y[pair_id])\n        x2 = int(x[n])\n        y2 = int(y[n])\n\n        if x1 >= 0 and y1 >= 0 and x2 >= 0 and y2 >= 0:\n            cv2.line(canvas, (x1, y1), (x2, y2), colors[n], 8)\n\n    return canvas\n\n\ndef draw_skeleton(aa, kp, color, show_skeleton_labels=False, dataset= \"PoseTrack\"):\n    if dataset == \"COCO\":\n        skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], \n                [6, 12], [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], \n                [9, 11], [2, 3], [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]\n        kp_names = ['nose', 'l_eye', 'r_eye', 'l_ear', 'r_ear', 'l_shoulder',\n                    'r_shoulder', 'l_elbow', 'r_elbow', 'l_wrist', 'r_wrist',\n                    'l_hip', 'r_hip', 'l_knee', 'r_knee', 'l_ankle', 'r_ankle']\n    elif dataset == \"PoseTrack\":\n        skeleton = [[10, 11], [11, 12], [9,8], [8,7],\n                    [10, 13], [9, 13], [13, 15], [10,4],\n                    [4,5], [5,6], [9,3], [3,2], [2,1]]\n        kp_names = ['right_ankle', 'right_knee', 'right_pelvis',\n                    'left_pelvis', 'left_knee', 'left_ankle',\n                    'right_wrist', 'right_elbow', 'right_shoulder',\n                    'left_shoulder', 'left_elbow', 'left_wrist',\n                    'upper_neck', 'nose', 'head']\n    for i, j in skeleton:\n        if kp[i-1][0] >= 0 and kp[i-1][1] >= 0 and kp[j-1][0] >= 0 and kp[j-1][1] >= 0 and \\\n            (len(kp[i-1]) <= 2 or (len(kp[i-1]) > 2 and  kp[i-1][2] > 0.1 and kp[j-1][2] > 0.1)):\n            st = (int(kp[i-1][0]), int(kp[i-1][1]))\n            ed = 
(int(kp[j-1][0]), int(kp[j-1][1]))\n            cv2.line(aa, st, ed, color, max(1, int(aa.shape[1]/150.)))\n    for j in range(len(kp)):\n        if kp[j][0] >= 0 and kp[j][1] >= 0:\n            pt = (int(kp[j][0]), int(kp[j][1]))\n            if len(kp[j]) <= 2 or (len(kp[j]) > 2 and kp[j][2] > 1.1):\n                cv2.circle(aa, pt, 2, (0, 0, 255), 2)\n            elif len(kp[j]) <= 2 or (len(kp[j]) > 2 and kp[j][2] > 0.1):\n                cv2.circle(aa, pt, 2, (255, 0, 0), 2)\n\n            if show_skeleton_labels and (len(kp[j]) <= 2 or (len(kp[j]) > 2 and kp[j][2] > 0.1)):\n                cv2.putText(aa, kp_names[j], pt, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 0))\n"
  }
]