[
  {
    "path": ".flake8",
    "content": "# This is an example .flake8 config, used when developing *Black* itself.\n# Keep in sync with setup.cfg which is used for source packages.\n\n[flake8]\nignore = E203, E266, E501, W503\nmax-line-length = 80\nmax-complexity = 18\nselect = B,C,E,F,W,T4,B9\n"
  },
  {
    "path": ".gitignore",
    "content": "# compilation and distribution\n__pycache__\n_ext\n*.pyc\n*.so\nmaskrcnn_benchmark.egg-info/\nbuild/\ndist/\n\n# pytorch/python/numpy formats\n*.pth\n*.pkl\n*.npy\n\n# ipython/jupyter notebooks\n*.ipynb\n**/.ipynb_checkpoints/\n\n# Editor temporaries\n*.swn\n*.swo\n*.swp\n*~\n\n# Pycharm editor settings\n.idea\n\n# project dirs\n/datasets\n/models\n"
  },
  {
    "path": "ABSTRACTIONS.md",
    "content": "## Abstractions\nThe main abstractions introduced by `maskrcnn_benchmark` that are useful to\nhave in mind are the following:\n\n### ImageList\nIn PyTorch, the first dimension of the input to the network generally represents\nthe batch dimension, and thus all elements of the same batch have the same\nheight / width.\nIn order to support images with different sizes and aspect ratios in the same\nbatch, we created the `ImageList` class, which holds internally a batch of\nimages (os possibly different sizes). The images are padded with zeros such that\nthey have the same final size and batched over the first dimension. The original\nsizes of the images before padding are stored in the `image_sizes` attribute,\nand the batched tensor in `tensors`.\nWe provide a convenience function `to_image_list` that accepts a few different\ninput types, including a list of tensors, and returns an `ImageList` object.\n\n```python\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\n\nimages = [torch.rand(3, 100, 200), torch.rand(3, 150, 170)]\nbatched_images = to_image_list(images)\n\n# it is also possible to make the final batched image be a multiple of a number\nbatched_images_32 = to_image_list(images, size_divisible=32)\n```\n\n### BoxList\nThe `BoxList` class holds a set of bounding boxes (represented as a `Nx4` tensor) for\na specific image, as well as the size of the image as a `(width, height)` tuple.\nIt also contains a set of methods that allow to perform geometric\ntransformations to the bounding boxes (such as cropping, scaling and flipping).\nThe class accepts bounding boxes from two different input formats:\n- `xyxy`, where each box is encoded as a `x1`, `y1`, `x2` and `y2` coordinates, and\n- `xywh`, where each box is encoded as `x1`, `y1`, `w` and `h`.\n\nAdditionally, each `BoxList` instance can also hold arbitrary additional information\nfor each bounding box, such as labels, visibility, probability scores etc.\n\nHere is an example 
on how to create a `BoxList` from a list of coordinates:\n```python\nimport torch\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList, FLIP_LEFT_RIGHT\n\nwidth = 100\nheight = 200\nboxes = [\n  [0, 10, 50, 50],\n  [50, 20, 90, 60],\n  [10, 10, 50, 50]\n]\n# create a BoxList with 3 boxes\nbbox = BoxList(boxes, image_size=(width, height), mode='xyxy')\n\n# perform some box transformations; the API is similar to PIL.Image\nbbox_scaled = bbox.resize((width * 2, height * 3))\nbbox_flipped = bbox.transpose(FLIP_LEFT_RIGHT)\n\n# add labels for each bbox\nlabels = torch.tensor([0, 10, 1])\nbbox.add_field('labels', labels)\n\n# bbox also supports a few operations, like indexing\n# here, we select boxes 0 and 2\nbbox_subset = bbox[[0, 2]]\n```\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Code of Conduct\n\nFacebook has adopted a Code of Conduct that we expect project participants to adhere to.\nPlease read the [full text](https://code.fb.com/codeofconduct/)\nso that you can understand what actions will and will not be tolerated.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing to Mask-RCNN Benchmark\nWe want to make contributing to this project as easy and transparent as\npossible.\n\n## Our Development Process\nMinor changes and improvements will be released on an ongoing basis. Larger changes (e.g., changesets implementing a new paper) will be released on a more periodic basis.\n\n## Pull Requests\nWe actively welcome your pull requests.\n\n1. Fork the repo and create your branch from `master`.\n2. If you've added code that should be tested, add tests.\n3. If you've changed APIs, update the documentation.\n4. Ensure the test suite passes.\n5. Make sure your code lints.\n6. If you haven't already, complete the Contributor License Agreement (\"CLA\").\n\n## Contributor License Agreement (\"CLA\")\nIn order to accept your pull request, we need you to submit a CLA. You only need\nto do this once to work on any of Facebook's open source projects.\n\nComplete your CLA here: <https://code.facebook.com/cla>\n\n## Issues\nWe use GitHub issues to track public bugs. Please ensure your description is\nclear and has sufficient instructions to be able to reproduce the issue.\n\nFacebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe\ndisclosure of security bugs. In those cases, please go through the process\noutlined on that page and do not file a public issue.\n\n## Coding Style  \n* 4 spaces for indentation rather than tabs\n* 80 character line length\n* PEP8 formatting following [Black](https://black.readthedocs.io/en/stable/)\n\n## License\nBy contributing to Mask-RCNN Benchmark, you agree that your contributions will be licensed\nunder the LICENSE file in the root directory of this source tree.\n"
  },
  {
    "path": "INSTALL.md",
    "content": "## Installation\n\n### Requirements:\n- PyTorch >= 1.0. Installation instructions can be found in https://pytorch.org/get-started/locally/.\n- torchvision==0.2.1\n- cocoapi\n- yacs\n- matplotlib\n- GCC >= 4.9\n- (optional) OpenCV for the webcam demo\n\n### Option 1: Step-by-step installation\n\n```bash\n# first, make sure that your conda is setup properly with the right environment\n# for that, check that `which conda`, `which pip` and `which python` points to the\n# right path. From a clean conda env, this is what you need to do\n\nconda create --name FCOS\nconda activate FCOS\n\n# this installs the right pip and dependencies for the fresh python\nconda install ipython\n\n# FCOS and coco api dependencies\npip install ninja yacs cython matplotlib tqdm\n\n# follow PyTorch installation in https://pytorch.org/get-started/locally/\n# we give the instructions for CUDA 9.0\nconda install -c pytorch torchvision=0.2.1 cudatoolkit=9.0\n\nexport INSTALL_DIR=$PWD\n\n# install pycocotools. Please make sure you have installed cython.\ncd $INSTALL_DIR\ngit clone https://github.com/cocodataset/cocoapi.git\ncd cocoapi/PythonAPI\npython setup.py build_ext install\n\n# install PyTorch Detection\ncd $INSTALL_DIR\ngit clone https://github.com/yqyao/FCOS_PLUS.git\ncd FCOS_PLUS\n\n# the following will install the lib with\n# symbolic links, so that you can modify\n# the files if you want and won't need to\n# re-build it\npython setup.py build develop\n\n\nunset INSTALL_DIR\n\n# or if you are on macOS\n# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build develop\n```\n\n### Option 2: Docker Image (Requires CUDA, Linux only)\n*The following steps are for original maskrcnn-benchmark. 
Please change the repository name if needed.* \n\nBuild image with defaults (`CUDA=9.0`, `CUDNN=7`, `FORCE_CUDA=1`):\n\n    nvidia-docker build -t maskrcnn-benchmark docker/\n    \nBuild image with other CUDA and CUDNN versions:\n\n    nvidia-docker build -t maskrcnn-benchmark --build-arg CUDA=9.2 --build-arg CUDNN=7 docker/\n    \nBuild image with FORCE_CUDA disabled:\n\n    nvidia-docker build -t maskrcnn-benchmark --build-arg FORCE_CUDA=0 docker/\n    \nBuild and run the image with a built-in Jupyter notebook (note that the password is used to log in to the Jupyter notebook):\n\n    nvidia-docker build -t maskrcnn-benchmark-jupyter docker/docker-jupyter/\n    nvidia-docker run -td -p 8888:8888 -e PASSWORD=<password> -v <host-dir>:<container-dir> maskrcnn-benchmark-jupyter\n"
  },
  {
    "path": "LICENSE",
    "content": "FCOS for non-commercial purposes\n\nCopyright (c) 2019 the authors\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "MASKRCNN_README.md",
    "content": "# Faster R-CNN and Mask R-CNN in PyTorch 1.0\n\nThis project aims at providing the necessary building blocks for easily\ncreating detection and segmentation models using PyTorch 1.0.\n\n![alt text](demo/demo_e2e_mask_rcnn_X_101_32x8d_FPN_1x.png \"from http://cocodataset.org/#explore?id=345434\")\n\n## Highlights\n- **PyTorch 1.0:** RPN, Faster R-CNN and Mask R-CNN implementations that matches or exceeds Detectron accuracies\n- **Very fast**: up to **2x** faster than [Detectron](https://github.com/facebookresearch/Detectron) and **30%** faster than [mmdetection](https://github.com/open-mmlab/mmdetection) during training. See [MODEL_ZOO.md](MODEL_ZOO.md) for more details.\n- **Memory efficient:** uses roughly 500MB less GPU memory than mmdetection during training\n- **Multi-GPU training and inference**\n- **Batched inference:** can perform inference using multiple images per batch per GPU\n- **CPU support for inference:** runs on CPU in inference time. See our [webcam demo](demo) for an example\n- Provides pre-trained models for almost all reference Mask R-CNN and Faster R-CNN configurations with 1x schedule.\n\n## Webcam and Jupyter notebook demo\n\nWe provide a simple webcam demo that illustrates how you can use `maskrcnn_benchmark` for inference:\n```bash\ncd demo\n# by default, it runs on the GPU\n# for best results, use min-image-size 800\npython webcam.py --min-image-size 800\n# can also run it on the CPU\npython webcam.py --min-image-size 300 MODEL.DEVICE cpu\n# or change the model that you want to use\npython webcam.py --config-file ../configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x_caffe2.yaml --min-image-size 300 MODEL.DEVICE cpu\n# in order to see the probability heatmaps, pass --show-mask-heatmaps\npython webcam.py --min-image-size 300 --show-mask-heatmaps MODEL.DEVICE cpu\n# for the keypoint demo\npython webcam.py --config-file ../configs/caffe2/e2e_keypoint_rcnn_R_50_FPN_1x_caffe2.yaml --min-image-size 300 MODEL.DEVICE cpu\n```\n\nA notebook 
with the demo can be found in [demo/Mask_R-CNN_demo.ipynb](demo/Mask_R-CNN_demo.ipynb).\n\n## Installation\n\nCheck [INSTALL.md](INSTALL.md) for installation instructions.\n\n\n## Model Zoo and Baselines\n\nPre-trained models, baselines and comparison with Detectron and mmdetection\ncan be found in [MODEL_ZOO.md](MODEL_ZOO.md)\n\n## Inference in a few lines\nWe provide a helper class to simplify writing inference pipelines using pre-trained models.\nHere is how we would do it. Run this from the `demo` folder:\n```python\nfrom maskrcnn_benchmark.config import cfg\nfrom predictor import COCODemo\n\nconfig_file = \"../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml\"\n\n# update the config options with the config file\ncfg.merge_from_file(config_file)\n# manually override some options\ncfg.merge_from_list([\"MODEL.DEVICE\", \"cpu\"])\n\ncoco_demo = COCODemo(\n    cfg,\n    min_image_size=800,\n    confidence_threshold=0.7,\n)\n# load image and then run prediction\nimage = ...\npredictions = coco_demo.run_on_opencv_image(image)\n```\n\n## Perform training on COCO dataset\n\nFor the following examples to work, you need to first install `maskrcnn_benchmark`.\n\nYou will also need to download the COCO dataset.\nWe recommend symlinking the path to the COCO dataset to `datasets/` as follows.\n\nWe use `minival` and `valminusminival` sets from [Detectron](https://github.com/facebookresearch/Detectron/blob/master/detectron/datasets/data/README.md#coco-minival-annotations)\n\n```bash\n# symlink the coco dataset\ncd ~/github/maskrcnn-benchmark\nmkdir -p datasets/coco\nln -s /path_to_coco_dataset/annotations datasets/coco/annotations\nln -s /path_to_coco_dataset/train2014 datasets/coco/train2014\nln -s /path_to_coco_dataset/test2014 datasets/coco/test2014\nln -s /path_to_coco_dataset/val2014 datasets/coco/val2014\n# or use COCO 2017 version\nln -s /path_to_coco_dataset/annotations datasets/coco/annotations\nln -s /path_to_coco_dataset/train2017 datasets/coco/train2017\nln -s 
/path_to_coco_dataset/test2017 datasets/coco/test2017\nln -s /path_to_coco_dataset/val2017 datasets/coco/val2017\n\n# for pascal voc dataset:\nln -s /path_to_VOCdevkit_dir datasets/voc\n```\n\nP.S. `COCO_2017_train` = `COCO_2014_train` + `valminusminival`, `COCO_2017_val` = `minival`\n\nYou can also configure your own paths to the datasets.\nFor that, all you need to do is modify `maskrcnn_benchmark/config/paths_catalog.py` to\npoint to the location where your dataset is stored.\nYou can also create a new `paths_catalog.py` file which implements the same two classes,\nand pass it as a config argument `PATHS_CATALOG` during training.\n\n### Single GPU training\n\nMost of the configuration files that we provide assume that we are running on 8 GPUs.\nIn order to run on fewer GPUs, there are a few possibilities:\n\n**1. Run the following without modifications**\n\n```bash\npython /path_to_maskrcnn_benchmark/tools/train_net.py --config-file \"/path/to/config/file.yaml\"\n```\nThis should work out of the box and is very similar to what we do for multi-GPU training.\nBut the drawback is that it will use much more GPU memory. The reason is that we set in the\nconfiguration files a global batch size that is divided over the number of GPUs. So if we only\nhave a single GPU, this means that the batch size for that GPU will be 8x larger, which might lead\nto out-of-memory errors.\n\nIf you have a lot of memory available, this is the easiest solution.\n\n**2. Modify the cfg parameters**\n\nIf you experience out-of-memory errors, you can reduce the global batch size. 
But this means that\nyou'll also need to change the learning rate, the number of iterations and the learning rate schedule.\n\nHere is an example for Mask R-CNN R-50 FPN with the 1x schedule:\n```bash\npython tools/train_net.py --config-file \"configs/e2e_mask_rcnn_R_50_FPN_1x.yaml\" SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 SOLVER.MAX_ITER 720000 SOLVER.STEPS \"(480000, 640000)\" TEST.IMS_PER_BATCH 1\n```\nThis follows the [scheduling rules from Detectron.](https://github.com/facebookresearch/Detectron/blob/master/configs/getting_started/tutorial_1gpu_e2e_faster_rcnn_R-50-FPN.yaml#L14-L30)\nNote that we have multiplied the number of iterations by 8x (as well as the learning rate schedules),\nand we have divided the learning rate by 8x.\n\nWe also changed the batch size during testing, but that is generally not necessary because testing\nrequires much less memory than training.\n\n\n### Multi-GPU training\nWe use internally `torch.distributed.launch` in order to launch\nmulti-gpu training. 
This utility function from PyTorch spawns as many\nPython processes as the number of GPUs we want to use, and each Python\nprocess will only use a single GPU.\n\n```bash\nexport NGPUS=8\npython -m torch.distributed.launch --nproc_per_node=$NGPUS /path_to_maskrcnn_benchmark/tools/train_net.py --config-file \"path/to/config/file.yaml\"\n```\n\n## Abstractions\nFor more information on some of the main abstractions in our implementation, see [ABSTRACTIONS.md](ABSTRACTIONS.md).\n\n## Adding your own dataset\n\nThis implementation adds support for COCO-style datasets.\nBut adding support for training on a new dataset can be done as follows:\n```python\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\nclass MyDataset(object):\n    def __init__(self, ...):\n        # as you would do normally\n\n    def __getitem__(self, idx):\n        # load the image as a PIL Image\n        image = ...\n\n        # load the bounding boxes as a list of lists of boxes\n        # in this case, for illustrative purposes, we use\n        # x1, y1, x2, y2 order.\n        boxes = [[0, 0, 10, 10], [10, 20, 50, 50]]\n        # and labels\n        labels = torch.tensor([10, 20])\n\n        # create a BoxList from the boxes\n        boxlist = BoxList(boxes, image.size, mode=\"xyxy\")\n        # add the labels to the boxlist\n        boxlist.add_field(\"labels\", labels)\n\n        if self.transforms:\n            image, boxlist = self.transforms(image, boxlist)\n\n        # return the image, the boxlist and the idx in your dataset\n        return image, boxlist, idx\n\n    def get_img_info(self, idx):\n        # get img_height and img_width. This is used if\n        # we want to split the batches according to the aspect ratio\n        # of the image, as it can be more efficient than loading the\n        # image from disk\n        return {\"height\": img_height, \"width\": img_width}\n```\nThat's it. 
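The `get_img_info` hook exists so the batch sampler can group images of similar aspect ratio together without decoding them from disk. As a rough illustration of that grouping idea, here is a minimal, self-contained sketch (the function name `group_by_aspect_ratio` is illustrative, not part of the library; the actual grouped batch sampler lives in the library's data-loading code):

```python
def group_by_aspect_ratio(img_infos):
    # img_infos: list of {'height': h, 'width': w} dicts, one per dataset
    # index -- i.e. what get_img_info(idx) returns for each idx.
    # Batching only within a group keeps zero-padding (and memory) low.
    landscape, portrait = [], []
    for idx, info in enumerate(img_infos):
        if info['width'] >= info['height']:
            landscape.append(idx)
        else:
            portrait.append(idx)
    return landscape, portrait


infos = [{'height': 480, 'width': 640},
         {'height': 640, 'width': 480},
         {'height': 500, 'width': 500}]
print(group_by_aspect_ratio(infos))  # ([0, 2], [1])
```

Indices from each group can then be chunked into batches so that every batch contains only similarly-shaped images.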
You can also add extra fields to the boxlist, such as segmentation masks\n(using `structures.segmentation_mask.SegmentationMask`), or even your own instance type.\n\nFor a full example of how the `COCODataset` is implemented, check [`maskrcnn_benchmark/data/datasets/coco.py`](maskrcnn_benchmark/data/datasets/coco.py).\n\nOnce you have created your dataset, it needs to be added in a couple of places:\n- [`maskrcnn_benchmark/data/datasets/__init__.py`](maskrcnn_benchmark/data/datasets/__init__.py): add it to `__all__`\n- [`maskrcnn_benchmark/config/paths_catalog.py`](maskrcnn_benchmark/config/paths_catalog.py): `DatasetCatalog.DATASETS` and corresponding `if` clause in `DatasetCatalog.get()`\n\n### Testing\nWhile the aforementioned example should work for training, we leverage the\nCOCO API for computing the accuracies during testing. Thus, test datasets\nshould currently follow the COCO API format.\n\nTo enable your dataset for testing, add a corresponding if statement in [`maskrcnn_benchmark/data/datasets/evaluation/__init__.py`](maskrcnn_benchmark/data/datasets/evaluation/__init__.py):\n```python\nif isinstance(dataset, datasets.MyDataset):\n    return coco_evaluation(**args)\n```\n\n## Finetuning from Detectron weights on custom datasets\nCreate a script `tools/trim_detectron_model.py` like the one [here](https://gist.github.com/wangg12/aea194aa6ab6a4de088f14ee193fd968).\nYou can decide which keys to remove and which to keep by modifying the script.\n\nThen you can simply point to the converted model in the config file by changing `MODEL.WEIGHT`.\n\nFor further information, please refer to [#15](https://github.com/facebookresearch/maskrcnn-benchmark/issues/15).\n\n## Troubleshooting\nIf you have issues running or compiling this code, we have compiled a list of common issues in\n[TROUBLESHOOTING.md](TROUBLESHOOTING.md). 
If your issue is not present there, please feel\nfree to open a new issue.\n\n## Citations\nPlease consider citing this project in your publications if it helps your research. The following is a BibTeX reference. The BibTeX entry requires the `url` LaTeX package.\n```\n@misc{massa2018mrcnn,\nauthor = {Massa, Francisco and Girshick, Ross},\ntitle = {{maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch}},\nyear = {2018},\nhowpublished = {\\url{https://github.com/facebookresearch/maskrcnn-benchmark}},\nnote = {Accessed: [Insert date here]}\n}\n```\n\n## Projects using maskrcnn-benchmark\n\n- [RetinaMask: Learning to predict masks improves state-of-the-art single-shot detection for free](https://arxiv.org/abs/1901.03353). \n  Cheng-Yang Fu, Mykhailo Shvets, and Alexander C. Berg.\n  Tech report, arXiv,1901.03353.\n\n\n\n## License\n\nmaskrcnn-benchmark is released under the MIT license. See [LICENSE](LICENSE) for additional details.\n"
  },
  {
    "path": "MODEL_ZOO.md",
    "content": "## Model Zoo and Baselines\n\n### Hardware\n- 8 NVIDIA V100 GPUs\n\n### Software\n- PyTorch version: 1.0.0a0+dd2c487\n- CUDA 9.2\n- CUDNN 7.1\n- NCCL 2.2.13-1\n\n### End-to-end Faster and Mask R-CNN baselines\n\nAll the baselines were trained using the exact same experimental setup as in Detectron.\nWe initialize the detection models with ImageNet weights from Caffe2, the same as used by Detectron.\n\nThe pre-trained models are available in the link in the model id.\n\nbackbone | type | lr sched | im / gpu | train mem(GB) | train time (s/iter) | total train time(hr) | inference time(s/im) | box AP | mask AP | model id\n-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --\nR-50-C4 | Fast | 1x | 1 | 5.8 | 0.4036 | 20.2 | 0.17130 | 34.8 | - | [6358800](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_R_50_C4_1x.pth)\nR-50-FPN | Fast | 1x | 2 | 4.4 | 0.3530 | 8.8 | 0.12580 | 36.8 | - | [6358793](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_R_50_FPN_1x.pth)\nR-101-FPN | Fast | 1x | 2 | 7.1 | 0.4591 | 11.5 | 0.143149 | 39.1 | - | [6358804](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_R_101_FPN_1x.pth)\nX-101-32x8d-FPN | Fast | 1x | 1 | 7.6 | 0.7007 | 35.0 | 0.209965 | 41.2 | - | [6358717](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_X_101_32x8d_FPN_1x.pth)\nR-50-C4 | Mask | 1x | 1 | 5.8 | 0.4520 | 22.6 | 0.17796 + 0.028 | 35.6 | 31.5 | [6358801](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_R_50_C4_1x.pth)\nR-50-FPN | Mask | 1x | 2 | 5.2 | 0.4536 | 11.3 | 0.12966 + 0.034 | 37.8 | 34.2 | [6358792](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_R_50_FPN_1x.pth)\nR-101-FPN | Mask | 1x | 2 | 7.9 | 0.5665 | 14.2 | 0.15384 + 0.034 | 40.1 | 36.1 | [6358805](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_R_101_FPN_1x.pth)\nX-101-32x8d-FPN | Mask | 1x | 1 | 7.8 | 0.7562 | 37.8 | 0.21739 + 0.034 | 42.2 | 37.8 | 
[6358718](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_X_101_32x8d_FPN_1x.pth)\n\nFor person keypoint detection:\n\nbackbone | type | lr sched | im / gpu | train mem(GB) | train time (s/iter) | total train time(hr) | inference time(s/im) | box AP | keypoint AP | model id\n-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --\nR-50-FPN | Keypoint | 1x | 2 | 5.7 | 0.3771 | 9.4 | 0.10941 | 53.7 | 64.3 | 9981060\n\n### Light-weight Model baselines\n\nWe provide pre-trained models for selected FBNet models. \n* All the models are trained from scratch with BN using the training schedule specified below. \n* Evaluation is performed on a single NVIDIA V100 GPU with `MODEL.RPN.POST_NMS_TOP_N_TEST` set to `200`. \n\nThe following inference times are reported:\n  * inference total batch=8: Total inference time including data loading, model inference and pre/post-processing using 8 images per batch.\n  * inference model batch=8: Model inference time only and using 8 images per batch.\n  * inference model batch=1: Model inference time only and using 1 image per batch.\n  * inference caffe2 batch=1: Model inference time for the model in Caffe2 format using 1 image per batch. 
The Caffe2 models fuse BN into Conv and run purely in C++/CUDA, using Caffe2 ops for RPN/detection post-processing.\n\nThe pre-trained models are available through the links in the model id column.\n\nbackbone | type | resolution | lr sched | im / gpu | train mem(GB) | train time (s/iter) | total train time (hr) | inference total batch=8 (s/im) | inference model batch=8 (s/im) | inference model batch=1 (s/im) | inference caffe2 batch=1 (s/im) | box AP | mask AP | model id\n-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --\n[R-50-C4](configs/e2e_faster_rcnn_R_50_C4_1x.yaml) (reference) | Fast | 800 | 1x | 1 | 5.8 | 0.4036 | 20.2 | 0.0875 | **0.0793** | 0.0831 | **0.0625** | 34.4 | - | f35857197\n[fbnet_chamv1a](configs/e2e_faster_rcnn_fbnet_chamv1a_600.yaml) | Fast | 600 | 0.75x | 12 | 13.6 | 0.5444 | 20.5 | 0.0315 | **0.0260** | 0.0376 | **0.0188** | 33.5 | - | [f100940543](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_fbnet_chamv1a_600.pth)\n[fbnet_default](configs/e2e_faster_rcnn_fbnet_600.yaml) | Fast | 600 | 0.5x | 16 | 11.1 | 0.4872 | 12.5 | 0.0316 | **0.0250** | 0.0297 | **0.0130** | 28.2 | - | [f101086388](https://download.pytorch.org/models/maskrcnn/e2e_faster_rcnn_fbnet_600.pth)\n[R-50-C4](configs/e2e_mask_rcnn_R_50_C4_1x.yaml) (reference) | Mask | 800 | 1x | 1 | 5.8 | 0.452 | 22.6 | 0.0918 | **0.0848** | 0.0844 | - | 35.2 | 31.0 | f35858791\n[fbnet_xirb16d](configs/e2e_mask_rcnn_fbnet_xirb16d_dsmask_600.yaml) | Mask | 600 | 0.5x | 16 | 13.4 | 1.1732 | 29 | 0.0386 | **0.0319** | 0.0356 | - | 30.7 | 26.9 | [f101086394](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_fbnet_xirb16d_dsmask.pth)\n[fbnet_default](configs/e2e_mask_rcnn_fbnet_600.yaml) | Mask | 600 | 0.5x | 16 | 13.0 | 0.9036 | 23.0 | 0.0327 | **0.0269** | 0.0385 | - | 29.0 | 26.1 | [f101086385](https://download.pytorch.org/models/maskrcnn/e2e_mask_rcnn_fbnet_600.pth)\n\n## Comparison with Detectron and mmdetection\n\nIn the following section, we compare 
our implementation with [Detectron](https://github.com/facebookresearch/Detectron)\nand [mmdetection](https://github.com/open-mmlab/mmdetection).\nThe same remarks from [mmdetection](https://github.com/open-mmlab/mmdetection/blob/master/MODEL_ZOO.md#training-speed)\nabout different hardware apply here.\n\n### Training speed\n\nThe numbers here are in seconds / iteration. The lower, the better.\n\ntype | Detectron (P100) | mmdetection (V100) | maskrcnn_benchmark (V100)\n-- | -- | -- | --\nFaster R-CNN R-50 C4 | 0.566 | - | 0.4036\nFaster R-CNN R-50 FPN | 0.544 | 0.554 | 0.3530\nFaster R-CNN R-101 FPN | 0.647 | - | 0.4591\nFaster R-CNN X-101-32x8d FPN | 0.799 | - | 0.7007\nMask R-CNN R-50 C4 | 0.620 | - | 0.4520\nMask R-CNN R-50 FPN | 0.889 | 0.690 | 0.4536\nMask R-CNN R-101 FPN | 1.008 | - | 0.5665\nMask R-CNN X-101-32x8d FPN | 0.961 | - | 0.7562\n\n### Training memory\n\nThe numbers here are in GB. The lower, the better.\n\ntype | Detectron (P100) | mmdetection (V100) | maskrcnn_benchmark (V100)\n-- | -- | -- | --\nFaster R-CNN R-50 C4 | 6.3 | - | 5.8\nFaster R-CNN R-50 FPN | 7.2 | 4.9 | 4.4\nFaster R-CNN R-101 FPN | 8.9 | - | 7.1\nFaster R-CNN X-101-32x8d FPN | 7.0 | - | 7.6\nMask R-CNN R-50 C4 | 6.6 | - | 5.8\nMask R-CNN R-50 FPN | 8.6 | 5.9 | 5.2\nMask R-CNN R-101 FPN | 10.2 | - | 7.9\nMask R-CNN X-101-32x8d FPN | 7.7 | - | 7.8\n\n### Accuracy\n\nThe numbers here are box AP (and mask AP where applicable). The higher, the better.\n\ntype | Detectron (P100) | mmdetection (V100) | maskrcnn_benchmark (V100)\n-- | -- | -- | --\nFaster R-CNN R-50 C4 | 34.8 | - | 34.8\nFaster R-CNN R-50 FPN | 36.7 | 36.7 | 36.8\nFaster R-CNN R-101 FPN | 39.4 | - | 39.1\nFaster R-CNN X-101-32x8d FPN | 41.3 | - | 41.2\nMask R-CNN R-50 C4 | 35.8 & 31.4 | - | 35.6 & 31.5\nMask R-CNN R-50 FPN | 37.7 & 33.9 | 37.5 & 34.4 | 37.8 & 34.2\nMask R-CNN R-101 FPN | 40.0 & 35.9 | - | 40.1 & 36.1\nMask R-CNN X-101-32x8d FPN | 42.1 & 37.3 | - | 42.2 & 37.8\n\n"
  },
  {
    "path": "README.md",
    "content": "# FCOS_PLUS\n\nThis project contains some improvements about FCOS (Fully Convolutional One-Stage Object Detection).\n\n\n## Installation\n\nPlease check [INSTALL.md](INSTALL.md) (same as original FCOS) for installation instructions. \n\n\n**Results**\n\n\nModel | Total training mem (GB) | Multi-scale training | Testing time / im | AP (minival) | link \n---   |:---:|:---:|:---:|:---:|:---:|\nFCOS_R_50_FPN_1x | 29.3 | No | 71ms | 37.0 | [model](https://pan.baidu.com/s/1Xcbx7EfOGvwnexXAuovM0A) |\nFCOS_R_50_FPN_1x_center | 30.61 | No | 71ms | 37.8 | [model](https://pan.baidu.com/s/1Gs7AzmJRmeYhXUPDQZuSLA) |\nFCOS_R_50_FPN_1x_center_liou | 30.61 | No | 71ms | 38.1 | [model](https://pan.baidu.com/s/1HpYrkAsVXNvXRFTd06SGgA) |\nFCOS_R_50_FPN_1x_center_giou | 30.61 | No | 71ms | 38.2 | [model](https://pan.baidu.com/s/13_o6343Ikg4td01kVXxGSw) |\nFCOS_R_101_FPN_2x | 44.1 | Yes | 74ms | 41.4 | [model](https://pan.baidu.com/s/1u_5OD5NURYe1EYFWnohgEA) |\nFCOS_R_101_FPN_2x_center_giou | 44.1 | Yes | 74ms | 42.5 | [model](https://pan.baidu.com/s/1qhHM067ywwlEXfamaFq23g) |\n\n[1] *1x and 2x mean the model is trained for 90K and 180K iterations, respectively.* \\\n[2] center means [center sample](fcos.pdf) is used in our training. \\\n[3] liou means the model use linear iou loss function. (1 - iou) \\\n[4] giou means the use giou loss function. (1 - giou) \n\n\n## Training\n\nThe following command line will train FCOS_R_50_FPN_1x on 8 GPUs with Synchronous Stochastic Gradient Descent (SGD):\n\n    python -m torch.distributed.launch \\\n        --nproc_per_node=8 \\\n        --master_port=$((RANDOM + 10000)) \\\n        tools/train_net.py \\\n        --skip-test \\\n        --config-file configs/fcos/fcos_R_50_FPN_1x_center_giou.yaml \\\n        DATALOADER.NUM_WORKERS 2 \\\n        OUTPUT_DIR training_dir/fcos_R_50_FPN_1x_center_giou\n        \nNote that:\n1) If you want to use fewer GPUs, please change `--nproc_per_node` to the number of GPUs. 
No other settings need to be changed. The total batch size does not depend on `nproc_per_node`. If you want to change the total batch size, please change `SOLVER.IMS_PER_BATCH` in [configs/fcos/fcos_R_50_FPN_1x_center_giou.yaml](configs/fcos/fcos_R_50_FPN_1x_center_giou.yaml).\n2) The models will be saved into `OUTPUT_DIR`.\n3) If you want to train FCOS with other backbones, please change `--config-file`.\n\n## Citations\nPlease consider citing the original paper in your publications if the project helps your research. \n```\n@article{tian2019fcos,\n  title   =  {{FCOS}: Fully Convolutional One-Stage Object Detection},\n  author  =  {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},\n  journal =  {arXiv preprint arXiv:1904.01355},\n  year    =  {2019}\n}\n```\n\n\n## License\n\nFor academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact the authors. \n\n"
  },
  {
    "path": "TROUBLESHOOTING.md",
    "content": "# Troubleshooting\n\nHere is a compilation if common issues that you might face\nwhile compiling / running this code:\n\n## Compilation errors when compiling the library\nIf you encounter build errors like the following:\n```\n/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’\n     struct is_convertible\n        ^~~~~~~~~~~~~~\n/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement\n     }\n ^\nerror: command '/usr/local/cuda/bin/nvcc' failed with exit status 1\n```\ncheck your CUDA version and your `gcc` version.\n```\nnvcc --version\ngcc --version\n```\nIf you are using CUDA 9.0 and gcc 6.4.0, then refer to https://github.com/facebookresearch/maskrcnn-benchmark/issues/25,\nwhich has a summary of the solution. Basically, CUDA 9.0 is not compatible with gcc 6.4.0.\n\n## ImportError: No module named maskrcnn_benchmark.config when running webcam.py\n\nThis means that `maskrcnn-benchmark` has not been properly installed.\nRefer to https://github.com/facebookresearch/maskrcnn-benchmark/issues/22 for a few possible issues.\nNote that we now support Python 2 as well.\n\n\n## ImportError: Undefined symbol: __cudaPopCallConfiguration error when import _C\n\nThis probably means that the inconsistent version of NVCC compile and your conda CUDAToolKit package. This is firstly mentioned in https://github.com/facebookresearch/maskrcnn-benchmark/issues/45 . 
All you need to do is:\n\n```\n# Check the NVCC compiler version (e.g.)\n/usr/cuda-9.2/bin/nvcc --version\n# Check the CUDAToolKit version (e.g.)\n~/anaconda3/bin/conda list | grep cuda\n\n# If you need to update your CUDAToolKit\n~/anaconda3/bin/conda install -c anaconda cudatoolkit==9.2\n```\n\nBoth of them should have the **same** version. For example, NVCC==9.2 with CUDAToolKit==9.2 is fine, while NVCC==9.2 with CUDAToolKit==9 fails.\n\n\n## Segmentation fault (core dumped) when running the library\nThis probably means that you have compiled the library using GCC < 4.9, which is ABI-incompatible with PyTorch.\nIndeed, during installation, you probably saw a message like\n```\nYour compiler (g++ 4.8) may be ABI-incompatible with PyTorch!\nPlease use a compiler that is ABI-compatible with GCC 4.9 and above.\nSee https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.\n\nSee https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6\nfor instructions on how to install GCC 4.9 or higher.\n```\nFollow the instructions on https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6\nto install GCC 4.9 or higher, and try recompiling `maskrcnn-benchmark` again, after cleaning the\n`build` folder with\n```\nrm -rf build\n```\n\n\n"
  },
  {
    "path": "configs/caffe2/e2e_faster_rcnn_R_101_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35857890/e2e_faster_rcnn_R-101-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_faster_rcnn_R_50_C4_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35857197/e2e_faster_rcnn_R-50-C4_1x\"\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\n"
  },
  {
    "path": "configs/caffe2/e2e_faster_rcnn_R_50_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35857345/e2e_faster_rcnn_R-50-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_faster_rcnn_X_101_32x8d_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_keypoint_rcnn_R_50_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/37697547/e2e_keypoint_rcnn_R-50-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 2\n  ROI_KEYPOINT_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"KeypointRCNNFeatureExtractor\"\n    PREDICTOR: \"KeypointRCNNPredictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 56\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  KEYPOINT_ON: True\nDATASETS:\n  TRAIN: (\"keypoints_coco_2014_train\", \"keypoints_coco_2014_valminusminival\",)\n  TEST: (\"keypoints_coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35861795/e2e_mask_rcnn_R-101-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_mask_rcnn_R_50_C4_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35858791/e2e_mask_rcnn_R-50-C4_1x\"\n  ROI_MASK_HEAD:\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    SHARE_BOX_FEATURE_EXTRACTOR: True\n  MASK_ON: True\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\n"
  },
  {
    "path": "configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/35858933/e2e_mask_rcnn_R-50-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/37129812/e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x\"\n  BACKBONE:\n    CONV_BODY: \"R-152-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/caffe2/e2e_mask_rcnn_X_101_32x8d_FPN_1x_caffe2.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://Caffe2Detectron/COCO/36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\n"
  },
  {
    "path": "configs/cityscapes/e2e_faster_rcnn_R_50_FPN_1x_cocostyle.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 9\nDATASETS:\n  TRAIN: (\"cityscapes_fine_instanceonly_seg_train_cocostyle\",)\n  TEST: (\"cityscapes_fine_instanceonly_seg_val_cocostyle\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (18000,)\n  MAX_ITER: 24000\n"
  },
  {
    "path": "configs/cityscapes/e2e_mask_rcnn_R_50_FPN_1x_cocostyle.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 9\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"cityscapes_fine_instanceonly_seg_train_cocostyle\",)\n  TEST: (\"cityscapes_fine_instanceonly_seg_val_cocostyle\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (18000,)\n  MAX_ITER: 24000\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_R_101_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_R_50_C4_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 1000\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_R_50_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_X_101_32x8d_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_fbnet.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"default\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n  RPN:\n    ANCHOR_SIZES: (16, 32, 64, 128, 256)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 100\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 512\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (320, )\n  MAX_SIZE_TRAIN: 640\n  MIN_SIZE_TEST: 320\n  MAX_SIZE_TEST: 640\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_fbnet_600.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"default\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n  RPN:\n    ANCHOR_SIZES: (32, 64, 128, 256, 512)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 200\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (600, )\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 600\n  MAX_SIZE_TEST: 1000\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_faster_rcnn_fbnet_chamv1a_600.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"cham_v1a\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n  RPN:\n    ANCHOR_SIZES: (32, 64, 128, 256, 512)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 200\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 128\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.045\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (90000, 120000)\n  MAX_ITER: 135000\n  IMS_PER_BATCH: 96  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (600, )\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 600\n  MAX_SIZE_TEST: 1000\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_keypoint_rcnn_R_50_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 2\n  ROI_KEYPOINT_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"KeypointRCNNFeatureExtractor\"\n    PREDICTOR: \"KeypointRCNNPredictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 56\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  KEYPOINT_ON: True\nDATASETS:\n  TRAIN: (\"keypoints_coco_2014_train\", \"keypoints_coco_2014_valminusminival\",)\n  TEST: (\"keypoints_coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_R_101_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_R_50_C4_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 1000\n  ROI_MASK_HEAD:\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    SHARE_BOX_FEATURE_EXTRACTOR: True\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_R_50_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_X_101_32x8d_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_fbnet.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"default\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n    DET_HEAD_LAST_SCALE: 0.0\n  RPN:\n    ANCHOR_SIZES: (16, 32, 64, 128, 256)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 100\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\n  ROI_MASK_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head_mask\n    PREDICTOR: \"MaskRCNNConv1x1Predictor\"\n    RESOLUTION: 12\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (320, )\n  MAX_SIZE_TRAIN: 640\n  MIN_SIZE_TEST: 320\n  MAX_SIZE_TEST: 640\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_fbnet_600.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"default\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n    DET_HEAD_LAST_SCALE: 0.0\n  RPN:\n    ANCHOR_SIZES: (32, 64, 128, 256, 512)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 200\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\n  ROI_MASK_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head_mask\n    PREDICTOR: \"MaskRCNNConv1x1Predictor\"\n    RESOLUTION: 12\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (600, )\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 600\n  MAX_SIZE_TEST: 1000\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_fbnet_xirb16d_dsmask.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"xirb16d_dsmask\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n    DET_HEAD_LAST_SCALE: -1.0\n  RPN:\n    ANCHOR_SIZES: (16, 32, 64, 128, 256)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 100\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 512\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\n  ROI_MASK_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head_mask\n    PREDICTOR: \"MaskRCNNConv1x1Predictor\"\n    RESOLUTION: 12\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (320, )\n  MAX_SIZE_TRAIN: 640\n  MIN_SIZE_TEST: 320\n  MAX_SIZE_TEST: 640\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/e2e_mask_rcnn_fbnet_xirb16d_dsmask_600.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    CONV_BODY: FBNet\n  FBNET:\n    ARCH: \"xirb16d_dsmask\"\n    BN_TYPE: \"bn\"\n    WIDTH_DIVISOR: 8\n    DW_CONV_SKIP_BN: True\n    DW_CONV_SKIP_RELU: True\n    DET_HEAD_LAST_SCALE: 0.0\n  RPN:\n    ANCHOR_SIZES: (32, 64, 128, 256, 512)\n    ANCHOR_STRIDE: (16, )\n    BATCH_SIZE_PER_IMAGE: 256\n    PRE_NMS_TOP_N_TRAIN: 6000\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TRAIN: 2000\n    POST_NMS_TOP_N_TEST: 200\n    RPN_HEAD: FBNet.rpn_head\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head\n    NUM_CLASSES: 81\n  ROI_MASK_HEAD:\n    POOLER_RESOLUTION: 6\n    FEATURE_EXTRACTOR: FBNet.roi_head_mask\n    PREDICTOR: \"MaskRCNNConv1x1Predictor\"\n    RESOLUTION: 12\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.06\n  WARMUP_FACTOR: 0.1\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 128  # for 8GPUs\n# TEST:\n#   IMS_PER_BATCH: 8\nINPUT:\n  MIN_SIZE_TRAIN: (600, )\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 600\n  MAX_SIZE_TEST: 1000\n  PIXEL_MEAN: [103.53, 116.28, 123.675]\n  PIXEL_STD: [57.375, 57.12, 58.395]\n"
  },
  {
    "path": "configs/fcos/fcos_R_101_FPN_2x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_RANGE_TRAIN: (640, 800)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\""
  },
  {
    "path": "configs/fcos/fcos_R_50_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"pretrain_models/R-50.pkl\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  FCOS:\n    CENTER_SAMPLE: False\nDATASETS:\n  TRAIN: (\"coco_2017_train\", )\n  TEST: (\"coco_2017_val\", )\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_R_50_FPN_1x_center.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"pretrain_models/R-50.pkl\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  FCOS:\n    CENTER_SAMPLE: True\n    POS_RADIUS: 1.5\nDATASETS:\n  TRAIN: (\"coco_2017_train\", )\n  TEST: (\"coco_2017_val\", )\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_R_50_FPN_1x_center_giou.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"pretrain_models/R-50.pkl\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  FCOS:\n    CENTER_SAMPLE: True\n    POS_RADIUS: 1.5\n    LOC_LOSS_TYPE: \"giou\"\nDATASETS:\n  TRAIN: (\"coco_2017_train\", )\n  TEST: (\"coco_2017_val\", )\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_X_101_32x8d_FPN_2x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    STRIDE_IN_1X1: False\n    BACKBONE_OUT_CHANNELS: 256\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_RANGE_TRAIN: (640, 800)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_X_101_64x4d_FPN_2x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-64x4d\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    STRIDE_IN_1X1: False\n    BACKBONE_OUT_CHANNELS: 256\n    NUM_GROUPS: 64\n    WIDTH_PER_GROUP: 4\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_RANGE_TRAIN: (640, 800)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_bn_bs16_MNV2_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"MNV2-FPN-RETINANET\"\n    FREEZE_CONV_BODY_AT: 0\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  USE_SYNCBN: False\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_syncbn_bs32_MNV2_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"MNV2-FPN-RETINANET\"\n    FREEZE_CONV_BODY_AT: 0\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  USE_SYNCBN: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 32\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_syncbn_bs32_c128_MNV2_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"MNV2-FPN-RETINANET\"\n    FREEZE_CONV_BODY_AT: 0\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 128\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  USE_SYNCBN: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 32\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_syncbn_bs32_c128_ms_MNV2_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"MNV2-FPN-RETINANET\"\n    FREEZE_CONV_BODY_AT: 0\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 128\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  USE_SYNCBN: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_RANGE_TRAIN: (640, 800)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 32\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/fcos/fcos_syncbn_bs64_c128_ms_MNV2_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"https://cloudstor.aarnet.edu.au/plus/s/xtixKaxLWmbcyf7/download#mobilenet_v2-ecbe2b5.pth\"\n  RPN_ONLY: True\n  FCOS_ON: True\n  BACKBONE:\n    CONV_BODY: \"MNV2-FPN-RETINANET\"\n    FREEZE_CONV_BODY_AT: 0\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 128\n  RETINANET:\n    USE_C5: False # FCOS uses P5 instead of C5\n  USE_SYNCBN: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_RANGE_TRAIN: (640, 800)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 64\n  WARMUP_METHOD: \"constant\"\n"
  },
  {
    "path": "configs/gn_baselines/README.md",
    "content": "### Group Normalization\n1 [Group Normalization](https://arxiv.org/abs/1803.08494)  \n2 [Rethinking ImageNet Pre-training](https://arxiv.org/abs/1811.08883)  \n3 [official code](https://github.com/facebookresearch/Detectron/blob/master/projects/GN/README.md)  \n\n\n### Performance\n|      case                  |    Type      |  lr schd  |  im/gpu | bbox AP | mask AP |\n|----------------------------|:------------:|:---------:|:-------:|:-------:|:-------:|\n|   R-50-FPN, GN (paper)     | finetune     |    2x     |   2     |   40.3  |  35.7   |\n|   R-50-FPN, GN (implement) | finetune     |    2x     |   2     |   40.2  |  36.0   |\n|   R-50-FPN, GN (paper)     | from scratch |    3x     |   2     |   39.5  |  35.2   |\n|   R-50-FPN, GN (implement) | from scratch |    3x     |   2     |   38.9  |  35.1   |\n"
  },
  {
    "path": "configs/gn_baselines/e2e_faster_rcnn_R_50_FPN_1x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50-GN\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/e2e_faster_rcnn_R_50_FPN_Xconv1fc_1x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50-GN\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    CONV_HEAD_DIM: 256\n    NUM_STACKED_CONVS: 4\n    FEATURE_EXTRACTOR: \"FPNXconv1fcFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/e2e_mask_rcnn_R_50_FPN_1x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50-GN\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    USE_GN: True # use GN for mask head\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    CONV_LAYERS: (256, 256, 256, 256)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/e2e_mask_rcnn_R_50_FPN_Xconv1fc_1x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50-GN\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    CONV_HEAD_DIM: 256\n    NUM_STACKED_CONVS: 4\n    FEATURE_EXTRACTOR: \"FPNXconv1fcFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    USE_GN: True # use GN for mask head\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    CONV_LAYERS: (256, 256, 256, 256)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/scratch_e2e_faster_rcnn_R_50_FPN_3x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"\" # no pretrained model\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n    FREEZE_CONV_BODY_AT: 0 # finetune all layers\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (210000, 250000)\n  MAX_ITER: 270000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/scratch_e2e_faster_rcnn_R_50_FPN_Xconv1fc_3x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"\" # no pretrained model\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n    FREEZE_CONV_BODY_AT: 0 # finetune all layers\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    CONV_HEAD_DIM: 256\n    NUM_STACKED_CONVS: 4\n    FEATURE_EXTRACTOR: \"FPNXconv1fcFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (210000, 250000)\n  MAX_ITER: 270000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/scratch_e2e_mask_rcnn_R_50_FPN_3x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"\" # no pretrained model\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n    FREEZE_CONV_BODY_AT: 0 # finetune all layers\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    USE_GN: True # use GN for mask head\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    CONV_LAYERS: (256, 256, 256, 256)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (210000, 250000)\n  MAX_ITER: 270000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/gn_baselines/scratch_e2e_mask_rcnn_R_50_FPN_Xconv1fc_3x_gn.yaml",
    "content": "INPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nMODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"\" # no pretrained model\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n    FREEZE_CONV_BODY_AT: 0 # finetune all layers\n  RESNETS: # use GN for backbone\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    TRANS_FUNC: \"BottleneckWithGN\"\n    STEM_FUNC: \"StemWithGN\"\n  FPN:\n    USE_GN: True # use GN for FPN\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 512\n    POSITIVE_FRACTION: 0.25\n  ROI_BOX_HEAD:\n    USE_GN: True # use GN for bbox head\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    CONV_HEAD_DIM: 256\n    NUM_STACKED_CONVS: 4\n    FEATURE_EXTRACTOR: \"FPNXconv1fcFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    USE_GN: True # use GN for mask head\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    CONV_LAYERS: (256, 256, 256, 256)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 8 gpus\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (210000, 250000)\n  MAX_ITER: 270000\n  IMS_PER_BATCH: 16\nTEST:\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/pascal_voc/e2e_faster_rcnn_R_50_C4_1x_1_gpu_voc.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 300\n    ANCHOR_SIZES: (128, 256, 512)\n  ROI_BOX_HEAD:\n    NUM_CLASSES: 21\nDATASETS:\n  TRAIN: (\"voc_2007_train\", \"voc_2007_val\")\n  TEST: (\"voc_2007_test\",)\nSOLVER:\n  BASE_LR: 0.001\n  WEIGHT_DECAY: 0.0001\n  STEPS: (50000, )\n  MAX_ITER: 70000\n  IMS_PER_BATCH: 1\nTEST:\n  IMS_PER_BATCH: 1\n"
  },
  {
    "path": "configs/pascal_voc/e2e_faster_rcnn_R_50_C4_1x_4_gpu_voc.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 300\n    ANCHOR_SIZES: (128, 256, 512)\n  ROI_BOX_HEAD:\n    NUM_CLASSES: 21\nDATASETS:\n  TRAIN: (\"voc_2007_train\", \"voc_2007_val\")\n  TEST: (\"voc_2007_test\",)\nSOLVER:\n  BASE_LR: 0.004\n  WEIGHT_DECAY: 0.0001\n  STEPS: (12500, )\n  MAX_ITER: 17500\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 4\n"
  },
  {
    "path": "configs/pascal_voc/e2e_mask_rcnn_R_50_FPN_1x_cocostyle.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 21\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"voc_2012_train_cocostyle\",)\n  TEST: (\"voc_2012_val_cocostyle\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.01\n  WEIGHT_DECAY: 0.0001\n  STEPS: (18000,)\n  MAX_ITER: 24000\n"
  },
  {
    "path": "configs/quick_schedules/e2e_faster_rcnn_R_50_C4_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 2\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_faster_rcnn_R_50_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_faster_rcnn_X_101_32x8d_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 2\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_keypoint_rcnn_R_50_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n    NUM_CLASSES: 2\n  ROI_KEYPOINT_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"KeypointRCNNFeatureExtractor\"\n    PREDICTOR: \"KeypointRCNNPredictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 56\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  KEYPOINT_ON: True\nDATASETS:\n  TRAIN: (\"keypoints_coco_2014_minival\",)\n  TEST: (\"keypoints_coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_mask_rcnn_R_50_C4_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN:\n    PRE_NMS_TOP_N_TEST: 6000\n    POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_MASK_HEAD:\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    SHARE_BOX_FEATURE_EXTRACTOR: True\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_mask_rcnn_R_50_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/e2e_mask_rcnn_X_101_32x8d_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  ROI_MASK_HEAD:\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    FEATURE_EXTRACTOR: \"MaskRCNNFPNFeatureExtractor\"\n    PREDICTOR: \"MaskRCNNC4Predictor\"\n    POOLER_RESOLUTION: 14\n    POOLER_SAMPLING_RATIO: 2\n    RESOLUTION: 28\n    SHARE_BOX_FEATURE_EXTRACTOR: False\n  MASK_ON: True\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 2\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/rpn_R_50_C4_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  RPN:\n    PRE_NMS_TOP_N_TEST: 12000\n    POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/quick_schedules/rpn_R_50_FPN_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 2000\n    FPN_POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (1500,)\n  MAX_ITER: 2000\n  IMS_PER_BATCH: 4\nTEST:\n  IMS_PER_BATCH: 2\n"
  },
  {
    "path": "configs/retinanet/retinanet_R-101-FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800, )\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 4 gpus\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/retinanet/retinanet_R-101-FPN_P5_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    USE_C5: False\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800, )\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 4 gpus\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/retinanet/retinanet_R-50-FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 4 gpus\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/retinanet/retinanet_R-50-FPN_1x_quick.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_minival\",)\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (600,)\n  MAX_SIZE_TRAIN: 1000\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1000\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (3500,)\n  MAX_ITER: 4000\n  IMS_PER_BATCH: 4\n"
  },
  {
    "path": "configs/retinanet/retinanet_R-50-FPN_P5_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    USE_C5: False\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800,)\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 4 gpus\n  BASE_LR: 0.005\n  WEIGHT_DECAY: 0.0001\n  STEPS: (120000, 160000)\n  MAX_ITER: 180000\n  IMS_PER_BATCH: 8\n"
  },
  {
    "path": "configs/retinanet/retinanet_X_101_32x8d_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  RPN_ONLY: True\n  RETINANET_ON: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN-RETINANET\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TRAIN: 2000\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 1000\n    FPN_POST_NMS_TOP_N_TEST: 1000\n  ROI_HEADS:\n    USE_FPN: True\n    BATCH_SIZE_PER_IMAGE: 256\n  ROI_BOX_HEAD:\n    POOLER_RESOLUTION: 7\n    POOLER_SCALES: (0.25, 0.125, 0.0625, 0.03125)\n    POOLER_SAMPLING_RATIO: 2\n    FEATURE_EXTRACTOR: \"FPN2MLPFeatureExtractor\"\n    PREDICTOR: \"FPNPredictor\"\n  RETINANET:\n    SCALES_PER_OCTAVE: 3\n    STRADDLE_THRESH: -1\n    FG_IOU_THRESHOLD: 0.5\n    BG_IOU_THRESHOLD: 0.4\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nINPUT:\n  MIN_SIZE_TRAIN: (800, )\n  MAX_SIZE_TRAIN: 1333\n  MIN_SIZE_TEST: 800\n  MAX_SIZE_TEST: 1333\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  # Assume 4 gpus\n  BASE_LR: 0.0025\n  WEIGHT_DECAY: 0.0001\n  STEPS: (240000, 320000)\n  MAX_ITER: 360000\n  IMS_PER_BATCH: 4\n"
  },
  {
    "path": "configs/rpn_R_101_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-101\"\n  RPN_ONLY: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 2000\n    FPN_POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/rpn_R_50_C4_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  RPN:\n    PRE_NMS_TOP_N_TEST: 12000\n    POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/rpn_R_50_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/MSRA/R-50\"\n  RPN_ONLY: True\n  BACKBONE:\n    CONV_BODY: \"R-50-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 2000\n    FPN_POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "configs/rpn_X_101_32x8d_FPN_1x.yaml",
    "content": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  WEIGHT: \"catalog://ImageNetPretrained/FAIR/20171220/X-101-32x8d\"\n  RPN_ONLY: True\n  BACKBONE:\n    CONV_BODY: \"R-101-FPN\"\n  RESNETS:\n    BACKBONE_OUT_CHANNELS: 256\n    STRIDE_IN_1X1: False\n    NUM_GROUPS: 32\n    WIDTH_PER_GROUP: 8\n  RPN:\n    USE_FPN: True\n    ANCHOR_STRIDE: (4, 8, 16, 32, 64)\n    PRE_NMS_TOP_N_TEST: 1000\n    POST_NMS_TOP_N_TEST: 2000\n    FPN_POST_NMS_TOP_N_TEST: 2000\nDATASETS:\n  TRAIN: (\"coco_2014_train\", \"coco_2014_valminusminival\")\n  TEST: (\"coco_2014_minival\",)\nDATALOADER:\n  SIZE_DIVISIBILITY: 32\nSOLVER:\n  BASE_LR: 0.02\n  WEIGHT_DECAY: 0.0001\n  STEPS: (60000, 80000)\n  MAX_ITER: 90000\n"
  },
  {
    "path": "demo/README.md",
    "content": "## Webcam and Jupyter notebook demo\n\nThis folder contains a simple webcam demo that illustrates how you can use `maskrcnn_benchmark` for inference.\n\n\n### With your preferred environment\n\nYou can start it by running it from this folder, using one of the following commands:\n```bash\n# by default, it runs on the GPU\n# for best results, use min-image-size 800\npython webcam.py --min-image-size 800\n# can also run it on the CPU\npython webcam.py --min-image-size 300 MODEL.DEVICE cpu\n# or change the model that you want to use\npython webcam.py --config-file ../configs/caffe2/e2e_mask_rcnn_R_101_FPN_1x_caffe2.yaml --min-image-size 300 MODEL.DEVICE cpu\n# in order to see the probability heatmaps, pass --show-mask-heatmaps\npython webcam.py --min-image-size 300 --show-mask-heatmaps MODEL.DEVICE cpu\n```\n\n### With Docker\n\nBuild the image with the tag `maskrcnn-benchmark` (check [INSTALL.md](../INSTALL.md) for instructions)\n\nAdjust permissions of the X server host (be careful with this step, refer to \n[here](http://wiki.ros.org/docker/Tutorials/GUI) for alternatives)\n\n```bash\nxhost +\n``` \n\nThen run a container with the demo:\n \n```\ndocker run --rm -it \\\n    -e DISPLAY=${DISPLAY} \\\n    --privileged \\\n    -v /tmp/.X11-unix:/tmp/.X11-unix \\\n    --device=/dev/video0:/dev/video0 \\\n    --ipc=host maskrcnn-benchmark \\\n    python demo/webcam.py --min-image-size 300 \\\n    --config-file configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml\n```\n\n**DISCLAIMER:** *This was tested for an Ubuntu 16.04 machine, \nthe volume mapping may vary depending on your platform*\n"
  },
  {
    "path": "demo/fcos_demo.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport argparse\nimport cv2, os\n\nfrom maskrcnn_benchmark.config import cfg\nfrom predictor import COCODemo\n\nimport time\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"PyTorch Object Detection Webcam Demo\")\n    parser.add_argument(\n        \"--config-file\",\n        default=\"configs/fcos/fcos_R_50_FPN_1x.yaml\",\n        metavar=\"FILE\",\n        help=\"path to config file\",\n    )\n    parser.add_argument(\n        \"--weights\",\n        default=\"FCOS_R_50_FPN_1x.pth\",\n        metavar=\"FILE\",\n        help=\"path to the trained model\",\n    )\n    parser.add_argument(\n        \"--images-dir\",\n        default=\"demo/images\",\n        metavar=\"DIR\",\n        help=\"path to demo images directory\",\n    )\n    parser.add_argument(\n        \"--min-image-size\",\n        type=int,\n        default=800,\n        help=\"Smallest size of the image to feed to the model. 
\"\n            \"Model was trained with 800, which gives best results\",\n    )\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify model config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n\n    args = parser.parse_args()\n\n    # load config from file and command-line arguments\n    cfg.merge_from_file(args.config_file)\n    cfg.merge_from_list(args.opts)\n    cfg.MODEL.WEIGHT = args.weights\n\n    cfg.freeze()\n\n    # The following per-class thresholds are computed by maximizing\n    # per-class f-measure in their precision-recall curve.\n    # Please see compute_thresholds_for_classes() in coco_eval.py for details.\n    thresholds_for_classes = [\n        0.23860901594161987, 0.24108672142028809, 0.2470853328704834,\n        0.2316885143518448, 0.2708061933517456, 0.23173952102661133,\n        0.31990334391593933, 0.21302376687526703, 0.20151866972446442,\n        0.20928964018821716, 0.3793887197971344, 0.2715213894844055,\n        0.2836397588253021, 0.26449233293533325, 0.1728038638830185,\n        0.314998596906662, 0.28575003147125244, 0.28987520933151245,\n        0.2727000117301941, 0.23306897282600403, 0.265937477350235,\n        0.32663893699645996, 0.27102580666542053, 0.29177549481391907,\n        0.2043062448501587, 0.24331751465797424, 0.20752687752246857,\n        0.22951272130012512, 0.22753854095935822, 0.2159966081380844,\n        0.1993938684463501, 0.23676514625549316, 0.20982342958450317,\n        0.18315598368644714, 0.2489681988954544, 0.24793922901153564,\n        0.287187397480011, 0.23045086860656738, 0.2462811917066574,\n        0.21191294491291046, 0.22845126688480377, 0.24365000426769257,\n        0.22687821090221405, 0.18365581333637238, 0.2035856395959854,\n        0.23478077352046967, 0.18431290984153748, 0.18184082210063934,\n        0.2708037495613098, 0.2268175482749939, 0.19970566034317017,\n        0.21832780539989471, 0.21120598912239075, 
0.270445853471756,\n        0.189377561211586, 0.2101106345653534, 0.2112293541431427,\n        0.23484709858894348, 0.22701986134052277, 0.20732736587524414,\n        0.1953316181898117, 0.3237660229206085, 0.3078872859477997,\n        0.2881140112876892, 0.38746657967567444, 0.20038367807865143,\n        0.28123822808265686, 0.2588447630405426, 0.2796839773654938,\n        0.266757994890213, 0.3266656696796417, 0.25759157538414,\n        0.2578003704547882, 0.17009201645851135, 0.29051828384399414,\n        0.24002137780189514, 0.22378061711788177, 0.26134759187698364,\n        0.1730124056339264, 0.1857597529888153\n    ]\n\n    demo_im_names = os.listdir(args.images_dir)\n\n    # prepare object that handles inference plus adds predictions on top of image\n    coco_demo = COCODemo(\n        cfg,\n        confidence_thresholds_for_classes=thresholds_for_classes,\n        min_image_size=args.min_image_size\n    )\n\n    for im_name in demo_im_names:\n        img = cv2.imread(os.path.join(args.images_dir, im_name))\n        if img is None:\n            continue\n        start_time = time.time()\n        composite = coco_demo.run_on_opencv_image(img)\n        print(\"{}\\tinference time: {:.2f}s\".format(im_name, time.time() - start_time))\n        cv2.imshow(im_name, composite)\n    print(\"Press any keys to exit ...\")\n    cv2.waitKey()\n    cv2.destroyAllWindows()\n\nif __name__ == \"__main__\":\n    main()\n\n"
  },
  {
    "path": "demo/predictor.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport cv2\nimport torch\nfrom torchvision import transforms as T\n\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\nfrom maskrcnn_benchmark.modeling.roi_heads.mask_head.inference import Masker\nfrom maskrcnn_benchmark import layers as L\nfrom maskrcnn_benchmark.utils import cv2_util\n\n\nclass COCODemo(object):\n    # COCO categories for pretty print\n    CATEGORIES = [\n        \"__background\",\n        \"person\",\n        \"bicycle\",\n        \"car\",\n        \"motorcycle\",\n        \"airplane\",\n        \"bus\",\n        \"train\",\n        \"truck\",\n        \"boat\",\n        \"traffic light\",\n        \"fire hydrant\",\n        \"stop sign\",\n        \"parking meter\",\n        \"bench\",\n        \"bird\",\n        \"cat\",\n        \"dog\",\n        \"horse\",\n        \"sheep\",\n        \"cow\",\n        \"elephant\",\n        \"bear\",\n        \"zebra\",\n        \"giraffe\",\n        \"backpack\",\n        \"umbrella\",\n        \"handbag\",\n        \"tie\",\n        \"suitcase\",\n        \"frisbee\",\n        \"skis\",\n        \"snowboard\",\n        \"sports ball\",\n        \"kite\",\n        \"baseball bat\",\n        \"baseball glove\",\n        \"skateboard\",\n        \"surfboard\",\n        \"tennis racket\",\n        \"bottle\",\n        \"wine glass\",\n        \"cup\",\n        \"fork\",\n        \"knife\",\n        \"spoon\",\n        \"bowl\",\n        \"banana\",\n        \"apple\",\n        \"sandwich\",\n        \"orange\",\n        \"broccoli\",\n        \"carrot\",\n        \"hot dog\",\n        \"pizza\",\n        \"donut\",\n        \"cake\",\n        \"chair\",\n        \"couch\",\n        \"potted plant\",\n        \"bed\",\n        \"dining table\",\n        
\"toilet\",\n        \"tv\",\n        \"laptop\",\n        \"mouse\",\n        \"remote\",\n        \"keyboard\",\n        \"cell phone\",\n        \"microwave\",\n        \"oven\",\n        \"toaster\",\n        \"sink\",\n        \"refrigerator\",\n        \"book\",\n        \"clock\",\n        \"vase\",\n        \"scissors\",\n        \"teddy bear\",\n        \"hair drier\",\n        \"toothbrush\",\n    ]\n\n    def __init__(\n        self,\n        cfg,\n        confidence_thresholds_for_classes,\n        show_mask_heatmaps=False,\n        masks_per_dim=2,\n        min_image_size=224,\n    ):\n        self.cfg = cfg.clone()\n        self.model = build_detection_model(cfg)\n        self.model.eval()\n        self.device = torch.device(cfg.MODEL.DEVICE)\n        self.model.to(self.device)\n        self.min_image_size = min_image_size\n\n        save_dir = cfg.OUTPUT_DIR\n        checkpointer = DetectronCheckpointer(cfg, self.model, save_dir=save_dir)\n        _ = checkpointer.load(cfg.MODEL.WEIGHT)\n\n        self.transforms = self.build_transform()\n\n        mask_threshold = -1 if show_mask_heatmaps else 0.5\n        self.masker = Masker(threshold=mask_threshold, padding=1)\n\n        # used to make colors for each class\n        self.palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])\n\n        self.cpu_device = torch.device(\"cpu\")\n        self.confidence_thresholds_for_classes = torch.tensor(confidence_thresholds_for_classes)\n        self.show_mask_heatmaps = show_mask_heatmaps\n        self.masks_per_dim = masks_per_dim\n\n    def build_transform(self):\n        \"\"\"\n        Creates a basic transformation that was used to train the models\n        \"\"\"\n        cfg = self.cfg\n\n        # we are loading images with OpenCV, so we don't need to convert them\n        # to BGR, they are already! 
So all we need to do is to normalize\n        # by 255 if we want to convert to BGR255 format, or flip the channels\n        # if we want it to be in RGB in [0-1] range.\n        if cfg.INPUT.TO_BGR255:\n            to_bgr_transform = T.Lambda(lambda x: x * 255)\n        else:\n            to_bgr_transform = T.Lambda(lambda x: x[[2, 1, 0]])\n\n        normalize_transform = T.Normalize(\n            mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD\n        )\n\n        transform = T.Compose(\n            [\n                T.ToPILImage(),\n                T.Resize(self.min_image_size),\n                T.ToTensor(),\n                to_bgr_transform,\n                normalize_transform,\n            ]\n        )\n        return transform\n\n    def run_on_opencv_image(self, image):\n        \"\"\"\n        Arguments:\n            image (np.ndarray): an image as returned by OpenCV\n\n        Returns:\n            prediction (BoxList): the detected objects. Additional information\n                of the detection properties can be found in the fields of\n                the BoxList via `prediction.fields()`\n        \"\"\"\n        predictions = self.compute_prediction(image)\n        top_predictions = self.select_top_predictions(predictions)\n\n        result = image.copy()\n        if self.show_mask_heatmaps:\n            return self.create_mask_montage(result, top_predictions)\n        result = self.overlay_boxes(result, top_predictions)\n        if self.cfg.MODEL.MASK_ON:\n            result = self.overlay_mask(result, top_predictions)\n        if self.cfg.MODEL.KEYPOINT_ON:\n            result = self.overlay_keypoints(result, top_predictions)\n        result = self.overlay_class_names(result, top_predictions)\n\n        return result\n\n    def compute_prediction(self, original_image):\n        \"\"\"\n        Arguments:\n            original_image (np.ndarray): an image as returned by OpenCV\n\n        Returns:\n            prediction (BoxList): the detected 
objects. Additional information\n                of the detection properties can be found in the fields of\n                the BoxList via `prediction.fields()`\n        \"\"\"\n        # apply pre-processing to image\n        image = self.transforms(original_image)\n        # convert to an ImageList, padded so that it is divisible by\n        # cfg.DATALOADER.SIZE_DIVISIBILITY\n        image_list = to_image_list(image, self.cfg.DATALOADER.SIZE_DIVISIBILITY)\n        image_list = image_list.to(self.device)\n        # compute predictions\n        with torch.no_grad():\n            predictions = self.model(image_list)\n        predictions = [o.to(self.cpu_device) for o in predictions]\n\n        # always single image is passed at a time\n        prediction = predictions[0]\n\n        # reshape prediction (a BoxList) into the original image size\n        height, width = original_image.shape[:-1]\n        prediction = prediction.resize((width, height))\n\n        if prediction.has_field(\"mask\"):\n            # if we have masks, paste the masks in the right position\n            # in the image, as defined by the bounding boxes\n            masks = prediction.get_field(\"mask\")\n            # always single image is passed at a time\n            masks = self.masker([masks], [prediction])[0]\n            prediction.add_field(\"mask\", masks)\n        return prediction\n\n    def select_top_predictions(self, predictions):\n        \"\"\"\n        Select only predictions which have a `score` > self.confidence_threshold,\n        and returns the predictions in descending order of score\n\n        Arguments:\n            predictions (BoxList): the result of the computation by the model.\n                It should contain the field `scores`.\n\n        Returns:\n            prediction (BoxList): the detected objects. 
Additional information\n                of the detection properties can be found in the fields of\n                the BoxList via `prediction.fields()`\n        \"\"\"\n        scores = predictions.get_field(\"scores\")\n        labels = predictions.get_field(\"labels\")\n        thresholds = self.confidence_thresholds_for_classes[(labels - 1).long()]\n        keep = torch.nonzero(scores > thresholds).squeeze(1)\n        predictions = predictions[keep]\n        scores = predictions.get_field(\"scores\")\n        _, idx = scores.sort(0, descending=True)\n        return predictions[idx]\n\n    def compute_colors_for_labels(self, labels):\n        \"\"\"\n        Simple function that adds fixed colors depending on the class\n        \"\"\"\n        colors = labels[:, None] * self.palette\n        colors = (colors % 255).numpy().astype(\"uint8\")\n        return colors\n\n    def overlay_boxes(self, image, predictions):\n        \"\"\"\n        Adds the predicted boxes on top of the image\n\n        Arguments:\n            image (np.ndarray): an image as returned by OpenCV\n            predictions (BoxList): the result of the computation by the model.\n                It should contain the field `labels`.\n        \"\"\"\n        labels = predictions.get_field(\"labels\")\n        boxes = predictions.bbox\n\n        colors = self.compute_colors_for_labels(labels).tolist()\n\n        for box, color in zip(boxes, colors):\n            box = box.to(torch.int64)\n            top_left, bottom_right = box[:2].tolist(), box[2:].tolist()\n            image = cv2.rectangle(\n                image, tuple(top_left), tuple(bottom_right), tuple(color), 2\n            )\n\n        return image\n\n    def overlay_mask(self, image, predictions):\n        \"\"\"\n        Adds the instances contours for each predicted object.\n        Each label has a different color.\n\n        Arguments:\n            image (np.ndarray): an image as returned by OpenCV\n            predictions 
(BoxList): the result of the computation by the model.\n                It should contain the fields `mask` and `labels`.\n        \"\"\"\n        masks = predictions.get_field(\"mask\").numpy()\n        labels = predictions.get_field(\"labels\")\n\n        colors = self.compute_colors_for_labels(labels).tolist()\n\n        for mask, color in zip(masks, colors):\n            thresh = mask[0, :, :, None]\n            contours, hierarchy = cv2_util.findContours(\n                thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE\n            )\n            image = cv2.drawContours(image, contours, -1, color, 3)\n\n        composite = image\n\n        return composite\n\n    def overlay_keypoints(self, image, predictions):\n        keypoints = predictions.get_field(\"keypoints\")\n        kps = keypoints.keypoints\n        scores = keypoints.get_field(\"logits\")\n        kps = torch.cat((kps[:, :, 0:2], scores[:, :, None]), dim=2).numpy()\n        for region in kps:\n            image = vis_keypoints(image, region.transpose((1, 0)))\n        return image\n\n    def create_mask_montage(self, image, predictions):\n        \"\"\"\n        Create a montage showing the probability heatmaps for each one of the\n        detected objects\n\n        Arguments:\n            image (np.ndarray): an image as returned by OpenCV\n            predictions (BoxList): the result of the computation by the model.\n                It should contain the field `mask`.\n        \"\"\"\n        masks = predictions.get_field(\"mask\")\n        masks_per_dim = self.masks_per_dim\n        masks = L.interpolate(\n            masks.float(), scale_factor=1 / masks_per_dim\n        ).byte()\n        height, width = masks.shape[-2:]\n        max_masks = masks_per_dim ** 2\n        masks = masks[:max_masks]\n        # handle the case where we have fewer detections than max_masks\n        if len(masks) < max_masks:\n            masks_padded = torch.zeros(max_masks, 1, height, width, dtype=torch.uint8)\n            masks_padded[: len(masks)] = masks\n            masks = masks_padded\n        masks = masks.reshape(masks_per_dim, masks_per_dim, height, width)\n        result = torch.zeros(\n            (masks_per_dim * height, masks_per_dim * width), dtype=torch.uint8\n        )\n        for y in range(masks_per_dim):\n            start_y = y * height\n            end_y = (y + 1) * height\n            for x in range(masks_per_dim):\n                start_x = x * width\n                end_x = (x + 1) * width\n                result[start_y:end_y, start_x:end_x] = masks[y, x]\n        return cv2.applyColorMap(result.numpy(), cv2.COLORMAP_JET)\n\n    def overlay_class_names(self, image, predictions):\n        \"\"\"\n        Adds detected class names and scores in the positions defined by the\n        top-left corner of the predicted bounding box\n\n        Arguments:\n            image (np.ndarray): an image as returned by OpenCV\n            predictions (BoxList): the result of the computation by the model.\n                It should contain the fields `scores` and `labels`.\n        \"\"\"\n        scores = predictions.get_field(\"scores\").tolist()\n        labels = predictions.get_field(\"labels\").tolist()\n        labels = [self.CATEGORIES[i] for i in labels]\n        boxes = predictions.bbox\n\n        template = \"{}: {:.2f}\"\n        for box, score, label in zip(boxes, scores, labels):\n            x, y = box[:2]\n            s = template.format(label, score)\n            cv2.putText(\n                image, s, (x, y), cv2.FONT_HERSHEY_SIMPLEX, .5, (255, 255, 255), 1\n            )\n\n        return image\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom maskrcnn_benchmark.structures.keypoint import PersonKeypoints\n\ndef vis_keypoints(img, kps, kp_thresh=2, alpha=0.7):\n    \"\"\"Visualizes keypoints (adapted from vis_one_image).\n    kps has shape (4, #keypoints) where 4 rows are (x, y, logit, prob).\n    \"\"\"\n    dataset_keypoints = 
PersonKeypoints.NAMES\n    kp_lines = PersonKeypoints.CONNECTIONS\n\n    # Convert from plt 0-1 RGBA colors to 0-255 BGR colors for opencv.\n    cmap = plt.get_cmap('rainbow')\n    colors = [cmap(i) for i in np.linspace(0, 1, len(kp_lines) + 2)]\n    colors = [(c[2] * 255, c[1] * 255, c[0] * 255) for c in colors]\n\n    # Perform the drawing on a copy of the image, to allow for blending.\n    kp_mask = np.copy(img)\n\n    # Draw mid shoulder / mid hip first for better visualization.\n    mid_shoulder = (\n        kps[:2, dataset_keypoints.index('right_shoulder')] +\n        kps[:2, dataset_keypoints.index('left_shoulder')]) / 2.0\n    sc_mid_shoulder = np.minimum(\n        kps[2, dataset_keypoints.index('right_shoulder')],\n        kps[2, dataset_keypoints.index('left_shoulder')])\n    mid_hip = (\n        kps[:2, dataset_keypoints.index('right_hip')] +\n        kps[:2, dataset_keypoints.index('left_hip')]) / 2.0\n    sc_mid_hip = np.minimum(\n        kps[2, dataset_keypoints.index('right_hip')],\n        kps[2, dataset_keypoints.index('left_hip')])\n    nose_idx = dataset_keypoints.index('nose')\n    if sc_mid_shoulder > kp_thresh and kps[2, nose_idx] > kp_thresh:\n        cv2.line(\n            kp_mask, tuple(mid_shoulder), tuple(kps[:2, nose_idx]),\n            color=colors[len(kp_lines)], thickness=2, lineType=cv2.LINE_AA)\n    if sc_mid_shoulder > kp_thresh and sc_mid_hip > kp_thresh:\n        cv2.line(\n            kp_mask, tuple(mid_shoulder), tuple(mid_hip),\n            color=colors[len(kp_lines) + 1], thickness=2, lineType=cv2.LINE_AA)\n\n    # Draw the keypoints.\n    for l in range(len(kp_lines)):\n        i1 = kp_lines[l][0]\n        i2 = kp_lines[l][1]\n        p1 = kps[0, i1], kps[1, i1]\n        p2 = kps[0, i2], kps[1, i2]\n        if kps[2, i1] > kp_thresh and kps[2, i2] > kp_thresh:\n            cv2.line(\n                kp_mask, p1, p2,\n                color=colors[l], thickness=2, lineType=cv2.LINE_AA)\n        if kps[2, i1] > kp_thresh:\n     
       cv2.circle(\n                kp_mask, p1,\n                radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n        if kps[2, i2] > kp_thresh:\n            cv2.circle(\n                kp_mask, p2,\n                radius=3, color=colors[l], thickness=-1, lineType=cv2.LINE_AA)\n\n    # Blend the keypoints.\n    return cv2.addWeighted(img, 1.0 - alpha, kp_mask, alpha, 0)\n"
  },
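The grid-assembly step in `create_mask_montage` above (downscale, pad up to `masks_per_dim ** 2` entries, reshape into a grid, then copy tile by tile) can be exercised in isolation. The helper below is a hypothetical standalone sketch of that tiling logic, not code from the repo:

```python
import torch

def make_montage(masks, masks_per_dim):
    """Tile N single-channel masks of shape (N, 1, H, W) into one
    (masks_per_dim * H, masks_per_dim * W) grid, zero-padding when
    there are fewer detections than grid cells."""
    n, _, h, w = masks.shape
    max_masks = masks_per_dim ** 2
    masks = masks[:max_masks]
    # pad with empty tiles, mirroring the padding branch in create_mask_montage
    if len(masks) < max_masks:
        padded = torch.zeros(max_masks, 1, h, w, dtype=masks.dtype)
        padded[: len(masks)] = masks
        masks = padded
    # (grid_y, grid_x, H, W): each [y, x] entry is one tile of the montage
    grid = masks.reshape(masks_per_dim, masks_per_dim, h, w)
    # concatenate tiles along width per row, then rows along height
    rows = [torch.cat(list(grid[y]), dim=1) for y in range(masks_per_dim)]
    return torch.cat(rows, dim=0)
```

The row/column concatenation is equivalent to the explicit `result[start_y:end_y, start_x:end_x]` copy loop in the method above, just expressed with `torch.cat`.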
  {
    "path": "demo/webcam.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport argparse\nimport cv2\n\nfrom maskrcnn_benchmark.config import cfg\nfrom predictor import COCODemo\n\nimport time\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"PyTorch Object Detection Webcam Demo\")\n    parser.add_argument(\n        \"--config-file\",\n        default=\"../configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml\",\n        metavar=\"FILE\",\n        help=\"path to config file\",\n    )\n    parser.add_argument(\n        \"--confidence-threshold\",\n        type=float,\n        default=0.7,\n        help=\"Minimum score for the prediction to be shown\",\n    )\n    parser.add_argument(\n        \"--min-image-size\",\n        type=int,\n        default=224,\n        help=\"Smallest size of the image to feed to the model. \"\n            \"Model was trained with 800, which gives best results\",\n    )\n    parser.add_argument(\n        \"--show-mask-heatmaps\",\n        dest=\"show_mask_heatmaps\",\n        help=\"Show a heatmap probability for the top masks-per-dim masks\",\n        action=\"store_true\",\n    )\n    parser.add_argument(\n        \"--masks-per-dim\",\n        type=int,\n        default=2,\n        help=\"Number of heatmaps per dimension to show\",\n    )\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify model config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n\n    args = parser.parse_args()\n\n    # load config from file and command-line arguments\n    cfg.merge_from_file(args.config_file)\n    cfg.merge_from_list(args.opts)\n    cfg.freeze()\n\n    # prepare object that handles inference plus adds predictions on top of image\n    coco_demo = COCODemo(\n        cfg,\n        confidence_threshold=args.confidence_threshold,\n        show_mask_heatmaps=args.show_mask_heatmaps,\n        masks_per_dim=args.masks_per_dim,\n        
min_image_size=args.min_image_size,\n    )\n\n    cam = cv2.VideoCapture(0)\n    while True:\n        start_time = time.time()\n        ret_val, img = cam.read()\n        composite = coco_demo.run_on_opencv_image(img)\n        print(\"Time: {:.2f} s / img\".format(time.time() - start_time))\n        cv2.imshow(\"COCO detections\", composite)\n        if cv2.waitKey(1) == 27:\n            break  # esc to quit\n    cv2.destroyAllWindows()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "docker/Dockerfile",
    "content": "ARG CUDA=\"9.0\"\nARG CUDNN=\"7\"\n\nFROM nvidia/cuda:${CUDA}-cudnn${CUDNN}-devel-ubuntu16.04\n\nRUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections\n\n# install basics\nRUN apt-get update -y \\\n && apt-get install -y apt-utils git curl ca-certificates bzip2 cmake tree htop bmon iotop g++ \\\n && apt-get install -y libglib2.0-0 libsm6 libxext6 libxrender-dev\n\n# Install Miniconda\nRUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \\\n && chmod +x /miniconda.sh \\\n && /miniconda.sh -b -p /miniconda \\\n && rm /miniconda.sh\n\nENV PATH=/miniconda/bin:$PATH\n\n# Create a Python 3.6 environment\nRUN /miniconda/bin/conda install -y conda-build \\\n && /miniconda/bin/conda create -y --name py36 python=3.6.7 \\\n && /miniconda/bin/conda clean -ya\n\nENV CONDA_DEFAULT_ENV=py36\nENV CONDA_PREFIX=/miniconda/envs/$CONDA_DEFAULT_ENV\nENV PATH=$CONDA_PREFIX/bin:$PATH\nENV CONDA_AUTO_UPDATE_CONDA=false\n\nRUN conda install -y ipython\nRUN pip install ninja yacs cython matplotlib opencv-python tqdm\n\n# Install PyTorch 1.0 Nightly\nARG CUDA\nRUN conda install pytorch-nightly cudatoolkit=${CUDA} -c pytorch \\\n && conda clean -ya\n\n# Install TorchVision master\nRUN git clone https://github.com/pytorch/vision.git \\\n && cd vision \\\n && python setup.py install\n\n# install pycocotools\nRUN git clone https://github.com/cocodataset/cocoapi.git \\\n && cd cocoapi/PythonAPI \\\n && python setup.py build_ext install\n\n# install PyTorch Detection\nARG FORCE_CUDA=\"1\"\nENV FORCE_CUDA=${FORCE_CUDA}\nRUN git clone https://github.com/facebookresearch/maskrcnn-benchmark.git \\\n && cd maskrcnn-benchmark \\\n && python setup.py build develop\n\nWORKDIR /maskrcnn-benchmark\n"
  },
  {
    "path": "docker/docker-jupyter/Dockerfile",
    "content": "ARG CUDA=\"9.0\"\nARG CUDNN=\"7\"\n\nFROM nvidia/cuda:${CUDA}-cudnn${CUDNN}-devel-ubuntu16.04\n\nRUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections\n\n# install basics\nRUN apt-get update -y \\\n && apt-get install -y apt-utils git curl ca-certificates bzip2 cmake tree htop bmon iotop g++\n\n# Install Miniconda\nRUN curl -so /miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh \\\n && chmod +x /miniconda.sh \\\n && /miniconda.sh -b -p /miniconda \\\n && rm /miniconda.sh\n\nENV PATH=/miniconda/bin:$PATH\n\n# Create a Python 3.6 environment\nRUN /miniconda/bin/conda install -y conda-build \\\n && /miniconda/bin/conda create -y --name py36 python=3.6.7 \\\n && /miniconda/bin/conda clean -ya\n\nENV CONDA_DEFAULT_ENV=py36\nENV CONDA_PREFIX=/miniconda/envs/$CONDA_DEFAULT_ENV\nENV PATH=$CONDA_PREFIX/bin:$PATH\nENV CONDA_AUTO_UPDATE_CONDA=false\n\nRUN conda install -y ipython\nRUN pip install ninja yacs cython matplotlib jupyter\n\n# Install PyTorch 1.0 Nightly and OpenCV\nRUN conda install -y pytorch-nightly -c pytorch \\\n && conda install -y opencv -c menpo \\\n && conda clean -ya\n\nWORKDIR /root\n\nUSER root\n\nRUN mkdir /notebooks\n\nWORKDIR /notebooks\n\n# Install TorchVision master\nRUN git clone https://github.com/pytorch/vision.git \\\n && cd vision \\\n && python setup.py install\n\n# install pycocotools\nRUN git clone https://github.com/cocodataset/cocoapi.git \\\n && cd cocoapi/PythonAPI \\\n && python setup.py build_ext install\n\n# install PyTorch Detection\nRUN git clone https://github.com/facebookresearch/maskrcnn-benchmark.git \\\n && cd maskrcnn-benchmark \\\n && python setup.py build develop\n\nRUN jupyter notebook --generate-config\n\nENV CONFIG_PATH=\"/root/.jupyter/jupyter_notebook_config.py\"\n\nCOPY \"jupyter_notebook_config.py\" ${CONFIG_PATH}\n\nENTRYPOINT [\"sh\", \"-c\", \"jupyter notebook --allow-root -y --no-browser --ip=0.0.0.0 --config=${CONFIG_PATH}\"]\n"
  },
  {
    "path": "docker/docker-jupyter/jupyter_notebook_config.py",
    "content": "import os\nfrom IPython.lib import passwd\n\n#c = c  # pylint:disable=undefined-variable\nc = get_config()\nc.NotebookApp.ip = '0.0.0.0'\nc.NotebookApp.port = int(os.getenv('PORT', 8888))\nc.NotebookApp.open_browser = False\n\n# sets a password if PASSWORD is set in the environment\nif 'PASSWORD' in os.environ:\n  password = os.environ['PASSWORD']\n  if password:\n    c.NotebookApp.password = passwd(password)\n  else:\n    c.NotebookApp.password = ''\n    c.NotebookApp.token = ''\n  del os.environ['PASSWORD']\n"
  },
  {
    "path": "maskrcnn_benchmark/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n"
  },
  {
    "path": "maskrcnn_benchmark/config/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .defaults import _C as cfg\n"
  },
  {
    "path": "maskrcnn_benchmark/config/defaults.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport os\n\nfrom yacs.config import CfgNode as CN\n\n\n# -----------------------------------------------------------------------------\n# Convention about Training / Test specific parameters\n# -----------------------------------------------------------------------------\n# Whenever an argument can be either used for training or for testing, the\n# corresponding name will be post-fixed by a _TRAIN for a training parameter,\n# or _TEST for a test-specific parameter.\n# For example, the number of images during training will be\n# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be\n# IMAGES_PER_BATCH_TEST\n\n# -----------------------------------------------------------------------------\n# Config definition\n# -----------------------------------------------------------------------------\n\n_C = CN()\n\n_C.MODEL = CN()\n_C.MODEL.RPN_ONLY = False\n_C.MODEL.MASK_ON = False\n_C.MODEL.FCOS_ON = True\n_C.MODEL.RETINANET_ON = False\n_C.MODEL.KEYPOINT_ON = False\n_C.MODEL.DEVICE = \"cuda\"\n_C.MODEL.META_ARCHITECTURE = \"GeneralizedRCNN\"\n_C.MODEL.CLS_AGNOSTIC_BBOX_REG = False\n\n# If the WEIGHT starts with a catalog://, like :R-50, the code will look for\n# the path in paths_catalog. 
Else, it will use it as the specified absolute\n# path\n_C.MODEL.WEIGHT = \"\"\n_C.MODEL.USE_SYNCBN = False\n\n\n# -----------------------------------------------------------------------------\n# INPUT\n# -----------------------------------------------------------------------------\n_C.INPUT = CN()\n# Size of the smallest side of the image during training\n_C.INPUT.MIN_SIZE_TRAIN = (800,)  # (800,)\n# The range of the smallest side for multi-scale training\n_C.INPUT.MIN_SIZE_RANGE_TRAIN = (-1, -1)  # -1 means disabled and it will use MIN_SIZE_TRAIN\n# Maximum size of the side of the image during training\n_C.INPUT.MAX_SIZE_TRAIN = 1333\n# Size of the smallest side of the image during testing\n_C.INPUT.MIN_SIZE_TEST = 800\n# Maximum size of the side of the image during testing\n_C.INPUT.MAX_SIZE_TEST = 1333\n# Values to be used for image normalization\n_C.INPUT.PIXEL_MEAN = [102.9801, 115.9465, 122.7717]\n# Values to be used for image normalization\n_C.INPUT.PIXEL_STD = [1., 1., 1.]\n# Convert image to BGR format (for Caffe2 models), in range 0-255\n_C.INPUT.TO_BGR255 = True\n\n\n# -----------------------------------------------------------------------------\n# Dataset\n# -----------------------------------------------------------------------------\n_C.DATASETS = CN()\n# List of the dataset names for training, as present in paths_catalog.py\n_C.DATASETS.TRAIN = ()\n# List of the dataset names for testing, as present in paths_catalog.py\n_C.DATASETS.TEST = ()\n\n# -----------------------------------------------------------------------------\n# DataLoader\n# -----------------------------------------------------------------------------\n_C.DATALOADER = CN()\n# Number of data loading threads\n_C.DATALOADER.NUM_WORKERS = 4\n# If > 0, this enforces that each collated batch should have a size divisible\n# by SIZE_DIVISIBILITY\n_C.DATALOADER.SIZE_DIVISIBILITY = 0\n# If True, each batch should contain only images for which the aspect ratio\n# is compatible. 
This groups portrait images together, and landscape images\n# are not batched with portrait images.\n_C.DATALOADER.ASPECT_RATIO_GROUPING = True\n\n\n# ---------------------------------------------------------------------------- #\n# Backbone options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.BACKBONE = CN()\n\n# The backbone conv body to use\n# The string must match a function that is imported in modeling.model_builder\n# (e.g., 'FPN.add_fpn_ResNet101_conv5_body' to specify a ResNet-101-FPN\n# backbone)\n_C.MODEL.BACKBONE.CONV_BODY = \"R-50-C4\"\n\n# Add StopGrad at a specified stage so the bottom layers are frozen\n_C.MODEL.BACKBONE.FREEZE_CONV_BODY_AT = 2\n# GN for backbone\n_C.MODEL.BACKBONE.USE_GN = False\n\n\n# ---------------------------------------------------------------------------- #\n# FPN options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.FPN = CN()\n_C.MODEL.FPN.USE_GN = False\n_C.MODEL.FPN.USE_RELU = False\n\n\n# ---------------------------------------------------------------------------- #\n# Group Norm options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.GROUP_NORM = CN()\n# Number of dimensions per group in GroupNorm (-1 if using NUM_GROUPS)\n_C.MODEL.GROUP_NORM.DIM_PER_GP = -1\n# Number of groups in GroupNorm (-1 if using DIM_PER_GP)\n_C.MODEL.GROUP_NORM.NUM_GROUPS = 32\n# GroupNorm's small constant in the denominator\n_C.MODEL.GROUP_NORM.EPSILON = 1e-5\n\n\n# ---------------------------------------------------------------------------- #\n# RPN options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.RPN = CN()\n_C.MODEL.RPN.USE_FPN = False\n# Base RPN anchor sizes given in absolute pixels w.r.t. 
the scaled network input\n_C.MODEL.RPN.ANCHOR_SIZES = (32, 64, 128, 256, 512)\n# Stride of the feature map that RPN is attached to.\n# For FPN, number of strides should match number of scales\n_C.MODEL.RPN.ANCHOR_STRIDE = (16,)\n# RPN anchor aspect ratios\n_C.MODEL.RPN.ASPECT_RATIOS = (0.5, 1.0, 2.0)\n# Remove RPN anchors that go outside the image by RPN_STRADDLE_THRESH pixels\n# Set to -1 or a large value, e.g. 100000, to disable pruning anchors\n_C.MODEL.RPN.STRADDLE_THRESH = 0\n# Minimum overlap required between an anchor and ground-truth box for the\n# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD\n# ==> positive RPN example)\n_C.MODEL.RPN.FG_IOU_THRESHOLD = 0.7\n# Maximum overlap allowed between an anchor and ground-truth box for the\n# (anchor, gt box) pair to be a negative example (IoU < BG_IOU_THRESHOLD\n# ==> negative RPN example)\n_C.MODEL.RPN.BG_IOU_THRESHOLD = 0.3\n# Total number of RPN examples per image\n_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256\n# Target fraction of foreground (positive) examples per RPN minibatch\n_C.MODEL.RPN.POSITIVE_FRACTION = 0.5\n# Number of top scoring RPN proposals to keep before applying NMS\n# When FPN is used, this is *per FPN level* (not total)\n_C.MODEL.RPN.PRE_NMS_TOP_N_TRAIN = 12000\n_C.MODEL.RPN.PRE_NMS_TOP_N_TEST = 6000\n# Number of top scoring RPN proposals to keep after applying NMS\n_C.MODEL.RPN.POST_NMS_TOP_N_TRAIN = 2000\n_C.MODEL.RPN.POST_NMS_TOP_N_TEST = 1000\n# NMS threshold used on RPN proposals\n_C.MODEL.RPN.NMS_THRESH = 0.7\n# Proposal height and width both need to be greater than RPN_MIN_SIZE\n# (at the scale used during training or inference)\n_C.MODEL.RPN.MIN_SIZE = 0\n# Number of top scoring RPN proposals to keep after combining proposals from\n# all FPN levels\n_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN = 2000\n_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST = 2000\n# Custom rpn head, empty to use default conv or separable conv\n_C.MODEL.RPN.RPN_HEAD = \"SingleConvRPNHead\"\n\n\n# 
---------------------------------------------------------------------------- #\n# ROI HEADS options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.ROI_HEADS = CN()\n_C.MODEL.ROI_HEADS.USE_FPN = False\n# Overlap threshold for an RoI to be considered foreground (if >= FG_IOU_THRESHOLD)\n_C.MODEL.ROI_HEADS.FG_IOU_THRESHOLD = 0.5\n# Overlap threshold for an RoI to be considered background\n# (class = 0 if overlap in [0, BG_IOU_THRESHOLD))\n_C.MODEL.ROI_HEADS.BG_IOU_THRESHOLD = 0.5\n# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets\n# These are empirically chosen to approximately lead to unit variance targets\n_C.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS = (10., 10., 5., 5.)\n# RoI minibatch size *per image* (number of regions of interest [ROIs])\n# Total number of RoIs per training minibatch =\n#   TRAIN.BATCH_SIZE_PER_IM * TRAIN.IMS_PER_BATCH\n# E.g., a common configuration is: 512 * 2 * 8 = 8192\n_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512\n# Target fraction of RoI minibatch that is labeled foreground (i.e. 
class > 0)\n_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25\n\n# Only used in test mode\n\n# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to\n# balance obtaining high recall with not having too many low precision\n# detections that will slow down inference post processing steps (like NMS)\n_C.MODEL.ROI_HEADS.SCORE_THRESH = 0.05\n# Overlap threshold used for non-maximum suppression (suppress boxes with\n# IoU >= this threshold)\n_C.MODEL.ROI_HEADS.NMS = 0.5\n# Maximum number of detections to return per image (100 is based on the limit\n# established for the COCO dataset)\n_C.MODEL.ROI_HEADS.DETECTIONS_PER_IMG = 100\n\n\n_C.MODEL.ROI_BOX_HEAD = CN()\n_C.MODEL.ROI_BOX_HEAD.FEATURE_EXTRACTOR = \"ResNet50Conv5ROIFeatureExtractor\"\n_C.MODEL.ROI_BOX_HEAD.PREDICTOR = \"FastRCNNPredictor\"\n_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14\n_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0\n_C.MODEL.ROI_BOX_HEAD.POOLER_SCALES = (1.0 / 16,)\n_C.MODEL.ROI_BOX_HEAD.NUM_CLASSES = 81\n# Hidden layer dimension when using an MLP for the RoI box head\n_C.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM = 1024\n# GN\n_C.MODEL.ROI_BOX_HEAD.USE_GN = False\n# Dilation\n_C.MODEL.ROI_BOX_HEAD.DILATION = 1\n_C.MODEL.ROI_BOX_HEAD.CONV_HEAD_DIM = 256\n_C.MODEL.ROI_BOX_HEAD.NUM_STACKED_CONVS = 4\n\n\n_C.MODEL.ROI_MASK_HEAD = CN()\n_C.MODEL.ROI_MASK_HEAD.FEATURE_EXTRACTOR = \"ResNet50Conv5ROIFeatureExtractor\"\n_C.MODEL.ROI_MASK_HEAD.PREDICTOR = \"MaskRCNNC4Predictor\"\n_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14\n_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0\n_C.MODEL.ROI_MASK_HEAD.POOLER_SCALES = (1.0 / 16,)\n_C.MODEL.ROI_MASK_HEAD.MLP_HEAD_DIM = 1024\n_C.MODEL.ROI_MASK_HEAD.CONV_LAYERS = (256, 256, 256, 256)\n_C.MODEL.ROI_MASK_HEAD.RESOLUTION = 14\n_C.MODEL.ROI_MASK_HEAD.SHARE_BOX_FEATURE_EXTRACTOR = True\n# Whether or not to resize and translate masks to the input image.\n_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS = False\n_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS_THRESHOLD = 0.5\n# 
Dilation\n_C.MODEL.ROI_MASK_HEAD.DILATION = 1\n# GN\n_C.MODEL.ROI_MASK_HEAD.USE_GN = False\n\n_C.MODEL.ROI_KEYPOINT_HEAD = CN()\n_C.MODEL.ROI_KEYPOINT_HEAD.FEATURE_EXTRACTOR = \"KeypointRCNNFeatureExtractor\"\n_C.MODEL.ROI_KEYPOINT_HEAD.PREDICTOR = \"KeypointRCNNPredictor\"\n_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14\n_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0\n_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SCALES = (1.0 / 16,)\n_C.MODEL.ROI_KEYPOINT_HEAD.MLP_HEAD_DIM = 1024\n_C.MODEL.ROI_KEYPOINT_HEAD.CONV_LAYERS = tuple(512 for _ in range(8))\n_C.MODEL.ROI_KEYPOINT_HEAD.RESOLUTION = 14\n_C.MODEL.ROI_KEYPOINT_HEAD.NUM_CLASSES = 17\n_C.MODEL.ROI_KEYPOINT_HEAD.SHARE_BOX_FEATURE_EXTRACTOR = True\n\n# ---------------------------------------------------------------------------- #\n# ResNe[X]t options (ResNets = {ResNet, ResNeXt})\n# Note that parts of a resnet may be used for both the backbone and the head\n# These options apply to both\n# ---------------------------------------------------------------------------- #\n_C.MODEL.RESNETS = CN()\n\n# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt\n_C.MODEL.RESNETS.NUM_GROUPS = 1\n\n# Baseline width of each group\n_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64\n\n# Place the stride 2 conv on the 1x1 filter\n# Use True only for the original MSRA ResNet; use False for C2 and Torch models\n_C.MODEL.RESNETS.STRIDE_IN_1X1 = True\n\n# Residual transformation function\n_C.MODEL.RESNETS.TRANS_FUNC = \"BottleneckWithFixedBatchNorm\"\n# ResNet's stem function (conv1 and pool1)\n_C.MODEL.RESNETS.STEM_FUNC = \"StemWithFixedBatchNorm\"\n\n# Apply dilation in stage \"res5\"\n_C.MODEL.RESNETS.RES5_DILATION = 1\n\n_C.MODEL.RESNETS.BACKBONE_OUT_CHANNELS = 256 * 4\n_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256\n_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64\n\n# ---------------------------------------------------------------------------- #\n# FCOS Options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.FCOS 
= CN()\n_C.MODEL.FCOS.NUM_CLASSES = 81  # the number of classes including background\n_C.MODEL.FCOS.FPN_STRIDES = [8, 16, 32, 64, 128]\n_C.MODEL.FCOS.PRIOR_PROB = 0.01\n_C.MODEL.FCOS.INFERENCE_TH = 0.05\n_C.MODEL.FCOS.NMS_TH = 0.6\n_C.MODEL.FCOS.PRE_NMS_TOP_N = 1000\n\n# Focal loss parameter: alpha\n_C.MODEL.FCOS.LOSS_ALPHA = 0.25\n# Focal loss parameter: gamma\n_C.MODEL.FCOS.LOSS_GAMMA = 2.0\n_C.MODEL.FCOS.CENTER_SAMPLE = False\n_C.MODEL.FCOS.POS_RADIUS = 1.5\n_C.MODEL.FCOS.LOC_LOSS_TYPE = 'iou'\n_C.MODEL.FCOS.DENSE_POINTS = 1\n\n# the number of convolutions used in the cls and bbox tower\n_C.MODEL.FCOS.NUM_CONVS = 4\n\n# ---------------------------------------------------------------------------- #\n# RetinaNet Options (Follow the Detectron version)\n# ---------------------------------------------------------------------------- #\n_C.MODEL.RETINANET = CN()\n\n# This is the number of foreground classes and background.\n_C.MODEL.RETINANET.NUM_CLASSES = 81\n\n# Anchor aspect ratios to use\n_C.MODEL.RETINANET.ANCHOR_SIZES = (32, 64, 128, 256, 512)\n_C.MODEL.RETINANET.ASPECT_RATIOS = (0.5, 1.0, 2.0)\n_C.MODEL.RETINANET.ANCHOR_STRIDES = (8, 16, 32, 64, 128)\n_C.MODEL.RETINANET.STRADDLE_THRESH = 0\n\n# Anchor scales per octave\n_C.MODEL.RETINANET.OCTAVE = 2.0\n_C.MODEL.RETINANET.SCALES_PER_OCTAVE = 3\n\n# Use C5 or P5 to generate P6\n_C.MODEL.RETINANET.USE_C5 = True\n\n# Convolutions to use in the cls and bbox tower\n# NOTE: this doesn't include the last conv for logits\n_C.MODEL.RETINANET.NUM_CONVS = 4\n\n# Weight for bbox_regression loss\n_C.MODEL.RETINANET.BBOX_REG_WEIGHT = 4.0\n\n# Smooth L1 loss beta for bbox regression\n_C.MODEL.RETINANET.BBOX_REG_BETA = 0.11\n\n# During inference, #locs to select based on cls score before NMS is performed\n# per FPN level\n_C.MODEL.RETINANET.PRE_NMS_TOP_N = 1000\n\n# IoU overlap ratio for labeling an anchor as positive\n# Anchors with >= iou overlap are labeled positive\n_C.MODEL.RETINANET.FG_IOU_THRESHOLD = 0.5\n\n# IoU overlap 
ratio for labeling an anchor as negative\n# Anchors with < iou overlap are labeled negative\n_C.MODEL.RETINANET.BG_IOU_THRESHOLD = 0.4\n\n# Focal loss parameter: alpha\n_C.MODEL.RETINANET.LOSS_ALPHA = 0.25\n\n# Focal loss parameter: gamma\n_C.MODEL.RETINANET.LOSS_GAMMA = 2.0\n\n# Prior prob for the positives at the beginning of training. This is used to set\n# the bias init for the logits layer\n_C.MODEL.RETINANET.PRIOR_PROB = 0.01\n\n# Inference cls score threshold, anchors with score > INFERENCE_TH are\n# considered for inference\n_C.MODEL.RETINANET.INFERENCE_TH = 0.05\n\n# NMS threshold used in RetinaNet\n_C.MODEL.RETINANET.NMS_TH = 0.4\n\n\n# ---------------------------------------------------------------------------- #\n# FBNet options\n# ---------------------------------------------------------------------------- #\n_C.MODEL.FBNET = CN()\n_C.MODEL.FBNET.ARCH = \"default\"\n# custom arch\n_C.MODEL.FBNET.ARCH_DEF = \"\"\n_C.MODEL.FBNET.BN_TYPE = \"bn\"\n_C.MODEL.FBNET.SCALE_FACTOR = 1.0\n# the output channels will be divisible by WIDTH_DIVISOR\n_C.MODEL.FBNET.WIDTH_DIVISOR = 1\n_C.MODEL.FBNET.DW_CONV_SKIP_BN = True\n_C.MODEL.FBNET.DW_CONV_SKIP_RELU = True\n\n# > 0 scale, == 0 skip, < 0 same dimension\n_C.MODEL.FBNET.DET_HEAD_LAST_SCALE = 1.0\n_C.MODEL.FBNET.DET_HEAD_BLOCKS = []\n# overwrite the stride for the head, 0 to use original value\n_C.MODEL.FBNET.DET_HEAD_STRIDE = 0\n\n# > 0 scale, == 0 skip, < 0 same dimension\n_C.MODEL.FBNET.KPTS_HEAD_LAST_SCALE = 0.0\n_C.MODEL.FBNET.KPTS_HEAD_BLOCKS = []\n# overwrite the stride for the head, 0 to use original value\n_C.MODEL.FBNET.KPTS_HEAD_STRIDE = 0\n\n# > 0 scale, == 0 skip, < 0 same dimension\n_C.MODEL.FBNET.MASK_HEAD_LAST_SCALE = 0.0\n_C.MODEL.FBNET.MASK_HEAD_BLOCKS = []\n# overwrite the stride for the head, 0 to use original value\n_C.MODEL.FBNET.MASK_HEAD_STRIDE = 0\n\n# 0 to use all blocks defined in arch_def\n_C.MODEL.FBNET.RPN_HEAD_BLOCKS = 0\n_C.MODEL.FBNET.RPN_BN_TYPE = \"\"\n\n\n# 
---------------------------------------------------------------------------- #\n# Solver\n# ---------------------------------------------------------------------------- #\n_C.SOLVER = CN()\n_C.SOLVER.MAX_ITER = 40000\n\n_C.SOLVER.BASE_LR = 0.001\n_C.SOLVER.BIAS_LR_FACTOR = 2\n\n_C.SOLVER.MOMENTUM = 0.9\n\n_C.SOLVER.WEIGHT_DECAY = 0.0005\n_C.SOLVER.WEIGHT_DECAY_BIAS = 0\n\n_C.SOLVER.GAMMA = 0.1\n_C.SOLVER.STEPS = (30000,)\n\n_C.SOLVER.WARMUP_FACTOR = 1.0 / 3\n_C.SOLVER.WARMUP_ITERS = 500\n_C.SOLVER.WARMUP_METHOD = \"linear\"\n\n_C.SOLVER.CHECKPOINT_PERIOD = 2500\n\n# Number of images per batch\n# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will\n# see 2 images per batch\n_C.SOLVER.IMS_PER_BATCH = 16\n\n# ---------------------------------------------------------------------------- #\n# Specific test options\n# ---------------------------------------------------------------------------- #\n_C.TEST = CN()\n_C.TEST.EXPECTED_RESULTS = []\n_C.TEST.EXPECTED_RESULTS_SIGMA_TOL = 4\n# Number of images per batch\n# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will\n# see 2 images per batch\n_C.TEST.IMS_PER_BATCH = 8\n# Number of detections per image\n_C.TEST.DETECTIONS_PER_IMG = 100\n\n\n# ---------------------------------------------------------------------------- #\n# Misc options\n# ---------------------------------------------------------------------------- #\n_C.OUTPUT_DIR = \".\"\n\n_C.PATHS_CATALOG = os.path.join(os.path.dirname(__file__), \"paths_catalog.py\")\n"
  },
  {
    "path": "maskrcnn_benchmark/config/paths_catalog.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"Centralized catalog of paths.\"\"\"\n\nimport os\n\n\nclass DatasetCatalog(object):\n    DATA_DIR = \"datasets\"\n    DATASETS = {\n        \"coco_2017_train\": {\n            \"img_dir\": \"coco/train2017\",\n            \"ann_file\": \"coco/annotations/instances_train2017.json\"\n        },\n        \"coco_2017_val\": {\n            \"img_dir\": \"coco/val2017\",\n            \"ann_file\": \"coco/annotations/instances_val2017.json\"\n        },\n        \"coco_2014_train\": {\n            \"img_dir\": \"coco/train2014\",\n            \"ann_file\": \"coco/annotations/instances_train2014.json\"\n        },\n        \"coco_2014_val\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/instances_val2014.json\"\n        },\n        \"coco_2014_minival\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/instances_minival2014.json\"\n        },\n        \"coco_2014_valminusminival\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/instances_valminusminival2014.json\"\n        },\n        \"keypoints_coco_2014_train\": {\n            \"img_dir\": \"coco/train2014\",\n            \"ann_file\": \"coco/annotations/person_keypoints_train2014.json\",\n        },\n        \"keypoints_coco_2014_val\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/person_keypoints_val2014.json\"\n        },\n        \"keypoints_coco_2014_minival\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/person_keypoints_minival2014.json\",\n        },\n        \"keypoints_coco_2014_valminusminival\": {\n            \"img_dir\": \"coco/val2014\",\n            \"ann_file\": \"coco/annotations/person_keypoints_valminusminival2014.json\",\n        },\n        \"voc_2007_train\": {\n            
\"data_dir\": \"voc/VOC2007\",\n            \"split\": \"train\"\n        },\n        \"voc_2007_train_cocostyle\": {\n            \"img_dir\": \"voc/VOC2007/JPEGImages\",\n            \"ann_file\": \"voc/VOC2007/Annotations/pascal_train2007.json\"\n        },\n        \"voc_2007_val\": {\n            \"data_dir\": \"voc/VOC2007\",\n            \"split\": \"val\"\n        },\n        \"voc_2007_val_cocostyle\": {\n            \"img_dir\": \"voc/VOC2007/JPEGImages\",\n            \"ann_file\": \"voc/VOC2007/Annotations/pascal_val2007.json\"\n        },\n        \"voc_2007_test\": {\n            \"data_dir\": \"voc/VOC2007\",\n            \"split\": \"test\"\n        },\n        \"voc_2007_test_cocostyle\": {\n            \"img_dir\": \"voc/VOC2007/JPEGImages\",\n            \"ann_file\": \"voc/VOC2007/Annotations/pascal_test2007.json\"\n        },\n        \"voc_2012_train\": {\n            \"data_dir\": \"voc/VOC2012\",\n            \"split\": \"train\"\n        },\n        \"voc_2012_train_cocostyle\": {\n            \"img_dir\": \"voc/VOC2012/JPEGImages\",\n            \"ann_file\": \"voc/VOC2012/Annotations/pascal_train2012.json\"\n        },\n        \"voc_2012_val\": {\n            \"data_dir\": \"voc/VOC2012\",\n            \"split\": \"val\"\n        },\n        \"voc_2012_val_cocostyle\": {\n            \"img_dir\": \"voc/VOC2012/JPEGImages\",\n            \"ann_file\": \"voc/VOC2012/Annotations/pascal_val2012.json\"\n        },\n        \"voc_2012_test\": {\n            \"data_dir\": \"voc/VOC2012\",\n            \"split\": \"test\"\n            # PASCAL VOC2012 did not make the test annotations available, so there is no json annotation\n        },\n        \"cityscapes_fine_instanceonly_seg_train_cocostyle\": {\n            \"img_dir\": \"cityscapes/images\",\n            \"ann_file\": \"cityscapes/annotations/instancesonly_filtered_gtFine_train.json\"\n        },\n        \"cityscapes_fine_instanceonly_seg_val_cocostyle\": {\n            \"img_dir\": 
\"cityscapes/images\",\n            \"ann_file\": \"cityscapes/annotations/instancesonly_filtered_gtFine_val.json\"\n        },\n        \"cityscapes_fine_instanceonly_seg_test_cocostyle\": {\n            \"img_dir\": \"cityscapes/images\",\n            \"ann_file\": \"cityscapes/annotations/instancesonly_filtered_gtFine_test.json\"\n        }\n    }\n\n    @staticmethod\n    def get(name):\n        if \"coco\" in name:\n            data_dir = DatasetCatalog.DATA_DIR\n            attrs = DatasetCatalog.DATASETS[name]\n            args = dict(\n                root=os.path.join(data_dir, attrs[\"img_dir\"]),\n                ann_file=os.path.join(data_dir, attrs[\"ann_file\"]),\n            )\n            return dict(\n                factory=\"COCODataset\",\n                args=args,\n            )\n        elif \"voc\" in name:\n            data_dir = DatasetCatalog.DATA_DIR\n            attrs = DatasetCatalog.DATASETS[name]\n            args = dict(\n                data_dir=os.path.join(data_dir, attrs[\"data_dir\"]),\n                split=attrs[\"split\"],\n            )\n            return dict(\n                factory=\"PascalVOCDataset\",\n                args=args,\n            )\n        raise RuntimeError(\"Dataset not available: {}\".format(name))\n\n\nclass ModelCatalog(object):\n    S3_C2_DETECTRON_URL = \"https://dl.fbaipublicfiles.com/detectron\"\n    C2_IMAGENET_MODELS = {\n        \"MSRA/R-50\": \"ImageNetPretrained/MSRA/R-50.pkl\",\n        \"MSRA/R-50-GN\": \"ImageNetPretrained/47261647/R-50-GN.pkl\",\n        \"MSRA/R-101\": \"ImageNetPretrained/MSRA/R-101.pkl\",\n        \"MSRA/R-101-GN\": \"ImageNetPretrained/47592356/R-101-GN.pkl\",\n        \"FAIR/20171220/X-101-32x8d\": \"ImageNetPretrained/20171220/X-101-32x8d.pkl\",\n        \"FAIR/20171220/X-101-64x4d\": \"ImageNetPretrained/20171220/X-101-64x4d.pkl\",\n    }\n\n    C2_DETECTRON_SUFFIX = 
\"output/train/{}coco_2014_train%3A{}coco_2014_valminusminival/generalized_rcnn/model_final.pkl\"\n    C2_DETECTRON_MODELS = {\n        \"35857197/e2e_faster_rcnn_R-50-C4_1x\": \"01_33_49.iAX0mXvW\",\n        \"35857345/e2e_faster_rcnn_R-50-FPN_1x\": \"01_36_30.cUF7QR7I\",\n        \"35857890/e2e_faster_rcnn_R-101-FPN_1x\": \"01_38_50.sNxI7sX7\",\n        \"36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x\": \"06_31_39.5MIHi1fZ\",\n        \"35858791/e2e_mask_rcnn_R-50-C4_1x\": \"01_45_57.ZgkA7hPB\",\n        \"35858933/e2e_mask_rcnn_R-50-FPN_1x\": \"01_48_14.DzEQe4wC\",\n        \"35861795/e2e_mask_rcnn_R-101-FPN_1x\": \"02_31_37.KqyEK4tT\",\n        \"36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x\": \"06_35_59.RZotkLKI\",\n        \"37129812/e2e_mask_rcnn_X-152-32x8d-FPN-IN5k_1.44x\": \"09_35_36.8pzTQKYK\",\n        # keypoints\n        \"37697547/e2e_keypoint_rcnn_R-50-FPN_1x\": \"08_42_54.kdzV35ao\"\n    }\n\n    @staticmethod\n    def get(name):\n        if name.startswith(\"Caffe2Detectron/COCO\"):\n            return ModelCatalog.get_c2_detectron_12_2017_baselines(name)\n        if name.startswith(\"ImageNetPretrained\"):\n            return ModelCatalog.get_c2_imagenet_pretrained(name)\n        raise RuntimeError(\"model not present in the catalog: {}\".format(name))\n\n    @staticmethod\n    def get_c2_imagenet_pretrained(name):\n        prefix = ModelCatalog.S3_C2_DETECTRON_URL\n        name = name[len(\"ImageNetPretrained/\"):]\n        name = ModelCatalog.C2_IMAGENET_MODELS[name]\n        url = \"/\".join([prefix, name])\n        return url\n\n    @staticmethod\n    def get_c2_detectron_12_2017_baselines(name):\n        # Detectron C2 models are stored following the structure\n        # prefix/<model_id>/12_2017_baselines/<model_name>.yaml.<signature>/suffix\n        # catalog identifiers have the form Caffe2Detectron/COCO/<model_id>/<model_name>\n        prefix = ModelCatalog.S3_C2_DETECTRON_URL\n        dataset_tag = \"keypoints_\" if \"keypoint\" in name 
else \"\"\n        suffix = ModelCatalog.C2_DETECTRON_SUFFIX.format(dataset_tag, dataset_tag)\n        # remove identification prefix\n        name = name[len(\"Caffe2Detectron/COCO/\"):]\n        # split into <model_id> and <model_name>\n        model_id, model_name = name.split(\"/\")\n        # adjust the name so it matches the URL layout of the Caffe2 models\n        model_name = \"{}.yaml\".format(model_name)\n        signature = ModelCatalog.C2_DETECTRON_MODELS[name]\n        unique_name = \".\".join([model_name, signature])\n        url = \"/\".join([prefix, model_id, \"12_2017_baselines\", unique_name, suffix])\n        return url\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/ROIAlign.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#pragma once\n\n#include \"cpu/vision.h\"\n\n#ifdef WITH_CUDA\n#include \"cuda/vision.h\"\n#endif\n\n// Interface for Python\nat::Tensor ROIAlign_forward(const at::Tensor& input,\n                            const at::Tensor& rois,\n                            const float spatial_scale,\n                            const int pooled_height,\n                            const int pooled_width,\n                            const int sampling_ratio) {\n  if (input.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return ROIAlign_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  return ROIAlign_forward_cpu(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);\n}\n\nat::Tensor ROIAlign_backward(const at::Tensor& grad,\n                             const at::Tensor& rois,\n                             const float spatial_scale,\n                             const int pooled_height,\n                             const int pooled_width,\n                             const int batch_size,\n                             const int channels,\n                             const int height,\n                             const int width,\n                             const int sampling_ratio) {\n  if (grad.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return ROIAlign_backward_cuda(grad, rois, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width, sampling_ratio);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"Not implemented on the CPU\");\n}\n\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/ROIPool.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#pragma once\n\n#include \"cpu/vision.h\"\n\n#ifdef WITH_CUDA\n#include \"cuda/vision.h\"\n#endif\n\n\nstd::tuple<at::Tensor, at::Tensor> ROIPool_forward(const at::Tensor& input,\n                                const at::Tensor& rois,\n                                const float spatial_scale,\n                                const int pooled_height,\n                                const int pooled_width) {\n  if (input.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return ROIPool_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"Not implemented on the CPU\");\n}\n\nat::Tensor ROIPool_backward(const at::Tensor& grad,\n                                 const at::Tensor& input,\n                                 const at::Tensor& rois,\n                                 const at::Tensor& argmax,\n                                 const float spatial_scale,\n                                 const int pooled_height,\n                                 const int pooled_width,\n                                 const int batch_size,\n                                 const int channels,\n                                 const int height,\n                                 const int width) {\n  if (grad.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return ROIPool_backward_cuda(grad, input, rois, argmax, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"Not implemented on the CPU\");\n}\n\n\n\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/SigmoidFocalLoss.h",
    "content": "#pragma once\n\n#include \"cpu/vision.h\"\n\n#ifdef WITH_CUDA\n#include \"cuda/vision.h\"\n#endif\n\n// Interface for Python\nat::Tensor SigmoidFocalLoss_forward(\n    const at::Tensor& logits,\n    const at::Tensor& targets,\n    const int num_classes,\n    const float gamma,\n    const float alpha) {\n  if (logits.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return SigmoidFocalLoss_forward_cuda(logits, targets, num_classes, gamma, alpha);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"Not implemented on the CPU\");\n}\n\nat::Tensor SigmoidFocalLoss_backward(\n    const at::Tensor& logits,\n    const at::Tensor& targets,\n    const at::Tensor& d_losses,\n    const int num_classes,\n    const float gamma,\n    const float alpha) {\n  if (logits.type().is_cuda()) {\n#ifdef WITH_CUDA\n    return SigmoidFocalLoss_backward_cuda(logits, targets, d_losses, num_classes, gamma, alpha);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n  AT_ERROR(\"Not implemented on the CPU\");\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cpu/ROIAlign_cpu.cpp",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include \"cpu/vision.h\"\n\n// implementation taken from Caffe2\ntemplate <typename T>\nstruct PreCalc {\n  int pos1;\n  int pos2;\n  int pos3;\n  int pos4;\n  T w1;\n  T w2;\n  T w3;\n  T w4;\n};\n\ntemplate <typename T>\nvoid pre_calc_for_bilinear_interpolate(\n    const int height,\n    const int width,\n    const int pooled_height,\n    const int pooled_width,\n    const int iy_upper,\n    const int ix_upper,\n    T roi_start_h,\n    T roi_start_w,\n    T bin_size_h,\n    T bin_size_w,\n    int roi_bin_grid_h,\n    int roi_bin_grid_w,\n    std::vector<PreCalc<T>>& pre_calc) {\n  int pre_calc_index = 0;\n  for (int ph = 0; ph < pooled_height; ph++) {\n    for (int pw = 0; pw < pooled_width; pw++) {\n      for (int iy = 0; iy < iy_upper; iy++) {\n        const T yy = roi_start_h + ph * bin_size_h +\n            static_cast<T>(iy + .5f) * bin_size_h /\n                static_cast<T>(roi_bin_grid_h); // e.g., 0.5, 1.5\n        for (int ix = 0; ix < ix_upper; ix++) {\n          const T xx = roi_start_w + pw * bin_size_w +\n              static_cast<T>(ix + .5f) * bin_size_w /\n                  static_cast<T>(roi_bin_grid_w);\n\n          T x = xx;\n          T y = yy;\n          // deal with: inverse elements are out of feature map boundary\n          if (y < -1.0 || y > height || x < -1.0 || x > width) {\n            // empty\n            PreCalc<T> pc;\n            pc.pos1 = 0;\n            pc.pos2 = 0;\n            pc.pos3 = 0;\n            pc.pos4 = 0;\n            pc.w1 = 0;\n            pc.w2 = 0;\n            pc.w3 = 0;\n            pc.w4 = 0;\n            pre_calc[pre_calc_index] = pc;\n            pre_calc_index += 1;\n            continue;\n          }\n\n          if (y <= 0) {\n            y = 0;\n          }\n          if (x <= 0) {\n            x = 0;\n          }\n\n          int y_low = (int)y;\n          int x_low = (int)x;\n          int y_high;\n          
int x_high;\n\n          if (y_low >= height - 1) {\n            y_high = y_low = height - 1;\n            y = (T)y_low;\n          } else {\n            y_high = y_low + 1;\n          }\n\n          if (x_low >= width - 1) {\n            x_high = x_low = width - 1;\n            x = (T)x_low;\n          } else {\n            x_high = x_low + 1;\n          }\n\n          T ly = y - y_low;\n          T lx = x - x_low;\n          T hy = 1. - ly, hx = 1. - lx;\n          T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;\n\n          // save weights and indices\n          PreCalc<T> pc;\n          pc.pos1 = y_low * width + x_low;\n          pc.pos2 = y_low * width + x_high;\n          pc.pos3 = y_high * width + x_low;\n          pc.pos4 = y_high * width + x_high;\n          pc.w1 = w1;\n          pc.w2 = w2;\n          pc.w3 = w3;\n          pc.w4 = w4;\n          pre_calc[pre_calc_index] = pc;\n\n          pre_calc_index += 1;\n        }\n      }\n    }\n  }\n}\n\ntemplate <typename T>\nvoid ROIAlignForward_cpu_kernel(\n    const int nthreads,\n    const T* bottom_data,\n    const T& spatial_scale,\n    const int channels,\n    const int height,\n    const int width,\n    const int pooled_height,\n    const int pooled_width,\n    const int sampling_ratio,\n    const T* bottom_rois,\n    //int roi_cols,\n    T* top_data) {\n  //AT_ASSERT(roi_cols == 4 || roi_cols == 5);\n  int roi_cols = 5;\n\n  int n_rois = nthreads / channels / pooled_width / pooled_height;\n  // (n, c, ph, pw) is an element in the pooled output\n  // can be parallelized using omp\n  // #pragma omp parallel for num_threads(32)\n  for (int n = 0; n < n_rois; n++) {\n    int index_n = n * channels * pooled_width * pooled_height;\n\n    // roi could have 4 or 5 columns\n    const T* offset_bottom_rois = bottom_rois + n * roi_cols;\n    int roi_batch_ind = 0;\n    if (roi_cols == 5) {\n      roi_batch_ind = offset_bottom_rois[0];\n      offset_bottom_rois++;\n    }\n\n    // Do not use rounding; 
this implementation detail is critical\n    T roi_start_w = offset_bottom_rois[0] * spatial_scale;\n    T roi_start_h = offset_bottom_rois[1] * spatial_scale;\n    T roi_end_w = offset_bottom_rois[2] * spatial_scale;\n    T roi_end_h = offset_bottom_rois[3] * spatial_scale;\n    // T roi_start_w = round(offset_bottom_rois[0] * spatial_scale);\n    // T roi_start_h = round(offset_bottom_rois[1] * spatial_scale);\n    // T roi_end_w = round(offset_bottom_rois[2] * spatial_scale);\n    // T roi_end_h = round(offset_bottom_rois[3] * spatial_scale);\n\n    // Force malformed ROIs to be 1x1\n    T roi_width = std::max(roi_end_w - roi_start_w, (T)1.);\n    T roi_height = std::max(roi_end_h - roi_start_h, (T)1.);\n    T bin_size_h = static_cast<T>(roi_height) / static_cast<T>(pooled_height);\n    T bin_size_w = static_cast<T>(roi_width) / static_cast<T>(pooled_width);\n\n    // We use roi_bin_grid to sample the grid and mimic integral\n    int roi_bin_grid_h = (sampling_ratio > 0)\n        ? sampling_ratio\n        : ceil(roi_height / pooled_height); // e.g., = 2\n    int roi_bin_grid_w =\n        (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width);\n\n    // We do average (integral) pooling inside a bin\n    const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. 
= 4\n\n    // we want to precalculate indices and weights shared by all channels,\n    // this is the key point of optimization\n    std::vector<PreCalc<T>> pre_calc(\n        roi_bin_grid_h * roi_bin_grid_w * pooled_width * pooled_height);\n    pre_calc_for_bilinear_interpolate(\n        height,\n        width,\n        pooled_height,\n        pooled_width,\n        roi_bin_grid_h,\n        roi_bin_grid_w,\n        roi_start_h,\n        roi_start_w,\n        bin_size_h,\n        bin_size_w,\n        roi_bin_grid_h,\n        roi_bin_grid_w,\n        pre_calc);\n\n    for (int c = 0; c < channels; c++) {\n      int index_n_c = index_n + c * pooled_width * pooled_height;\n      const T* offset_bottom_data =\n          bottom_data + (roi_batch_ind * channels + c) * height * width;\n      int pre_calc_index = 0;\n\n      for (int ph = 0; ph < pooled_height; ph++) {\n        for (int pw = 0; pw < pooled_width; pw++) {\n          int index = index_n_c + ph * pooled_width + pw;\n\n          T output_val = 0.;\n          for (int iy = 0; iy < roi_bin_grid_h; iy++) {\n            for (int ix = 0; ix < roi_bin_grid_w; ix++) {\n              PreCalc<T> pc = pre_calc[pre_calc_index];\n              output_val += pc.w1 * offset_bottom_data[pc.pos1] +\n                  pc.w2 * offset_bottom_data[pc.pos2] +\n                  pc.w3 * offset_bottom_data[pc.pos3] +\n                  pc.w4 * offset_bottom_data[pc.pos4];\n\n              pre_calc_index += 1;\n            }\n          }\n          output_val /= count;\n\n          top_data[index] = output_val;\n        } // for pw\n      } // for ph\n    } // for c\n  } // for n\n}\n\nat::Tensor ROIAlign_forward_cpu(const at::Tensor& input,\n                                const at::Tensor& rois,\n                                const float spatial_scale,\n                                const int pooled_height,\n                                const int pooled_width,\n                                const int sampling_ratio) {\n  
AT_ASSERTM(!input.type().is_cuda(), \"input must be a CPU tensor\");\n  AT_ASSERTM(!rois.type().is_cuda(), \"rois must be a CPU tensor\");\n\n  auto num_rois = rois.size(0);\n  auto channels = input.size(1);\n  auto height = input.size(2);\n  auto width = input.size(3);\n\n  auto output = at::empty({num_rois, channels, pooled_height, pooled_width}, input.options());\n  auto output_size = num_rois * pooled_height * pooled_width * channels;\n\n  if (output.numel() == 0) {\n    return output;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(input.type(), \"ROIAlign_forward\", [&] {\n    ROIAlignForward_cpu_kernel<scalar_t>(\n         output_size,\n         input.data<scalar_t>(),\n         spatial_scale,\n         channels,\n         height,\n         width,\n         pooled_height,\n         pooled_width,\n         sampling_ratio,\n         rois.data<scalar_t>(),\n         output.data<scalar_t>());\n  });\n  return output;\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cpu/nms_cpu.cpp",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include \"cpu/vision.h\"\n\n\ntemplate <typename scalar_t>\nat::Tensor nms_cpu_kernel(const at::Tensor& dets,\n                          const at::Tensor& scores,\n                          const float threshold) {\n  AT_ASSERTM(!dets.type().is_cuda(), \"dets must be a CPU tensor\");\n  AT_ASSERTM(!scores.type().is_cuda(), \"scores must be a CPU tensor\");\n  AT_ASSERTM(dets.type() == scores.type(), \"dets should have the same type as scores\");\n\n  if (dets.numel() == 0) {\n    return at::empty({0}, dets.options().dtype(at::kLong).device(at::kCPU));\n  }\n\n  auto x1_t = dets.select(1, 0).contiguous();\n  auto y1_t = dets.select(1, 1).contiguous();\n  auto x2_t = dets.select(1, 2).contiguous();\n  auto y2_t = dets.select(1, 3).contiguous();\n\n  at::Tensor areas_t = (x2_t - x1_t + 1) * (y2_t - y1_t + 1);\n\n  auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));\n\n  auto ndets = dets.size(0);\n  at::Tensor suppressed_t = at::zeros({ndets}, dets.options().dtype(at::kByte).device(at::kCPU));\n\n  auto suppressed = suppressed_t.data<uint8_t>();\n  auto order = order_t.data<int64_t>();\n  auto x1 = x1_t.data<scalar_t>();\n  auto y1 = y1_t.data<scalar_t>();\n  auto x2 = x2_t.data<scalar_t>();\n  auto y2 = y2_t.data<scalar_t>();\n  auto areas = areas_t.data<scalar_t>();\n\n  for (int64_t _i = 0; _i < ndets; _i++) {\n    auto i = order[_i];\n    if (suppressed[i] == 1)\n      continue;\n    auto ix1 = x1[i];\n    auto iy1 = y1[i];\n    auto ix2 = x2[i];\n    auto iy2 = y2[i];\n    auto iarea = areas[i];\n\n    for (int64_t _j = _i + 1; _j < ndets; _j++) {\n      auto j = order[_j];\n      if (suppressed[j] == 1)\n        continue;\n      auto xx1 = std::max(ix1, x1[j]);\n      auto yy1 = std::max(iy1, y1[j]);\n      auto xx2 = std::min(ix2, x2[j]);\n      auto yy2 = std::min(iy2, y2[j]);\n\n      auto w = std::max(static_cast<scalar_t>(0), xx2 - xx1 + 1);\n      auto 
h = std::max(static_cast<scalar_t>(0), yy2 - yy1 + 1);\n      auto inter = w * h;\n      auto ovr = inter / (iarea + areas[j] - inter);\n      if (ovr >= threshold)\n        suppressed[j] = 1;\n    }\n  }\n  return at::nonzero(suppressed_t == 0).squeeze(1);\n}\n\nat::Tensor nms_cpu(const at::Tensor& dets,\n                   const at::Tensor& scores,\n                   const float threshold) {\n  at::Tensor result;\n  AT_DISPATCH_FLOATING_TYPES(dets.type(), \"nms\", [&] {\n    result = nms_cpu_kernel<scalar_t>(dets, scores, threshold);\n  });\n  return result;\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cpu/vision.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#pragma once\n#include <torch/extension.h>\n\n\nat::Tensor ROIAlign_forward_cpu(const at::Tensor& input,\n                                const at::Tensor& rois,\n                                const float spatial_scale,\n                                const int pooled_height,\n                                const int pooled_width,\n                                const int sampling_ratio);\n\n\nat::Tensor nms_cpu(const at::Tensor& dets,\n                   const at::Tensor& scores,\n                   const float threshold);\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.cu",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THC.h>\n#include <THC/THCAtomics.cuh>\n#include <THC/THCDeviceUtils.cuh>\n\n// TODO make it in a common file\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n       i += blockDim.x * gridDim.x)\n\n\ntemplate <typename T>\n__device__ T bilinear_interpolate(const T* bottom_data,\n    const int height, const int width,\n    T y, T x,\n    const int index /* index for debug only*/) {\n\n  // deal with cases that inverse elements are out of feature map boundary\n  if (y < -1.0 || y > height || x < -1.0 || x > width) {\n    //empty\n    return 0;\n  }\n\n  if (y <= 0) y = 0;\n  if (x <= 0) x = 0;\n\n  int y_low = (int) y;\n  int x_low = (int) x;\n  int y_high;\n  int x_high;\n\n  if (y_low >= height - 1) {\n    y_high = y_low = height - 1;\n    y = (T) y_low;\n  } else {\n    y_high = y_low + 1;\n  }\n\n  if (x_low >= width - 1) {\n    x_high = x_low = width - 1;\n    x = (T) x_low;\n  } else {\n    x_high = x_low + 1;\n  }\n\n  T ly = y - y_low;\n  T lx = x - x_low;\n  T hy = 1. - ly, hx = 1. 
- lx;\n  // do bilinear interpolation\n  T v1 = bottom_data[y_low * width + x_low];\n  T v2 = bottom_data[y_low * width + x_high];\n  T v3 = bottom_data[y_high * width + x_low];\n  T v4 = bottom_data[y_high * width + x_high];\n  T w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;\n\n  T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n\n  return val;\n}\n\ntemplate <typename T>\n__global__ void RoIAlignForward(const int nthreads, const T* bottom_data,\n    const T spatial_scale, const int channels,\n    const int height, const int width,\n    const int pooled_height, const int pooled_width,\n    const int sampling_ratio,\n    const T* bottom_rois, T* top_data) {\n  CUDA_1D_KERNEL_LOOP(index, nthreads) {\n    // (n, c, ph, pw) is an element in the pooled output\n    int pw = index % pooled_width;\n    int ph = (index / pooled_width) % pooled_height;\n    int c = (index / pooled_width / pooled_height) % channels;\n    int n = index / pooled_width / pooled_height / channels;\n\n    const T* offset_bottom_rois = bottom_rois + n * 5;\n    int roi_batch_ind = offset_bottom_rois[0];\n\n    // Do not use rounding; this implementation detail is critical\n    T roi_start_w = offset_bottom_rois[1] * spatial_scale;\n    T roi_start_h = offset_bottom_rois[2] * spatial_scale;\n    T roi_end_w = offset_bottom_rois[3] * spatial_scale;\n    T roi_end_h = offset_bottom_rois[4] * spatial_scale;\n    // T roi_start_w = round(offset_bottom_rois[1] * spatial_scale);\n    // T roi_start_h = round(offset_bottom_rois[2] * spatial_scale);\n    // T roi_end_w = round(offset_bottom_rois[3] * spatial_scale);\n    // T roi_end_h = round(offset_bottom_rois[4] * spatial_scale);\n\n    // Force malformed ROIs to be 1x1\n    T roi_width = max(roi_end_w - roi_start_w, (T)1.);\n    T roi_height = max(roi_end_h - roi_start_h, (T)1.);\n    T bin_size_h = static_cast<T>(roi_height) / static_cast<T>(pooled_height);\n    T bin_size_w = static_cast<T>(roi_width) / static_cast<T>(pooled_width);\n\n   
 const T* offset_bottom_data = bottom_data + (roi_batch_ind * channels + c) * height * width;\n\n    // We use roi_bin_grid to sample the grid and mimic integral\n    int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2\n    int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width);\n\n    // We do average (integral) pooling inside a bin\n    const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4\n\n    T output_val = 0.;\n    for (int iy = 0; iy < roi_bin_grid_h; iy ++) // e.g., iy = 0, 1\n    {\n      const T y = roi_start_h + ph * bin_size_h + static_cast<T>(iy + .5f) * bin_size_h / static_cast<T>(roi_bin_grid_h); // e.g., 0.5, 1.5\n      for (int ix = 0; ix < roi_bin_grid_w; ix ++)\n      {\n        const T x = roi_start_w + pw * bin_size_w + static_cast<T>(ix + .5f) * bin_size_w / static_cast<T>(roi_bin_grid_w);\n\n        T val = bilinear_interpolate(offset_bottom_data, height, width, y, x, index);\n        output_val += val;\n      }\n    }\n    output_val /= count;\n\n    top_data[index] = output_val;\n  }\n}\n\n\ntemplate <typename T>\n__device__ void bilinear_interpolate_gradient(\n    const int height, const int width,\n    T y, T x,\n    T & w1, T & w2, T & w3, T & w4,\n    int & x_low, int & x_high, int & y_low, int & y_high,\n    const int index /* index for debug only*/) {\n\n  // deal with cases that inverse elements are out of feature map boundary\n  if (y < -1.0 || y > height || x < -1.0 || x > width) {\n    //empty\n    w1 = w2 = w3 = w4 = 0.;\n    x_low = x_high = y_low = y_high = -1;\n    return;\n  }\n\n  if (y <= 0) y = 0;\n  if (x <= 0) x = 0;\n\n  y_low = (int) y;\n  x_low = (int) x;\n\n  if (y_low >= height - 1) {\n    y_high = y_low = height - 1;\n    y = (T) y_low;\n  } else {\n    y_high = y_low + 1;\n  }\n\n  if (x_low >= width - 1) {\n    x_high = x_low = width - 1;\n    x = (T) x_low;\n  } else {\n    x_high = x_low + 1;\n  }\n\n  T ly = y 
- y_low;\n  T lx = x - x_low;\n  T hy = 1. - ly, hx = 1. - lx;\n\n  // reference in forward\n  // T v1 = bottom_data[y_low * width + x_low];\n  // T v2 = bottom_data[y_low * width + x_high];\n  // T v3 = bottom_data[y_high * width + x_low];\n  // T v4 = bottom_data[y_high * width + x_high];\n  // T val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);\n\n  w1 = hy * hx, w2 = hy * lx, w3 = ly * hx, w4 = ly * lx;\n\n  return;\n}\n\ntemplate <typename T>\n__global__ void RoIAlignBackwardFeature(const int nthreads, const T* top_diff,\n    const int num_rois, const T spatial_scale,\n    const int channels, const int height, const int width,\n    const int pooled_height, const int pooled_width,\n    const int sampling_ratio,\n    T* bottom_diff,\n    const T* bottom_rois) {\n  CUDA_1D_KERNEL_LOOP(index, nthreads) {\n    // (n, c, ph, pw) is an element in the pooled output\n    int pw = index % pooled_width;\n    int ph = (index / pooled_width) % pooled_height;\n    int c = (index / pooled_width / pooled_height) % channels;\n    int n = index / pooled_width / pooled_height / channels;\n\n    const T* offset_bottom_rois = bottom_rois + n * 5;\n    int roi_batch_ind = offset_bottom_rois[0];\n\n    // Do not use rounding; this implementation detail is critical\n    T roi_start_w = offset_bottom_rois[1] * spatial_scale;\n    T roi_start_h = offset_bottom_rois[2] * spatial_scale;\n    T roi_end_w = offset_bottom_rois[3] * spatial_scale;\n    T roi_end_h = offset_bottom_rois[4] * spatial_scale;\n    // T roi_start_w = round(offset_bottom_rois[1] * spatial_scale);\n    // T roi_start_h = round(offset_bottom_rois[2] * spatial_scale);\n    // T roi_end_w = round(offset_bottom_rois[3] * spatial_scale);\n    // T roi_end_h = round(offset_bottom_rois[4] * spatial_scale);\n\n    // Force malformed ROIs to be 1x1\n    T roi_width = max(roi_end_w - roi_start_w, (T)1.);\n    T roi_height = max(roi_end_h - roi_start_h, (T)1.);\n    T bin_size_h = static_cast<T>(roi_height) / 
static_cast<T>(pooled_height);\n    T bin_size_w = static_cast<T>(roi_width) / static_cast<T>(pooled_width);\n\n    T* offset_bottom_diff = bottom_diff + (roi_batch_ind * channels + c) * height * width;\n\n    int top_offset    = (n * channels + c) * pooled_height * pooled_width;\n    const T* offset_top_diff = top_diff + top_offset;\n    const T top_diff_this_bin = offset_top_diff[ph * pooled_width + pw];\n\n    // We use roi_bin_grid to sample the grid and mimic integral\n    int roi_bin_grid_h = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_height / pooled_height); // e.g., = 2\n    int roi_bin_grid_w = (sampling_ratio > 0) ? sampling_ratio : ceil(roi_width / pooled_width);\n\n    // We do average (integral) pooling inside a bin\n    const T count = roi_bin_grid_h * roi_bin_grid_w; // e.g. = 4\n\n    for (int iy = 0; iy < roi_bin_grid_h; iy ++) // e.g., iy = 0, 1\n    {\n      const T y = roi_start_h + ph * bin_size_h + static_cast<T>(iy + .5f) * bin_size_h / static_cast<T>(roi_bin_grid_h); // e.g., 0.5, 1.5\n      for (int ix = 0; ix < roi_bin_grid_w; ix ++)\n      {\n        const T x = roi_start_w + pw * bin_size_w + static_cast<T>(ix + .5f) * bin_size_w / static_cast<T>(roi_bin_grid_w);\n\n        T w1, w2, w3, w4;\n        int x_low, x_high, y_low, y_high;\n\n        bilinear_interpolate_gradient(height, width, y, x,\n            w1, w2, w3, w4,\n            x_low, x_high, y_low, y_high,\n            index);\n\n        T g1 = top_diff_this_bin * w1 / count;\n        T g2 = top_diff_this_bin * w2 / count;\n        T g3 = top_diff_this_bin * w3 / count;\n        T g4 = top_diff_this_bin * w4 / count;\n\n        if (x_low >= 0 && x_high >= 0 && y_low >= 0 && y_high >= 0)\n        {\n          atomicAdd(offset_bottom_diff + y_low * width + x_low, static_cast<T>(g1));\n          atomicAdd(offset_bottom_diff + y_low * width + x_high, static_cast<T>(g2));\n          atomicAdd(offset_bottom_diff + y_high * width + x_low, static_cast<T>(g3));\n          
atomicAdd(offset_bottom_diff + y_high * width + x_high, static_cast<T>(g4));\n        } // if\n      } // ix\n    } // iy\n  } // CUDA_1D_KERNEL_LOOP\n} // RoIAlignBackward\n\n\nat::Tensor ROIAlign_forward_cuda(const at::Tensor& input,\n                                 const at::Tensor& rois,\n                                 const float spatial_scale,\n                                 const int pooled_height,\n                                 const int pooled_width,\n                                 const int sampling_ratio) {\n  AT_ASSERTM(input.type().is_cuda(), \"input must be a CUDA tensor\");\n  AT_ASSERTM(rois.type().is_cuda(), \"rois must be a CUDA tensor\");\n\n  auto num_rois = rois.size(0);\n  auto channels = input.size(1);\n  auto height = input.size(2);\n  auto width = input.size(3);\n\n  auto output = at::empty({num_rois, channels, pooled_height, pooled_width}, input.options());\n  auto output_size = num_rois * pooled_height * pooled_width * channels;\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)output_size, 512L), 4096L));\n  dim3 block(512);\n\n  if (output.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return output;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(input.type(), \"ROIAlign_forward\", [&] {\n    RoIAlignForward<scalar_t><<<grid, block, 0, stream>>>(\n         output_size,\n         input.contiguous().data<scalar_t>(),\n         spatial_scale,\n         channels,\n         height,\n         width,\n         pooled_height,\n         pooled_width,\n         sampling_ratio,\n         rois.contiguous().data<scalar_t>(),\n         output.data<scalar_t>());\n  });\n  THCudaCheck(cudaGetLastError());\n  return output;\n}\n\n// TODO remove the dependency on input and use instead its sizes -> save memory\nat::Tensor ROIAlign_backward_cuda(const at::Tensor& grad,\n                                  const at::Tensor& rois,\n                                  const float spatial_scale,\n       
                           const int pooled_height,\n                                  const int pooled_width,\n                                  const int batch_size,\n                                  const int channels,\n                                  const int height,\n                                  const int width,\n                                  const int sampling_ratio) {\n  AT_ASSERTM(grad.type().is_cuda(), \"grad must be a CUDA tensor\");\n  AT_ASSERTM(rois.type().is_cuda(), \"rois must be a CUDA tensor\");\n\n  auto num_rois = rois.size(0);\n  auto grad_input = at::zeros({batch_size, channels, height, width}, grad.options());\n\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)grad.numel(), 512L), 4096L));\n  dim3 block(512);\n\n  // handle possibly empty gradients\n  if (grad.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return grad_input;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(grad.type(), \"ROIAlign_backward\", [&] {\n    RoIAlignBackwardFeature<scalar_t><<<grid, block, 0, stream>>>(\n         grad.numel(),\n         grad.contiguous().data<scalar_t>(),\n         num_rois,\n         spatial_scale,\n         channels,\n         height,\n         width,\n         pooled_height,\n         pooled_width,\n         sampling_ratio,\n         grad_input.data<scalar_t>(),\n         rois.contiguous().data<scalar_t>());\n  });\n  THCudaCheck(cudaGetLastError());\n  return grad_input;\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cuda/ROIPool_cuda.cu",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THC.h>\n#include <THC/THCAtomics.cuh>\n#include <THC/THCDeviceUtils.cuh>\n\n\n// TODO make it in a common file\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n       i += blockDim.x * gridDim.x)\n\n\ntemplate <typename T>\n__global__ void RoIPoolFForward(const int nthreads, const T* bottom_data,\n    const T spatial_scale, const int channels, const int height,\n    const int width, const int pooled_height, const int pooled_width,\n    const T* bottom_rois, T* top_data, int* argmax_data) {\n  CUDA_1D_KERNEL_LOOP(index, nthreads) {\n    // (n, c, ph, pw) is an element in the pooled output\n    int pw = index % pooled_width;\n    int ph = (index / pooled_width) % pooled_height;\n    int c = (index / pooled_width / pooled_height) % channels;\n    int n = index / pooled_width / pooled_height / channels;\n\n    const T* offset_bottom_rois = bottom_rois + n * 5;\n    int roi_batch_ind = offset_bottom_rois[0];\n    int roi_start_w = round(offset_bottom_rois[1] * spatial_scale);\n    int roi_start_h = round(offset_bottom_rois[2] * spatial_scale);\n    int roi_end_w = round(offset_bottom_rois[3] * spatial_scale);\n    int roi_end_h = round(offset_bottom_rois[4] * spatial_scale);\n\n    // Force malformed ROIs to be 1x1\n    int roi_width = max(roi_end_w - roi_start_w + 1, 1);\n    int roi_height = max(roi_end_h - roi_start_h + 1, 1);\n    T bin_size_h = static_cast<T>(roi_height)\n                       / static_cast<T>(pooled_height);\n    T bin_size_w = static_cast<T>(roi_width)\n                       / static_cast<T>(pooled_width);\n\n    int hstart = static_cast<int>(floor(static_cast<T>(ph)\n                                        * bin_size_h));\n    int wstart = static_cast<int>(floor(static_cast<T>(pw)\n                   
                     * bin_size_w));\n    int hend = static_cast<int>(ceil(static_cast<T>(ph + 1)\n                                     * bin_size_h));\n    int wend = static_cast<int>(ceil(static_cast<T>(pw + 1)\n                                     * bin_size_w));\n\n    // Add roi offsets and clip to input boundaries\n    hstart = min(max(hstart + roi_start_h, 0), height);\n    hend = min(max(hend + roi_start_h, 0), height);\n    wstart = min(max(wstart + roi_start_w, 0), width);\n    wend = min(max(wend + roi_start_w, 0), width);\n    bool is_empty = (hend <= hstart) || (wend <= wstart);\n\n    // Define an empty pooling region to be zero\n    T maxval = is_empty ? 0 : -FLT_MAX;\n    // If nothing is pooled, argmax = -1 causes nothing to be backprop'd\n    int maxidx = -1;\n    const T* offset_bottom_data =\n        bottom_data + (roi_batch_ind * channels + c) * height * width;\n    for (int h = hstart; h < hend; ++h) {\n      for (int w = wstart; w < wend; ++w) {\n        int bottom_index = h * width + w;\n        if (offset_bottom_data[bottom_index] > maxval) {\n          maxval = offset_bottom_data[bottom_index];\n          maxidx = bottom_index;\n        }\n      }\n    }\n    top_data[index] = maxval;\n    argmax_data[index] = maxidx;\n  }\n}\n\ntemplate <typename T>\n__global__ void RoIPoolFBackward(const int nthreads, const T* top_diff,\n    const int* argmax_data, const int num_rois, const T spatial_scale,\n    const int channels, const int height, const int width,\n    const int pooled_height, const int pooled_width, T* bottom_diff,\n    const T* bottom_rois) {\n  CUDA_1D_KERNEL_LOOP(index, nthreads) {\n    // (n, c, ph, pw) is an element in the pooled output\n    int pw = index % pooled_width;\n    int ph = (index / pooled_width) % pooled_height;\n    int c = (index / pooled_width / pooled_height) % channels;\n    int n = index / pooled_width / pooled_height / channels;\n\n    const T* offset_bottom_rois = bottom_rois + n * 5;\n    int roi_batch_ind = 
offset_bottom_rois[0];\n    int bottom_offset = (roi_batch_ind * channels + c) * height * width;\n    int top_offset    = (n * channels + c) * pooled_height * pooled_width;\n    const T* offset_top_diff = top_diff + top_offset;\n    T* offset_bottom_diff = bottom_diff + bottom_offset;\n    const int* offset_argmax_data = argmax_data + top_offset;\n\n    int argmax = offset_argmax_data[ph * pooled_width + pw];\n    if (argmax != -1) {\n      atomicAdd(\n          offset_bottom_diff + argmax,\n          static_cast<T>(offset_top_diff[ph * pooled_width + pw]));\n\n    }\n  }\n}\n\nstd::tuple<at::Tensor, at::Tensor> ROIPool_forward_cuda(const at::Tensor& input,\n                                const at::Tensor& rois,\n                                const float spatial_scale,\n                                const int pooled_height,\n                                const int pooled_width) {\n  AT_ASSERTM(input.type().is_cuda(), \"input must be a CUDA tensor\");\n  AT_ASSERTM(rois.type().is_cuda(), \"rois must be a CUDA tensor\");\n\n  auto num_rois = rois.size(0);\n  auto channels = input.size(1);\n  auto height = input.size(2);\n  auto width = input.size(3);\n\n  auto output = at::empty({num_rois, channels, pooled_height, pooled_width}, input.options());\n  auto output_size = num_rois * pooled_height * pooled_width * channels;\n  auto argmax = at::zeros({num_rois, channels, pooled_height, pooled_width}, input.options().dtype(at::kInt));\n\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)output_size, 512L), 4096L));\n  dim3 block(512);\n\n  if (output.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return std::make_tuple(output, argmax);\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(input.type(), \"ROIPool_forward\", [&] {\n    RoIPoolFForward<scalar_t><<<grid, block, 0, stream>>>(\n         output_size,\n         input.contiguous().data<scalar_t>(),\n         spatial_scale,\n         channels,\n         height,\n    
     width,\n         pooled_height,\n         pooled_width,\n         rois.contiguous().data<scalar_t>(),\n         output.data<scalar_t>(),\n         argmax.data<int>());\n  });\n  THCudaCheck(cudaGetLastError());\n  return std::make_tuple(output, argmax);\n}\n\n// TODO remove the dependency on input and use instead its sizes -> save memory\nat::Tensor ROIPool_backward_cuda(const at::Tensor& grad,\n                                 const at::Tensor& input,\n                                 const at::Tensor& rois,\n                                 const at::Tensor& argmax,\n                                 const float spatial_scale,\n                                 const int pooled_height,\n                                 const int pooled_width,\n                                 const int batch_size,\n                                 const int channels,\n                                 const int height,\n                                 const int width) {\n  AT_ASSERTM(grad.type().is_cuda(), \"grad must be a CUDA tensor\");\n  AT_ASSERTM(rois.type().is_cuda(), \"rois must be a CUDA tensor\");\n  // TODO add more checks\n\n  auto num_rois = rois.size(0);\n  auto grad_input = at::zeros({batch_size, channels, height, width}, grad.options());\n\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)grad.numel(), 512L), 4096L));\n  dim3 block(512);\n\n  // handle possibly empty gradients\n  if (grad.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return grad_input;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(grad.type(), \"ROIPool_backward\", [&] {\n    RoIPoolFBackward<scalar_t><<<grid, block, 0, stream>>>(\n         grad.numel(),\n         grad.contiguous().data<scalar_t>(),\n         argmax.data<int>(),\n         num_rois,\n         spatial_scale,\n         channels,\n         height,\n         width,\n         pooled_height,\n         pooled_width,\n         grad_input.data<scalar_t>(),\n         
rois.contiguous().data<scalar_t>());\n  });\n  THCudaCheck(cudaGetLastError());\n  return grad_input;\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cuda/SigmoidFocalLoss_cuda.cu",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n// This file is modified from  https://github.com/pytorch/pytorch/blob/master/modules/detectron/sigmoid_focal_loss_op.cu\n// Cheng-Yang Fu\n// cyfu@cs.unc.edu\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THC.h>\n#include <THC/THCAtomics.cuh>\n#include <THC/THCDeviceUtils.cuh>\n\n#include <cfloat>\n\n// TODO make it in a common file\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n       i += blockDim.x * gridDim.x)\n\n\ntemplate <typename T>\n__global__ void SigmoidFocalLossForward(const int nthreads, \n    const T* logits,\n    const int* targets,\n    const int num_classes,\n    const float gamma, \n    const float alpha,\n    const int num, \n    T* losses) {\n  CUDA_1D_KERNEL_LOOP(i, nthreads) {\n\n    int n = i / num_classes;\n    int d = i % num_classes; // current class[0~79]; \n    int t = targets[n]; // target class [1~80];\n\n    // Decide it is positive or negative case. \n    T c1 = (t == (d+1)); \n    T c2 = (t>=0 & t != (d+1));\n\n    T zn = (1.0 - alpha);\n    T zp = (alpha);\n\n    // p = 1. / 1. + expf(-x); p = sigmoid(x)\n    T  p = 1. / (1. + expf(-logits[i]));\n\n    // (1-p)**gamma * log(p) where\n    T term1 = powf((1. - p), gamma) * logf(max(p, FLT_MIN));\n\n    // p**gamma * log(1-p)\n    T term2 = powf(p, gamma) *\n            (-1. * logits[i] * (logits[i] >= 0) -   \n             logf(1. + expf(logits[i] - 2. 
* logits[i] * (logits[i] >= 0))));\n\n    losses[i] = 0.0;\n    losses[i] += -c1 * term1 * zp;\n    losses[i] += -c2 * term2 * zn;\n\n  } // CUDA_1D_KERNEL_LOOP\n} // SigmoidFocalLossForward\n\n\ntemplate <typename T>\n__global__ void SigmoidFocalLossBackward(const int nthreads,\n                const T* logits,\n                const int* targets,\n                const T* d_losses,\n                const int num_classes,\n                const float gamma,\n                const float alpha,\n                const int num,\n                T* d_logits) {\n  CUDA_1D_KERNEL_LOOP(i, nthreads) {\n\n    int n = i / num_classes;\n    int d = i % num_classes; // current class[0~79]; \n    int t = targets[n]; // target class [1~80], 0 is background;\n\n    // Decide it is positive or negative case. \n    T c1 = (t == (d+1));\n    T c2 = (t>=0 & t != (d+1));\n\n    T zn = (1.0 - alpha);\n    T zp = (alpha);\n    // p = 1. / 1. + expf(-x); p = sigmoid(x)\n    T  p = 1. / (1. + expf(-logits[i]));\n\n    // (1-p)**g * (1 - p - g*p*log(p)\n    T term1 = powf((1. - p), gamma) *\n                      (1. - p - (p * gamma * logf(max(p, FLT_MIN))));\n\n    // (p**g) * (g*(1-p)*log(1-p) - p)\n    T term2 = powf(p, gamma) *\n                  ((-1. * logits[i] * (logits[i] >= 0) -\n                      logf(1. + expf(logits[i] - 2. * logits[i] * (logits[i] >= 0)))) *\n                      (1. 
- p) * gamma - p);\n    d_logits[i] = 0.0;\n    d_logits[i] += -c1 * term1 * zp;\n    d_logits[i] += -c2 * term2 * zn;\n    d_logits[i] = d_logits[i] * d_losses[i];\n\n  } // CUDA_1D_KERNEL_LOOP\n} // SigmoidFocalLossBackward\n\n\nat::Tensor SigmoidFocalLoss_forward_cuda(\n\t\tconst at::Tensor& logits,\n                const at::Tensor& targets,\n\t\tconst int num_classes, \n\t\tconst float gamma, \n\t\tconst float alpha) {\n  AT_ASSERTM(logits.type().is_cuda(), \"logits must be a CUDA tensor\");\n  AT_ASSERTM(targets.type().is_cuda(), \"targets must be a CUDA tensor\");\n  AT_ASSERTM(logits.dim() == 2, \"logits should be NxClass\");\n\n  const int num_samples = logits.size(0);\n\t\n  auto losses = at::empty({num_samples, logits.size(1)}, logits.options());\n  auto losses_size = num_samples * logits.size(1);\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)losses_size, 512L), 4096L));\n  dim3 block(512);\n\n  if (losses.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return losses;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(logits.type(), \"SigmoidFocalLoss_forward\", [&] {\n    SigmoidFocalLossForward<scalar_t><<<grid, block, 0, stream>>>(\n         losses_size,\n         logits.contiguous().data<scalar_t>(),\n\t targets.contiguous().data<int>(),\n         num_classes,\n\t gamma,\n\t alpha,\n\t num_samples,\n         losses.data<scalar_t>());\n  });\n  THCudaCheck(cudaGetLastError());\n  return losses;   \n}\t\n\n\nat::Tensor SigmoidFocalLoss_backward_cuda(\n\t\tconst at::Tensor& logits,\n                const at::Tensor& targets,\n\t\tconst at::Tensor& d_losses,\n\t\tconst int num_classes, \n\t\tconst float gamma, \n\t\tconst float alpha) {\n  AT_ASSERTM(logits.type().is_cuda(), \"logits must be a CUDA tensor\");\n  AT_ASSERTM(targets.type().is_cuda(), \"targets must be a CUDA tensor\");\n  AT_ASSERTM(d_losses.type().is_cuda(), \"d_losses must be a CUDA tensor\");\n\n  AT_ASSERTM(logits.dim() == 2, \"logits 
should be NxClass\");\n\n  const int num_samples = logits.size(0);\n  AT_ASSERTM(logits.size(1) == num_classes, \"logits.size(1) should be num_classes\");\n\t\n  auto d_logits = at::zeros({num_samples, num_classes}, logits.options());\n  auto d_logits_size = num_samples * logits.size(1);\n  cudaStream_t stream = at::cuda::getCurrentCUDAStream();\n\n  dim3 grid(std::min(THCCeilDiv((long)d_logits_size, 512L), 4096L));\n  dim3 block(512);\n\n  if (d_logits.numel() == 0) {\n    THCudaCheck(cudaGetLastError());\n    return d_logits;\n  }\n\n  AT_DISPATCH_FLOATING_TYPES(logits.type(), \"SigmoidFocalLoss_backward\", [&] {\n    SigmoidFocalLossBackward<scalar_t><<<grid, block, 0, stream>>>(\n         d_logits_size,\n         logits.contiguous().data<scalar_t>(),\n\t targets.contiguous().data<int>(),\n\t d_losses.contiguous().data<scalar_t>(),\n         num_classes,\n\t gamma,\n\t alpha,\n\t num_samples,\n         d_logits.data<scalar_t>());\n  });\n\n  THCudaCheck(cudaGetLastError());\n  return d_logits;   \n}\t\n\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cuda/nms.cu",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include <ATen/ATen.h>\n#include <ATen/cuda/CUDAContext.h>\n\n#include <THC/THC.h>\n#include <THC/THCDeviceUtils.cuh>\n\n#include <vector>\n#include <iostream>\n\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n__device__ inline float devIoU(float const * const a, float const * const b) {\n  float left = max(a[0], b[0]), right = min(a[2], b[2]);\n  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);\n  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);\n  float interS = width * height;\n  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);\n  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);\n  return interS / (Sa + Sb - interS);\n}\n\n__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,\n                           const float *dev_boxes, unsigned long long *dev_mask) {\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  __shared__ float block_boxes[threadsPerBlock * 5];\n  if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 5 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];\n    block_boxes[threadIdx.x * 5 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];\n    block_boxes[threadIdx.x * 5 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];\n    block_boxes[threadIdx.x * 5 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];\n    block_boxes[threadIdx.x * 5 + 4] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = 
threadsPerBlock * row_start + threadIdx.x;\n    const float *cur_box = dev_boxes + cur_box_idx * 5;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = THCCeilDiv(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\n// boxes is a N x 5 tensor\nat::Tensor nms_cuda(const at::Tensor boxes, float nms_overlap_thresh) {\n  using scalar_t = float;\n  AT_ASSERTM(boxes.type().is_cuda(), \"boxes must be a CUDA tensor\");\n  auto scores = boxes.select(1, 4);\n  auto order_t = std::get<1>(scores.sort(0, /* descending=*/true));\n  auto boxes_sorted = boxes.index_select(0, order_t);\n\n  int boxes_num = boxes.size(0);\n\n  const int col_blocks = THCCeilDiv(boxes_num, threadsPerBlock);\n\n  scalar_t* boxes_dev = boxes_sorted.data<scalar_t>();\n\n  THCState *state = at::globalContext().lazyInitCUDA(); // TODO replace with getTHCState\n\n  unsigned long long* mask_dev = NULL;\n  //THCudaCheck(THCudaMalloc(state, (void**) &mask_dev,\n  //                      boxes_num * col_blocks * sizeof(unsigned long long)));\n\n  mask_dev = (unsigned long long*) THCudaMalloc(state, boxes_num * col_blocks * sizeof(unsigned long long));\n\n  dim3 blocks(THCCeilDiv(boxes_num, threadsPerBlock),\n              THCCeilDiv(boxes_num, threadsPerBlock));\n  dim3 threads(threadsPerBlock);\n  nms_kernel<<<blocks, threads>>>(boxes_num,\n                                  nms_overlap_thresh,\n                                  boxes_dev,\n                                  mask_dev);\n\n  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n  THCudaCheck(cudaMemcpy(&mask_host[0],\n                        mask_dev,\n                        sizeof(unsigned long long) * boxes_num * 
col_blocks,\n                        cudaMemcpyDeviceToHost));\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  at::Tensor keep = at::empty({boxes_num}, boxes.options().dtype(at::kLong).device(at::kCPU));\n  int64_t* keep_out = keep.data<int64_t>();\n\n  int num_to_keep = 0;\n  for (int i = 0; i < boxes_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      keep_out[num_to_keep++] = i;\n      unsigned long long *p = &mask_host[0] + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n\n  THCudaFree(state, mask_dev);\n  // TODO improve this part\n  return std::get<0>(order_t.index({\n                       keep.narrow(/*dim=*/0, /*start=*/0, /*length=*/num_to_keep).to(\n                         order_t.device(), keep.scalar_type())\n                     }).sort(0, false));\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/cuda/vision.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#pragma once\n#include <torch/extension.h>\n\n\nat::Tensor SigmoidFocalLoss_forward_cuda(\n\t\tconst at::Tensor& logits,\n                const at::Tensor& targets,\n\t\tconst int num_classes, \n\t\tconst float gamma, \n\t\tconst float alpha); \n\nat::Tensor SigmoidFocalLoss_backward_cuda(\n\t\t\t     const at::Tensor& logits,\n                             const at::Tensor& targets,\n\t\t\t     const at::Tensor& d_losses,\n\t\t\t     const int num_classes,\n\t\t\t     const float gamma,\n\t\t\t     const float alpha);\n\nat::Tensor ROIAlign_forward_cuda(const at::Tensor& input,\n                                 const at::Tensor& rois,\n                                 const float spatial_scale,\n                                 const int pooled_height,\n                                 const int pooled_width,\n                                 const int sampling_ratio);\n\nat::Tensor ROIAlign_backward_cuda(const at::Tensor& grad,\n                                  const at::Tensor& rois,\n                                  const float spatial_scale,\n                                  const int pooled_height,\n                                  const int pooled_width,\n                                  const int batch_size,\n                                  const int channels,\n                                  const int height,\n                                  const int width,\n                                  const int sampling_ratio);\n\n\nstd::tuple<at::Tensor, at::Tensor> ROIPool_forward_cuda(const at::Tensor& input,\n                                const at::Tensor& rois,\n                                const float spatial_scale,\n                                const int pooled_height,\n                                const int pooled_width);\n\nat::Tensor ROIPool_backward_cuda(const at::Tensor& grad,\n                                 const at::Tensor& input,\n         
                        const at::Tensor& rois,\n                                 const at::Tensor& argmax,\n                                 const float spatial_scale,\n                                 const int pooled_height,\n                                 const int pooled_width,\n                                 const int batch_size,\n                                 const int channels,\n                                 const int height,\n                                 const int width);\n\nat::Tensor nms_cuda(const at::Tensor boxes, float nms_overlap_thresh);\n\n\nat::Tensor compute_flow_cuda(const at::Tensor& boxes,\n                             const int height,\n                             const int width);\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/nms.h",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#pragma once\n#include \"cpu/vision.h\"\n\n#ifdef WITH_CUDA\n#include \"cuda/vision.h\"\n#endif\n\n\nat::Tensor nms(const at::Tensor& dets,\n               const at::Tensor& scores,\n               const float threshold) {\n\n  if (dets.type().is_cuda()) {\n#ifdef WITH_CUDA\n    // TODO raise error if not compiled with CUDA\n    if (dets.numel() == 0)\n      return at::empty({0}, dets.options().dtype(at::kLong).device(at::kCPU));\n    auto b = at::cat({dets, scores.unsqueeze(1)}, 1);\n    return nms_cuda(b, threshold);\n#else\n    AT_ERROR(\"Not compiled with GPU support\");\n#endif\n  }\n\n  at::Tensor result = nms_cpu(dets, scores, threshold);\n  return result;\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/csrc/vision.cpp",
    "content": "// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#include \"nms.h\"\n#include \"ROIAlign.h\"\n#include \"ROIPool.h\"\n#include \"SigmoidFocalLoss.h\"\n\nPYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {\n  m.def(\"nms\", &nms, \"non-maximum suppression\");\n  m.def(\"roi_align_forward\", &ROIAlign_forward, \"ROIAlign_forward\");\n  m.def(\"roi_align_backward\", &ROIAlign_backward, \"ROIAlign_backward\");\n  m.def(\"roi_pool_forward\", &ROIPool_forward, \"ROIPool_forward\");\n  m.def(\"roi_pool_backward\", &ROIPool_backward, \"ROIPool_backward\");\n  m.def(\"sigmoid_focalloss_forward\", &SigmoidFocalLoss_forward, \"SigmoidFocalLoss_forward\");\n  m.def(\"sigmoid_focalloss_backward\", &SigmoidFocalLoss_backward, \"SigmoidFocalLoss_backward\");\n}\n"
  },
  {
    "path": "maskrcnn_benchmark/data/README.md",
    "content": "# Setting Up Datasets\nThis file describes how to perform training on other datasets.\n\nOnly Pascal VOC dataset can be loaded from its original format and be outputted to Pascal style results currently.\n\nWe expect the annotations from other datasets be converted to COCO json format, and\nthe output will be in COCO-style. (i.e. AP, AP50, AP75, APs, APm, APl for bbox and segm)\n\n## Creating Symlinks for PASCAL VOC\n\nWe assume that your symlinked `datasets/voc/VOC<year>` directory has the following structure:\n\n```\nVOC<year>\n|_ JPEGImages\n|  |_ <im-1-name>.jpg\n|  |_ ...\n|  |_ <im-N-name>.jpg\n|_ Annotations\n|  |_ pascal_train<year>.json (optional)\n|  |_ pascal_val<year>.json (optional)\n|  |_ pascal_test<year>.json (optional)\n|  |_ <im-1-name>.xml\n|  |_ ...\n|  |_ <im-N-name>.xml\n|_ VOCdevkit<year>\n```\n\nCreate symlinks for `voc/VOC<year>`:\n\n```\ncd ~/github/maskrcnn-benchmark\nmkdir -p datasets/voc/VOC<year>\nln -s /path/to/VOC<year> /datasets/voc/VOC<year>\n```\nExample configuration files for PASCAL VOC could be found [here](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/configs/pascal_voc/).\n\n### PASCAL VOC Annotations in COCO Format\nTo output COCO-style evaluation result, PASCAL VOC annotations in COCO json format is required and could be downloaded from [here](https://storage.googleapis.com/coco-dataset/external/PASCAL_VOC.zip)\nvia http://cocodataset.org/#external.\n\n## Creating Symlinks for Cityscapes:\n\nWe assume that your symlinked `datasets/cityscapes` directory has the following structure:\n\n```\ncityscapes\n|_ images\n|  |_ <im-1-name>.jpg\n|  |_ ...\n|  |_ <im-N-name>.jpg\n|_ annotations\n|  |_ instanceonly_gtFile_train.json\n|  |_ ...\n|_ raw\n   |_ gtFine\n   |_ ...\n   |_ README.md\n```\n\nCreate symlinks for `cityscapes`:\n\n```\ncd ~/github/maskrcnn-benchmark\nmkdir -p datasets/cityscapes\nln -s /path/to/cityscapes datasets/data/cityscapes\n```\n\n### Steps to convert Cityscapes 
Annotations to COCO Format\n1. Download gtFine_trainvaltest.zip from https://www.cityscapes-dataset.com/downloads/ (login required)\n2. Extract it to /path/to/gtFine_trainvaltest\n```\ncityscapes\n|_ gtFine_trainvaltest.zip\n|_ gtFine_trainvaltest\n   |_ gtFine\n```\n3. Run the below commands to convert the annotations\n\n```\ncd ~/github\ngit clone https://github.com/mcordts/cityscapesScripts.git\ncd cityscapesScripts\ncp ~/github/maskrcnn-benchmark/tools/cityscapes/instances2dict_with_polygons.py cityscapesscripts/evaluation\npython setup.py install\ncd ~/github/maskrcnn-benchmark\npython tools/cityscapes/convert_cityscapes_to_coco.py --datadir /path/to/cityscapes --outdir /path/to/cityscapes/annotations\n```\n\nExample configuration files for Cityscapes could be found [here](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/configs/cityscapes/).\n"
  },
  {
    "path": "maskrcnn_benchmark/data/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .build import make_data_loader\n"
  },
  {
    "path": "maskrcnn_benchmark/data/build.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport bisect\nimport copy\nimport logging\n\nimport torch.utils.data\nfrom maskrcnn_benchmark.utils.comm import get_world_size\nfrom maskrcnn_benchmark.utils.imports import import_file\n\nfrom . import datasets as D\nfrom . import samplers\n\nfrom .collate_batch import BatchCollator\nfrom .transforms import build_transforms\n\n\ndef build_dataset(dataset_list, transforms, dataset_catalog, is_train=True):\n    \"\"\"\n    Arguments:\n        dataset_list (list[str]): Contains the names of the datasets, i.e.,\n            coco_2014_trian, coco_2014_val, etc\n        transforms (callable): transforms to apply to each (image, target) sample\n        dataset_catalog (DatasetCatalog): contains the information on how to\n            construct a dataset.\n        is_train (bool): whether to setup the dataset for training or testing\n    \"\"\"\n    if not isinstance(dataset_list, (list, tuple)):\n        raise RuntimeError(\n            \"dataset_list should be a list of strings, got {}\".format(dataset_list)\n        )\n    datasets = []\n    for dataset_name in dataset_list:\n        data = dataset_catalog.get(dataset_name)\n        factory = getattr(D, data[\"factory\"])\n        args = data[\"args\"]\n        # for COCODataset, we want to remove images without annotations\n        # during training\n        if data[\"factory\"] == \"COCODataset\":\n            args[\"remove_images_without_annotations\"] = is_train\n        if data[\"factory\"] == \"PascalVOCDataset\":\n            args[\"use_difficult\"] = not is_train\n        args[\"transforms\"] = transforms\n        # make dataset from factory\n        dataset = factory(**args)\n        datasets.append(dataset)\n\n    # for testing, return a list of datasets\n    if not is_train:\n        return datasets\n\n    # for training, concatenate all datasets into a single one\n    dataset = datasets[0]\n    if len(datasets) > 1:\n    
    dataset = D.ConcatDataset(datasets)\n\n    return [dataset]\n\n\ndef make_data_sampler(dataset, shuffle, distributed):\n    if distributed:\n        return samplers.DistributedSampler(dataset, shuffle=shuffle)\n    if shuffle:\n        sampler = torch.utils.data.sampler.RandomSampler(dataset)\n    else:\n        sampler = torch.utils.data.sampler.SequentialSampler(dataset)\n    return sampler\n\n\ndef _quantize(x, bins):\n    bins = copy.copy(bins)\n    bins = sorted(bins)\n    quantized = list(map(lambda y: bisect.bisect_right(bins, y), x))\n    return quantized\n\n\ndef _compute_aspect_ratios(dataset):\n    aspect_ratios = []\n    for i in range(len(dataset)):\n        img_info = dataset.get_img_info(i)\n        aspect_ratio = float(img_info[\"height\"]) / float(img_info[\"width\"])\n        aspect_ratios.append(aspect_ratio)\n    return aspect_ratios\n\n\ndef make_batch_data_sampler(\n    dataset, sampler, aspect_grouping, images_per_batch, num_iters=None, start_iter=0\n):\n    if aspect_grouping:\n        if not isinstance(aspect_grouping, (list, tuple)):\n            aspect_grouping = [aspect_grouping]\n        aspect_ratios = _compute_aspect_ratios(dataset)\n        group_ids = _quantize(aspect_ratios, aspect_grouping)\n        batch_sampler = samplers.GroupedBatchSampler(\n            sampler, group_ids, images_per_batch, drop_uneven=False\n        )\n    else:\n        batch_sampler = torch.utils.data.sampler.BatchSampler(\n            sampler, images_per_batch, drop_last=False\n        )\n    if num_iters is not None:\n        batch_sampler = samplers.IterationBasedBatchSampler(\n            batch_sampler, num_iters, start_iter\n        )\n    return batch_sampler\n\n\ndef make_data_loader(cfg, is_train=True, is_distributed=False, start_iter=0):\n    num_gpus = get_world_size()\n    if is_train:\n        images_per_batch = cfg.SOLVER.IMS_PER_BATCH\n        assert (\n            images_per_batch % num_gpus == 0\n        ), \"SOLVER.IMS_PER_BATCH ({}) 
must be divisible by the number of GPUs ({}) used.\".format(images_per_batch, num_gpus)\n        images_per_gpu = images_per_batch // num_gpus\n        shuffle = True\n        num_iters = cfg.SOLVER.MAX_ITER\n    else:\n        images_per_batch = cfg.TEST.IMS_PER_BATCH\n        assert (\n            images_per_batch % num_gpus == 0\n        ), \"TEST.IMS_PER_BATCH ({}) must be divisible by the number of GPUs ({}) used.\".format(images_per_batch, num_gpus)\n        images_per_gpu = images_per_batch // num_gpus\n        shuffle = is_distributed\n        num_iters = None\n        start_iter = 0\n\n    if images_per_gpu > 1:\n        logger = logging.getLogger(__name__)\n        logger.warning(\n            \"When using more than one image per GPU you may encounter \"\n            \"an out-of-memory (OOM) error if your GPU does not have \"\n            \"sufficient memory. If this happens, you can reduce \"\n            \"SOLVER.IMS_PER_BATCH (for training) or \"\n            \"TEST.IMS_PER_BATCH (for inference). For training, you must \"\n            \"also adjust the learning rate and schedule length according \"\n            \"to the linear scaling rule. See for example: \"\n            \"https://github.com/facebookresearch/Detectron/blob/master/configs/getting_started/tutorial_1gpu_e2e_faster_rcnn_R-50-FPN.yaml#L14\"\n        )\n\n    # group images which have similar aspect ratio. In this case, we only\n    # group in two cases: those with width / height > 1, and the other way around,\n    # but the code supports a more general grouping strategy\n    aspect_grouping = [1] if cfg.DATALOADER.ASPECT_RATIO_GROUPING else []\n\n    paths_catalog = import_file(\n        \"maskrcnn_benchmark.config.paths_catalog\", cfg.PATHS_CATALOG, True\n    )\n    DatasetCatalog = paths_catalog.DatasetCatalog\n    dataset_list = cfg.DATASETS.TRAIN if is_train else cfg.DATASETS.TEST\n\n    transforms = build_transforms(cfg, is_train)\n    datasets = build_dataset(dataset_list, transforms, DatasetCatalog, is_train)\n\n    data_loaders = []\n    for dataset in datasets:\n        sampler = make_data_sampler(dataset, shuffle, is_distributed)\n        batch_sampler = make_batch_data_sampler(\n            dataset, sampler, aspect_grouping, images_per_gpu, num_iters, start_iter\n        )\n        collator = BatchCollator(cfg.DATALOADER.SIZE_DIVISIBILITY)\n        num_workers = cfg.DATALOADER.NUM_WORKERS\n        data_loader = torch.utils.data.DataLoader(\n            dataset,\n            num_workers=num_workers,\n            batch_sampler=batch_sampler,\n            collate_fn=collator,\n        )\n        data_loaders.append(data_loader)\n    if is_train:\n        # during training, a single (possibly concatenated) data_loader is returned\n        assert len(data_loaders) == 1\n        return data_loaders[0]\n    return data_loaders\n"
  },
  {
    "path": "maskrcnn_benchmark/data/collate_batch.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\n\n\nclass BatchCollator(object):\n    \"\"\"\n    From a list of samples from the dataset,\n    returns the batched images and targets.\n    This should be passed to the DataLoader\n    \"\"\"\n\n    def __init__(self, size_divisible=0):\n        self.size_divisible = size_divisible\n\n    def __call__(self, batch):\n        transposed_batch = list(zip(*batch))\n        images = to_image_list(transposed_batch[0], self.size_divisible)\n        targets = transposed_batch[1]\n        img_ids = transposed_batch[2]\n        return images, targets, img_ids\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .coco import COCODataset\nfrom .voc import PascalVOCDataset\nfrom .concat_dataset import ConcatDataset\n\n__all__ = [\"COCODataset\", \"ConcatDataset\", \"PascalVOCDataset\"]\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/coco.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nimport torchvision\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask\nfrom maskrcnn_benchmark.structures.keypoint import PersonKeypoints\n\n\nmin_keypoints_per_image = 10\n\n\ndef _count_visible_keypoints(anno):\n    return sum(sum(1 for v in ann[\"keypoints\"][2::3] if v > 0) for ann in anno)\n\n\ndef _has_only_empty_bbox(anno):\n    return all(any(o <= 1 for o in obj[\"bbox\"][2:]) for obj in anno)\n\n\ndef has_valid_annotation(anno):\n    # if it's empty, there is no annotation\n    if len(anno) == 0:\n        return False\n    # if all boxes have close to zero area, there is no annotation\n    if _has_only_empty_bbox(anno):\n        return False\n    # keypoint tasks have a slightly different criterion for considering\n    # whether an annotation is valid\n    if \"keypoints\" not in anno[0]:\n        return True\n    # for keypoint detection tasks, only consider images valid if they\n    # contain at least min_keypoints_per_image visible keypoints\n    if _count_visible_keypoints(anno) >= min_keypoints_per_image:\n        return True\n    return False\n\n\nclass COCODataset(torchvision.datasets.coco.CocoDetection):\n    def __init__(\n        self, ann_file, root, remove_images_without_annotations, transforms=None\n    ):\n        super(COCODataset, self).__init__(root, ann_file)\n        # sort indices for reproducible results\n        self.ids = sorted(self.ids)\n\n        # filter images without detection annotations\n        if remove_images_without_annotations:\n            ids = []\n            for img_id in self.ids:\n                ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=None)\n                anno = self.coco.loadAnns(ann_ids)\n                if has_valid_annotation(anno):\n                    ids.append(img_id)\n            self.ids = ids\n\n        
self.json_category_id_to_contiguous_id = {\n            v: i + 1 for i, v in enumerate(self.coco.getCatIds())\n        }\n        self.contiguous_category_id_to_json_id = {\n            v: k for k, v in self.json_category_id_to_contiguous_id.items()\n        }\n        self.id_to_img_map = {k: v for k, v in enumerate(self.ids)}\n        self.transforms = transforms\n\n    def __getitem__(self, idx):\n        img, anno = super(COCODataset, self).__getitem__(idx)\n\n        # filter crowd annotations\n        # TODO might be better to add an extra field\n        anno = [obj for obj in anno if obj[\"iscrowd\"] == 0]\n\n        boxes = [obj[\"bbox\"] for obj in anno]\n        boxes = torch.as_tensor(boxes).reshape(-1, 4)  # guard against no boxes\n        target = BoxList(boxes, img.size, mode=\"xywh\").convert(\"xyxy\")\n\n        classes = [obj[\"category_id\"] for obj in anno]\n        classes = [self.json_category_id_to_contiguous_id[c] for c in classes]\n        classes = torch.tensor(classes)\n        target.add_field(\"labels\", classes)\n\n        masks = [obj[\"segmentation\"] for obj in anno]\n        masks = SegmentationMask(masks, img.size, mode='poly')\n        target.add_field(\"masks\", masks)\n\n        if anno and \"keypoints\" in anno[0]:\n            keypoints = [obj[\"keypoints\"] for obj in anno]\n            keypoints = PersonKeypoints(keypoints, img.size)\n            target.add_field(\"keypoints\", keypoints)\n\n        target = target.clip_to_image(remove_empty=True)\n\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n\n        return img, target, idx\n\n    def get_img_info(self, index):\n        img_id = self.id_to_img_map[index]\n        img_data = self.coco.imgs[img_id]\n        return img_data\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/concat_dataset.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport bisect\n\nfrom torch.utils.data.dataset import ConcatDataset as _ConcatDataset\n\n\nclass ConcatDataset(_ConcatDataset):\n    \"\"\"\n    Same as torch.utils.data.dataset.ConcatDataset, but exposes an extra\n    method for querying the sizes of the images\n    \"\"\"\n\n    def get_idxs(self, idx):\n        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)\n        if dataset_idx == 0:\n            sample_idx = idx\n        else:\n            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]\n        return dataset_idx, sample_idx\n\n    def get_img_info(self, idx):\n        dataset_idx, sample_idx = self.get_idxs(idx)\n        return self.datasets[dataset_idx].get_img_info(sample_idx)\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/evaluation/__init__.py",
    "content": "from maskrcnn_benchmark.data import datasets\n\nfrom .coco import coco_evaluation\nfrom .voc import voc_evaluation\n\n\ndef evaluate(dataset, predictions, output_folder, **kwargs):\n    \"\"\"evaluate dataset using different methods based on dataset type.\n    Args:\n        dataset: Dataset object\n        predictions(list[BoxList]): each item in the list represents the\n            prediction results for one image.\n        output_folder: output folder, to save evaluation files or results.\n        **kwargs: other args.\n    Returns:\n        evaluation result\n    \"\"\"\n    args = dict(\n        dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs\n    )\n    if isinstance(dataset, datasets.COCODataset):\n        return coco_evaluation(**args)\n    elif isinstance(dataset, datasets.PascalVOCDataset):\n        return voc_evaluation(**args)\n    else:\n        dataset_name = dataset.__class__.__name__\n        raise NotImplementedError(\"Unsupported dataset type {}.\".format(dataset_name))\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/evaluation/coco/__init__.py",
    "content": "from .coco_eval import do_coco_evaluation\n\n\ndef coco_evaluation(\n    dataset,\n    predictions,\n    output_folder,\n    box_only,\n    iou_types,\n    expected_results,\n    expected_results_sigma_tol,\n):\n    return do_coco_evaluation(\n        dataset=dataset,\n        predictions=predictions,\n        box_only=box_only,\n        output_folder=output_folder,\n        iou_types=iou_types,\n        expected_results=expected_results,\n        expected_results_sigma_tol=expected_results_sigma_tol,\n    )\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py",
    "content": "import logging\nimport tempfile\nimport os\nimport torch\nfrom collections import OrderedDict\nfrom tqdm import tqdm\n\nfrom maskrcnn_benchmark.modeling.roi_heads.mask_head.inference import Masker\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\n\n\ndef do_coco_evaluation(\n    dataset,\n    predictions,\n    box_only,\n    output_folder,\n    iou_types,\n    expected_results,\n    expected_results_sigma_tol,\n):\n    logger = logging.getLogger(\"maskrcnn_benchmark.inference\")\n\n    if box_only:\n        logger.info(\"Evaluating bbox proposals\")\n        areas = {\"all\": \"\", \"small\": \"s\", \"medium\": \"m\", \"large\": \"l\"}\n        res = COCOResults(\"box_proposal\")\n        for limit in [100, 1000]:\n            for area, suffix in areas.items():\n                stats = evaluate_box_proposals(\n                    predictions, dataset, area=area, limit=limit\n                )\n                key = \"AR{}@{:d}\".format(suffix, limit)\n                res.results[\"box_proposal\"][key] = stats[\"ar\"].item()\n        logger.info(res)\n        check_expected_results(res, expected_results, expected_results_sigma_tol)\n        if output_folder:\n            torch.save(res, os.path.join(output_folder, \"box_proposals.pth\"))\n        return\n    logger.info(\"Preparing results for COCO format\")\n    coco_results = {}\n    if \"bbox\" in iou_types:\n        logger.info(\"Preparing bbox results\")\n        coco_results[\"bbox\"] = prepare_for_coco_detection(predictions, dataset)\n    if \"segm\" in iou_types:\n        logger.info(\"Preparing segm results\")\n        coco_results[\"segm\"] = prepare_for_coco_segmentation(predictions, dataset)\n    if 'keypoints' in iou_types:\n        logger.info('Preparing keypoints results')\n        coco_results['keypoints'] = prepare_for_coco_keypoint(predictions, dataset)\n\n    results = COCOResults(*iou_types)\n    
logger.info(\"Evaluating predictions\")\n    for iou_type in iou_types:\n        with tempfile.NamedTemporaryFile() as f:\n            file_path = f.name\n            if output_folder:\n                file_path = os.path.join(output_folder, iou_type + \".json\")\n            res = evaluate_predictions_on_coco(\n                dataset.coco, coco_results[iou_type], file_path, iou_type\n            )\n            results.update(res)\n    logger.info(results)\n    check_expected_results(results, expected_results, expected_results_sigma_tol)\n    if output_folder:\n        torch.save(results, os.path.join(output_folder, \"coco_results.pth\"))\n    return results, coco_results\n\n\ndef prepare_for_coco_detection(predictions, dataset):\n    # assert isinstance(dataset, COCODataset)\n    coco_results = []\n    for image_id, prediction in enumerate(predictions):\n        original_id = dataset.id_to_img_map[image_id]\n        if len(prediction) == 0:\n            continue\n\n        img_info = dataset.get_img_info(image_id)\n        image_width = img_info[\"width\"]\n        image_height = img_info[\"height\"]\n        prediction = prediction.resize((image_width, image_height))\n        prediction = prediction.convert(\"xywh\")\n\n        boxes = prediction.bbox.tolist()\n        scores = prediction.get_field(\"scores\").tolist()\n        labels = prediction.get_field(\"labels\").tolist()\n\n        mapped_labels = [dataset.contiguous_category_id_to_json_id[i] for i in labels]\n\n        coco_results.extend(\n            [\n                {\n                    \"image_id\": original_id,\n                    \"category_id\": mapped_labels[k],\n                    \"bbox\": box,\n                    \"score\": scores[k],\n                }\n                for k, box in enumerate(boxes)\n            ]\n        )\n    return coco_results\n\n\ndef prepare_for_coco_segmentation(predictions, dataset):\n    import pycocotools.mask as mask_util\n    import numpy as np\n\n    
masker = Masker(threshold=0.5, padding=1)\n    # assert isinstance(dataset, COCODataset)\n    coco_results = []\n    for image_id, prediction in tqdm(enumerate(predictions)):\n        original_id = dataset.id_to_img_map[image_id]\n        if len(prediction) == 0:\n            continue\n\n        img_info = dataset.get_img_info(image_id)\n        image_width = img_info[\"width\"]\n        image_height = img_info[\"height\"]\n        prediction = prediction.resize((image_width, image_height))\n        masks = prediction.get_field(\"mask\")\n        # t = time.time()\n        # Masker is necessary only if masks haven't been already resized.\n        if list(masks.shape[-2:]) != [image_height, image_width]:\n            masks = masker(masks.expand(1, -1, -1, -1, -1), prediction)\n            masks = masks[0]\n        # logger.info('Time mask: {}'.format(time.time() - t))\n        # prediction = prediction.convert('xywh')\n\n        # boxes = prediction.bbox.tolist()\n        scores = prediction.get_field(\"scores\").tolist()\n        labels = prediction.get_field(\"labels\").tolist()\n\n        # rles = prediction.get_field('mask')\n\n        rles = [\n            mask_util.encode(np.array(mask[0, :, :, np.newaxis], order=\"F\"))[0]\n            for mask in masks\n        ]\n        for rle in rles:\n            rle[\"counts\"] = rle[\"counts\"].decode(\"utf-8\")\n\n        mapped_labels = [dataset.contiguous_category_id_to_json_id[i] for i in labels]\n\n        coco_results.extend(\n            [\n                {\n                    \"image_id\": original_id,\n                    \"category_id\": mapped_labels[k],\n                    \"segmentation\": rle,\n                    \"score\": scores[k],\n                }\n                for k, rle in enumerate(rles)\n            ]\n        )\n    return coco_results\n\n\ndef prepare_for_coco_keypoint(predictions, dataset):\n    # assert isinstance(dataset, COCODataset)\n    coco_results = []\n    for image_id, 
prediction in enumerate(predictions):\n        original_id = dataset.id_to_img_map[image_id]\n        if len(prediction.bbox) == 0:\n            continue\n\n        # TODO replace with get_img_info?\n        image_width = dataset.coco.imgs[original_id]['width']\n        image_height = dataset.coco.imgs[original_id]['height']\n        prediction = prediction.resize((image_width, image_height))\n        prediction = prediction.convert('xywh')\n\n        boxes = prediction.bbox.tolist()\n        scores = prediction.get_field('scores').tolist()\n        labels = prediction.get_field('labels').tolist()\n        keypoints = prediction.get_field('keypoints')\n        keypoints = keypoints.resize((image_width, image_height))\n        keypoints = keypoints.keypoints.view(keypoints.keypoints.shape[0], -1).tolist()\n\n        mapped_labels = [dataset.contiguous_category_id_to_json_id[i] for i in labels]\n\n        coco_results.extend([{\n            'image_id': original_id,\n            'category_id': mapped_labels[k],\n            'keypoints': keypoint,\n            'score': scores[k]} for k, keypoint in enumerate(keypoints)])\n    return coco_results\n\n\n# inspired by Detectron\ndef evaluate_box_proposals(\n    predictions, dataset, thresholds=None, area=\"all\", limit=None\n):\n    \"\"\"Evaluate detection proposal recall metrics. This function is a much\n    faster alternative to the official COCO API recall evaluation code. 
However,\n    it produces slightly different results.\n    \"\"\"\n    # Record max overlap value for each gt box\n    # Return vector of overlap values\n    areas = {\n        \"all\": 0,\n        \"small\": 1,\n        \"medium\": 2,\n        \"large\": 3,\n        \"96-128\": 4,\n        \"128-256\": 5,\n        \"256-512\": 6,\n        \"512-inf\": 7,\n    }\n    area_ranges = [\n        [0 ** 2, 1e5 ** 2],  # all\n        [0 ** 2, 32 ** 2],  # small\n        [32 ** 2, 96 ** 2],  # medium\n        [96 ** 2, 1e5 ** 2],  # large\n        [96 ** 2, 128 ** 2],  # 96-128\n        [128 ** 2, 256 ** 2],  # 128-256\n        [256 ** 2, 512 ** 2],  # 256-512\n        [512 ** 2, 1e5 ** 2],\n    ]  # 512-inf\n    assert area in areas, \"Unknown area range: {}\".format(area)\n    area_range = area_ranges[areas[area]]\n    gt_overlaps = []\n    num_pos = 0\n\n    for image_id, prediction in enumerate(predictions):\n        original_id = dataset.id_to_img_map[image_id]\n\n        img_info = dataset.get_img_info(image_id)\n        image_width = img_info[\"width\"]\n        image_height = img_info[\"height\"]\n        prediction = prediction.resize((image_width, image_height))\n\n        # sort predictions in descending order\n        # TODO maybe remove this and make it explicit in the documentation\n        inds = prediction.get_field(\"objectness\").sort(descending=True)[1]\n        prediction = prediction[inds]\n\n        ann_ids = dataset.coco.getAnnIds(imgIds=original_id)\n        anno = dataset.coco.loadAnns(ann_ids)\n        gt_boxes = [obj[\"bbox\"] for obj in anno if obj[\"iscrowd\"] == 0]\n        gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4)  # guard against no boxes\n        gt_boxes = BoxList(gt_boxes, (image_width, image_height), mode=\"xywh\").convert(\n            \"xyxy\"\n        )\n        gt_areas = torch.as_tensor([obj[\"area\"] for obj in anno if obj[\"iscrowd\"] == 0])\n\n        if len(gt_boxes) == 0:\n            continue\n\n        valid_gt_inds 
= (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])\n        gt_boxes = gt_boxes[valid_gt_inds]\n\n        num_pos += len(gt_boxes)\n\n        if len(gt_boxes) == 0:\n            continue\n\n        if len(prediction) == 0:\n            continue\n\n        if limit is not None and len(prediction) > limit:\n            prediction = prediction[:limit]\n\n        overlaps = boxlist_iou(prediction, gt_boxes)\n\n        _gt_overlaps = torch.zeros(len(gt_boxes))\n        for j in range(min(len(prediction), len(gt_boxes))):\n            # find which proposal box maximally covers each gt box\n            # and get the iou amount of coverage for each gt box\n            max_overlaps, argmax_overlaps = overlaps.max(dim=0)\n\n            # find which gt box is 'best' covered (i.e. 'best' = most iou)\n            gt_ovr, gt_ind = max_overlaps.max(dim=0)\n            assert gt_ovr >= 0\n            # find the proposal box that covers the best covered gt box\n            box_ind = argmax_overlaps[gt_ind]\n            # record the iou coverage of this gt box\n            _gt_overlaps[j] = overlaps[box_ind, gt_ind]\n            assert _gt_overlaps[j] == gt_ovr\n            # mark the proposal box and the gt box as used\n            overlaps[box_ind, :] = -1\n            overlaps[:, gt_ind] = -1\n\n        # append recorded iou coverage level\n        gt_overlaps.append(_gt_overlaps)\n    gt_overlaps = torch.cat(gt_overlaps, dim=0)\n    gt_overlaps, _ = torch.sort(gt_overlaps)\n\n    if thresholds is None:\n        step = 0.05\n        thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)\n    recalls = torch.zeros_like(thresholds)\n    # compute recall for each iou threshold\n    for i, t in enumerate(thresholds):\n        recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)\n    # ar = 2 * np.trapz(recalls, thresholds)\n    ar = recalls.mean()\n    return {\n        \"ar\": ar,\n        \"recalls\": recalls,\n        \"thresholds\": 
thresholds,\n        \"gt_overlaps\": gt_overlaps,\n        \"num_pos\": num_pos,\n    }\n\n\ndef evaluate_predictions_on_coco(\n    coco_gt, coco_results, json_result_file, iou_type=\"bbox\"\n):\n    import json\n\n    with open(json_result_file, \"w\") as f:\n        json.dump(coco_results, f)\n\n    from pycocotools.coco import COCO\n    from pycocotools.cocoeval import COCOeval\n\n    coco_dt = coco_gt.loadRes(str(json_result_file)) if coco_results else COCO()\n\n    # coco_dt = coco_gt.loadRes(coco_results)\n    coco_eval = COCOeval(coco_gt, coco_dt, iou_type)\n    coco_eval.evaluate()\n    coco_eval.accumulate()\n    coco_eval.summarize()\n\n    compute_thresholds_for_classes(coco_eval)\n\n    return coco_eval\n\n\ndef compute_thresholds_for_classes(coco_eval):\n    '''\n    The function is used to compute the thresholds corresponding to best f-measure.\n    The resulting thresholds are used in fcos_demo.py.\n    :param coco_eval:\n    :return:\n    '''\n    import numpy as np\n    # dimension of precision: [TxRxKxAxM]\n    precision = coco_eval.eval['precision']\n    # we compute thresholds with IOU being 0.5\n    precision = precision[0, :, :, 0, -1]\n    scores = coco_eval.eval['scores']\n    scores = scores[0, :, :, 0, -1]\n\n    recall = np.linspace(0, 1, num=precision.shape[0])\n    recall = recall[:, None]\n\n    f_measure = (2 * precision * recall) / (np.maximum(precision + recall, 1e-6))\n    max_f_measure = f_measure.max(axis=0)\n    max_f_measure_inds = f_measure.argmax(axis=0)\n    scores = scores[max_f_measure_inds, range(len(max_f_measure_inds))]\n\n    print(\"Maximum f-measures for classes:\")\n    print(list(max_f_measure))\n    print(\"Score thresholds for classes (used in demos for visualization purposes):\")\n    print(list(scores))\n\n\nclass COCOResults(object):\n    METRICS = {\n        \"bbox\": [\"AP\", \"AP50\", \"AP75\", \"APs\", \"APm\", \"APl\"],\n        \"segm\": [\"AP\", \"AP50\", \"AP75\", \"APs\", \"APm\", \"APl\"],\n        
\"box_proposal\": [\n            \"AR@100\",\n            \"ARs@100\",\n            \"ARm@100\",\n            \"ARl@100\",\n            \"AR@1000\",\n            \"ARs@1000\",\n            \"ARm@1000\",\n            \"ARl@1000\",\n        ],\n        \"keypoints\": [\"AP\", \"AP50\", \"AP75\", \"APm\", \"APl\"],\n    }\n\n    def __init__(self, *iou_types):\n        allowed_types = (\"box_proposal\", \"bbox\", \"segm\", \"keypoints\")\n        assert all(iou_type in allowed_types for iou_type in iou_types)\n        results = OrderedDict()\n        for iou_type in iou_types:\n            results[iou_type] = OrderedDict(\n                [(metric, -1) for metric in COCOResults.METRICS[iou_type]]\n            )\n        self.results = results\n\n    def update(self, coco_eval):\n        if coco_eval is None:\n            return\n        from pycocotools.cocoeval import COCOeval\n\n        assert isinstance(coco_eval, COCOeval)\n        s = coco_eval.stats\n        iou_type = coco_eval.params.iouType\n        res = self.results[iou_type]\n        metrics = COCOResults.METRICS[iou_type]\n        for idx, metric in enumerate(metrics):\n            res[metric] = s[idx]\n\n    def __repr__(self):\n        # TODO make it pretty\n        return repr(self.results)\n\n\ndef check_expected_results(results, expected_results, sigma_tol):\n    if not expected_results:\n        return\n\n    logger = logging.getLogger(\"maskrcnn_benchmark.inference\")\n    for task, metric, (mean, std) in expected_results:\n        actual_val = results.results[task][metric]\n        lo = mean - sigma_tol * std\n        hi = mean + sigma_tol * std\n        ok = (lo < actual_val) and (actual_val < hi)\n        msg = (\n            \"{} > {} sanity check (actual vs. expected): \"\n            \"{:.3f} vs. 
mean={:.4f}, std={:.4f}, range=({:.4f}, {:.4f})\"\n        ).format(task, metric, actual_val, mean, std, lo, hi)\n        if not ok:\n            msg = \"FAIL: \" + msg\n            logger.error(msg)\n        else:\n            msg = \"PASS: \" + msg\n            logger.info(msg)\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/evaluation/voc/__init__.py",
    "content": "import logging\n\nfrom .voc_eval import do_voc_evaluation\n\n\ndef voc_evaluation(dataset, predictions, output_folder, box_only, **_):\n    logger = logging.getLogger(\"maskrcnn_benchmark.inference\")\n    if box_only:\n        logger.warning(\"voc evaluation doesn't support box_only, ignored.\")\n    logger.info(\"performing voc evaluation, ignoring iou_types.\")\n    return do_voc_evaluation(\n        dataset=dataset,\n        predictions=predictions,\n        output_folder=output_folder,\n        logger=logger,\n    )\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py",
    "content": "# A modified version of the evaluation code from the chainercv repository.\n# (See https://github.com/chainer/chainercv/blob/master/chainercv/evaluations/eval_detection_voc.py)\nfrom __future__ import division\n\nimport os\nfrom collections import defaultdict\nimport numpy as np\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\n\n\ndef do_voc_evaluation(dataset, predictions, output_folder, logger):\n    # TODO need to make the use_07_metric format available\n    # for the user to choose\n    pred_boxlists = []\n    gt_boxlists = []\n    for image_id, prediction in enumerate(predictions):\n        img_info = dataset.get_img_info(image_id)\n        if len(prediction) == 0:\n            continue\n        image_width = img_info[\"width\"]\n        image_height = img_info[\"height\"]\n        prediction = prediction.resize((image_width, image_height))\n        pred_boxlists.append(prediction)\n\n        gt_boxlist = dataset.get_groundtruth(image_id)\n        gt_boxlists.append(gt_boxlist)\n    result = eval_detection_voc(\n        pred_boxlists=pred_boxlists,\n        gt_boxlists=gt_boxlists,\n        iou_thresh=0.5,\n        use_07_metric=True,\n    )\n    result_str = \"mAP: {:.4f}\\n\".format(result[\"map\"])\n    for i, ap in enumerate(result[\"ap\"]):\n        if i == 0:  # skip background\n            continue\n        result_str += \"{:<16}: {:.4f}\\n\".format(\n            dataset.map_class_id_to_class_name(i), ap\n        )\n    logger.info(result_str)\n    if output_folder:\n        with open(os.path.join(output_folder, \"result.txt\"), \"w\") as fid:\n            fid.write(result_str)\n    return result\n\n\ndef eval_detection_voc(pred_boxlists, gt_boxlists, iou_thresh=0.5, use_07_metric=False):\n    \"\"\"Evaluate on the VOC dataset.\n    Args:\n        pred_boxlists(list[BoxList]): pred boxlist, has labels and scores fields.\n        gt_boxlists(list[BoxList]): ground truth boxlist, has 
labels field.\n        iou_thresh: IoU threshold for matching predictions to ground truth\n        use_07_metric: whether to use the PASCAL VOC 2007 11-point metric\n    Returns:\n        dict representing the results\n    \"\"\"\n    assert len(gt_boxlists) == len(\n        pred_boxlists\n    ), \"Length of gt and pred lists need to be the same.\"\n    prec, rec = calc_detection_voc_prec_rec(\n        pred_boxlists=pred_boxlists, gt_boxlists=gt_boxlists, iou_thresh=iou_thresh\n    )\n    ap = calc_detection_voc_ap(prec, rec, use_07_metric=use_07_metric)\n    return {\"ap\": ap, \"map\": np.nanmean(ap)}\n\n\ndef calc_detection_voc_prec_rec(gt_boxlists, pred_boxlists, iou_thresh=0.5):\n    \"\"\"Calculate precision and recall based on evaluation code of PASCAL VOC.\n    This function calculates precision and recall of\n    predicted bounding boxes obtained from a dataset which has :math:`N`\n    images.\n    The code is based on the evaluation code used in PASCAL VOC Challenge.\n    \"\"\"\n    n_pos = defaultdict(int)\n    score = defaultdict(list)\n    match = defaultdict(list)\n    for gt_boxlist, pred_boxlist in zip(gt_boxlists, pred_boxlists):\n        pred_bbox = pred_boxlist.bbox.numpy()\n        pred_label = pred_boxlist.get_field(\"labels\").numpy()\n        pred_score = pred_boxlist.get_field(\"scores\").numpy()\n        gt_bbox = gt_boxlist.bbox.numpy()\n        gt_label = gt_boxlist.get_field(\"labels\").numpy()\n        gt_difficult = gt_boxlist.get_field(\"difficult\").numpy()\n\n        for l in np.unique(np.concatenate((pred_label, gt_label)).astype(int)):\n            pred_mask_l = pred_label == l\n            pred_bbox_l = pred_bbox[pred_mask_l]\n            pred_score_l = pred_score[pred_mask_l]\n            # sort by score\n            order = pred_score_l.argsort()[::-1]\n            pred_bbox_l = pred_bbox_l[order]\n            pred_score_l = pred_score_l[order]\n\n            gt_mask_l = gt_label == l\n            gt_bbox_l = gt_bbox[gt_mask_l]\n            gt_difficult_l = gt_difficult[gt_mask_l]\n\n            n_pos[l] += 
np.logical_not(gt_difficult_l).sum()\n            score[l].extend(pred_score_l)\n\n            if len(pred_bbox_l) == 0:\n                continue\n            if len(gt_bbox_l) == 0:\n                match[l].extend((0,) * pred_bbox_l.shape[0])\n                continue\n\n            # VOC evaluation follows integer typed bounding boxes.\n            pred_bbox_l = pred_bbox_l.copy()\n            pred_bbox_l[:, 2:] += 1\n            gt_bbox_l = gt_bbox_l.copy()\n            gt_bbox_l[:, 2:] += 1\n            iou = boxlist_iou(\n                BoxList(pred_bbox_l, gt_boxlist.size),\n                BoxList(gt_bbox_l, gt_boxlist.size),\n            ).numpy()\n            gt_index = iou.argmax(axis=1)\n            # set -1 if there is no matching ground truth\n            gt_index[iou.max(axis=1) < iou_thresh] = -1\n            del iou\n\n            selec = np.zeros(gt_bbox_l.shape[0], dtype=bool)\n            for gt_idx in gt_index:\n                if gt_idx >= 0:\n                    if gt_difficult_l[gt_idx]:\n                        match[l].append(-1)\n                    else:\n                        if not selec[gt_idx]:\n                            match[l].append(1)\n                        else:\n                            match[l].append(0)\n                    selec[gt_idx] = True\n                else:\n                    match[l].append(0)\n\n    n_fg_class = max(n_pos.keys()) + 1\n    prec = [None] * n_fg_class\n    rec = [None] * n_fg_class\n\n    for l in n_pos.keys():\n        score_l = np.array(score[l])\n        match_l = np.array(match[l], dtype=np.int8)\n\n        order = score_l.argsort()[::-1]\n        match_l = match_l[order]\n\n        tp = np.cumsum(match_l == 1)\n        fp = np.cumsum(match_l == 0)\n\n        # If an element of fp + tp is 0,\n        # the corresponding element of prec[l] is nan.\n        prec[l] = tp / (fp + tp)\n        # If n_pos[l] is 0, rec[l] is None.\n        if n_pos[l] > 0:\n            rec[l] = tp / 
n_pos[l]\n\n    return prec, rec\n\n\ndef calc_detection_voc_ap(prec, rec, use_07_metric=False):\n    \"\"\"Calculate average precisions based on evaluation code of PASCAL VOC.\n    This function calculates average precisions\n    from given precisions and recalls.\n    The code is based on the evaluation code used in PASCAL VOC Challenge.\n    Args:\n        prec (list of numpy.array): A list of arrays.\n            :obj:`prec[l]` indicates precision for class :math:`l`.\n            If :obj:`prec[l]` is :obj:`None`, this function returns\n            :obj:`numpy.nan` for class :math:`l`.\n        rec (list of numpy.array): A list of arrays.\n            :obj:`rec[l]` indicates recall for class :math:`l`.\n            If :obj:`rec[l]` is :obj:`None`, this function returns\n            :obj:`numpy.nan` for class :math:`l`.\n        use_07_metric (bool): Whether to use PASCAL VOC 2007 evaluation metric\n            for calculating average precision. The default value is\n            :obj:`False`.\n    Returns:\n        ~numpy.ndarray:\n        This function returns an array of average precisions.\n        The :math:`l`-th value corresponds to the average precision\n        for class :math:`l`. 
If :obj:`prec[l]` or :obj:`rec[l]` is\n        :obj:`None`, the corresponding value is set to :obj:`numpy.nan`.\n    \"\"\"\n\n    n_fg_class = len(prec)\n    ap = np.empty(n_fg_class)\n    for l in range(n_fg_class):\n        if prec[l] is None or rec[l] is None:\n            ap[l] = np.nan\n            continue\n\n        if use_07_metric:\n            # 11 point metric\n            ap[l] = 0\n            for t in np.arange(0.0, 1.1, 0.1):\n                if np.sum(rec[l] >= t) == 0:\n                    p = 0\n                else:\n                    p = np.max(np.nan_to_num(prec[l])[rec[l] >= t])\n                ap[l] += p / 11\n        else:\n            # correct AP calculation\n            # first append sentinel values at the end\n            mpre = np.concatenate(([0], np.nan_to_num(prec[l]), [0]))\n            mrec = np.concatenate(([0], rec[l], [1]))\n\n            mpre = np.maximum.accumulate(mpre[::-1])[::-1]\n\n            # to calculate area under PR curve, look for points\n            # where X axis (recall) changes value\n            i = np.where(mrec[1:] != mrec[:-1])[0]\n\n            # and sum (\\Delta recall) * prec\n            ap[l] = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])\n\n    return ap\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/list_dataset.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nSimple dataset class that wraps a list of path names\n\"\"\"\n\nfrom PIL import Image\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\n\nclass ListDataset(object):\n    def __init__(self, image_lists, transforms=None):\n        self.image_lists = image_lists\n        self.transforms = transforms\n\n    def __getitem__(self, item):\n        img = Image.open(self.image_lists[item]).convert(\"RGB\")\n\n        # dummy target\n        w, h = img.size\n        target = BoxList([[0, 0, w, h]], img.size, mode=\"xyxy\")\n\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n\n        return img, target\n\n    def __len__(self):\n        return len(self.image_lists)\n\n    def get_img_info(self, item):\n        \"\"\"\n        Return the image dimensions for the image, without\n        loading and pre-processing it\n        \"\"\"\n        pass\n"
  },
  {
    "path": "maskrcnn_benchmark/data/datasets/voc.py",
    "content": "import os\n\nimport torch\nimport torch.utils.data\nfrom PIL import Image\nimport sys\n\nif sys.version_info[0] == 2:\n    import xml.etree.cElementTree as ET\nelse:\n    import xml.etree.ElementTree as ET\n\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\n\nclass PascalVOCDataset(torch.utils.data.Dataset):\n\n    CLASSES = (\n        \"__background__ \",\n        \"aeroplane\",\n        \"bicycle\",\n        \"bird\",\n        \"boat\",\n        \"bottle\",\n        \"bus\",\n        \"car\",\n        \"cat\",\n        \"chair\",\n        \"cow\",\n        \"diningtable\",\n        \"dog\",\n        \"horse\",\n        \"motorbike\",\n        \"person\",\n        \"pottedplant\",\n        \"sheep\",\n        \"sofa\",\n        \"train\",\n        \"tvmonitor\",\n    )\n\n    def __init__(self, data_dir, split, use_difficult=False, transforms=None):\n        self.root = data_dir\n        self.image_set = split\n        self.keep_difficult = use_difficult\n        self.transforms = transforms\n\n        self._annopath = os.path.join(self.root, \"Annotations\", \"%s.xml\")\n        self._imgpath = os.path.join(self.root, \"JPEGImages\", \"%s.jpg\")\n        self._imgsetpath = os.path.join(self.root, \"ImageSets\", \"Main\", \"%s.txt\")\n\n        with open(self._imgsetpath % self.image_set) as f:\n            self.ids = f.readlines()\n        self.ids = [x.strip(\"\\n\") for x in self.ids]\n        self.id_to_img_map = {k: v for k, v in enumerate(self.ids)}\n\n        cls = PascalVOCDataset.CLASSES\n        self.class_to_ind = dict(zip(cls, range(len(cls))))\n\n    def __getitem__(self, index):\n        img_id = self.ids[index]\n        img = Image.open(self._imgpath % img_id).convert(\"RGB\")\n\n        target = self.get_groundtruth(index)\n        target = target.clip_to_image(remove_empty=True)\n\n        if self.transforms is not None:\n            img, target = self.transforms(img, target)\n\n        return img, target, 
index\n\n    def __len__(self):\n        return len(self.ids)\n\n    def get_groundtruth(self, index):\n        img_id = self.ids[index]\n        anno = ET.parse(self._annopath % img_id).getroot()\n        anno = self._preprocess_annotation(anno)\n\n        height, width = anno[\"im_info\"]\n        target = BoxList(anno[\"boxes\"], (width, height), mode=\"xyxy\")\n        target.add_field(\"labels\", anno[\"labels\"])\n        target.add_field(\"difficult\", anno[\"difficult\"])\n        return target\n\n    def _preprocess_annotation(self, target):\n        boxes = []\n        gt_classes = []\n        difficult_boxes = []\n        TO_REMOVE = 1\n        \n        for obj in target.iter(\"object\"):\n            difficult = int(obj.find(\"difficult\").text) == 1\n            if not self.keep_difficult and difficult:\n                continue\n            name = obj.find(\"name\").text.lower().strip()\n            bb = obj.find(\"bndbox\")\n            # Make pixel indexes 0-based\n            # Refer to \"https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/pascal_voc.py#L208-L211\"\n            box = [\n                bb.find(\"xmin\").text, \n                bb.find(\"ymin\").text, \n                bb.find(\"xmax\").text, \n                bb.find(\"ymax\").text,\n            ]\n            bndbox = tuple(\n                map(lambda x: x - TO_REMOVE, list(map(int, box)))\n            )\n\n            boxes.append(bndbox)\n            gt_classes.append(self.class_to_ind[name])\n            difficult_boxes.append(difficult)\n\n        size = target.find(\"size\")\n        im_info = tuple(map(int, (size.find(\"height\").text, size.find(\"width\").text)))\n\n        res = {\n            \"boxes\": torch.tensor(boxes, dtype=torch.float32),\n            \"labels\": torch.tensor(gt_classes),\n            \"difficult\": torch.tensor(difficult_boxes),\n            \"im_info\": im_info,\n        }\n        return res\n\n    def get_img_info(self, 
index):\n        img_id = self.ids[index]\n        anno = ET.parse(self._annopath % img_id).getroot()\n        size = anno.find(\"size\")\n        im_info = tuple(map(int, (size.find(\"height\").text, size.find(\"width\").text)))\n        return {\"height\": im_info[0], \"width\": im_info[1]}\n\n    def map_class_id_to_class_name(self, class_id):\n        return PascalVOCDataset.CLASSES[class_id]\n"
  },
  {
    "path": "maskrcnn_benchmark/data/samplers/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .distributed import DistributedSampler\nfrom .grouped_batch_sampler import GroupedBatchSampler\nfrom .iteration_based_batch_sampler import IterationBasedBatchSampler\n\n__all__ = [\"DistributedSampler\", \"GroupedBatchSampler\", \"IterationBasedBatchSampler\"]\n"
  },
  {
    "path": "maskrcnn_benchmark/data/samplers/distributed.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n# Code is copy-pasted exactly as in torch.utils.data.distributed.\n# FIXME remove this once c10d fixes the bug it has\nimport math\nimport torch\nimport torch.distributed as dist\nfrom torch.utils.data.sampler import Sampler\n\n\nclass DistributedSampler(Sampler):\n    \"\"\"Sampler that restricts data loading to a subset of the dataset.\n    It is especially useful in conjunction with\n    :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each\n    process can pass a DistributedSampler instance as a DataLoader sampler,\n    and load a subset of the original dataset that is exclusive to it.\n    .. note::\n        Dataset is assumed to be of constant size.\n    Arguments:\n        dataset: Dataset used for sampling.\n        num_replicas (optional): Number of processes participating in\n            distributed training.\n        rank (optional): Rank of the current process within num_replicas.\n    \"\"\"\n\n    def __init__(self, dataset, num_replicas=None, rank=None, shuffle=True):\n        if num_replicas is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            num_replicas = dist.get_world_size()\n        if rank is None:\n            if not dist.is_available():\n                raise RuntimeError(\"Requires distributed package to be available\")\n            rank = dist.get_rank()\n        self.dataset = dataset\n        self.num_replicas = num_replicas\n        self.rank = rank\n        self.epoch = 0\n        self.num_samples = int(math.ceil(len(self.dataset) * 1.0 / self.num_replicas))\n        self.total_size = self.num_samples * self.num_replicas\n        self.shuffle = shuffle\n\n    def __iter__(self):\n        if self.shuffle:\n            # deterministically shuffle based on epoch\n            g = torch.Generator()\n            g.manual_seed(self.epoch)\n    
        indices = torch.randperm(len(self.dataset), generator=g).tolist()\n        else:\n            indices = torch.arange(len(self.dataset)).tolist()\n\n        # add extra samples to make it evenly divisible\n        indices += indices[: (self.total_size - len(indices))]\n        assert len(indices) == self.total_size\n\n        # subsample\n        offset = self.num_samples * self.rank\n        indices = indices[offset : offset + self.num_samples]\n        assert len(indices) == self.num_samples\n\n        return iter(indices)\n\n    def __len__(self):\n        return self.num_samples\n\n    def set_epoch(self, epoch):\n        self.epoch = epoch\n"
  },
  {
    "path": "maskrcnn_benchmark/data/samplers/grouped_batch_sampler.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport itertools\n\nimport torch\nfrom torch.utils.data.sampler import BatchSampler\nfrom torch.utils.data.sampler import Sampler\n\n\nclass GroupedBatchSampler(BatchSampler):\n    \"\"\"\n    Wraps another sampler to yield a mini-batch of indices.\n    It enforces that elements from the same group should appear in groups of batch_size.\n    It also tries to provide mini-batches which follows an ordering which is\n    as close as possible to the ordering from the original sampler.\n\n    Arguments:\n        sampler (Sampler): Base sampler.\n        batch_size (int): Size of mini-batch.\n        drop_uneven (bool): If ``True``, the sampler will drop the batches whose\n            size is less than ``batch_size``\n\n    \"\"\"\n\n    def __init__(self, sampler, group_ids, batch_size, drop_uneven=False):\n        if not isinstance(sampler, Sampler):\n            raise ValueError(\n                \"sampler should be an instance of \"\n                \"torch.utils.data.Sampler, but got sampler={}\".format(sampler)\n            )\n        self.sampler = sampler\n        self.group_ids = torch.as_tensor(group_ids)\n        assert self.group_ids.dim() == 1\n        self.batch_size = batch_size\n        self.drop_uneven = drop_uneven\n\n        self.groups = torch.unique(self.group_ids).sort(0)[0]\n\n        self._can_reuse_batches = False\n\n    def _prepare_batches(self):\n        dataset_size = len(self.group_ids)\n        # get the sampled indices from the sampler\n        sampled_ids = torch.as_tensor(list(self.sampler))\n        # potentially not all elements of the dataset were sampled\n        # by the sampler (e.g., DistributedSampler).\n        # construct a tensor which contains -1 if the element was\n        # not sampled, and a non-negative number indicating the\n        # order where the element was sampled.\n        # for example. 
if sampled_ids = [3, 1] and dataset_size = 5,\n        # the order is [-1, 1, -1, 0, -1]\n        order = torch.full((dataset_size,), -1, dtype=torch.int64)\n        order[sampled_ids] = torch.arange(len(sampled_ids))\n\n        # get a mask with the elements that were sampled\n        mask = order >= 0\n\n        # find the elements that belong to each individual cluster\n        clusters = [(self.group_ids == i) & mask for i in self.groups]\n        # get relative order of the elements inside each cluster\n        # that follows the order from the sampler\n        relative_order = [order[cluster] for cluster in clusters]\n        # with the relative order, find the absolute order in the\n        # sampled space\n        permutation_ids = [s[s.sort()[1]] for s in relative_order]\n        # permute each cluster so that they follow the order from\n        # the sampler\n        permuted_clusters = [sampled_ids[idx] for idx in permutation_ids]\n\n        # split each cluster into chunks of batch_size, and merge into a list of tensors\n        splits = [c.split(self.batch_size) for c in permuted_clusters]\n        merged = tuple(itertools.chain.from_iterable(splits))\n\n        # now each batch internally has the right order, but\n        # they are grouped by clusters. Find the permutation between\n        # different batches that brings them as close as possible to\n        # the order that we have in the sampler. 
For that, we will consider the\n        # ordering as coming from the first element of each batch, and sort\n        # correspondingly\n        first_element_of_batch = [t[0].item() for t in merged]\n        # get an inverse mapping from sampled indices to the positions where\n        # they occur (as returned by the sampler)\n        inv_sampled_ids_map = {v: k for k, v in enumerate(sampled_ids.tolist())}\n        # from the first element in each batch, get a relative ordering\n        first_index_of_batch = torch.as_tensor(\n            [inv_sampled_ids_map[s] for s in first_element_of_batch]\n        )\n\n        # permute the batches so that they approximately follow the order\n        # from the sampler\n        permutation_order = first_index_of_batch.sort(0)[1].tolist()\n        # finally, permute the batches\n        batches = [merged[i].tolist() for i in permutation_order]\n\n        if self.drop_uneven:\n            kept = []\n            for batch in batches:\n                if len(batch) == self.batch_size:\n                    kept.append(batch)\n            batches = kept\n        return batches\n\n    def __iter__(self):\n        if self._can_reuse_batches:\n            batches = self._batches\n            self._can_reuse_batches = False\n        else:\n            batches = self._prepare_batches()\n        self._batches = batches\n        return iter(batches)\n\n    def __len__(self):\n        if not hasattr(self, \"_batches\"):\n            self._batches = self._prepare_batches()\n            self._can_reuse_batches = True\n        return len(self._batches)\n"
  },
  {
    "path": "maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom torch.utils.data.sampler import BatchSampler\n\n\nclass IterationBasedBatchSampler(BatchSampler):\n    \"\"\"\n    Wraps a BatchSampler, resampling from it until\n    a specified number of iterations have been sampled\n    \"\"\"\n\n    def __init__(self, batch_sampler, num_iterations, start_iter=0):\n        self.batch_sampler = batch_sampler\n        self.num_iterations = num_iterations\n        self.start_iter = start_iter\n\n    def __iter__(self):\n        iteration = self.start_iter\n        while iteration <= self.num_iterations:\n            # if the underlying sampler has a set_epoch method, like\n            # DistributedSampler, used for making each process see\n            # a different split of the dataset, then set it\n            if hasattr(self.batch_sampler.sampler, \"set_epoch\"):\n                self.batch_sampler.sampler.set_epoch(iteration)\n            for batch in self.batch_sampler:\n                iteration += 1\n                if iteration > self.num_iterations:\n                    break\n                yield batch\n\n    def __len__(self):\n        return self.num_iterations\n"
  },
  {
    "path": "maskrcnn_benchmark/data/transforms/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .transforms import Compose\nfrom .transforms import Resize\nfrom .transforms import RandomHorizontalFlip\nfrom .transforms import ToTensor\nfrom .transforms import Normalize\n\nfrom .build import build_transforms\n"
  },
  {
    "path": "maskrcnn_benchmark/data/transforms/build.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom . import transforms as T\n\n\ndef build_transforms(cfg, is_train=True):\n    if is_train:\n        if cfg.INPUT.MIN_SIZE_RANGE_TRAIN[0] == -1:\n            min_size = cfg.INPUT.MIN_SIZE_TRAIN\n        else:\n            assert len(cfg.INPUT.MIN_SIZE_RANGE_TRAIN) == 2, \\\n                \"MIN_SIZE_RANGE_TRAIN must have two elements (lower bound, upper bound)\"\n            min_size = list(range(\n                cfg.INPUT.MIN_SIZE_RANGE_TRAIN[0],\n                cfg.INPUT.MIN_SIZE_RANGE_TRAIN[1] + 1\n            ))\n        max_size = cfg.INPUT.MAX_SIZE_TRAIN\n        flip_prob = 0.5  # cfg.INPUT.FLIP_PROB_TRAIN\n    else:\n        min_size = cfg.INPUT.MIN_SIZE_TEST\n        max_size = cfg.INPUT.MAX_SIZE_TEST\n        flip_prob = 0\n\n    to_bgr255 = cfg.INPUT.TO_BGR255\n    normalize_transform = T.Normalize(\n        mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD, to_bgr255=to_bgr255\n    )\n\n    transform = T.Compose(\n        [\n            T.Resize(min_size, max_size),\n            T.RandomHorizontalFlip(flip_prob),\n            T.ToTensor(),\n            normalize_transform,\n        ]\n    )\n    return transform\n"
  },
  {
    "path": "maskrcnn_benchmark/data/transforms/transforms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport random\n\nimport torch\nimport torchvision\nfrom torchvision.transforms import functional as F\n\n\nclass Compose(object):\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n    def __call__(self, image, target):\n        for t in self.transforms:\n            image, target = t(image, target)\n        return image, target\n\n    def __repr__(self):\n        format_string = self.__class__.__name__ + \"(\"\n        for t in self.transforms:\n            format_string += \"\\n\"\n            format_string += \"    {0}\".format(t)\n        format_string += \"\\n)\"\n        return format_string\n\n\nclass Resize(object):\n    def __init__(self, min_size, max_size):\n        if not isinstance(min_size, (list, tuple)):\n            min_size = (min_size,)\n        self.min_size = min_size\n        self.max_size = max_size\n\n    # modified from torchvision to add support for max size\n    def get_size(self, image_size):\n        w, h = image_size\n        size = random.choice(self.min_size)\n        max_size = self.max_size\n        if max_size is not None:\n            min_original_size = float(min((w, h)))\n            max_original_size = float(max((w, h)))\n            if max_original_size / min_original_size * size > max_size:\n                size = int(round(max_size * min_original_size / max_original_size))\n\n        if (w <= h and w == size) or (h <= w and h == size):\n            return (h, w)\n\n        if w < h:\n            ow = size\n            oh = int(size * h / w)\n        else:\n            oh = size\n            ow = int(size * w / h)\n\n        return (oh, ow)\n\n    def __call__(self, image, target):\n        size = self.get_size(image.size)\n        image = F.resize(image, size)\n        target = target.resize(image.size)\n        return image, target\n\n\nclass RandomHorizontalFlip(object):\n    def __init__(self, prob=0.5):\n  
      self.prob = prob\n\n    def __call__(self, image, target):\n        if random.random() < self.prob:\n            image = F.hflip(image)\n            target = target.transpose(0)\n        return image, target\n\n\nclass ToTensor(object):\n    def __call__(self, image, target):\n        return F.to_tensor(image), target\n\n\nclass Normalize(object):\n    def __init__(self, mean, std, to_bgr255=True):\n        self.mean = mean\n        self.std = std\n        self.to_bgr255 = to_bgr255\n\n    def __call__(self, image, target):\n        if self.to_bgr255:\n            image = image[[2, 1, 0]] * 255\n        image = F.normalize(image, mean=self.mean, std=self.std)\n        return image, target\n"
  },
  {
    "path": "maskrcnn_benchmark/engine/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n"
  },
  {
    "path": "maskrcnn_benchmark/engine/inference.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport logging\nimport time\nimport os\n\nimport torch\nfrom tqdm import tqdm\n\nfrom maskrcnn_benchmark.data.datasets.evaluation import evaluate\nfrom ..utils.comm import is_main_process, get_world_size\nfrom ..utils.comm import all_gather\nfrom ..utils.comm import synchronize\nfrom ..utils.timer import Timer, get_time_str\n\n\ndef compute_on_dataset(model, data_loader, device, timer=None):\n    model.eval()\n    results_dict = {}\n    cpu_device = torch.device(\"cpu\")\n    for _, batch in enumerate(tqdm(data_loader)):\n        images, targets, image_ids = batch\n        images = images.to(device)\n        with torch.no_grad():\n            if timer:\n                timer.tic()\n            output = model(images)\n            if timer:\n                torch.cuda.synchronize()\n                timer.toc()\n            output = [o.to(cpu_device) for o in output]\n        results_dict.update(\n            {img_id: result for img_id, result in zip(image_ids, output)}\n        )\n    return results_dict\n\n\ndef _accumulate_predictions_from_multiple_gpus(predictions_per_gpu):\n    all_predictions = all_gather(predictions_per_gpu)\n    if not is_main_process():\n        return\n    # merge the list of dicts\n    predictions = {}\n    for p in all_predictions:\n        predictions.update(p)\n    # convert a dict where the key is the index in a list\n    image_ids = list(sorted(predictions.keys()))\n    if len(image_ids) != image_ids[-1] + 1:\n        logger = logging.getLogger(\"maskrcnn_benchmark.inference\")\n        logger.warning(\n            \"Number of images that were gathered from multiple processes is not \"\n            \"a contiguous set. 
Some images might be missing from the evaluation\"\n        )\n\n    # convert to a list\n    predictions = [predictions[i] for i in image_ids]\n    return predictions\n\n\ndef inference(\n        model,\n        data_loader,\n        dataset_name,\n        iou_types=(\"bbox\",),\n        box_only=False,\n        device=\"cuda\",\n        expected_results=(),\n        expected_results_sigma_tol=4,\n        output_folder=None,\n):\n    # convert to a torch.device for efficiency\n    device = torch.device(device)\n    num_devices = get_world_size()\n    logger = logging.getLogger(\"maskrcnn_benchmark.inference\")\n    dataset = data_loader.dataset\n    logger.info(\"Start evaluation on {} dataset({} images).\".format(dataset_name, len(dataset)))\n    total_timer = Timer()\n    inference_timer = Timer()\n    total_timer.tic()\n    predictions = compute_on_dataset(model, data_loader, device, inference_timer)\n    # wait for all processes to complete before measuring the time\n    synchronize()\n    total_time = total_timer.toc()\n    total_time_str = get_time_str(total_time)\n    logger.info(\n        \"Total run time: {} ({} s / img per device, on {} devices)\".format(\n            total_time_str, total_time * num_devices / len(dataset), num_devices\n        )\n    )\n    total_infer_time = get_time_str(inference_timer.total_time)\n    logger.info(\n        \"Model inference time: {} ({} s / img per device, on {} devices)\".format(\n            total_infer_time,\n            inference_timer.total_time * num_devices / len(dataset),\n            num_devices,\n        )\n    )\n\n    predictions = _accumulate_predictions_from_multiple_gpus(predictions)\n    if not is_main_process():\n        return\n\n    if output_folder:\n        torch.save(predictions, os.path.join(output_folder, \"predictions.pth\"))\n\n    extra_args = dict(\n        box_only=box_only,\n        iou_types=iou_types,\n        expected_results=expected_results,\n        
expected_results_sigma_tol=expected_results_sigma_tol,\n    )\n\n    return evaluate(dataset=dataset,\n                    predictions=predictions,\n                    output_folder=output_folder,\n                    **extra_args)\n"
  },
  {
    "path": "maskrcnn_benchmark/engine/trainer.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport datetime\nimport logging\nimport time\n\nimport torch\nimport torch.distributed as dist\n\nfrom maskrcnn_benchmark.utils.comm import get_world_size, is_pytorch_1_1_0_or_later\nfrom maskrcnn_benchmark.utils.metric_logger import MetricLogger\n\n\ndef reduce_loss_dict(loss_dict):\n    \"\"\"\n    Reduce the loss dictionary from all processes so that process with rank\n    0 has the averaged results. Returns a dict with the same fields as\n    loss_dict, after reduction.\n    \"\"\"\n    world_size = get_world_size()\n    if world_size < 2:\n        return loss_dict\n    with torch.no_grad():\n        loss_names = []\n        all_losses = []\n        for k in sorted(loss_dict.keys()):\n            loss_names.append(k)\n            all_losses.append(loss_dict[k])\n        all_losses = torch.stack(all_losses, dim=0)\n        dist.reduce(all_losses, dst=0)\n        if dist.get_rank() == 0:\n            # only main process gets accumulated, so only divide by\n            # world_size in this case\n            all_losses /= world_size\n        reduced_losses = {k: v for k, v in zip(loss_names, all_losses)}\n    return reduced_losses\n\n\ndef do_train(\n    model,\n    data_loader,\n    optimizer,\n    scheduler,\n    checkpointer,\n    device,\n    checkpoint_period,\n    arguments,\n):\n    logger = logging.getLogger(\"maskrcnn_benchmark.trainer\")\n    logger.info(\"Start training\")\n    meters = MetricLogger(delimiter=\"  \")\n    max_iter = len(data_loader)\n    start_iter = arguments[\"iteration\"]\n    model.train()\n    start_training_time = time.time()\n    end = time.time()\n    pytorch_1_1_0_or_later = is_pytorch_1_1_0_or_later()\n    for iteration, (images, targets, _) in enumerate(data_loader, start_iter):\n        data_time = time.time() - end\n        iteration = iteration + 1\n        arguments[\"iteration\"] = iteration\n\n        # in pytorch >= 1.1.0, 
scheduler.step() should be run after optimizer.step()\n        if not pytorch_1_1_0_or_later:\n            scheduler.step()\n\n        images = images.to(device)\n        targets = [target.to(device) for target in targets]\n\n        loss_dict = model(images, targets)\n\n        losses = sum(loss for loss in loss_dict.values())\n\n        # reduce losses over all GPUs for logging purposes\n        loss_dict_reduced = reduce_loss_dict(loss_dict)\n        losses_reduced = sum(loss for loss in loss_dict_reduced.values())\n        meters.update(loss=losses_reduced, **loss_dict_reduced)\n\n        optimizer.zero_grad()\n        losses.backward()\n        optimizer.step()\n\n        if pytorch_1_1_0_or_later:\n            scheduler.step()\n\n        batch_time = time.time() - end\n        end = time.time()\n        meters.update(time=batch_time, data=data_time)\n\n        eta_seconds = meters.time.global_avg * (max_iter - iteration)\n        eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))\n\n        if iteration % 20 == 0 or iteration == max_iter:\n            logger.info(\n                meters.delimiter.join(\n                    [\n                        \"eta: {eta}\",\n                        \"iter: {iter}\",\n                        \"{meters}\",\n                        \"lr: {lr:.6f}\",\n                        \"max mem: {memory:.0f}\",\n                    ]\n                ).format(\n                    eta=eta_string,\n                    iter=iteration,\n                    meters=str(meters),\n                    lr=optimizer.param_groups[0][\"lr\"],\n                    memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0,\n                )\n            )\n        if iteration % checkpoint_period == 0:\n            checkpointer.save(\"model_{:07d}\".format(iteration), **arguments)\n        if iteration == max_iter:\n            checkpointer.save(\"model_final\", **arguments)\n\n    total_training_time = time.time() - 
start_training_time\n    total_time_str = str(datetime.timedelta(seconds=total_training_time))\n    logger.info(\n        \"Total training time: {} ({:.4f} s / it)\".format(\n            total_time_str, total_training_time / (max_iter)\n        )\n    )\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nfrom .batch_norm import FrozenBatchNorm2d\nfrom .misc import Conv2d\nfrom .misc import ConvTranspose2d\nfrom .misc import BatchNorm2d\nfrom .misc import interpolate\nfrom .nms import nms\nfrom .roi_align import ROIAlign\nfrom .roi_align import roi_align\nfrom .roi_pool import ROIPool\nfrom .roi_pool import roi_pool\nfrom .smooth_l1_loss import smooth_l1_loss\nfrom .sigmoid_focal_loss import SigmoidFocalLoss\nfrom .iou_loss import IOULoss\nfrom .scale import Scale\n\n\n__all__ = [\"nms\", \"roi_align\", \"ROIAlign\", \"roi_pool\", \"ROIPool\",\n           \"smooth_l1_loss\", \"Conv2d\", \"ConvTranspose2d\", \"interpolate\",\n           \"BatchNorm2d\", \"FrozenBatchNorm2d\", \"SigmoidFocalLoss\", \"IOULoss\",\n           \"Scale\"]\n\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/_utils.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport glob\nimport os.path\n\nimport torch\n\ntry:\n    from torch.utils.cpp_extension import load as load_ext\n    from torch.utils.cpp_extension import CUDA_HOME\nexcept ImportError:\n    raise ImportError(\"The cpp layer extensions requires PyTorch 0.4 or higher\")\n\n\ndef _load_C_extensions():\n    this_dir = os.path.dirname(os.path.abspath(__file__))\n    this_dir = os.path.dirname(this_dir)\n    this_dir = os.path.join(this_dir, \"csrc\")\n\n    main_file = glob.glob(os.path.join(this_dir, \"*.cpp\"))\n    source_cpu = glob.glob(os.path.join(this_dir, \"cpu\", \"*.cpp\"))\n    source_cuda = glob.glob(os.path.join(this_dir, \"cuda\", \"*.cu\"))\n\n    source = main_file + source_cpu\n\n    extra_cflags = []\n    if torch.cuda.is_available() and CUDA_HOME is not None:\n        source.extend(source_cuda)\n        extra_cflags = [\"-DWITH_CUDA\"]\n    source = [os.path.join(this_dir, s) for s in source]\n    extra_include_paths = [this_dir]\n    return load_ext(\n        \"torchvision\",\n        source,\n        extra_cflags=extra_cflags,\n        extra_include_paths=extra_include_paths,\n    )\n\n\n_C = _load_C_extensions()\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/batch_norm.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\n\n\nclass FrozenBatchNorm2d(nn.Module):\n    \"\"\"\n    BatchNorm2d where the batch statistics and the affine parameters\n    are fixed\n    \"\"\"\n\n    def __init__(self, n):\n        super(FrozenBatchNorm2d, self).__init__()\n        self.register_buffer(\"weight\", torch.ones(n))\n        self.register_buffer(\"bias\", torch.zeros(n))\n        self.register_buffer(\"running_mean\", torch.zeros(n))\n        self.register_buffer(\"running_var\", torch.ones(n))\n\n    def forward(self, x):\n        scale = self.weight * self.running_var.rsqrt()\n        bias = self.bias - self.running_mean * scale\n        scale = scale.reshape(1, -1, 1, 1)\n        bias = bias.reshape(1, -1, 1, 1)\n        return x * scale + bias\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/iou_loss.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass IOULoss(nn.Module):\n    def __init__(self, loc_loss_type):\n        super(IOULoss, self).__init__()\n        self.loc_loss_type = loc_loss_type\n\n    def forward(self, pred, target, weight=None):\n        pred_left = pred[:, 0]\n        pred_top = pred[:, 1]\n        pred_right = pred[:, 2]\n        pred_bottom = pred[:, 3]\n\n        target_left = target[:, 0]\n        target_top = target[:, 1]\n        target_right = target[:, 2]\n        target_bottom = target[:, 3]\n\n        target_area = (target_left + target_right) * \\\n                      (target_top + target_bottom)\n        pred_area = (pred_left + pred_right) * \\\n                    (pred_top + pred_bottom)\n\n        w_intersect = torch.min(pred_left, target_left) + torch.min(pred_right, target_right)\n        g_w_intersect = torch.max(pred_left, target_left) + torch.max(\n            pred_right, target_right)\n        h_intersect = torch.min(pred_bottom, target_bottom) + torch.min(pred_top, target_top)\n        g_h_intersect = torch.max(pred_bottom, target_bottom) + torch.max(pred_top, target_top)\n        ac_uion = g_w_intersect * g_h_intersect + 1e-7\n        area_intersect = w_intersect * h_intersect\n        area_union = target_area + pred_area - area_intersect\n        ious = (area_intersect + 1.0) / (area_union + 1.0)\n        gious = ious - (ac_uion - area_union) / ac_uion\n        if self.loc_loss_type == 'iou':\n            losses = -torch.log(ious)\n        elif self.loc_loss_type == 'linear_iou':\n            losses = 1 - ious\n        elif self.loc_loss_type == 'giou':\n            losses = 1 - gious\n        else:\n            raise NotImplementedError\n\n        if weight is not None and weight.sum() > 0:\n            return (losses * weight).sum() / weight.sum()\n        else:\n            assert losses.numel() != 0\n            return losses.mean()\n"
  },
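The `IOULoss` above operates on FCOS-style `(left, top, right, bottom)` distances from a location, not corner coordinates. A plain-Python sketch of the same GIoU arithmetic for a single prediction/target pair (hypothetical helper, no torch):

```python
def giou_loss(pred, target, eps=1e-7):
    # pred/target: (left, top, right, bottom) distances from one location,
    # as in IOULoss.forward above with loc_loss_type == 'giou'.
    pl, pt, pr, pb = pred
    tl, tt, tr, tb = target
    target_area = (tl + tr) * (tt + tb)
    pred_area = (pl + pr) * (pt + pb)
    w_intersect = min(pl, tl) + min(pr, tr)
    h_intersect = min(pb, tb) + min(pt, tt)
    # enclosing ("generalized") box dimensions
    g_w = max(pl, tl) + max(pr, tr)
    g_h = max(pb, tb) + max(pt, tt)
    ac_union = g_w * g_h + eps
    area_intersect = w_intersect * h_intersect
    area_union = target_area + pred_area - area_intersect
    iou = (area_intersect + 1.0) / (area_union + 1.0)
    giou = iou - (ac_union - area_union) / ac_union
    return 1 - giou
```

A perfect prediction gives a loss of (almost) zero; the small residual comes from the `eps` stabilizer on the enclosing area.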
  {
    "path": "maskrcnn_benchmark/layers/misc.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nhelper class that supports empty tensors on some nn functions.\n\nIdeally, add support directly in PyTorch to empty tensors in\nthose functions.\n\nThis can be removed once https://github.com/pytorch/pytorch/issues/12013\nis implemented\n\"\"\"\n\nimport math\nimport torch\nfrom torch.nn.modules.utils import _ntuple\n\n\nclass _NewEmptyTensorOp(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, x, new_shape):\n        ctx.shape = x.shape\n        return x.new_empty(new_shape)\n\n    @staticmethod\n    def backward(ctx, grad):\n        shape = ctx.shape\n        return _NewEmptyTensorOp.apply(grad, shape), None\n\n\nclass Conv2d(torch.nn.Conv2d):\n    def forward(self, x):\n        if x.numel() > 0:\n            return super(Conv2d, self).forward(x)\n        # get output shape\n\n        output_shape = [\n            (i + 2 * p - (di * (k - 1) + 1)) // d + 1\n            for i, p, di, k, d in zip(\n                x.shape[-2:], self.padding, self.dilation, self.kernel_size, self.stride\n            )\n        ]\n        output_shape = [x.shape[0], self.weight.shape[0]] + output_shape\n        return _NewEmptyTensorOp.apply(x, output_shape)\n\n\nclass ConvTranspose2d(torch.nn.ConvTranspose2d):\n    def forward(self, x):\n        if x.numel() > 0:\n            return super(ConvTranspose2d, self).forward(x)\n        # get output shape\n\n        output_shape = [\n            (i - 1) * d - 2 * p + (di * (k - 1) + 1) + op\n            for i, p, di, k, d, op in zip(\n                x.shape[-2:],\n                self.padding,\n                self.dilation,\n                self.kernel_size,\n                self.stride,\n                self.output_padding,\n            )\n        ]\n        output_shape = [x.shape[0], self.bias.shape[0]] + output_shape\n        return _NewEmptyTensorOp.apply(x, output_shape)\n\n\nclass BatchNorm2d(torch.nn.BatchNorm2d):\n    
def forward(self, x):\n        if x.numel() > 0:\n            return super(BatchNorm2d, self).forward(x)\n        # get output shape\n        output_shape = x.shape\n        return _NewEmptyTensorOp.apply(x, output_shape)\n\n\ndef interpolate(\n    input, size=None, scale_factor=None, mode=\"nearest\", align_corners=None\n):\n    if input.numel() > 0:\n        return torch.nn.functional.interpolate(\n            input, size, scale_factor, mode, align_corners\n        )\n\n    def _check_size_scale_factor(dim):\n        if size is None and scale_factor is None:\n            raise ValueError(\"either size or scale_factor should be defined\")\n        if size is not None and scale_factor is not None:\n            raise ValueError(\"only one of size or scale_factor should be defined\")\n        if (\n            scale_factor is not None\n            and isinstance(scale_factor, tuple)\n            and len(scale_factor) != dim\n        ):\n            raise ValueError(\n                \"scale_factor shape must match input shape. \"\n                \"Input is {}D, scale_factor size is {}\".format(dim, len(scale_factor))\n            )\n\n    def _output_size(dim):\n        _check_size_scale_factor(dim)\n        if size is not None:\n            return size\n        scale_factors = _ntuple(dim)(scale_factor)\n        # math.floor might return float in py2.7\n        return [\n            int(math.floor(input.size(i + 2) * scale_factors[i])) for i in range(dim)\n        ]\n\n    output_shape = tuple(_output_size(2))\n    output_shape = input.shape[:-2] + output_shape\n    return _NewEmptyTensorOp.apply(input, output_shape)\n"
  },
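For empty inputs, `Conv2d.forward` above predicts the output spatial size with the standard convolution shape formula instead of running the convolution. The same arithmetic, isolated into a hypothetical helper:

```python
def conv2d_out_dim(size, padding, dilation, kernel, stride):
    # floor((size + 2*padding - (dilation*(kernel-1) + 1)) / stride) + 1,
    # applied per spatial dimension as in Conv2d.forward above.
    return (size + 2 * padding - (dilation * (kernel - 1) + 1)) // stride + 1
```

For example, the classic ResNet stem (7x7 conv, stride 2, padding 3) halves a 224-pixel side to 112.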
  {
    "path": "maskrcnn_benchmark/layers/nms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n# from ._utils import _C\nfrom maskrcnn_benchmark import _C\n\nnms = _C.nms\n# nms.__doc__ = \"\"\"\n# This function performs Non-maximum suppresion\"\"\"\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/roi_align.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\nfrom torch.autograd import Function\nfrom torch.autograd.function import once_differentiable\nfrom torch.nn.modules.utils import _pair\n\nfrom maskrcnn_benchmark import _C\n\n\nclass _ROIAlign(Function):\n    @staticmethod\n    def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):\n        ctx.save_for_backward(roi)\n        ctx.output_size = _pair(output_size)\n        ctx.spatial_scale = spatial_scale\n        ctx.sampling_ratio = sampling_ratio\n        ctx.input_shape = input.size()\n        output = _C.roi_align_forward(\n            input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio\n        )\n        return output\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, grad_output):\n        rois, = ctx.saved_tensors\n        output_size = ctx.output_size\n        spatial_scale = ctx.spatial_scale\n        sampling_ratio = ctx.sampling_ratio\n        bs, ch, h, w = ctx.input_shape\n        grad_input = _C.roi_align_backward(\n            grad_output,\n            rois,\n            spatial_scale,\n            output_size[0],\n            output_size[1],\n            bs,\n            ch,\n            h,\n            w,\n            sampling_ratio,\n        )\n        return grad_input, None, None, None, None\n\n\nroi_align = _ROIAlign.apply\n\n\nclass ROIAlign(nn.Module):\n    def __init__(self, output_size, spatial_scale, sampling_ratio):\n        super(ROIAlign, self).__init__()\n        self.output_size = output_size\n        self.spatial_scale = spatial_scale\n        self.sampling_ratio = sampling_ratio\n\n    def forward(self, input, rois):\n        return roi_align(\n            input, rois, self.output_size, self.spatial_scale, self.sampling_ratio\n        )\n\n    def __repr__(self):\n        tmpstr = self.__class__.__name__ + \"(\"\n        tmpstr += \"output_size=\" + 
str(self.output_size)\n        tmpstr += \", spatial_scale=\" + str(self.spatial_scale)\n        tmpstr += \", sampling_ratio=\" + str(self.sampling_ratio)\n        tmpstr += \")\"\n        return tmpstr\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/roi_pool.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\nfrom torch.autograd import Function\nfrom torch.autograd.function import once_differentiable\nfrom torch.nn.modules.utils import _pair\n\nfrom maskrcnn_benchmark import _C\n\n\nclass _ROIPool(Function):\n    @staticmethod\n    def forward(ctx, input, roi, output_size, spatial_scale):\n        ctx.output_size = _pair(output_size)\n        ctx.spatial_scale = spatial_scale\n        ctx.input_shape = input.size()\n        output, argmax = _C.roi_pool_forward(\n            input, roi, spatial_scale, output_size[0], output_size[1]\n        )\n        ctx.save_for_backward(input, roi, argmax)\n        return output\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, grad_output):\n        input, rois, argmax = ctx.saved_tensors\n        output_size = ctx.output_size\n        spatial_scale = ctx.spatial_scale\n        bs, ch, h, w = ctx.input_shape\n        grad_input = _C.roi_pool_backward(\n            grad_output,\n            input,\n            rois,\n            argmax,\n            spatial_scale,\n            output_size[0],\n            output_size[1],\n            bs,\n            ch,\n            h,\n            w,\n        )\n        return grad_input, None, None, None\n\n\nroi_pool = _ROIPool.apply\n\n\nclass ROIPool(nn.Module):\n    def __init__(self, output_size, spatial_scale):\n        super(ROIPool, self).__init__()\n        self.output_size = output_size\n        self.spatial_scale = spatial_scale\n\n    def forward(self, input, rois):\n        return roi_pool(input, rois, self.output_size, self.spatial_scale)\n\n    def __repr__(self):\n        tmpstr = self.__class__.__name__ + \"(\"\n        tmpstr += \"output_size=\" + str(self.output_size)\n        tmpstr += \", spatial_scale=\" + str(self.spatial_scale)\n        tmpstr += \")\"\n        return tmpstr\n"
  },
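`ROIPool` projects image-space boxes onto the feature map via `spatial_scale` and max-pools each output bin; the actual work happens in the `_C.roi_pool_forward` CUDA/C++ kernel, which also records an argmax map for the backward pass. A toy single-bin, pure-Python illustration of the project-then-max idea (not the real kernel):

```python
def roi_max_pool_1bin(feature, roi, spatial_scale):
    # Scale the image-space (x1, y1, x2, y2) box into feature-map cells,
    # then take the max over the covered cells -- a 1x1-output ROIPool.
    x1, y1, x2, y2 = [int(round(c * spatial_scale)) for c in roi]
    return max(
        feature[y][x]
        for y in range(y1, y2 + 1)
        for x in range(x1, x2 + 1)
    )
```

The real op instead divides the projected box into `output_size[0] x output_size[1]` bins and pools each one.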
  {
    "path": "maskrcnn_benchmark/layers/scale.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass Scale(nn.Module):\n    def __init__(self, init_value=1.0):\n        super(Scale, self).__init__()\n        self.scale = nn.Parameter(torch.FloatTensor([init_value]))\n\n    def forward(self, input):\n        return input * self.scale\n"
  },
  {
    "path": "maskrcnn_benchmark/layers/sigmoid_focal_loss.py",
    "content": "import torch\nfrom torch import nn\nfrom torch.autograd import Function\nfrom torch.autograd.function import once_differentiable\n\nfrom maskrcnn_benchmark import _C\n\n# TODO: Use JIT to replace CUDA implementation in the future.\nclass _SigmoidFocalLoss(Function):\n    @staticmethod\n    def forward(ctx, logits, targets, gamma, alpha):\n        ctx.save_for_backward(logits, targets)\n        num_classes = logits.shape[1]\n        ctx.num_classes = num_classes\n        ctx.gamma = gamma\n        ctx.alpha = alpha\n\n        losses = _C.sigmoid_focalloss_forward(\n            logits, targets, num_classes, gamma, alpha\n        )\n        return losses\n\n    @staticmethod\n    @once_differentiable\n    def backward(ctx, d_loss):\n        logits, targets = ctx.saved_tensors\n        num_classes = ctx.num_classes\n        gamma = ctx.gamma\n        alpha = ctx.alpha\n        d_loss = d_loss.contiguous()\n        d_logits = _C.sigmoid_focalloss_backward(\n            logits, targets, d_loss, num_classes, gamma, alpha\n        )\n        return d_logits, None, None, None, None\n\n\nsigmoid_focal_loss_cuda = _SigmoidFocalLoss.apply\n\n\ndef sigmoid_focal_loss_cpu(logits, targets, gamma, alpha):\n    num_classes = logits.shape[1]\n    gamma = gamma[0]\n    alpha = alpha[0]\n    dtype = targets.dtype\n    device = targets.device\n    class_range = torch.arange(1, num_classes+1, dtype=dtype, device=device).unsqueeze(0)\n\n    t = targets.unsqueeze(1)\n    p = torch.sigmoid(logits)\n    term1 = (1 - p) ** gamma * torch.log(p)\n    term2 = p ** gamma * torch.log(1 - p)\n    return -(t == class_range).float() * term1 * alpha - ((t != class_range) * (t >= 0)).float() * term2 * (1 - alpha)\n\n\nclass SigmoidFocalLoss(nn.Module):\n    def __init__(self, gamma, alpha):\n        super(SigmoidFocalLoss, self).__init__()\n        self.gamma = gamma\n        self.alpha = alpha\n\n    def forward(self, logits, targets):\n        device = logits.device\n        if 
logits.is_cuda:\n            loss_func = sigmoid_focal_loss_cuda\n        else:\n            loss_func = sigmoid_focal_loss_cpu\n\n        loss = loss_func(logits, targets, self.gamma, self.alpha)\n        return loss.sum()\n\n    def __repr__(self):\n        tmpstr = self.__class__.__name__ + \"(\"\n        tmpstr += \"gamma=\" + str(self.gamma)\n        tmpstr += \", alpha=\" + str(self.alpha)\n        tmpstr += \")\"\n        return tmpstr\n"
  },
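Per class, `sigmoid_focal_loss_cpu` above evaluates `-alpha * (1-p)**gamma * log(p)` for the target class and `-(1-alpha) * p**gamma * log(1-p)` for every other class. A scalar sketch of one such term (hypothetical helper name):

```python
import math


def sigmoid_focal_term(logit, is_target, gamma=2.0, alpha=0.25):
    # One per-class focal loss term, matching the term1/term2 split
    # in sigmoid_focal_loss_cpu above.
    p = 1.0 / (1.0 + math.exp(-logit))
    if is_target:
        return -alpha * (1 - p) ** gamma * math.log(p)
    return -(1 - alpha) * p ** gamma * math.log(1 - p)
```

The `(1-p)**gamma` / `p**gamma` modulating factors are what down-weight easy, already well-classified examples.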
  {
    "path": "maskrcnn_benchmark/layers/smooth_l1_loss.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\n\n# TODO maybe push this to nn?\ndef smooth_l1_loss(input, target, beta=1. / 9, size_average=True):\n    \"\"\"\n    very similar to the smooth_l1_loss from pytorch, but with\n    the extra beta parameter\n    \"\"\"\n    n = torch.abs(input - target)\n    cond = n < beta\n    loss = torch.where(cond, 0.5 * n ** 2 / beta, n - 0.5 * beta)\n    if size_average:\n        return loss.mean()\n    return loss.sum()\n"
  },
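`smooth_l1_loss` above is quadratic for `|x| < beta` and linear beyond, with the `0.5 * beta` offset making the two pieces meet continuously at `|x| == beta`. A scalar version of the element-wise rule (hypothetical helper):

```python
def smooth_l1(x, beta=1.0 / 9):
    # Piecewise rule from smooth_l1_loss above: quadratic near zero,
    # linear in the tails, continuous at n == beta.
    n = abs(x)
    if n < beta:
        return 0.5 * n * n / beta
    return n - 0.5 * beta
```

The extra `beta` parameter (absent from the stock PyTorch loss at the time) controls where the quadratic region ends.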
  {
    "path": "maskrcnn_benchmark/modeling/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .backbone import build_backbone\nfrom . import fbnet\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/backbone.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom collections import OrderedDict\n\nfrom torch import nn\n\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.make_layers import conv_with_kaiming_uniform\nfrom . import fpn as fpn_module\nfrom . import resnet\nfrom . import mobilenet\n\n\n@registry.BACKBONES.register(\"R-50-C4\")\n@registry.BACKBONES.register(\"R-50-C5\")\n@registry.BACKBONES.register(\"R-101-C4\")\n@registry.BACKBONES.register(\"R-101-C5\")\ndef build_resnet_backbone(cfg):\n    body = resnet.ResNet(cfg)\n    model = nn.Sequential(OrderedDict([(\"body\", body)]))\n    model.out_channels = cfg.MODEL.RESNETS.BACKBONE_OUT_CHANNELS\n    return model\n\n\n@registry.BACKBONES.register(\"R-50-FPN\")\n@registry.BACKBONES.register(\"R-101-FPN\")\n@registry.BACKBONES.register(\"R-152-FPN\")\ndef build_resnet_fpn_backbone(cfg):\n    body = resnet.ResNet(cfg)\n    in_channels_stage2 = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS\n    out_channels = cfg.MODEL.RESNETS.BACKBONE_OUT_CHANNELS\n    fpn = fpn_module.FPN(\n        in_channels_list=[\n            in_channels_stage2,\n            in_channels_stage2 * 2,\n            in_channels_stage2 * 4,\n            in_channels_stage2 * 8,\n        ],\n        out_channels=out_channels,\n        conv_block=conv_with_kaiming_uniform(\n            cfg.MODEL.FPN.USE_GN, cfg.MODEL.FPN.USE_RELU\n        ),\n        top_blocks=fpn_module.LastLevelMaxPool(),\n    )\n    model = nn.Sequential(OrderedDict([(\"body\", body), (\"fpn\", fpn)]))\n    model.out_channels = out_channels\n    return model\n\n\n@registry.BACKBONES.register(\"R-50-FPN-RETINANET\")\n@registry.BACKBONES.register(\"R-101-FPN-RETINANET\")\ndef build_resnet_fpn_p3p7_backbone(cfg):\n    body = resnet.ResNet(cfg)\n    in_channels_stage2 = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS\n    out_channels = cfg.MODEL.RESNETS.BACKBONE_OUT_CHANNELS\n    in_channels_p6p7 = in_channels_stage2 * 8 if 
cfg.MODEL.RETINANET.USE_C5 \\\n        else out_channels\n    fpn = fpn_module.FPN(\n        in_channels_list=[\n            0,\n            in_channels_stage2 * 2,\n            in_channels_stage2 * 4,\n            in_channels_stage2 * 8,\n        ],\n        out_channels=out_channels,\n        conv_block=conv_with_kaiming_uniform(\n            cfg.MODEL.FPN.USE_GN, cfg.MODEL.FPN.USE_RELU\n        ),\n        top_blocks=fpn_module.LastLevelP6P7(in_channels_p6p7, out_channels),\n    )\n    model = nn.Sequential(OrderedDict([(\"body\", body), (\"fpn\", fpn)]))\n    model.out_channels = out_channels\n    return model\n\n\n@registry.BACKBONES.register(\"MNV2-FPN-RETINANET\")\ndef build_mnv2_fpn_backbone(cfg):\n    body = mobilenet.MobileNetV2(cfg)\n    in_channels_stage2 = body.return_features_num_channels\n    out_channels = cfg.MODEL.RESNETS.BACKBONE_OUT_CHANNELS\n    fpn = fpn_module.FPN(\n        in_channels_list=[\n            0,\n            in_channels_stage2[1],\n            in_channels_stage2[2],\n            in_channels_stage2[3],\n        ],\n        out_channels=out_channels,\n        conv_block=conv_with_kaiming_uniform(\n            cfg.MODEL.FPN.USE_GN, cfg.MODEL.FPN.USE_RELU\n        ),\n        top_blocks=fpn_module.LastLevelP6P7(out_channels, out_channels),\n    )\n    model = nn.Sequential(OrderedDict([(\"body\", body), (\"fpn\", fpn)]))\n    model.out_channels = out_channels\n    return model\n\n\ndef build_backbone(cfg):\n    assert cfg.MODEL.BACKBONE.CONV_BODY in registry.BACKBONES, \\\n        \"cfg.MODEL.BACKBONE.CONV_BODY: {} is not registered in the registry\".format(\n            cfg.MODEL.BACKBONE.CONV_BODY\n        )\n    return registry.BACKBONES[cfg.MODEL.BACKBONE.CONV_BODY](cfg)\n"
  },
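`build_backbone` dispatches on `cfg.MODEL.BACKBONE.CONV_BODY` through `registry.BACKBONES`, which the `@registry.BACKBONES.register(...)` decorators above populate. A minimal sketch of that decorator-registry pattern (the real class lives in `maskrcnn_benchmark.utils.registry`; the names below are illustrative):

```python
class Registry(dict):
    # A dict whose register() method returns a decorator that stores
    # the decorated builder function under the given name.
    def register(self, name):
        def decorator(fn):
            self[name] = fn
            return fn
        return decorator


BACKBONES = Registry()


@BACKBONES.register("toy")
def build_toy_backbone(cfg):
    # Stand-in for a real backbone builder; just echoes its config.
    return ("toy", cfg)
```

Lookup is then plain dict indexing, which is why `build_backbone` can assert membership before calling the builder.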
  {
    "path": "maskrcnn_benchmark/modeling/backbone/fbnet.py",
    "content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport copy\nimport json\nimport logging\nfrom collections import OrderedDict\n\nfrom . import (\n    fbnet_builder as mbuilder,\n    fbnet_modeldef as modeldef,\n)\nimport torch.nn as nn\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.rpn import rpn\nfrom maskrcnn_benchmark.modeling import poolers\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_builder(cfg):\n    bn_type = cfg.MODEL.FBNET.BN_TYPE\n    if bn_type == \"gn\":\n        bn_type = (bn_type, cfg.GROUP_NORM.NUM_GROUPS)\n    factor = cfg.MODEL.FBNET.SCALE_FACTOR\n\n    arch = cfg.MODEL.FBNET.ARCH\n    arch_def = cfg.MODEL.FBNET.ARCH_DEF\n    if len(arch_def) > 0:\n        arch_def = json.loads(arch_def)\n    if arch in modeldef.MODEL_ARCH:\n        if len(arch_def) > 0:\n            assert (\n                arch_def == modeldef.MODEL_ARCH[arch]\n            ), \"Two architectures with the same name {},\\n{},\\n{}\".format(\n                arch, arch_def, modeldef.MODEL_ARCH[arch]\n            )\n        arch_def = modeldef.MODEL_ARCH[arch]\n    else:\n        assert arch_def is not None and len(arch_def) > 0\n    arch_def = mbuilder.unify_arch_def(arch_def)\n\n    rpn_stride = arch_def.get(\"rpn_stride\", None)\n    if rpn_stride is not None:\n        assert (\n            cfg.MODEL.RPN.ANCHOR_STRIDE[0] == rpn_stride\n        ), \"Needs to set cfg.MODEL.RPN.ANCHOR_STRIDE to {}, got {}\".format(\n            rpn_stride, cfg.MODEL.RPN.ANCHOR_STRIDE\n        )\n    width_divisor = cfg.MODEL.FBNET.WIDTH_DIVISOR\n    dw_skip_bn = cfg.MODEL.FBNET.DW_CONV_SKIP_BN\n    dw_skip_relu = cfg.MODEL.FBNET.DW_CONV_SKIP_RELU\n\n    logger.info(\n        \"Building fbnet model with arch {} (without scaling):\\n{}\".format(\n            arch, arch_def\n        )\n    )\n\n    builder = mbuilder.FBNetBuilder(\n        width_ratio=factor,\n        bn_type=bn_type,\n        
width_divisor=width_divisor,\n        dw_skip_bn=dw_skip_bn,\n        dw_skip_relu=dw_skip_relu,\n    )\n\n    return builder, arch_def\n\n\ndef _get_trunk_cfg(arch_def):\n    \"\"\" Get all stages except the last one \"\"\"\n    num_stages = mbuilder.get_num_stages(arch_def)\n    trunk_stages = arch_def.get(\"backbone\", range(num_stages - 1))\n    ret = mbuilder.get_blocks(arch_def, stage_indices=trunk_stages)\n    return ret\n\n\nclass FBNetTrunk(nn.Module):\n    def __init__(\n        self, builder, arch_def, dim_in,\n    ):\n        super(FBNetTrunk, self).__init__()\n        self.first = builder.add_first(arch_def[\"first\"], dim_in=dim_in)\n        trunk_cfg = _get_trunk_cfg(arch_def)\n        self.stages = builder.add_blocks(trunk_cfg[\"stages\"])\n\n    # return features for each stage\n    def forward(self, x):\n        y = self.first(x)\n        y = self.stages(y)\n        ret = [y]\n        return ret\n\n\n@registry.BACKBONES.register(\"FBNet\")\ndef add_conv_body(cfg, dim_in=3):\n    builder, arch_def = create_builder(cfg)\n\n    body = FBNetTrunk(builder, arch_def, dim_in)\n    model = nn.Sequential(OrderedDict([(\"body\", body)]))\n    model.out_channels = builder.last_depth\n\n    return model\n\n\ndef _get_rpn_stage(arch_def, num_blocks):\n    rpn_stage = arch_def.get(\"rpn\")\n    ret = mbuilder.get_blocks(arch_def, stage_indices=rpn_stage)\n    if num_blocks > 0:\n        logger.warning('Use last {} blocks in {} as rpn'.format(num_blocks, ret))\n        block_count = len(ret[\"stages\"])\n        assert num_blocks <= block_count, \"use block {}, block count {}\".format(\n            num_blocks, block_count\n        )\n        blocks = range(block_count - num_blocks, block_count)\n        ret = mbuilder.get_blocks(ret, block_indices=blocks)\n    return ret[\"stages\"]\n\n\nclass FBNetRPNHead(nn.Module):\n    def __init__(\n        self, cfg, in_channels, builder, arch_def,\n    ):\n        super(FBNetRPNHead, self).__init__()\n        assert in_channels == builder.last_depth\n\n        rpn_bn_type = cfg.MODEL.FBNET.RPN_BN_TYPE\n        if len(rpn_bn_type) > 0:\n            builder.bn_type = rpn_bn_type\n\n        use_blocks = cfg.MODEL.FBNET.RPN_HEAD_BLOCKS\n        stages = _get_rpn_stage(arch_def, use_blocks)\n\n        self.head = builder.add_blocks(stages)\n        self.out_channels = builder.last_depth\n\n    def forward(self, x):\n        x = [self.head(y) for y in x]\n        return x\n\n\n@registry.RPN_HEADS.register(\"FBNet.rpn_head\")\ndef add_rpn_head(cfg, in_channels, num_anchors):\n    builder, model_arch = create_builder(cfg)\n    builder.last_depth = in_channels\n\n    assert in_channels == builder.last_depth\n    # builder.name_prefix = \"[rpn]\"\n\n    rpn_feature = FBNetRPNHead(cfg, in_channels, builder, model_arch)\n    rpn_regressor = rpn.RPNHeadConvRegressor(\n        cfg, rpn_feature.out_channels, num_anchors)\n    return nn.Sequential(rpn_feature, rpn_regressor)\n\n\ndef _get_head_stage(arch, head_name, blocks):\n    # use the default name 'head' if the specific name 'head_name' does not exist\n    if head_name not in arch:\n        head_name = \"head\"\n    head_stage = arch.get(head_name)\n    ret = mbuilder.get_blocks(arch, stage_indices=head_stage, block_indices=blocks)\n    return ret[\"stages\"]\n\n\n# name mapping for head names in arch def and cfg\nARCH_CFG_NAME_MAPPING = {\n    \"bbox\": \"ROI_BOX_HEAD\",\n    \"kpts\": \"ROI_KEYPOINT_HEAD\",\n    \"mask\": \"ROI_MASK_HEAD\",\n}\n\n\nclass FBNetROIHead(nn.Module):\n    def __init__(\n        self, cfg, in_channels, builder, arch_def,\n        head_name, use_blocks, stride_init, last_layer_scale,\n    ):\n        super(FBNetROIHead, self).__init__()\n        assert in_channels == builder.last_depth\n        assert isinstance(use_blocks, list)\n\n        head_cfg_name = ARCH_CFG_NAME_MAPPING[head_name]\n        self.pooler = poolers.make_pooler(cfg, head_cfg_name)\n\n        stage = _get_head_stage(arch_def, head_name, 
use_blocks)\n\n        assert stride_init in [0, 1, 2]\n        if stride_init != 0:\n            stage[0][\"block\"][3] = stride_init\n        blocks = builder.add_blocks(stage)\n\n        last_info = copy.deepcopy(arch_def[\"last\"])\n        last_info[1] = last_layer_scale\n        last = builder.add_last(last_info)\n\n        self.head = nn.Sequential(OrderedDict([\n            (\"blocks\", blocks),\n            (\"last\", last)\n        ]))\n\n        self.out_channels = builder.last_depth\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n        x = self.head(x)\n        return x\n\n\n@registry.ROI_BOX_FEATURE_EXTRACTORS.register(\"FBNet.roi_head\")\ndef add_roi_head(cfg, in_channels):\n    builder, model_arch = create_builder(cfg)\n    builder.last_depth = in_channels\n    # builder.name_prefix = \"_[bbox]_\"\n\n    return FBNetROIHead(\n        cfg, in_channels, builder, model_arch,\n        head_name=\"bbox\",\n        use_blocks=cfg.MODEL.FBNET.DET_HEAD_BLOCKS,\n        stride_init=cfg.MODEL.FBNET.DET_HEAD_STRIDE,\n        last_layer_scale=cfg.MODEL.FBNET.DET_HEAD_LAST_SCALE,\n    )\n\n\n@registry.ROI_KEYPOINT_FEATURE_EXTRACTORS.register(\"FBNet.roi_head_keypoints\")\ndef add_roi_head_keypoints(cfg, in_channels):\n    builder, model_arch = create_builder(cfg)\n    builder.last_depth = in_channels\n    # builder.name_prefix = \"_[kpts]_\"\n\n    return FBNetROIHead(\n        cfg, in_channels, builder, model_arch,\n        head_name=\"kpts\",\n        use_blocks=cfg.MODEL.FBNET.KPTS_HEAD_BLOCKS,\n        stride_init=cfg.MODEL.FBNET.KPTS_HEAD_STRIDE,\n        last_layer_scale=cfg.MODEL.FBNET.KPTS_HEAD_LAST_SCALE,\n    )\n\n\n@registry.ROI_MASK_FEATURE_EXTRACTORS.register(\"FBNet.roi_head_mask\")\ndef add_roi_head_mask(cfg, in_channels):\n    builder, model_arch = create_builder(cfg)\n    builder.last_depth = in_channels\n    # builder.name_prefix = \"_[mask]_\"\n\n    return FBNetROIHead(\n        cfg, in_channels, builder, 
model_arch,\n        head_name=\"mask\",\n        use_blocks=cfg.MODEL.FBNET.MASK_HEAD_BLOCKS,\n        stride_init=cfg.MODEL.FBNET.MASK_HEAD_STRIDE,\n        last_layer_scale=cfg.MODEL.FBNET.MASK_HEAD_LAST_SCALE,\n    )\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/fbnet_builder.py",
    "content": "\"\"\"\nFBNet model builder\n\"\"\"\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport copy\nimport logging\nimport math\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom maskrcnn_benchmark.layers import (\n    BatchNorm2d,\n    Conv2d,\n    FrozenBatchNorm2d,\n    interpolate,\n)\nfrom maskrcnn_benchmark.layers.misc import _NewEmptyTensorOp\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef _py2_round(x):\n    return math.floor(x + 0.5) if x >= 0.0 else math.ceil(x - 0.5)\n\n\ndef _get_divisible_by(num, divisible_by, min_val):\n    ret = int(num)\n    if divisible_by > 0 and num % divisible_by != 0:\n        ret = int((_py2_round(num / divisible_by) or min_val) * divisible_by)\n    return ret\n\n\nPRIMITIVES = {\n    \"skip\": lambda C_in, C_out, expansion, stride, **kwargs: Identity(\n        C_in, C_out, stride\n    ),\n    \"ir_k3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, **kwargs\n    ),\n    \"ir_k5\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, kernel=5, **kwargs\n    ),\n    \"ir_k7\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, kernel=7, **kwargs\n    ),\n    \"ir_k1\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, kernel=1, **kwargs\n    ),\n    \"shuffle\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, shuffle_type=\"mid\", pw_group=4, **kwargs\n    ),\n    \"basic_block\": lambda C_in, C_out, expansion, stride, **kwargs: CascadeConv3x3(\n        C_in, C_out, stride\n    ),\n    \"shift_5x5\": lambda C_in, C_out, expansion, stride, **kwargs: ShiftBlock5x5(\n        C_in, C_out, expansion, stride\n    ),\n    # layer search 2\n    \"ir_k3_e1\": lambda C_in, C_out, expansion, stride, 
**kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=3, **kwargs\n    ),\n    \"ir_k3_e3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=3, **kwargs\n    ),\n    \"ir_k3_e6\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=3, **kwargs\n    ),\n    \"ir_k3_s4\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 4, stride, kernel=3, shuffle_type=\"mid\", pw_group=4, **kwargs\n    ),\n    \"ir_k5_e1\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=5, **kwargs\n    ),\n    \"ir_k5_e3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=5, **kwargs\n    ),\n    \"ir_k5_e6\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=5, **kwargs\n    ),\n    \"ir_k5_s4\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 4, stride, kernel=5, shuffle_type=\"mid\", pw_group=4, **kwargs\n    ),\n    # layer search se\n    \"ir_k3_e1_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=3, se=True, **kwargs\n    ),\n    \"ir_k3_e3_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=3, se=True, **kwargs\n    ),\n    \"ir_k3_e6_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=3, se=True, **kwargs\n    ),\n    \"ir_k3_s4_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in,\n        C_out,\n        4,\n        stride,\n        kernel=3,\n        shuffle_type=\"mid\",\n        pw_group=4,\n        se=True,\n        **kwargs\n    ),\n    \"ir_k5_e1_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=5, se=True, **kwargs\n    ),\n    
\"ir_k5_e3_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=5, se=True, **kwargs\n    ),\n    \"ir_k5_e6_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=5, se=True, **kwargs\n    ),\n    \"ir_k5_s4_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in,\n        C_out,\n        4,\n        stride,\n        kernel=5,\n        shuffle_type=\"mid\",\n        pw_group=4,\n        se=True,\n        **kwargs\n    ),\n    # layer search 3 (in addition to layer search 2)\n    \"ir_k3_s2\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=3, shuffle_type=\"mid\", pw_group=2, **kwargs\n    ),\n    \"ir_k5_s2\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=5, shuffle_type=\"mid\", pw_group=2, **kwargs\n    ),\n    \"ir_k3_s2_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in,\n        C_out,\n        1,\n        stride,\n        kernel=3,\n        shuffle_type=\"mid\",\n        pw_group=2,\n        se=True,\n        **kwargs\n    ),\n    \"ir_k5_s2_se\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in,\n        C_out,\n        1,\n        stride,\n        kernel=5,\n        shuffle_type=\"mid\",\n        pw_group=2,\n        se=True,\n        **kwargs\n    ),\n    # layer search 4 (in addition to layer search 3)\n    \"ir_k3_sep\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, kernel=3, cdw=True, **kwargs\n    ),\n    \"ir_k33_e1\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=3, cdw=True, **kwargs\n    ),\n    \"ir_k33_e3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=3, cdw=True, **kwargs\n    ),\n    \"ir_k33_e6\": lambda 
C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=3, cdw=True, **kwargs\n    ),\n    # layer search 5 (in addition to layer search 4)\n    \"ir_k7_e1\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=7, **kwargs\n    ),\n    \"ir_k7_e3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=7, **kwargs\n    ),\n    \"ir_k7_e6\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=7, **kwargs\n    ),\n    \"ir_k7_sep\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, expansion, stride, kernel=7, cdw=True, **kwargs\n    ),\n    \"ir_k7_sep_e1\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 1, stride, kernel=7, cdw=True, **kwargs\n    ),\n    \"ir_k7_sep_e3\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 3, stride, kernel=7, cdw=True, **kwargs\n    ),\n    \"ir_k7_sep_e6\": lambda C_in, C_out, expansion, stride, **kwargs: IRFBlock(\n        C_in, C_out, 6, stride, kernel=7, cdw=True, **kwargs\n    ),\n}\n\n\nclass Identity(nn.Module):\n    def __init__(self, C_in, C_out, stride):\n        super(Identity, self).__init__()\n        self.conv = (\n            ConvBNRelu(\n                C_in,\n                C_out,\n                kernel=1,\n                stride=stride,\n                pad=0,\n                no_bias=1,\n                use_relu=\"relu\",\n                bn_type=\"bn\",\n            )\n            if C_in != C_out or stride != 1\n            else None\n        )\n\n    def forward(self, x):\n        if self.conv:\n            out = self.conv(x)\n        else:\n            out = x\n        return out\n\n\nclass CascadeConv3x3(nn.Sequential):\n    def __init__(self, C_in, C_out, stride):\n        assert stride in [1, 2]\n        ops = [\n            Conv2d(C_in, C_in, 3, stride, 1, bias=False),\n            BatchNorm2d(C_in),\n            nn.ReLU(inplace=True),\n            Conv2d(C_in, C_out, 3, 1, 1, bias=False),\n            BatchNorm2d(C_out),\n        ]\n        super(CascadeConv3x3, self).__init__(*ops)\n        self.res_connect = (stride == 1) and (C_in == C_out)\n\n    def forward(self, x):\n        y = super(CascadeConv3x3, self).forward(x)\n        if self.res_connect:\n            y += x\n        return y\n\n\nclass Shift(nn.Module):\n    def __init__(self, C, kernel_size, stride, padding):\n        super(Shift, self).__init__()\n        self.C = C\n        kernel = torch.zeros((C, 1, kernel_size, kernel_size), dtype=torch.float32)\n        ch_idx = 0\n\n        assert stride in [1, 2]\n        self.stride = stride\n        self.padding = padding\n        self.kernel_size = kernel_size\n        self.dilation = 1\n\n        hks = kernel_size // 2\n        ksq = kernel_size ** 2\n\n        for i in range(kernel_size):\n            for j in range(kernel_size):\n                if i == hks and j == hks:\n                    num_ch = C // ksq + C % ksq\n                else:\n                    num_ch = C // ksq\n                kernel[ch_idx : ch_idx + num_ch, 0, i, j] = 1\n                ch_idx += num_ch\n\n        self.register_parameter(\"bias\", None)\n        self.kernel = nn.Parameter(kernel, requires_grad=False)\n\n    def forward(self, x):\n        if x.numel() > 0:\n            return nn.functional.conv2d(\n                x,\n                self.kernel,\n                self.bias,\n                (self.stride, self.stride),\n                (self.padding, self.padding),\n                self.dilation,\n                self.C,  # groups\n            )\n\n        # empty-input path: compute the conv output shape analytically;\n        # the padding tuple must use self.padding for both H and W\n        output_shape = [\n            (i + 2 * p - (di * (k - 1) + 1)) // d + 1\n            for i, p, di, k, d in zip(\n                x.shape[-2:],\n                (self.padding, self.padding),\n                (self.dilation, self.dilation),\n     
           (self.kernel_size, self.kernel_size),\n                (self.stride, self.stride),\n            )\n        ]\n        output_shape = [x.shape[0], self.C] + output_shape\n        return _NewEmptyTensorOp.apply(x, output_shape)\n\n\nclass ShiftBlock5x5(nn.Sequential):\n    def __init__(self, C_in, C_out, expansion, stride):\n        assert stride in [1, 2]\n        self.res_connect = (stride == 1) and (C_in == C_out)\n\n        C_mid = _get_divisible_by(C_in * expansion, 8, 8)\n\n        ops = [\n            # pw\n            Conv2d(C_in, C_mid, 1, 1, 0, bias=False),\n            BatchNorm2d(C_mid),\n            nn.ReLU(inplace=True),\n            # shift\n            Shift(C_mid, 5, stride, 2),\n            # pw-linear\n            Conv2d(C_mid, C_out, 1, 1, 0, bias=False),\n            BatchNorm2d(C_out),\n        ]\n        super(ShiftBlock5x5, self).__init__(*ops)\n\n    def forward(self, x):\n        y = super(ShiftBlock5x5, self).forward(x)\n        if self.res_connect:\n            y += x\n        return y\n\n\nclass ChannelShuffle(nn.Module):\n    def __init__(self, groups):\n        super(ChannelShuffle, self).__init__()\n        self.groups = groups\n\n    def forward(self, x):\n        \"\"\"Channel shuffle: [N,C,H,W] -> [N,g,C/g,H,W] -> [N,C/g,g,H,w] -> [N,C,H,W]\"\"\"\n        N, C, H, W = x.size()\n        g = self.groups\n        assert C % g == 0, \"Incompatible group size {} for input channel {}\".format(\n            g, C\n        )\n        return (\n            x.view(N, g, int(C / g), H, W)\n            .permute(0, 2, 1, 3, 4)\n            .contiguous()\n            .view(N, C, H, W)\n        )\n\n\nclass ConvBNRelu(nn.Sequential):\n    def __init__(\n        self,\n        input_depth,\n        output_depth,\n        kernel,\n        stride,\n        pad,\n        no_bias,\n        use_relu,\n        bn_type,\n        group=1,\n        *args,\n        **kwargs\n    ):\n        super(ConvBNRelu, self).__init__()\n\n        assert 
use_relu in [\"relu\", None]\n        if isinstance(bn_type, (list, tuple)):\n            assert len(bn_type) == 2\n            assert bn_type[0] == \"gn\"\n            gn_group = bn_type[1]\n            bn_type = bn_type[0]\n        assert bn_type in [\"bn\", \"af\", \"gn\", None]\n        assert stride in [1, 2, 4]\n\n        op = Conv2d(\n            input_depth,\n            output_depth,\n            kernel_size=kernel,\n            stride=stride,\n            padding=pad,\n            bias=not no_bias,\n            groups=group,\n            *args,\n            **kwargs\n        )\n        nn.init.kaiming_normal_(op.weight, mode=\"fan_out\", nonlinearity=\"relu\")\n        if op.bias is not None:\n            nn.init.constant_(op.bias, 0.0)\n        self.add_module(\"conv\", op)\n\n        if bn_type == \"bn\":\n            bn_op = BatchNorm2d(output_depth)\n        elif bn_type == \"gn\":\n            bn_op = nn.GroupNorm(num_groups=gn_group, num_channels=output_depth)\n        elif bn_type == \"af\":\n            bn_op = FrozenBatchNorm2d(output_depth)\n        if bn_type is not None:\n            self.add_module(\"bn\", bn_op)\n\n        if use_relu == \"relu\":\n            self.add_module(\"relu\", nn.ReLU(inplace=True))\n\n\nclass SEModule(nn.Module):\n    reduction = 4\n\n    def __init__(self, C):\n        super(SEModule, self).__init__()\n        mid = max(C // self.reduction, 8)\n        conv1 = Conv2d(C, mid, 1, 1, 0)\n        conv2 = Conv2d(mid, C, 1, 1, 0)\n\n        self.op = nn.Sequential(\n            nn.AdaptiveAvgPool2d(1), conv1, nn.ReLU(inplace=True), conv2, nn.Sigmoid()\n        )\n\n    def forward(self, x):\n        return x * self.op(x)\n\n\nclass Upsample(nn.Module):\n    def __init__(self, scale_factor, mode, align_corners=None):\n        super(Upsample, self).__init__()\n        self.scale = scale_factor\n        self.mode = mode\n        self.align_corners = align_corners\n\n    def forward(self, x):\n        return interpolate(\n  
          x, scale_factor=self.scale, mode=self.mode,\n            align_corners=self.align_corners\n        )\n\n\ndef _get_upsample_op(stride):\n    assert (\n        stride in [1, 2, 4]\n        or stride in [-1, -2, -4]\n        or (isinstance(stride, tuple) and all(x in [-1, -2, -4] for x in stride))\n    )\n\n    scales = stride\n    ret = None\n    if isinstance(stride, tuple) or stride < 0:\n        scales = [-x for x in stride] if isinstance(stride, tuple) else -stride\n        stride = 1\n        ret = Upsample(scale_factor=scales, mode=\"nearest\", align_corners=None)\n\n    return ret, stride\n\n\nclass IRFBlock(nn.Module):\n    def __init__(\n        self,\n        input_depth,\n        output_depth,\n        expansion,\n        stride,\n        bn_type=\"bn\",\n        kernel=3,\n        width_divisor=1,\n        shuffle_type=None,\n        pw_group=1,\n        se=False,\n        cdw=False,\n        dw_skip_bn=False,\n        dw_skip_relu=False,\n    ):\n        super(IRFBlock, self).__init__()\n\n        assert kernel in [1, 3, 5, 7], kernel\n\n        self.use_res_connect = stride == 1 and input_depth == output_depth\n        self.output_depth = output_depth\n\n        mid_depth = int(input_depth * expansion)\n        mid_depth = _get_divisible_by(mid_depth, width_divisor, width_divisor)\n\n        # pw\n        self.pw = ConvBNRelu(\n            input_depth,\n            mid_depth,\n            kernel=1,\n            stride=1,\n            pad=0,\n            no_bias=1,\n            use_relu=\"relu\",\n            bn_type=bn_type,\n            group=pw_group,\n        )\n\n        # negative stride to do upsampling\n        self.upscale, stride = _get_upsample_op(stride)\n\n        # dw\n        if kernel == 1:\n            self.dw = nn.Sequential()\n        elif cdw:\n            dw1 = ConvBNRelu(\n                mid_depth,\n                mid_depth,\n                kernel=kernel,\n                stride=stride,\n                pad=(kernel // 
2),\n                group=mid_depth,\n                no_bias=1,\n                use_relu=\"relu\",\n                bn_type=bn_type,\n            )\n            dw2 = ConvBNRelu(\n                mid_depth,\n                mid_depth,\n                kernel=kernel,\n                stride=1,\n                pad=(kernel // 2),\n                group=mid_depth,\n                no_bias=1,\n                use_relu=\"relu\" if not dw_skip_relu else None,\n                bn_type=bn_type if not dw_skip_bn else None,\n            )\n            self.dw = nn.Sequential(OrderedDict([(\"dw1\", dw1), (\"dw2\", dw2)]))\n        else:\n            self.dw = ConvBNRelu(\n                mid_depth,\n                mid_depth,\n                kernel=kernel,\n                stride=stride,\n                pad=(kernel // 2),\n                group=mid_depth,\n                no_bias=1,\n                use_relu=\"relu\" if not dw_skip_relu else None,\n                bn_type=bn_type if not dw_skip_bn else None,\n            )\n\n        # pw-linear\n        self.pwl = ConvBNRelu(\n            mid_depth,\n            output_depth,\n            kernel=1,\n            stride=1,\n            pad=0,\n            no_bias=1,\n            use_relu=None,\n            bn_type=bn_type,\n            group=pw_group,\n        )\n\n        self.shuffle_type = shuffle_type\n        if shuffle_type is not None:\n            self.shuffle = ChannelShuffle(pw_group)\n\n        self.se4 = SEModule(output_depth) if se else nn.Sequential()\n\n        self.output_depth = output_depth\n\n    def forward(self, x):\n        y = self.pw(x)\n        if self.shuffle_type == \"mid\":\n            y = self.shuffle(y)\n        if self.upscale is not None:\n            y = self.upscale(y)\n        y = self.dw(y)\n        y = self.pwl(y)\n        if self.use_res_connect:\n            y += x\n        y = self.se4(y)\n        return y\n\n\ndef _expand_block_cfg(block_cfg):\n    assert isinstance(block_cfg, 
list)\n    ret = []\n    for idx in range(block_cfg[2]):\n        cur = copy.deepcopy(block_cfg)\n        cur[2] = 1\n        cur[3] = 1 if idx >= 1 else cur[3]\n        ret.append(cur)\n    return ret\n\n\ndef expand_stage_cfg(stage_cfg):\n    \"\"\" For a single stage \"\"\"\n    assert isinstance(stage_cfg, list)\n    ret = []\n    for x in stage_cfg:\n        ret += _expand_block_cfg(x)\n    return ret\n\n\ndef expand_stages_cfg(stage_cfgs):\n    \"\"\" For a list of stages \"\"\"\n    assert isinstance(stage_cfgs, list)\n    ret = []\n    for x in stage_cfgs:\n        ret.append(expand_stage_cfg(x))\n    return ret\n\n\ndef _block_cfgs_to_list(block_cfgs):\n    assert isinstance(block_cfgs, list)\n    ret = []\n    for stage_idx, stage in enumerate(block_cfgs):\n        stage = expand_stage_cfg(stage)\n        for block_idx, block in enumerate(stage):\n            cur = {\"stage_idx\": stage_idx, \"block_idx\": block_idx, \"block\": block}\n            ret.append(cur)\n    return ret\n\n\ndef _add_to_arch(arch, info, name):\n    \"\"\" arch = [{block_0}, {block_1}, ...]\n        info = [\n            # stage 0\n            [\n                block0_info,\n                block1_info,\n                ...\n            ], ...\n        ]\n        convert to:\n        arch = [\n            {\n                block_0,\n                name: block0_info,\n            },\n            {\n                block_1,\n                name: block1_info,\n            }, ...\n        ]\n    \"\"\"\n    assert isinstance(arch, list) and all(isinstance(x, dict) for x in arch)\n    assert isinstance(info, list) and all(isinstance(x, list) for x in info)\n    idx = 0\n    for stage_idx, stage in enumerate(info):\n        for block_idx, block in enumerate(stage):\n            assert (\n                arch[idx][\"stage_idx\"] == stage_idx\n                and arch[idx][\"block_idx\"] == block_idx\n            ), \"Index ({}, {}) does not match for block {}\".format(\n              
  stage_idx, block_idx, arch[idx]\n            )\n            assert name not in arch[idx]\n            arch[idx][name] = block\n            idx += 1\n\n\ndef unify_arch_def(arch_def):\n    \"\"\" unify the arch_def to:\n        {\n            ...,\n            \"arch\": [\n                {\n                    \"stage_idx\": idx,\n                    \"block_idx\": idx,\n                    ...\n                },\n                {}, ...\n            ]\n        }\n    \"\"\"\n    ret = copy.deepcopy(arch_def)\n\n    assert \"block_cfg\" in arch_def and \"stages\" in arch_def[\"block_cfg\"]\n    assert \"stages\" not in ret\n    # copy 'first', 'last' etc. inside arch_def['block_cfg'] to ret\n    ret.update({x: arch_def[\"block_cfg\"][x] for x in arch_def[\"block_cfg\"]})\n    ret[\"stages\"] = _block_cfgs_to_list(arch_def[\"block_cfg\"][\"stages\"])\n    del ret[\"block_cfg\"]\n\n    assert \"block_op_type\" in arch_def\n    _add_to_arch(ret[\"stages\"], arch_def[\"block_op_type\"], \"block_op_type\")\n    del ret[\"block_op_type\"]\n\n    return ret\n\n\ndef get_num_stages(arch_def):\n    ret = 0\n    for x in arch_def[\"stages\"]:\n        ret = max(x[\"stage_idx\"], ret)\n    ret = ret + 1\n    return ret\n\n\ndef get_blocks(arch_def, stage_indices=None, block_indices=None):\n    ret = copy.deepcopy(arch_def)\n    ret[\"stages\"] = []\n    for block in arch_def[\"stages\"]:\n        keep = True\n        if stage_indices not in (None, []) and block[\"stage_idx\"] not in stage_indices:\n            keep = False\n        if block_indices not in (None, []) and block[\"block_idx\"] not in block_indices:\n            keep = False\n        if keep:\n            ret[\"stages\"].append(block)\n    return ret\n\n\nclass FBNetBuilder(object):\n    def __init__(\n        self,\n        width_ratio,\n        bn_type=\"bn\",\n        width_divisor=1,\n        dw_skip_bn=False,\n        dw_skip_relu=False,\n    ):\n        self.width_ratio = width_ratio\n        
self.last_depth = -1\n        self.bn_type = bn_type\n        self.width_divisor = width_divisor\n        self.dw_skip_bn = dw_skip_bn\n        self.dw_skip_relu = dw_skip_relu\n\n    def add_first(self, stage_info, dim_in=3, pad=True):\n        # stage_info: [c, s, kernel]\n        assert len(stage_info) >= 2\n        channel = stage_info[0]\n        stride = stage_info[1]\n        out_depth = self._get_divisible_width(int(channel * self.width_ratio))\n        kernel = 3\n        if len(stage_info) > 2:\n            kernel = stage_info[2]\n\n        out = ConvBNRelu(\n            dim_in,\n            out_depth,\n            kernel=kernel,\n            stride=stride,\n            pad=kernel // 2 if pad else 0,\n            no_bias=1,\n            use_relu=\"relu\",\n            bn_type=self.bn_type,\n        )\n        self.last_depth = out_depth\n        return out\n\n    def add_blocks(self, blocks):\n        \"\"\" blocks: [{}, {}, ...]\n        \"\"\"\n        assert isinstance(blocks, list) and all(\n            isinstance(x, dict) for x in blocks\n        ), blocks\n\n        modules = OrderedDict()\n        for block in blocks:\n            stage_idx = block[\"stage_idx\"]\n            block_idx = block[\"block_idx\"]\n            block_op_type = block[\"block_op_type\"]\n            tcns = block[\"block\"]\n            n = tcns[2]\n            assert n == 1\n            nnblock = self.add_ir_block(tcns, [block_op_type])\n            nn_name = \"xif{}_{}\".format(stage_idx, block_idx)\n            assert nn_name not in modules\n            modules[nn_name] = nnblock\n        ret = nn.Sequential(modules)\n        return ret\n\n    def add_last(self, stage_info):\n        \"\"\" skip last layer if channel_scale == 0\n            use the same output channel if channel_scale < 0\n        \"\"\"\n        assert len(stage_info) == 2\n        channels = stage_info[0]\n        channel_scale = stage_info[1]\n\n        if channel_scale == 0.0:\n            return 
nn.Sequential()\n\n        if channel_scale > 0:\n            last_channel = (\n                int(channels * self.width_ratio) if self.width_ratio > 1.0 else channels\n            )\n            last_channel = int(last_channel * channel_scale)\n        else:\n            last_channel = int(self.last_depth * (-channel_scale))\n        last_channel = self._get_divisible_width(last_channel)\n\n        if last_channel == 0:\n            return nn.Sequential()\n\n        dim_in = self.last_depth\n        ret = ConvBNRelu(\n            dim_in,\n            last_channel,\n            kernel=1,\n            stride=1,\n            pad=0,\n            no_bias=1,\n            use_relu=\"relu\",\n            bn_type=self.bn_type,\n        )\n        self.last_depth = last_channel\n        return ret\n\n    # def add_final_pool(self, model, blob_in, kernel_size):\n    #     ret = model.AveragePool(blob_in, \"final_avg\", kernel=kernel_size, stride=1)\n    #     return ret\n\n    def _add_ir_block(\n        self, dim_in, dim_out, stride, expand_ratio, block_op_type, **kwargs\n    ):\n        ret = PRIMITIVES[block_op_type](\n            dim_in,\n            dim_out,\n            expansion=expand_ratio,\n            stride=stride,\n            bn_type=self.bn_type,\n            width_divisor=self.width_divisor,\n            dw_skip_bn=self.dw_skip_bn,\n            dw_skip_relu=self.dw_skip_relu,\n            **kwargs\n        )\n        return ret, ret.output_depth\n\n    def add_ir_block(self, tcns, block_op_types, **kwargs):\n        t, c, n, s = tcns\n        assert n == 1\n        out_depth = self._get_divisible_width(int(c * self.width_ratio))\n        dim_in = self.last_depth\n        op, ret_depth = self._add_ir_block(\n            dim_in,\n            out_depth,\n            stride=s,\n            expand_ratio=t,\n            block_op_type=block_op_types[0],\n            **kwargs\n        )\n        self.last_depth = ret_depth\n        return op\n\n    def 
_get_divisible_width(self, width):\n        ret = _get_divisible_by(int(width), self.width_divisor, self.width_divisor)\n        return ret\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/fbnet_modeldef.py",
    "content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\n\ndef add_archs(archs):\n    global MODEL_ARCH\n    for x in archs:\n        assert x not in MODEL_ARCH, \"Duplicate model name {} already exists\".format(x)\n        MODEL_ARCH[x] = archs[x]\n\n\nMODEL_ARCH = {\n    \"default\": {\n        \"block_op_type\": [\n            # stage 0\n            [\"ir_k3\"],\n            # stage 1\n            [\"ir_k3\"] * 2,\n            # stage 2\n            [\"ir_k3\"] * 3,\n            # stage 3\n            [\"ir_k3\"] * 7,\n            # stage 4, bbox head\n            [\"ir_k3\"] * 4,\n            # stage 5, rpn\n            [\"ir_k3\"] * 3,\n            # stage 6, mask head\n            [\"ir_k3\"] * 5,\n        ],\n        \"block_cfg\": {\n            \"first\": [32, 2],\n            \"stages\": [\n                # [t, c, n, s]\n                # stage 0\n                [[1, 16, 1, 1]],\n                # stage 1\n                [[6, 24, 2, 2]],\n                # stage 2\n                [[6, 32, 3, 2]],\n                # stage 3\n                [[6, 64, 4, 2], [6, 96, 3, 1]],\n                # stage 4, bbox head\n                [[4, 160, 1, 2], [6, 160, 2, 1], [6, 240, 1, 1]],\n                # [[6, 160, 3, 2], [6, 320, 1, 1]],\n                # stage 5, rpn head\n                [[6, 96, 3, 1]],\n                # stage 6, mask head\n                [[4, 160, 1, 1], [6, 160, 3, 1], [3, 80, 1, -2]],\n            ],\n            # [c, channel_scale]\n            \"last\": [0, 0.0],\n            \"backbone\": [0, 1, 2, 3],\n            \"rpn\": [5],\n            \"bbox\": [4],\n            \"mask\": [6],\n        },\n    },\n    \"xirb16d_dsmask\": {\n        \"block_op_type\": [\n            # stage 0\n            [\"ir_k3\"],\n            # stage 1\n            [\"ir_k3\"] * 2,\n            # stage 2\n            [\"ir_k3\"] * 3,\n            # stage 3\n            [\"ir_k3\"] * 7,\n            # stage 4, bbox 
head\n            [\"ir_k3\"] * 4,\n            # stage 5, mask head\n            [\"ir_k3\"] * 5,\n            # stage 6, rpn\n            [\"ir_k3\"] * 3,\n        ],\n        \"block_cfg\": {\n            \"first\": [16, 2],\n            \"stages\": [\n                # [t, c, n, s]\n                # stage 0\n                [[1, 16, 1, 1]],\n                # stage 1\n                [[6, 32, 2, 2]],\n                # stage 2\n                [[6, 48, 3, 2]],\n                # stage 3\n                [[6, 96, 4, 2], [6, 128, 3, 1]],\n                # stage 4, bbox head\n                [[4, 128, 1, 2], [6, 128, 2, 1], [6, 160, 1, 1]],\n                # stage 5, mask head\n                [[4, 128, 1, 2], [6, 128, 2, 1], [6, 128, 1, -2], [3, 64, 1, -2]],\n                # stage 6, rpn head\n                [[6, 128, 3, 1]],\n            ],\n            # [c, channel_scale]\n            \"last\": [0, 0.0],\n            \"backbone\": [0, 1, 2, 3],\n            \"rpn\": [6],\n            \"bbox\": [4],\n            \"mask\": [5],\n        },\n    },\n    \"mobilenet_v2\": {\n        \"block_op_type\": [\n            # stage 0\n            [\"ir_k3\"],\n            # stage 1\n            [\"ir_k3\"] * 2,\n            # stage 2\n            [\"ir_k3\"] * 3,\n            # stage 3\n            [\"ir_k3\"] * 7,\n            # stage 4\n            [\"ir_k3\"] * 4,\n        ],\n        \"block_cfg\": {\n            \"first\": [32, 2],\n            \"stages\": [\n                # [t, c, n, s]\n                # stage 0\n                [[1, 16, 1, 1]],\n                # stage 1\n                [[6, 24, 2, 2]],\n                # stage 2\n                [[6, 32, 3, 2]],\n                # stage 3\n                [[6, 64, 4, 2], [6, 96, 3, 1]],\n                # stage 4\n                [[6, 160, 3, 1], [6, 320, 1, 1]],\n            ],\n            # [c, channel_scale]\n            \"last\": [0, 0.0],\n            \"backbone\": [0, 1, 2, 3],\n            
\"bbox\": [4],\n        },\n    },\n}\n\n\nMODEL_ARCH_CHAM = {\n    \"cham_v1a\": {\n        \"block_op_type\": [\n            # stage 0\n            [\"ir_k3\"],\n            # stage 1\n            [\"ir_k7\"] * 2,\n            # stage 2\n            [\"ir_k3\"] * 5,\n            # stage 3\n            [\"ir_k5\"] * 7 + [\"ir_k3\"] * 5,\n            # stage 4, bbox head\n            [\"ir_k3\"] * 5,\n            # stage 5, rpn\n            [\"ir_k3\"] * 3,\n        ],\n        \"block_cfg\": {\n            \"first\": [32, 2],\n            \"stages\": [\n                # [t, c, n, s]\n                # stage 0\n                [[1, 24, 1, 1]],\n                # stage 1\n                [[4, 48, 2, 2]],\n                # stage 2\n                [[7, 64, 5, 2]],\n                # stage 3\n                [[12, 56, 7, 2], [8, 88, 5, 1]],\n                # stage 4, bbox head\n                [[7, 152, 4, 2], [10, 104, 1, 1]],\n                # stage 5, rpn head\n                [[8, 88, 3, 1]],\n            ],\n            # [c, channel_scale]\n            \"last\": [0, 0.0],\n            \"backbone\": [0, 1, 2, 3],\n            \"rpn\": [5],\n            \"bbox\": [4],\n        },\n    },\n    \"cham_v2\": {\n        \"block_op_type\": [\n            # stage 0\n            [\"ir_k3\"],\n            # stage 1\n            [\"ir_k5\"] * 4,\n            # stage 2\n            [\"ir_k7\"] * 6,\n            # stage 3\n            [\"ir_k5\"] * 3 + [\"ir_k3\"] * 6,\n            # stage 4, bbox head\n            [\"ir_k3\"] * 7,\n            # stage 5, rpn\n            [\"ir_k3\"] * 1,\n        ],\n        \"block_cfg\": {\n            \"first\": [32, 2],\n            \"stages\": [\n                # [t, c, n, s]\n                # stage 0\n                [[1, 24, 1, 1]],\n                # stage 1\n                [[8, 32, 4, 2]],\n                # stage 2\n                [[5, 48, 6, 2]],\n                # stage 3\n                [[9, 56, 3, 2], [6, 56, 6, 
1]],\n                # stage 4, bbox head\n                [[2, 160, 6, 2], [6, 112, 1, 1]],\n                # stage 5, rpn head\n                [[6, 56, 1, 1]],\n            ],\n            # [c, channel_scale]\n            \"last\": [0, 0.0],\n            \"backbone\": [0, 1, 2, 3],\n            \"rpn\": [5],\n            \"bbox\": [4],\n        },\n    },\n}\nadd_archs(MODEL_ARCH_CHAM)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/fpn.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\n\nclass FPN(nn.Module):\n    \"\"\"\n    Module that adds FPN on top of a list of feature maps.\n    The feature maps are currently supposed to be in increasing depth\n    order, and must be consecutive\n    \"\"\"\n\n    def __init__(\n        self, in_channels_list, out_channels, conv_block, top_blocks=None\n    ):\n        \"\"\"\n        Arguments:\n            in_channels_list (list[int]): number of channels for each feature map that\n                will be fed\n            out_channels (int): number of channels of the FPN representation\n            top_blocks (nn.Module or None): if provided, an extra operation will\n                be performed on the output of the last (smallest resolution)\n                FPN output, and the result will extend the result list\n        \"\"\"\n        super(FPN, self).__init__()\n        self.inner_blocks = []\n        self.layer_blocks = []\n        for idx, in_channels in enumerate(in_channels_list, 1):\n            inner_block = \"fpn_inner{}\".format(idx)\n            layer_block = \"fpn_layer{}\".format(idx)\n\n            if in_channels == 0:\n                continue\n            inner_block_module = conv_block(in_channels, out_channels, 1)\n            layer_block_module = conv_block(out_channels, out_channels, 3, 1)\n            self.add_module(inner_block, inner_block_module)\n            self.add_module(layer_block, layer_block_module)\n            self.inner_blocks.append(inner_block)\n            self.layer_blocks.append(layer_block)\n        self.top_blocks = top_blocks\n\n    def forward(self, x):\n        \"\"\"\n        Arguments:\n            x (list[Tensor]): feature maps for each feature level.\n        Returns:\n            results (tuple[Tensor]): feature maps after FPN layers.\n                They are ordered from highest resolution 
first.\n        \"\"\"\n        last_inner = getattr(self, self.inner_blocks[-1])(x[-1])\n        results = []\n        results.append(getattr(self, self.layer_blocks[-1])(last_inner))\n        for feature, inner_block, layer_block in zip(\n            x[:-1][::-1], self.inner_blocks[:-1][::-1], self.layer_blocks[:-1][::-1]\n        ):\n            if not inner_block:\n                continue\n            inner_top_down = F.interpolate(last_inner, scale_factor=2, mode=\"nearest\")\n            inner_lateral = getattr(self, inner_block)(feature)\n            # TODO use size instead of scale to make it robust to different sizes\n            # inner_top_down = F.upsample(last_inner, size=inner_lateral.shape[-2:],\n            # mode='bilinear', align_corners=False)\n            last_inner = inner_lateral + inner_top_down\n            results.insert(0, getattr(self, layer_block)(last_inner))\n\n        if isinstance(self.top_blocks, LastLevelP6P7):\n            last_results = self.top_blocks(x[-1], results[-1])\n            results.extend(last_results)\n        elif isinstance(self.top_blocks, LastLevelMaxPool):\n            last_results = self.top_blocks(results[-1])\n            results.extend(last_results)\n\n        return tuple(results)\n\n\nclass LastLevelMaxPool(nn.Module):\n    def forward(self, x):\n        return [F.max_pool2d(x, 1, 2, 0)]\n\n\nclass LastLevelP6P7(nn.Module):\n    \"\"\"\n    This module is used in RetinaNet to generate extra layers, P6 and P7.\n    \"\"\"\n    def __init__(self, in_channels, out_channels):\n        super(LastLevelP6P7, self).__init__()\n        self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1)\n        self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1)\n        for module in [self.p6, self.p7]:\n            nn.init.kaiming_uniform_(module.weight, a=1)\n            nn.init.constant_(module.bias, 0)\n        self.use_P5 = in_channels == out_channels\n\n    def forward(self, c5, p5):\n        x = p5 if self.use_P5 
else c5\n        p6 = self.p6(x)\n        p7 = self.p7(F.relu(p6))\n        return [p6, p7]\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/mobilenet.py",
    "content": "# taken from https://github.com/tonylins/pytorch-mobilenet-v2/\n# Published by Ji Lin, tonylins\n# licensed under the  Apache License, Version 2.0, January 2004\n\nfrom torch import nn\nfrom torch.nn import BatchNorm2d\n#from maskrcnn_benchmark.layers import FrozenBatchNorm2d as BatchNorm2d\nfrom maskrcnn_benchmark.layers import Conv2d\n\n\ndef conv_bn(inp, oup, stride):\n    return nn.Sequential(\n        Conv2d(inp, oup, 3, stride, 1, bias=False),\n        BatchNorm2d(oup),\n        nn.ReLU6(inplace=True)\n    )\n\n\ndef conv_1x1_bn(inp, oup):\n    return nn.Sequential(\n        Conv2d(inp, oup, 1, 1, 0, bias=False),\n        BatchNorm2d(oup),\n        nn.ReLU6(inplace=True)\n    )\n\n\nclass InvertedResidual(nn.Module):\n    def __init__(self, inp, oup, stride, expand_ratio):\n        super(InvertedResidual, self).__init__()\n        self.stride = stride\n        assert stride in [1, 2]\n\n        hidden_dim = int(round(inp * expand_ratio))\n        self.use_res_connect = self.stride == 1 and inp == oup\n\n        if expand_ratio == 1:\n            self.conv = nn.Sequential(\n                # dw\n                Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),\n                BatchNorm2d(hidden_dim),\n                nn.ReLU6(inplace=True),\n                # pw-linear\n                Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\n                BatchNorm2d(oup),\n            )\n        else:\n            self.conv = nn.Sequential(\n                # pw\n                Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),\n                BatchNorm2d(hidden_dim),\n                nn.ReLU6(inplace=True),\n                # dw\n                Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),\n                BatchNorm2d(hidden_dim),\n                nn.ReLU6(inplace=True),\n                # pw-linear\n                Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),\n                
BatchNorm2d(oup),\n            )\n\n    def forward(self, x):\n        if self.use_res_connect:\n            return x + self.conv(x)\n        else:\n            return self.conv(x)\n\n\nclass MobileNetV2(nn.Module):\n    \"\"\"\n    Should freeze bn\n    \"\"\"\n    def __init__(self, cfg, n_class=1000, input_size=224, width_mult=1.):\n        super(MobileNetV2, self).__init__()\n        block = InvertedResidual\n        input_channel = 32\n        interverted_residual_setting = [\n            # t, c, n, s\n            [1, 16, 1, 1],\n            [6, 24, 2, 2],\n            [6, 32, 3, 2],\n            [6, 64, 4, 2],\n            [6, 96, 3, 1],\n            [6, 160, 3, 2],\n            [6, 320, 1, 1],\n        ]\n\n        # building first layer\n        assert input_size % 32 == 0\n        input_channel = int(input_channel * width_mult)\n        self.return_features_indices = [3, 6, 13, 17]\n        self.return_features_num_channels = []\n        self.features = nn.ModuleList([conv_bn(3, input_channel, 2)])\n        # building inverted residual blocks\n        for t, c, n, s in interverted_residual_setting:\n            output_channel = int(c * width_mult)\n            for i in range(n):\n                if i == 0:\n                    self.features.append(block(input_channel, output_channel, s, expand_ratio=t))\n                else:\n                    self.features.append(block(input_channel, output_channel, 1, expand_ratio=t))\n                input_channel = output_channel\n                if len(self.features) - 1 in self.return_features_indices:\n                    self.return_features_num_channels.append(output_channel)\n\n        self._initialize_weights()\n        self._freeze_backbone(cfg.MODEL.BACKBONE.FREEZE_CONV_BODY_AT)\n\n    def _freeze_backbone(self, freeze_at):\n        for layer_index in range(freeze_at):\n            for p in self.features[layer_index].parameters():\n                p.requires_grad = False\n\n    def forward(self, x):\n       
 res = []\n        for i, m in enumerate(self.features):\n            x = m(x)\n            if i in self.return_features_indices:\n                res.append(x)\n        return res\n\n    def _initialize_weights(self):\n        for m in self.modules():\n            if isinstance(m, Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, (2. / n) ** 0.5)\n                if m.bias is not None:\n                    m.bias.data.zero_()\n            elif isinstance(m, BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n            elif isinstance(m, nn.Linear):\n                n = m.weight.size(1)\n                m.weight.data.normal_(0, 0.01)\n                m.bias.data.zero_()\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/backbone/resnet.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nVariant of the resnet module that takes cfg as an argument.\nExample usage. Strings may be specified in the config file.\n    model = ResNet(\n        \"StemWithFixedBatchNorm\",\n        \"BottleneckWithFixedBatchNorm\",\n        \"ResNet50StagesTo4\",\n    )\nOR:\n    model = ResNet(\n        \"StemWithGN\",\n        \"BottleneckWithGN\",\n        \"ResNet50StagesTo4\",\n    )\nCustom implementations may be written in user code and hooked in via the\n`register_*` functions.\n\"\"\"\nfrom collections import namedtuple\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom maskrcnn_benchmark.layers import FrozenBatchNorm2d\nfrom maskrcnn_benchmark.layers import Conv2d\nfrom maskrcnn_benchmark.modeling.make_layers import group_norm\nfrom maskrcnn_benchmark.utils.registry import Registry\n\n\n# ResNet stage specification\nStageSpec = namedtuple(\n    \"StageSpec\",\n    [\n        \"index\",  # Index of the stage, eg 1, 2, ..,. 
5\n        \"block_count\",  # Number of residual blocks in the stage\n        \"return_features\",  # True => return the last feature map from this stage\n    ],\n)\n\n# -----------------------------------------------------------------------------\n# Standard ResNet models\n# -----------------------------------------------------------------------------\n# ResNet-50 (including all stages)\nResNet50StagesTo5 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, False), (2, 4, False), (3, 6, False), (4, 3, True))\n)\n# ResNet-50 up to stage 4 (excludes stage 5)\nResNet50StagesTo4 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, False), (2, 4, False), (3, 6, True))\n)\n# ResNet-101 (including all stages)\nResNet101StagesTo5 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, False), (2, 4, False), (3, 23, False), (4, 3, True))\n)\n# ResNet-101 up to stage 4 (excludes stage 5)\nResNet101StagesTo4 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, False), (2, 4, False), (3, 23, True))\n)\n# ResNet-50-FPN (including all stages)\nResNet50FPNStagesTo5 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, True), (2, 4, True), (3, 6, True), (4, 3, True))\n)\n# ResNet-101-FPN (including all stages)\nResNet101FPNStagesTo5 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, True), (2, 4, True), (3, 23, True), (4, 3, True))\n)\n# ResNet-152-FPN (including all stages)\nResNet152FPNStagesTo5 = tuple(\n    StageSpec(index=i, block_count=c, return_features=r)\n    for (i, c, r) in ((1, 3, True), (2, 8, True), (3, 36, True), (4, 3, True))\n)\n\nclass ResNet(nn.Module):\n    def __init__(self, cfg):\n        super(ResNet, self).__init__()\n\n        # If we want to use the cfg in forward(), then we should make a copy\n  
      # of it and store it for later use:\n        # self.cfg = cfg.clone()\n\n        # Translate string names to implementations\n        stem_module = _STEM_MODULES[cfg.MODEL.RESNETS.STEM_FUNC]\n        stage_specs = _STAGE_SPECS[cfg.MODEL.BACKBONE.CONV_BODY]\n        transformation_module = _TRANSFORMATION_MODULES[cfg.MODEL.RESNETS.TRANS_FUNC]\n\n        # Construct the stem module\n        self.stem = stem_module(cfg)\n\n        # Construct the specified ResNet stages\n        num_groups = cfg.MODEL.RESNETS.NUM_GROUPS\n        width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP\n        in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS\n        stage2_bottleneck_channels = num_groups * width_per_group\n        stage2_out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS\n        self.stages = []\n        self.return_features = {}\n        for stage_spec in stage_specs:\n            name = \"layer\" + str(stage_spec.index)\n            stage2_relative_factor = 2 ** (stage_spec.index - 1)\n            bottleneck_channels = stage2_bottleneck_channels * stage2_relative_factor\n            out_channels = stage2_out_channels * stage2_relative_factor\n            module = _make_stage(\n                transformation_module,\n                in_channels,\n                bottleneck_channels,\n                out_channels,\n                stage_spec.block_count,\n                num_groups,\n                cfg.MODEL.RESNETS.STRIDE_IN_1X1,\n                first_stride=int(stage_spec.index > 1) + 1,\n            )\n            in_channels = out_channels\n            self.add_module(name, module)\n            self.stages.append(name)\n            self.return_features[name] = stage_spec.return_features\n\n        # Optionally freeze (requires_grad=False) parts of the backbone\n        self._freeze_backbone(cfg.MODEL.BACKBONE.FREEZE_CONV_BODY_AT)\n\n    def _freeze_backbone(self, freeze_at):\n        if freeze_at < 0:\n            return\n        for stage_index in 
range(freeze_at):\n            if stage_index == 0:\n                m = self.stem  # stage 0 is the stem\n            else:\n                m = getattr(self, \"layer\" + str(stage_index))\n            for p in m.parameters():\n                p.requires_grad = False\n\n    def forward(self, x):\n        outputs = []\n        x = self.stem(x)\n        for stage_name in self.stages:\n            x = getattr(self, stage_name)(x)\n            if self.return_features[stage_name]:\n                outputs.append(x)\n        return outputs\n\n\nclass ResNetHead(nn.Module):\n    def __init__(\n        self,\n        block_module,\n        stages,\n        num_groups=1,\n        width_per_group=64,\n        stride_in_1x1=True,\n        stride_init=None,\n        res2_out_channels=256,\n        dilation=1\n    ):\n        super(ResNetHead, self).__init__()\n\n        stage2_relative_factor = 2 ** (stages[0].index - 1)\n        stage2_bottleneck_channels = num_groups * width_per_group\n        out_channels = res2_out_channels * stage2_relative_factor\n        in_channels = out_channels // 2\n        bottleneck_channels = stage2_bottleneck_channels * stage2_relative_factor\n\n        block_module = _TRANSFORMATION_MODULES[block_module]\n\n        self.stages = []\n        stride = stride_init\n        for stage in stages:\n            name = \"layer\" + str(stage.index)\n            if not stride:\n                stride = int(stage.index > 1) + 1\n            module = _make_stage(\n                block_module,\n                in_channels,\n                bottleneck_channels,\n                out_channels,\n                stage.block_count,\n                num_groups,\n                stride_in_1x1,\n                first_stride=stride,\n                dilation=dilation\n            )\n            stride = None\n            self.add_module(name, module)\n            self.stages.append(name)\n        self.out_channels = out_channels\n\n    def forward(self, x):\n        
for stage in self.stages:\n            x = getattr(self, stage)(x)\n        return x\n\n\ndef _make_stage(\n    transformation_module,\n    in_channels,\n    bottleneck_channels,\n    out_channels,\n    block_count,\n    num_groups,\n    stride_in_1x1,\n    first_stride,\n    dilation=1\n):\n    blocks = []\n    stride = first_stride\n    for _ in range(block_count):\n        blocks.append(\n            transformation_module(\n                in_channels,\n                bottleneck_channels,\n                out_channels,\n                num_groups,\n                stride_in_1x1,\n                stride,\n                dilation=dilation\n            )\n        )\n        stride = 1\n        in_channels = out_channels\n    return nn.Sequential(*blocks)\n\n\nclass Bottleneck(nn.Module):\n    def __init__(\n        self,\n        in_channels,\n        bottleneck_channels,\n        out_channels,\n        num_groups,\n        stride_in_1x1,\n        stride,\n        dilation,\n        norm_func\n    ):\n        super(Bottleneck, self).__init__()\n\n        self.downsample = None\n        if in_channels != out_channels:\n            down_stride = stride if dilation == 1 else 1\n            self.downsample = nn.Sequential(\n                Conv2d(\n                    in_channels, out_channels,\n                    kernel_size=1, stride=down_stride, bias=False\n                ),\n                norm_func(out_channels),\n            )\n            for modules in [self.downsample,]:\n                for l in modules.modules():\n                    if isinstance(l, Conv2d):\n                        nn.init.kaiming_uniform_(l.weight, a=1)\n\n        if dilation > 1:\n            stride = 1 # reset to be 1\n\n        # The original MSRA ResNet models have stride in the first 1x1 conv\n        # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have\n        # stride in the 3x3 conv\n        stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, 
stride)\n\n        self.conv1 = Conv2d(\n            in_channels,\n            bottleneck_channels,\n            kernel_size=1,\n            stride=stride_1x1,\n            bias=False,\n        )\n        self.bn1 = norm_func(bottleneck_channels)\n        # TODO: specify init for the above\n\n        self.conv2 = Conv2d(\n            bottleneck_channels,\n            bottleneck_channels,\n            kernel_size=3,\n            stride=stride_3x3,\n            padding=dilation,\n            bias=False,\n            groups=num_groups,\n            dilation=dilation\n        )\n        self.bn2 = norm_func(bottleneck_channels)\n\n        self.conv3 = Conv2d(\n            bottleneck_channels, out_channels, kernel_size=1, bias=False\n        )\n        self.bn3 = norm_func(out_channels)\n\n        for l in [self.conv1, self.conv2, self.conv3,]:\n            nn.init.kaiming_uniform_(l.weight, a=1)\n\n    def forward(self, x):\n        identity = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = F.relu_(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = F.relu_(out)\n\n        out0 = self.conv3(out)\n        out = self.bn3(out0)\n\n        if self.downsample is not None:\n            identity = self.downsample(x)\n\n        out += identity\n        out = F.relu_(out)\n\n        return out\n\n\nclass BaseStem(nn.Module):\n    def __init__(self, cfg, norm_func):\n        super(BaseStem, self).__init__()\n\n        out_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS\n\n        self.conv1 = Conv2d(\n            3, out_channels, kernel_size=7, stride=2, padding=3, bias=False\n        )\n        self.bn1 = norm_func(out_channels)\n\n        for l in [self.conv1,]:\n            nn.init.kaiming_uniform_(l.weight, a=1)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = F.relu_(x)\n        x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)\n        return x\n\n\nclass 
BottleneckWithFixedBatchNorm(Bottleneck):\n    def __init__(\n        self,\n        in_channels,\n        bottleneck_channels,\n        out_channels,\n        num_groups=1,\n        stride_in_1x1=True,\n        stride=1,\n        dilation=1\n    ):\n        super(BottleneckWithFixedBatchNorm, self).__init__(\n            in_channels=in_channels,\n            bottleneck_channels=bottleneck_channels,\n            out_channels=out_channels,\n            num_groups=num_groups,\n            stride_in_1x1=stride_in_1x1,\n            stride=stride,\n            dilation=dilation,\n            norm_func=FrozenBatchNorm2d\n        )\n\n\nclass StemWithFixedBatchNorm(BaseStem):\n    def __init__(self, cfg):\n        super(StemWithFixedBatchNorm, self).__init__(\n            cfg, norm_func=FrozenBatchNorm2d\n        )\n\n\nclass BottleneckWithGN(Bottleneck):\n    def __init__(\n        self,\n        in_channels,\n        bottleneck_channels,\n        out_channels,\n        num_groups=1,\n        stride_in_1x1=True,\n        stride=1,\n        dilation=1\n    ):\n        super(BottleneckWithGN, self).__init__(\n            in_channels=in_channels,\n            bottleneck_channels=bottleneck_channels,\n            out_channels=out_channels,\n            num_groups=num_groups,\n            stride_in_1x1=stride_in_1x1,\n            stride=stride,\n            dilation=dilation,\n            norm_func=group_norm\n        )\n\n\nclass StemWithGN(BaseStem):\n    def __init__(self, cfg):\n        super(StemWithGN, self).__init__(cfg, norm_func=group_norm)\n\n\n_TRANSFORMATION_MODULES = Registry({\n    \"BottleneckWithFixedBatchNorm\": BottleneckWithFixedBatchNorm,\n    \"BottleneckWithGN\": BottleneckWithGN,\n})\n\n_STEM_MODULES = Registry({\n    \"StemWithFixedBatchNorm\": StemWithFixedBatchNorm,\n    \"StemWithGN\": StemWithGN,\n})\n\n_STAGE_SPECS = Registry({\n    \"R-50-C4\": ResNet50StagesTo4,\n    \"R-50-C5\": ResNet50StagesTo5,\n    \"R-101-C4\": ResNet101StagesTo4,\n    
\"R-101-C5\": ResNet101StagesTo5,\n    \"R-50-FPN\": ResNet50FPNStagesTo5,\n    \"R-50-FPN-RETINANET\": ResNet50FPNStagesTo5,\n    \"R-101-FPN\": ResNet101FPNStagesTo5,\n    \"R-101-FPN-RETINANET\": ResNet101FPNStagesTo5,\n    \"R-152-FPN\": ResNet152FPNStagesTo5,\n})\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/balanced_positive_negative_sampler.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\n\nclass BalancedPositiveNegativeSampler(object):\n    \"\"\"\n    This class samples batches, ensuring that they contain a fixed proportion of positives\n    \"\"\"\n\n    def __init__(self, batch_size_per_image, positive_fraction):\n        \"\"\"\n        Arguments:\n            batch_size_per_image (int): number of elements to be selected per image\n            positive_fraction (float): percentace of positive elements per batch\n        \"\"\"\n        self.batch_size_per_image = batch_size_per_image\n        self.positive_fraction = positive_fraction\n\n    def __call__(self, matched_idxs):\n        \"\"\"\n        Arguments:\n            matched idxs: list of tensors containing -1, 0 or positive values.\n                Each tensor corresponds to a specific image.\n                -1 values are ignored, 0 are considered as negatives and > 0 as\n                positives.\n\n        Returns:\n            pos_idx (list[tensor])\n            neg_idx (list[tensor])\n\n        Returns two lists of binary masks for each image.\n        The first list contains the positive elements that were selected,\n        and the second list the negative example.\n        \"\"\"\n        pos_idx = []\n        neg_idx = []\n        for matched_idxs_per_image in matched_idxs:\n            positive = torch.nonzero(matched_idxs_per_image >= 1).squeeze(1)\n            negative = torch.nonzero(matched_idxs_per_image == 0).squeeze(1)\n\n            num_pos = int(self.batch_size_per_image * self.positive_fraction)\n            # protect against not enough positive examples\n            num_pos = min(positive.numel(), num_pos)\n            num_neg = self.batch_size_per_image - num_pos\n            # protect against not enough negative examples\n            num_neg = min(negative.numel(), num_neg)\n\n            # randomly select positive and negative examples\n            perm1 = 
torch.randperm(positive.numel(), device=positive.device)[:num_pos]\n            perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg]\n\n            pos_idx_per_image = positive[perm1]\n            neg_idx_per_image = negative[perm2]\n\n            # create binary mask from indices\n            pos_idx_per_image_mask = torch.zeros_like(\n                matched_idxs_per_image, dtype=torch.uint8\n            )\n            neg_idx_per_image_mask = torch.zeros_like(\n                matched_idxs_per_image, dtype=torch.uint8\n            )\n            pos_idx_per_image_mask[pos_idx_per_image] = 1\n            neg_idx_per_image_mask[neg_idx_per_image] = 1\n\n            pos_idx.append(pos_idx_per_image_mask)\n            neg_idx.append(neg_idx_per_image_mask)\n\n        return pos_idx, neg_idx\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/box_coder.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport math\n\nimport torch\n\n\nclass BoxCoder(object):\n    \"\"\"\n    This class encodes and decodes a set of bounding boxes into\n    the representation used for training the regressors.\n    \"\"\"\n\n    def __init__(self, weights, bbox_xform_clip=math.log(1000. / 16)):\n        \"\"\"\n        Arguments:\n            weights (4-element tuple)\n            bbox_xform_clip (float)\n        \"\"\"\n        self.weights = weights\n        self.bbox_xform_clip = bbox_xform_clip\n\n    def encode(self, reference_boxes, proposals):\n        \"\"\"\n        Encode a set of proposals with respect to some\n        reference boxes\n\n        Arguments:\n            reference_boxes (Tensor): reference boxes\n            proposals (Tensor): boxes to be encoded\n        \"\"\"\n\n        TO_REMOVE = 1  # TODO remove\n        ex_widths = proposals[:, 2] - proposals[:, 0] + TO_REMOVE\n        ex_heights = proposals[:, 3] - proposals[:, 1] + TO_REMOVE\n        ex_ctr_x = proposals[:, 0] + 0.5 * ex_widths\n        ex_ctr_y = proposals[:, 1] + 0.5 * ex_heights\n\n        gt_widths = reference_boxes[:, 2] - reference_boxes[:, 0] + TO_REMOVE\n        gt_heights = reference_boxes[:, 3] - reference_boxes[:, 1] + TO_REMOVE\n        gt_ctr_x = reference_boxes[:, 0] + 0.5 * gt_widths\n        gt_ctr_y = reference_boxes[:, 1] + 0.5 * gt_heights\n\n        wx, wy, ww, wh = self.weights\n        targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths\n        targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights\n        targets_dw = ww * torch.log(gt_widths / ex_widths)\n        targets_dh = wh * torch.log(gt_heights / ex_heights)\n\n        targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1)\n        return targets\n\n    def decode(self, rel_codes, boxes):\n        \"\"\"\n        From a set of original boxes and encoded relative box offsets,\n        get the decoded 
boxes.\n\n        Arguments:\n            rel_codes (Tensor): encoded boxes\n            boxes (Tensor): reference boxes.\n        \"\"\"\n\n        boxes = boxes.to(rel_codes.dtype)\n\n        TO_REMOVE = 1  # TODO remove\n        widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE\n        heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE\n        ctr_x = boxes[:, 0] + 0.5 * widths\n        ctr_y = boxes[:, 1] + 0.5 * heights\n\n        wx, wy, ww, wh = self.weights\n        dx = rel_codes[:, 0::4] / wx\n        dy = rel_codes[:, 1::4] / wy\n        dw = rel_codes[:, 2::4] / ww\n        dh = rel_codes[:, 3::4] / wh\n\n        # Prevent sending too large values into torch.exp()\n        dw = torch.clamp(dw, max=self.bbox_xform_clip)\n        dh = torch.clamp(dh, max=self.bbox_xform_clip)\n\n        pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]\n        pred_ctr_y = dy * heights[:, None] + ctr_y[:, None]\n        pred_w = torch.exp(dw) * widths[:, None]\n        pred_h = torch.exp(dh) * heights[:, None]\n\n        pred_boxes = torch.zeros_like(rel_codes)\n        # x1\n        pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w\n        # y1\n        pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * pred_h\n        # x2 (note: \"- 1\" is correct; don't be fooled by the asymmetry)\n        pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * pred_w - 1\n        # y2 (note: \"- 1\" is correct; don't be fooled by the asymmetry)\n        pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * pred_h - 1\n\n        return pred_boxes\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/detector/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .detectors import build_detection_model\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/detector/detectors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .generalized_rcnn import GeneralizedRCNN\n\n\n_DETECTION_META_ARCHITECTURES = {\"GeneralizedRCNN\": GeneralizedRCNN}\n\n\ndef build_detection_model(cfg):\n    meta_arch = _DETECTION_META_ARCHITECTURES[cfg.MODEL.META_ARCHITECTURE]\n    return meta_arch(cfg)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/detector/generalized_rcnn.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nImplements the Generalized R-CNN framework\n\"\"\"\n\nimport torch\nfrom torch import nn\n\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\n\nfrom ..backbone import build_backbone\nfrom ..rpn.rpn import build_rpn\nfrom ..roi_heads.roi_heads import build_roi_heads\n\n\nclass GeneralizedRCNN(nn.Module):\n    \"\"\"\n    Main class for Generalized R-CNN. Currently supports boxes and masks.\n    It consists of three main parts:\n    - backbone\n    - rpn\n    - heads: takes the features + the proposals from the RPN and computes\n        detections / masks from it.\n    \"\"\"\n\n    def __init__(self, cfg):\n        super(GeneralizedRCNN, self).__init__()\n\n        self.backbone = build_backbone(cfg)\n        self.rpn = build_rpn(cfg, self.backbone.out_channels)\n        self.roi_heads = build_roi_heads(cfg, self.backbone.out_channels)\n\n    def forward(self, images, targets=None):\n        \"\"\"\n        Arguments:\n            images (list[Tensor] or ImageList): images to be processed\n            targets (list[BoxList]): ground-truth boxes present in the image (optional)\n\n        Returns:\n            result (list[BoxList] or dict[Tensor]): the output from the model.\n                During training, it returns a dict[Tensor] which contains the losses.\n                During testing, it returns list[BoxList] contains additional fields\n                like `scores`, `labels` and `mask` (for Mask R-CNN models).\n\n        \"\"\"\n        if self.training and targets is None:\n            raise ValueError(\"In training mode, targets should be passed\")\n        images = to_image_list(images)\n        features = self.backbone(images.tensors)\n        proposals, proposal_losses = self.rpn(images, features, targets)\n        if self.roi_heads:\n            x, result, detector_losses = self.roi_heads(features, proposals, targets)\n        else:\n      
      # RPN-only models don't have roi_heads\n            x = features\n            result = proposals\n            detector_losses = {}\n\n        if self.training:\n            losses = {}\n            losses.update(detector_losses)\n            losses.update(proposal_losses)\n            return losses\n\n        return result\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/make_layers.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nMiscellaneous utility functions\n\"\"\"\n\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom maskrcnn_benchmark.config import cfg\nfrom maskrcnn_benchmark.layers import Conv2d\nfrom maskrcnn_benchmark.modeling.poolers import Pooler\n\n\ndef get_group_gn(dim, dim_per_gp, num_groups):\n    \"\"\"get number of groups used by GroupNorm, based on number of channels.\"\"\"\n    assert dim_per_gp == -1 or num_groups == -1, \\\n        \"GroupNorm: can only specify G or C/G.\"\n\n    if dim_per_gp > 0:\n        assert dim % dim_per_gp == 0, \\\n            \"dim: {}, dim_per_gp: {}\".format(dim, dim_per_gp)\n        group_gn = dim // dim_per_gp\n    else:\n        assert dim % num_groups == 0, \\\n            \"dim: {}, num_groups: {}\".format(dim, num_groups)\n        group_gn = num_groups\n\n    return group_gn\n\n\ndef group_norm(out_channels, affine=True, divisor=1):\n    out_channels = out_channels // divisor\n    dim_per_gp = cfg.MODEL.GROUP_NORM.DIM_PER_GP // divisor\n    num_groups = cfg.MODEL.GROUP_NORM.NUM_GROUPS // divisor\n    eps = cfg.MODEL.GROUP_NORM.EPSILON # default: 1e-5\n    return torch.nn.GroupNorm(\n        get_group_gn(out_channels, dim_per_gp, num_groups), \n        out_channels, \n        eps, \n        affine\n    )\n\n\ndef make_conv3x3(\n    in_channels, \n    out_channels, \n    dilation=1, \n    stride=1, \n    use_gn=False,\n    use_relu=False,\n    kaiming_init=True\n):\n    conv = Conv2d(\n        in_channels, \n        out_channels, \n        kernel_size=3, \n        stride=stride, \n        padding=dilation, \n        dilation=dilation, \n        bias=False if use_gn else True\n    )\n    if kaiming_init:\n        nn.init.kaiming_normal_(\n            conv.weight, mode=\"fan_out\", nonlinearity=\"relu\"\n        )\n    else:\n        torch.nn.init.normal_(conv.weight, std=0.01)\n    if not use_gn:\n        
nn.init.constant_(conv.bias, 0)\n    module = [conv,]\n    if use_gn:\n        module.append(group_norm(out_channels))\n    if use_relu:\n        module.append(nn.ReLU(inplace=True))\n    if len(module) > 1:\n        return nn.Sequential(*module)\n    return conv\n\n\ndef make_fc(dim_in, hidden_dim, use_gn=False):\n    '''\n        Caffe2 implementation uses XavierFill, which in fact\n        corresponds to kaiming_uniform_ in PyTorch\n    '''\n    if use_gn:\n        fc = nn.Linear(dim_in, hidden_dim, bias=False)\n        nn.init.kaiming_uniform_(fc.weight, a=1)\n        return nn.Sequential(fc, group_norm(hidden_dim))\n    fc = nn.Linear(dim_in, hidden_dim)\n    nn.init.kaiming_uniform_(fc.weight, a=1)\n    nn.init.constant_(fc.bias, 0)\n    return fc\n\n\ndef conv_with_kaiming_uniform(use_gn=False, use_relu=False):\n    def make_conv(\n        in_channels, out_channels, kernel_size, stride=1, dilation=1\n    ):\n        conv = Conv2d(\n            in_channels, \n            out_channels, \n            kernel_size=kernel_size, \n            stride=stride, \n            padding=dilation * (kernel_size - 1) // 2, \n            dilation=dilation, \n            bias=False if use_gn else True\n        )\n        # Caffe2 implementation uses XavierFill, which in fact\n        # corresponds to kaiming_uniform_ in PyTorch\n        nn.init.kaiming_uniform_(conv.weight, a=1)\n        if not use_gn:\n            nn.init.constant_(conv.bias, 0)\n        module = [conv,]\n        if use_gn:\n            module.append(group_norm(out_channels))\n        if use_relu:\n            module.append(nn.ReLU(inplace=True))\n        if len(module) > 1:\n            return nn.Sequential(*module)\n        return conv\n\n    return make_conv\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/matcher.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\n\nclass Matcher(object):\n    \"\"\"\n    This class assigns to each predicted \"element\" (e.g., a box) a ground-truth\n    element. Each predicted element will have exactly zero or one matches; each\n    ground-truth element may be assigned to zero or more predicted elements.\n\n    Matching is based on the MxN match_quality_matrix, that characterizes how well\n    each (ground-truth, predicted)-pair match. For example, if the elements are\n    boxes, the matrix may contain box IoU overlap values.\n\n    The matcher returns a tensor of size N containing the index of the ground-truth\n    element m that matches to prediction n. If there is no match, a negative value\n    is returned.\n    \"\"\"\n\n    BELOW_LOW_THRESHOLD = -1\n    BETWEEN_THRESHOLDS = -2\n\n    def __init__(self, high_threshold, low_threshold, allow_low_quality_matches=False):\n        \"\"\"\n        Args:\n            high_threshold (float): quality values greater than or equal to\n                this value are candidate matches.\n            low_threshold (float): a lower quality threshold used to stratify\n                matches into three levels:\n                1) matches >= high_threshold\n                2) BETWEEN_THRESHOLDS matches in [low_threshold, high_threshold)\n                3) BELOW_LOW_THRESHOLD matches in [0, low_threshold)\n            allow_low_quality_matches (bool): if True, produce additional matches\n                for predictions that have only low-quality match candidates. 
See\n                set_low_quality_matches_ for more details.\n        \"\"\"\n        assert low_threshold <= high_threshold\n        self.high_threshold = high_threshold\n        self.low_threshold = low_threshold\n        self.allow_low_quality_matches = allow_low_quality_matches\n\n    def __call__(self, match_quality_matrix):\n        \"\"\"\n        Args:\n            match_quality_matrix (Tensor[float]): an MxN tensor, containing the\n            pairwise quality between M ground-truth elements and N predicted elements.\n\n        Returns:\n            matches (Tensor[int64]): a tensor of length N, where matches[i] is the\n            index of the matched ground-truth element in [0, M - 1], or a negative\n            value indicating that prediction i could not be matched.\n        \"\"\"\n        if match_quality_matrix.numel() == 0:\n            # empty targets or proposals not supported during training\n            if match_quality_matrix.shape[0] == 0:\n                raise ValueError(\n                    \"No ground-truth boxes available for one of the images \"\n                    \"during training\")\n            else:\n                raise ValueError(\n                    \"No proposal boxes available for one of the images \"\n                    \"during training\")\n\n        # match_quality_matrix is M (gt) x N (predicted)\n        # Max over gt elements (dim 0) to find best gt candidate for each prediction\n        matched_vals, matches = match_quality_matrix.max(dim=0)\n        if self.allow_low_quality_matches:\n            all_matches = matches.clone()\n\n        # Assign candidate matches with low quality to negative (unassigned) values\n        below_low_threshold = matched_vals < self.low_threshold\n        between_thresholds = (matched_vals >= self.low_threshold) & (\n            matched_vals < self.high_threshold\n        )\n        matches[below_low_threshold] = Matcher.BELOW_LOW_THRESHOLD\n        matches[between_thresholds] = Matcher.BETWEEN_THRESHOLDS\n\n        if 
self.allow_low_quality_matches:\n            self.set_low_quality_matches_(matches, all_matches, match_quality_matrix)\n\n        return matches\n\n    def set_low_quality_matches_(self, matches, all_matches, match_quality_matrix):\n        \"\"\"\n        Produce additional matches for predictions that have only low-quality matches.\n        Specifically, for each ground-truth find the set of predictions that have\n        maximum overlap with it (including ties); for each prediction in that set, if\n        it is unmatched, then match it to the ground-truth with which it has the highest\n        quality value.\n        \"\"\"\n        # For each gt, find the prediction with which it has highest quality\n        highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1)\n        # Find highest quality match available, even if it is low, including ties\n        gt_pred_pairs_of_highest_quality = torch.nonzero(\n            match_quality_matrix == highest_quality_foreach_gt[:, None]\n        )\n        # Example gt_pred_pairs_of_highest_quality:\n        #   tensor([[    0, 39796],\n        #           [    1, 32055],\n        #           [    1, 32070],\n        #           [    2, 39190],\n        #           [    2, 40255],\n        #           [    3, 40390],\n        #           [    3, 41455],\n        #           [    4, 45470],\n        #           [    5, 45325],\n        #           [    5, 46390]])\n        # Each row is a (gt index, prediction index)\n        # Note how gt items 1, 2, 3, and 5 each have two ties\n\n        pred_inds_to_update = gt_pred_pairs_of_highest_quality[:, 1]\n        matches[pred_inds_to_update] = all_matches[pred_inds_to_update]\n"
  },
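The thresholding in `Matcher.__call__` above can be checked by hand on a toy IoU matrix. This is a minimal standalone sketch, not the class itself; the sentinel values `-1` and `-2` are assumptions standing in for `Matcher.BELOW_LOW_THRESHOLD` and `Matcher.BETWEEN_THRESHOLDS` (consistent with the comment in `loss.py` that `matched_idxs` can be `-2`).

```python
import torch

# Hypothetical sketch of the Matcher thresholding logic, with assumed
# sentinels BELOW_LOW_THRESHOLD = -1 and BETWEEN_THRESHOLDS = -2.
def match_proposals(match_quality_matrix, high=0.7, low=0.3):
    # best ground-truth candidate for each prediction (max over dim 0)
    matched_vals, matches = match_quality_matrix.max(dim=0)
    matches[matched_vals < low] = -1                              # background
    matches[(matched_vals >= low) & (matched_vals < high)] = -2   # ignored
    return matches

# 2 ground-truth boxes x 3 predictions
iou = torch.tensor([[0.9, 0.4, 0.10],
                    [0.2, 0.5, 0.05]])
print(match_proposals(iou))  # tensor([ 0, -2, -1])
```

Prediction 0 matches gt 0 outright, prediction 1 falls between the thresholds (later discarded by the sampler), and prediction 2 becomes background.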
  {
    "path": "maskrcnn_benchmark/modeling/poolers.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom maskrcnn_benchmark.layers import ROIAlign\n\nfrom .utils import cat\n\n\nclass LevelMapper(object):\n    \"\"\"Determine which FPN level each RoI in a set of RoIs should map to based\n    on the heuristic in the FPN paper.\n    \"\"\"\n\n    def __init__(self, k_min, k_max, canonical_scale=224, canonical_level=4, eps=1e-6):\n        \"\"\"\n        Arguments:\n            k_min (int)\n            k_max (int)\n            canonical_scale (int)\n            canonical_level (int)\n            eps (float)\n        \"\"\"\n        self.k_min = k_min\n        self.k_max = k_max\n        self.s0 = canonical_scale\n        self.lvl0 = canonical_level\n        self.eps = eps\n\n    def __call__(self, boxlists):\n        \"\"\"\n        Arguments:\n            boxlists (list[BoxList])\n        \"\"\"\n        # Compute level ids\n        s = torch.sqrt(cat([boxlist.area() for boxlist in boxlists]))\n\n        # Eqn.(1) in FPN paper\n        target_lvls = torch.floor(self.lvl0 + torch.log2(s / self.s0 + self.eps))\n        target_lvls = torch.clamp(target_lvls, min=self.k_min, max=self.k_max)\n        return target_lvls.to(torch.int64) - self.k_min\n\n\nclass Pooler(nn.Module):\n    \"\"\"\n    Pooler for Detection with or without FPN.\n    It currently hard-code ROIAlign in the implementation,\n    but that can be made more generic later on.\n    Also, the requirement of passing the scales is not strictly necessary, as they\n    can be inferred from the size of the feature map / size of original image,\n    which is available thanks to the BoxList.\n    \"\"\"\n\n    def __init__(self, output_size, scales, sampling_ratio):\n        \"\"\"\n        Arguments:\n            output_size (list[tuple[int]] or list[int]): output size for the pooled region\n            scales (list[float]): scales for each Pooler\n       
     sampling_ratio (int): sampling ratio for ROIAlign\n        \"\"\"\n        super(Pooler, self).__init__()\n        poolers = []\n        for scale in scales:\n            poolers.append(\n                ROIAlign(\n                    output_size, spatial_scale=scale, sampling_ratio=sampling_ratio\n                )\n            )\n        self.poolers = nn.ModuleList(poolers)\n        self.output_size = output_size\n        # get the levels in the feature map by leveraging the fact that the network always\n        # downsamples by a factor of 2 at each level.\n        lvl_min = -torch.log2(torch.tensor(scales[0], dtype=torch.float32)).item()\n        lvl_max = -torch.log2(torch.tensor(scales[-1], dtype=torch.float32)).item()\n        self.map_levels = LevelMapper(lvl_min, lvl_max)\n\n    def convert_to_roi_format(self, boxes):\n        concat_boxes = cat([b.bbox for b in boxes], dim=0)\n        device, dtype = concat_boxes.device, concat_boxes.dtype\n        ids = cat(\n            [\n                torch.full((len(b), 1), i, dtype=dtype, device=device)\n                for i, b in enumerate(boxes)\n            ],\n            dim=0,\n        )\n        rois = torch.cat([ids, concat_boxes], dim=1)\n        return rois\n\n    def forward(self, x, boxes):\n        \"\"\"\n        Arguments:\n            x (list[Tensor]): feature maps for each level\n            boxes (list[BoxList]): boxes to be used to perform the pooling operation.\n        Returns:\n            result (Tensor)\n        \"\"\"\n        num_levels = len(self.poolers)\n        rois = self.convert_to_roi_format(boxes)\n        if num_levels == 1:\n            return self.poolers[0](x[0], rois)\n\n        levels = self.map_levels(boxes)\n\n        num_rois = len(rois)\n        num_channels = x[0].shape[1]\n        output_size = self.output_size[0]\n\n        dtype, device = x[0].dtype, x[0].device\n        result = torch.zeros(\n            (num_rois, num_channels, output_size, output_size),\n   
         dtype=dtype,\n            device=device,\n        )\n        for level, (per_level_feature, pooler) in enumerate(zip(x, self.poolers)):\n            idx_in_level = torch.nonzero(levels == level).squeeze(1)\n            rois_per_level = rois[idx_in_level]\n            result[idx_in_level] = pooler(per_level_feature, rois_per_level)\n\n        return result\n\n\ndef make_pooler(cfg, head_name):\n    resolution = cfg.MODEL[head_name].POOLER_RESOLUTION\n    scales = cfg.MODEL[head_name].POOLER_SCALES\n    sampling_ratio = cfg.MODEL[head_name].POOLER_SAMPLING_RATIO\n    pooler = Pooler(\n        output_size=(resolution, resolution),\n        scales=scales,\n        sampling_ratio=sampling_ratio,\n    )\n    return pooler\n"
  },
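The FPN level assignment in `LevelMapper.__call__` (Eqn. (1) of the FPN paper) is easy to verify numerically. Below is a minimal scalar sketch; the values `k_min=2`, `k_max=5` are assumptions matching the typical FPN setup where pooling happens over P2–P5 (scales 1/4 to 1/32).

```python
import math

# Scalar version of Eqn.(1): k = floor(k0 + log2(sqrt(area) / 224 + eps)),
# clamped to [k_min, k_max], then shifted so the result indexes the pooler list.
def fpn_level(area, k_min=2, k_max=5, canonical_scale=224, canonical_level=4,
              eps=1e-6):
    s = math.sqrt(area)
    k = math.floor(canonical_level + math.log2(s / canonical_scale + eps))
    return min(max(k, k_min), k_max) - k_min

print(fpn_level(224 * 224))  # canonical 224x224 box -> level 4 -> index 2
print(fpn_level(32 * 32))    # small box -> clamped to k_min -> index 0
```

Larger RoIs are routed to coarser (higher) pyramid levels, exactly as `target_lvls.to(torch.int64) - self.k_min` does in batch over all boxes.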
  {
    "path": "maskrcnn_benchmark/modeling/registry.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nfrom maskrcnn_benchmark.utils.registry import Registry\n\nBACKBONES = Registry()\nRPN_HEADS = Registry()\nROI_BOX_FEATURE_EXTRACTORS = Registry()\nROI_BOX_PREDICTOR = Registry()\nROI_KEYPOINT_FEATURE_EXTRACTORS = Registry()\nROI_KEYPOINT_PREDICTOR = Registry()\nROI_MASK_FEATURE_EXTRACTORS = Registry()\nROI_MASK_PREDICTOR = Registry()\n"
  },
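The registries above are consumed by factory functions such as `make_roi_box_predictor`, which look up a class by the string in the config. The `Registry` class itself is defined in `maskrcnn_benchmark/utils/registry.py` (not shown here); the sketch below is a hypothetical stand-in that assumes it behaves like a dict with a decorator-based `register()`.

```python
# Hypothetical minimal stand-in for maskrcnn_benchmark.utils.registry.Registry,
# illustrating the register/lookup pattern used by the factory functions.
class Registry(dict):
    def register(self, name):
        def deco(obj):
            self[name] = obj
            return obj
        return deco

ROI_BOX_PREDICTOR = Registry()

@ROI_BOX_PREDICTOR.register("FastRCNNPredictor")
class FastRCNNPredictor:
    pass

# A config string then resolves to a class, as in make_roi_box_predictor:
print(ROI_BOX_PREDICTOR["FastRCNNPredictor"].__name__)  # FastRCNNPredictor
```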
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/box_head.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\n\nfrom .roi_box_feature_extractors import make_roi_box_feature_extractor\nfrom .roi_box_predictors import make_roi_box_predictor\nfrom .inference import make_roi_box_post_processor\nfrom .loss import make_roi_box_loss_evaluator\n\n\nclass ROIBoxHead(torch.nn.Module):\n    \"\"\"\n    Generic Box Head class.\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(ROIBoxHead, self).__init__()\n        self.feature_extractor = make_roi_box_feature_extractor(cfg, in_channels)\n        self.predictor = make_roi_box_predictor(\n            cfg, self.feature_extractor.out_channels)\n        self.post_processor = make_roi_box_post_processor(cfg)\n        self.loss_evaluator = make_roi_box_loss_evaluator(cfg)\n\n    def forward(self, features, proposals, targets=None):\n        \"\"\"\n        Arguments:\n            features (list[Tensor]): feature-maps from possibly several levels\n            proposals (list[BoxList]): proposal boxes\n            targets (list[BoxList], optional): the ground-truth targets.\n\n        Returns:\n            x (Tensor): the result of the feature extractor\n            proposals (list[BoxList]): during training, the subsampled proposals\n                are returned. During testing, the predicted boxlists are returned\n            losses (dict[Tensor]): During training, returns the losses for the\n                head. During testing, returns an empty dict.\n        \"\"\"\n\n        if self.training:\n            # Faster R-CNN subsamples during training the proposals with a fixed\n            # positive / negative ratio\n            with torch.no_grad():\n                proposals = self.loss_evaluator.subsample(proposals, targets)\n\n        # extract features that will be fed to the final classifier. 
The\n        # feature_extractor generally corresponds to the pooler + heads\n        x = self.feature_extractor(features, proposals)\n        # final classifier that converts the features into predictions\n        class_logits, box_regression = self.predictor(x)\n\n        if not self.training:\n            result = self.post_processor((class_logits, box_regression), proposals)\n            return x, result, {}\n\n        loss_classifier, loss_box_reg = self.loss_evaluator(\n            [class_logits], [box_regression]\n        )\n        return (\n            x,\n            proposals,\n            dict(loss_classifier=loss_classifier, loss_box_reg=loss_box_reg),\n        )\n\n\ndef build_roi_box_head(cfg, in_channels):\n    \"\"\"\n    Constructs a new box head.\n    Uses ROIBoxHead by default; if that is not sufficient, register a new class\n    and select it via the config.\n    \"\"\"\n    return ROIBoxHead(cfg, in_channels)\n"

  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/inference.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\n\n\nclass PostProcessor(nn.Module):\n    \"\"\"\n    From a set of classification scores, box regression and proposals,\n    computes the post-processed boxes, and applies NMS to obtain the\n    final results\n    \"\"\"\n\n    def __init__(\n        self,\n        score_thresh=0.05,\n        nms=0.5,\n        detections_per_img=100,\n        box_coder=None,\n        cls_agnostic_bbox_reg=False\n    ):\n        \"\"\"\n        Arguments:\n            score_thresh (float)\n            nms (float)\n            detections_per_img (int)\n            box_coder (BoxCoder)\n        \"\"\"\n        super(PostProcessor, self).__init__()\n        self.score_thresh = score_thresh\n        self.nms = nms\n        self.detections_per_img = detections_per_img\n        if box_coder is None:\n            box_coder = BoxCoder(weights=(10., 10., 5., 5.))\n        self.box_coder = box_coder\n        self.cls_agnostic_bbox_reg = cls_agnostic_bbox_reg\n\n    def forward(self, x, boxes):\n        \"\"\"\n        Arguments:\n            x (tuple[tensor, tensor]): x contains the class logits\n                and the box_regression from the model.\n            boxes (list[BoxList]): bounding boxes that are used as\n                reference, one for ech image\n\n        Returns:\n            results (list[BoxList]): one BoxList for each image, containing\n                the extra fields labels and scores\n        \"\"\"\n        class_logits, box_regression = x\n        class_prob = F.softmax(class_logits, -1)\n\n        # TODO think about a representation of batch of boxes\n    
    image_shapes = [box.size for box in boxes]\n        boxes_per_image = [len(box) for box in boxes]\n        concat_boxes = torch.cat([a.bbox for a in boxes], dim=0)\n\n        if self.cls_agnostic_bbox_reg:\n            box_regression = box_regression[:, -4:]\n        proposals = self.box_coder.decode(\n            box_regression.view(sum(boxes_per_image), -1), concat_boxes\n        )\n        if self.cls_agnostic_bbox_reg:\n            proposals = proposals.repeat(1, class_prob.shape[1])\n\n        num_classes = class_prob.shape[1]\n\n        proposals = proposals.split(boxes_per_image, dim=0)\n        class_prob = class_prob.split(boxes_per_image, dim=0)\n\n        results = []\n        for prob, boxes_per_img, image_shape in zip(\n            class_prob, proposals, image_shapes\n        ):\n            boxlist = self.prepare_boxlist(boxes_per_img, prob, image_shape)\n            boxlist = boxlist.clip_to_image(remove_empty=False)\n            boxlist = self.filter_results(boxlist, num_classes)\n            results.append(boxlist)\n        return results\n\n    def prepare_boxlist(self, boxes, scores, image_shape):\n        \"\"\"\n        Returns BoxList from `boxes` and adds probability scores information\n        as an extra field\n        `boxes` has shape (#detections, 4 * #classes), where each row represents\n        a list of predicted bounding boxes for each of the object classes in the\n        dataset (including the background class). The detections in each row\n        originate from the same object proposal.\n        `scores` has shape (#detection, #classes), where each row represents a list\n        of object detection confidence scores for each of the object classes in the\n        dataset (including the background class). 
`scores[i, j]`` corresponds to the\n        box at `boxes[i, j * 4:(j + 1) * 4]`.\n        \"\"\"\n        boxes = boxes.reshape(-1, 4)\n        scores = scores.reshape(-1)\n        boxlist = BoxList(boxes, image_shape, mode=\"xyxy\")\n        boxlist.add_field(\"scores\", scores)\n        return boxlist\n\n    def filter_results(self, boxlist, num_classes):\n        \"\"\"Returns bounding-box detection results by thresholding on scores and\n        applying non-maximum suppression (NMS).\n        \"\"\"\n        # unwrap the boxlist to avoid additional overhead.\n        # if we had multi-class NMS, we could perform this directly on the boxlist\n        boxes = boxlist.bbox.reshape(-1, num_classes * 4)\n        scores = boxlist.get_field(\"scores\").reshape(-1, num_classes)\n\n        device = scores.device\n        result = []\n        # Apply threshold on detection probabilities and apply NMS\n        # Skip j = 0, because it's the background class\n        inds_all = scores > self.score_thresh\n        for j in range(1, num_classes):\n            inds = inds_all[:, j].nonzero().squeeze(1)\n            scores_j = scores[inds, j]\n            boxes_j = boxes[inds, j * 4 : (j + 1) * 4]\n            boxlist_for_class = BoxList(boxes_j, boxlist.size, mode=\"xyxy\")\n            boxlist_for_class.add_field(\"scores\", scores_j)\n            boxlist_for_class = boxlist_nms(\n                boxlist_for_class, self.nms\n            )\n            num_labels = len(boxlist_for_class)\n            boxlist_for_class.add_field(\n                \"labels\", torch.full((num_labels,), j, dtype=torch.int64, device=device)\n            )\n            result.append(boxlist_for_class)\n\n        result = cat_boxlist(result)\n        number_of_detections = len(result)\n\n        # Limit to max_per_image detections **over all classes**\n        if number_of_detections > self.detections_per_img > 0:\n            cls_scores = result.get_field(\"scores\")\n            image_thresh, _ = 
torch.kthvalue(\n                cls_scores.cpu(), number_of_detections - self.detections_per_img + 1\n            )\n            keep = cls_scores >= image_thresh.item()\n            keep = torch.nonzero(keep).squeeze(1)\n            result = result[keep]\n        return result\n\n\ndef make_roi_box_post_processor(cfg):\n    bbox_reg_weights = cfg.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS\n    box_coder = BoxCoder(weights=bbox_reg_weights)\n\n    score_thresh = cfg.MODEL.ROI_HEADS.SCORE_THRESH\n    nms_thresh = cfg.MODEL.ROI_HEADS.NMS\n    detections_per_img = cfg.MODEL.ROI_HEADS.DETECTIONS_PER_IMG\n    cls_agnostic_bbox_reg = cfg.MODEL.CLS_AGNOSTIC_BBOX_REG\n\n    postprocessor = PostProcessor(\n        score_thresh,\n        nms_thresh,\n        detections_per_img,\n        box_coder,\n        cls_agnostic_bbox_reg\n    )\n    return postprocessor\n"
  },
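The per-class slicing in `filter_results` relies on the flat layout produced by `prepare_boxlist`: boxes are stored as `(N, num_classes * 4)` and scores as `(N, num_classes)`, so class `j` owns box columns `j*4:(j+1)*4`. A toy sketch (made-up numbers, NMS omitted):

```python
import torch

# Toy illustration of the score-threshold + per-class column slicing
# used in PostProcessor.filter_results.
num_classes = 3
boxes = torch.arange(2 * num_classes * 4, dtype=torch.float32).view(2, -1)
scores = torch.tensor([[0.9, 0.02, 0.80],
                       [0.1, 0.70, 0.03]])
score_thresh = 0.05

for j in range(1, num_classes):  # skip j = 0, the background class
    inds = (scores[:, j] > score_thresh).nonzero().squeeze(1)
    boxes_j = boxes[inds, j * 4:(j + 1) * 4]
    print(j, inds.tolist(), tuple(boxes_j.shape))
```

Only detection 1 survives for class 1 and only detection 0 for class 2; each survivor carries exactly the 4 box coordinates belonging to its class.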
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/loss.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.layers import smooth_l1_loss\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.modeling.balanced_positive_negative_sampler import (\n    BalancedPositiveNegativeSampler\n)\nfrom maskrcnn_benchmark.modeling.utils import cat\n\n\nclass FastRCNNLossComputation(object):\n    \"\"\"\n    Computes the loss for Faster R-CNN.\n    Also supports FPN\n    \"\"\"\n\n    def __init__(\n        self, \n        proposal_matcher, \n        fg_bg_sampler, \n        box_coder, \n        cls_agnostic_bbox_reg=False\n    ):\n        \"\"\"\n        Arguments:\n            proposal_matcher (Matcher)\n            fg_bg_sampler (BalancedPositiveNegativeSampler)\n            box_coder (BoxCoder)\n        \"\"\"\n        self.proposal_matcher = proposal_matcher\n        self.fg_bg_sampler = fg_bg_sampler\n        self.box_coder = box_coder\n        self.cls_agnostic_bbox_reg = cls_agnostic_bbox_reg\n\n    def match_targets_to_proposals(self, proposal, target):\n        match_quality_matrix = boxlist_iou(target, proposal)\n        matched_idxs = self.proposal_matcher(match_quality_matrix)\n        # Fast RCNN only need \"labels\" field for selecting the targets\n        target = target.copy_with_fields(\"labels\")\n        # get the targets corresponding GT for each proposal\n        # NB: need to clamp the indices because we can have a single\n        # GT in the image, and matched_idxs can be -2, which goes\n        # out of bounds\n        matched_targets = target[matched_idxs.clamp(min=0)]\n        matched_targets.add_field(\"matched_idxs\", matched_idxs)\n        return matched_targets\n\n    def prepare_targets(self, proposals, targets):\n        labels = []\n     
   regression_targets = []\n        for proposals_per_image, targets_per_image in zip(proposals, targets):\n            matched_targets = self.match_targets_to_proposals(\n                proposals_per_image, targets_per_image\n            )\n            matched_idxs = matched_targets.get_field(\"matched_idxs\")\n\n            labels_per_image = matched_targets.get_field(\"labels\")\n            labels_per_image = labels_per_image.to(dtype=torch.int64)\n\n            # Label background (below the low threshold)\n            bg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD\n            labels_per_image[bg_inds] = 0\n\n            # Label ignore proposals (between low and high thresholds)\n            ignore_inds = matched_idxs == Matcher.BETWEEN_THRESHOLDS\n            labels_per_image[ignore_inds] = -1  # -1 is ignored by sampler\n\n            # compute regression targets\n            regression_targets_per_image = self.box_coder.encode(\n                matched_targets.bbox, proposals_per_image.bbox\n            )\n\n            labels.append(labels_per_image)\n            regression_targets.append(regression_targets_per_image)\n\n        return labels, regression_targets\n\n    def subsample(self, proposals, targets):\n        \"\"\"\n        This method performs the positive/negative sampling, and return\n        the sampled proposals.\n        Note: this function keeps a state.\n\n        Arguments:\n            proposals (list[BoxList])\n            targets (list[BoxList])\n        \"\"\"\n\n        labels, regression_targets = self.prepare_targets(proposals, targets)\n        sampled_pos_inds, sampled_neg_inds = self.fg_bg_sampler(labels)\n\n        proposals = list(proposals)\n        # add corresponding label and regression_targets information to the bounding boxes\n        for labels_per_image, regression_targets_per_image, proposals_per_image in zip(\n            labels, regression_targets, proposals\n        ):\n            
proposals_per_image.add_field(\"labels\", labels_per_image)\n            proposals_per_image.add_field(\n                \"regression_targets\", regression_targets_per_image\n            )\n\n        # distributed sampled proposals, that were obtained on all feature maps\n        # concatenated via the fg_bg_sampler, into individual feature map levels\n        for img_idx, (pos_inds_img, neg_inds_img) in enumerate(\n            zip(sampled_pos_inds, sampled_neg_inds)\n        ):\n            img_sampled_inds = torch.nonzero(pos_inds_img | neg_inds_img).squeeze(1)\n            proposals_per_image = proposals[img_idx][img_sampled_inds]\n            proposals[img_idx] = proposals_per_image\n\n        self._proposals = proposals\n        return proposals\n\n    def __call__(self, class_logits, box_regression):\n        \"\"\"\n        Computes the loss for Faster R-CNN.\n        This requires that the subsample method has been called beforehand.\n\n        Arguments:\n            class_logits (list[Tensor])\n            box_regression (list[Tensor])\n\n        Returns:\n            classification_loss (Tensor)\n            box_loss (Tensor)\n        \"\"\"\n\n        class_logits = cat(class_logits, dim=0)\n        box_regression = cat(box_regression, dim=0)\n        device = class_logits.device\n\n        if not hasattr(self, \"_proposals\"):\n            raise RuntimeError(\"subsample needs to be called before\")\n\n        proposals = self._proposals\n\n        labels = cat([proposal.get_field(\"labels\") for proposal in proposals], dim=0)\n        regression_targets = cat(\n            [proposal.get_field(\"regression_targets\") for proposal in proposals], dim=0\n        )\n\n        classification_loss = F.cross_entropy(class_logits, labels)\n\n        # get indices that correspond to the regression targets for\n        # the corresponding ground truth labels, to be used with\n        # advanced indexing\n        sampled_pos_inds_subset = torch.nonzero(labels > 
0).squeeze(1)\n        labels_pos = labels[sampled_pos_inds_subset]\n        if self.cls_agnostic_bbox_reg:\n            map_inds = torch.tensor([4, 5, 6, 7], device=device)\n        else:\n            map_inds = 4 * labels_pos[:, None] + torch.tensor(\n                [0, 1, 2, 3], device=device)\n\n        box_loss = smooth_l1_loss(\n            box_regression[sampled_pos_inds_subset[:, None], map_inds],\n            regression_targets[sampled_pos_inds_subset],\n            size_average=False,\n            beta=1,\n        )\n        box_loss = box_loss / labels.numel()\n\n        return classification_loss, box_loss\n\n\ndef make_roi_box_loss_evaluator(cfg):\n    matcher = Matcher(\n        cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD,\n        cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD,\n        allow_low_quality_matches=False,\n    )\n\n    bbox_reg_weights = cfg.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS\n    box_coder = BoxCoder(weights=bbox_reg_weights)\n\n    fg_bg_sampler = BalancedPositiveNegativeSampler(\n        cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION\n    )\n\n    cls_agnostic_bbox_reg = cfg.MODEL.CLS_AGNOSTIC_BBOX_REG\n\n    loss_evaluator = FastRCNNLossComputation(\n        matcher,\n        fg_bg_sampler,\n        box_coder,\n        cls_agnostic_bbox_reg\n    )\n\n    return loss_evaluator\n"
  },
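The advanced-indexing trick in `FastRCNNLossComputation.__call__` deserves a worked example: each row of `box_regression` holds 4 regression values per class, and `4 * label + [0, 1, 2, 3]` selects the columns belonging to that row's ground-truth class. A toy check with made-up labels:

```python
import torch

# Two positive samples with classes 2 and 1; 3 classes x 4 values per row.
labels_pos = torch.tensor([2, 1])
map_inds = 4 * labels_pos[:, None] + torch.tensor([0, 1, 2, 3])
print(map_inds)
# tensor([[ 8,  9, 10, 11],
#         [ 4,  5,  6,  7]])

box_regression = torch.arange(2 * 12, dtype=torch.float32).view(2, 12)
rows = torch.tensor([0, 1])
# Pairing rows[:, None] (shape (2, 1)) with map_inds (shape (2, 4)) broadcasts
# to a (2, 4) gather: the class-specific deltas for each positive sample.
print(box_regression[rows[:, None], map_inds])
```

This is exactly the indexing fed into `smooth_l1_loss` above, with `rows` playing the role of `sampled_pos_inds_subset`.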
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/roi_box_feature_extractors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.backbone import resnet\nfrom maskrcnn_benchmark.modeling.poolers import Pooler\nfrom maskrcnn_benchmark.modeling.make_layers import group_norm\nfrom maskrcnn_benchmark.modeling.make_layers import make_fc\n\n\n@registry.ROI_BOX_FEATURE_EXTRACTORS.register(\"ResNet50Conv5ROIFeatureExtractor\")\nclass ResNet50Conv5ROIFeatureExtractor(nn.Module):\n    def __init__(self, config, in_channels):\n        super(ResNet50Conv5ROIFeatureExtractor, self).__init__()\n\n        resolution = config.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION\n        scales = config.MODEL.ROI_BOX_HEAD.POOLER_SCALES\n        sampling_ratio = config.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO\n        pooler = Pooler(\n            output_size=(resolution, resolution),\n            scales=scales,\n            sampling_ratio=sampling_ratio,\n        )\n\n        stage = resnet.StageSpec(index=4, block_count=3, return_features=False)\n        head = resnet.ResNetHead(\n            block_module=config.MODEL.RESNETS.TRANS_FUNC,\n            stages=(stage,),\n            num_groups=config.MODEL.RESNETS.NUM_GROUPS,\n            width_per_group=config.MODEL.RESNETS.WIDTH_PER_GROUP,\n            stride_in_1x1=config.MODEL.RESNETS.STRIDE_IN_1X1,\n            stride_init=None,\n            res2_out_channels=config.MODEL.RESNETS.RES2_OUT_CHANNELS,\n            dilation=config.MODEL.RESNETS.RES5_DILATION\n        )\n\n        self.pooler = pooler\n        self.head = head\n        self.out_channels = head.out_channels\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n        x = self.head(x)\n        return x\n\n\n@registry.ROI_BOX_FEATURE_EXTRACTORS.register(\"FPN2MLPFeatureExtractor\")\nclass FPN2MLPFeatureExtractor(nn.Module):\n    \"\"\"\n    
Heads for FPN for classification\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(FPN2MLPFeatureExtractor, self).__init__()\n\n        resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION\n        scales = cfg.MODEL.ROI_BOX_HEAD.POOLER_SCALES\n        sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO\n        pooler = Pooler(\n            output_size=(resolution, resolution),\n            scales=scales,\n            sampling_ratio=sampling_ratio,\n        )\n        input_size = in_channels * resolution ** 2\n        representation_size = cfg.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM\n        use_gn = cfg.MODEL.ROI_BOX_HEAD.USE_GN\n        self.pooler = pooler\n        self.fc6 = make_fc(input_size, representation_size, use_gn)\n        self.fc7 = make_fc(representation_size, representation_size, use_gn)\n        self.out_channels = representation_size\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n        x = x.view(x.size(0), -1)\n\n        x = F.relu(self.fc6(x))\n        x = F.relu(self.fc7(x))\n\n        return x\n\n\n@registry.ROI_BOX_FEATURE_EXTRACTORS.register(\"FPNXconv1fcFeatureExtractor\")\nclass FPNXconv1fcFeatureExtractor(nn.Module):\n    \"\"\"\n    Heads for FPN for classification\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(FPNXconv1fcFeatureExtractor, self).__init__()\n\n        resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION\n        scales = cfg.MODEL.ROI_BOX_HEAD.POOLER_SCALES\n        sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO\n        pooler = Pooler(\n            output_size=(resolution, resolution),\n            scales=scales,\n            sampling_ratio=sampling_ratio,\n        )\n        self.pooler = pooler\n\n        use_gn = cfg.MODEL.ROI_BOX_HEAD.USE_GN\n        conv_head_dim = cfg.MODEL.ROI_BOX_HEAD.CONV_HEAD_DIM\n        num_stacked_convs = cfg.MODEL.ROI_BOX_HEAD.NUM_STACKED_CONVS\n        dilation = 
cfg.MODEL.ROI_BOX_HEAD.DILATION\n\n        xconvs = []\n        for ix in range(num_stacked_convs):\n            xconvs.append(\n                nn.Conv2d(\n                    in_channels,\n                    conv_head_dim,\n                    kernel_size=3,\n                    stride=1,\n                    padding=dilation,\n                    dilation=dilation,\n                    bias=not use_gn\n                )\n            )\n            in_channels = conv_head_dim\n            if use_gn:\n                xconvs.append(group_norm(in_channels))\n            xconvs.append(nn.ReLU(inplace=True))\n\n        self.add_module(\"xconvs\", nn.Sequential(*xconvs))\n        for modules in [self.xconvs]:\n            for l in modules.modules():\n                if isinstance(l, nn.Conv2d):\n                    torch.nn.init.normal_(l.weight, std=0.01)\n                    if not use_gn:\n                        torch.nn.init.constant_(l.bias, 0)\n\n        input_size = conv_head_dim * resolution ** 2\n        representation_size = cfg.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM\n        self.fc6 = make_fc(input_size, representation_size, use_gn=False)\n        self.out_channels = representation_size\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n        x = self.xconvs(x)\n        x = x.view(x.size(0), -1)\n        x = F.relu(self.fc6(x))\n        return x\n\n\ndef make_roi_box_feature_extractor(cfg, in_channels):\n    func = registry.ROI_BOX_FEATURE_EXTRACTORS[\n        cfg.MODEL.ROI_BOX_HEAD.FEATURE_EXTRACTOR\n    ]\n    return func(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/box_head/roi_box_predictors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom maskrcnn_benchmark.modeling import registry\nfrom torch import nn\n\n\n@registry.ROI_BOX_PREDICTOR.register(\"FastRCNNPredictor\")\nclass FastRCNNPredictor(nn.Module):\n    def __init__(self, config, in_channels):\n        super(FastRCNNPredictor, self).__init__()\n        assert in_channels is not None\n\n        num_inputs = in_channels\n\n        num_classes = config.MODEL.ROI_BOX_HEAD.NUM_CLASSES\n        self.avgpool = nn.AdaptiveAvgPool2d(1)\n        self.cls_score = nn.Linear(num_inputs, num_classes)\n        num_bbox_reg_classes = 2 if config.MODEL.CLS_AGNOSTIC_BBOX_REG else num_classes\n        self.bbox_pred = nn.Linear(num_inputs, num_bbox_reg_classes * 4)\n\n        nn.init.normal_(self.cls_score.weight, mean=0, std=0.01)\n        nn.init.constant_(self.cls_score.bias, 0)\n\n        nn.init.normal_(self.bbox_pred.weight, mean=0, std=0.001)\n        nn.init.constant_(self.bbox_pred.bias, 0)\n\n    def forward(self, x):\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        cls_logit = self.cls_score(x)\n        bbox_pred = self.bbox_pred(x)\n        return cls_logit, bbox_pred\n\n\n@registry.ROI_BOX_PREDICTOR.register(\"FPNPredictor\")\nclass FPNPredictor(nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(FPNPredictor, self).__init__()\n        num_classes = cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES\n        representation_size = in_channels\n\n        self.cls_score = nn.Linear(representation_size, num_classes)\n        num_bbox_reg_classes = 2 if cfg.MODEL.CLS_AGNOSTIC_BBOX_REG else num_classes\n        self.bbox_pred = nn.Linear(representation_size, num_bbox_reg_classes * 4)\n\n        nn.init.normal_(self.cls_score.weight, std=0.01)\n        nn.init.normal_(self.bbox_pred.weight, std=0.001)\n        for l in [self.cls_score, self.bbox_pred]:\n            nn.init.constant_(l.bias, 0)\n\n    def forward(self, x):\n        if 
x.ndimension() == 4:\n            assert list(x.shape[2:]) == [1, 1]\n            x = x.view(x.size(0), -1)\n        scores = self.cls_score(x)\n        bbox_deltas = self.bbox_pred(x)\n\n        return scores, bbox_deltas\n\n\ndef make_roi_box_predictor(cfg, in_channels):\n    func = registry.ROI_BOX_PREDICTOR[cfg.MODEL.ROI_BOX_HEAD.PREDICTOR]\n    return func(cfg, in_channels)\n"
  },
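The output shapes of these predictors are easy to sanity-check. A minimal sketch, assuming a 1024-d representation and the COCO-style `num_classes = 81` (both assumed values, not taken from this file), mirroring the two `nn.Linear` heads of `FPNPredictor`:

```python
import torch
from torch import nn

# Two linear heads as in FPNPredictor: classification scores and
# per-class box deltas (4 values per class).
representation_size, num_classes = 1024, 81  # assumed typical values
cls_score = nn.Linear(representation_size, num_classes)
bbox_pred = nn.Linear(representation_size, num_classes * 4)

x = torch.randn(8, representation_size)  # 8 pooled RoI feature vectors
print(tuple(cls_score(x).shape), tuple(bbox_pred(x).shape))  # (8, 81) (8, 324)
```

With `CLS_AGNOSTIC_BBOX_REG` enabled, `num_bbox_reg_classes` drops to 2 (background + foreground), so `bbox_pred` would emit only `(8, 8)`.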
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/inference.py",
    "content": "import torch\nfrom torch import nn\n\n\nclass KeypointPostProcessor(nn.Module):\n    def __init__(self, keypointer=None):\n        super(KeypointPostProcessor, self).__init__()\n        self.keypointer = keypointer\n\n    def forward(self, x, boxes):\n        mask_prob = x\n\n        scores = None\n        if self.keypointer:\n            mask_prob, scores = self.keypointer(x, boxes)\n\n        assert len(boxes) == 1, \"Only non-batched inference supported for now\"\n        boxes_per_image = [box.bbox.size(0) for box in boxes]\n        mask_prob = mask_prob.split(boxes_per_image, dim=0)\n        scores = scores.split(boxes_per_image, dim=0)\n\n        results = []\n        for prob, box, score in zip(mask_prob, boxes, scores):\n            bbox = BoxList(box.bbox, box.size, mode=\"xyxy\")\n            for field in box.fields():\n                bbox.add_field(field, box.get_field(field))\n            prob = PersonKeypoints(prob, box.size)\n            prob.add_field(\"logits\", score)\n            bbox.add_field(\"keypoints\", prob)\n            results.append(bbox)\n\n        return results\n\n\n# TODO remove and use only the Keypointer\nimport numpy as np\nimport cv2\n\n\ndef heatmaps_to_keypoints(maps, rois):\n    \"\"\"Extract predicted keypoint locations from heatmaps. Output has shape\n    (#rois, 4, #keypoints) with the 4 rows corresponding to (x, y, logit, prob)\n    for each keypoint.\n    \"\"\"\n    # This function converts a discrete image coordinate in a HEATMAP_SIZE x\n    # HEATMAP_SIZE image to a continuous keypoint coordinate. 
We maintain\n    # consistency with keypoints_to_heatmap_labels by using the conversion from\n    # Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a\n    # continuous coordinate.\n    offset_x = rois[:, 0]\n    offset_y = rois[:, 1]\n\n    widths = rois[:, 2] - rois[:, 0]\n    heights = rois[:, 3] - rois[:, 1]\n    widths = np.maximum(widths, 1)\n    heights = np.maximum(heights, 1)\n    widths_ceil = np.ceil(widths)\n    heights_ceil = np.ceil(heights)\n\n    # NCHW to NHWC for use with OpenCV\n    maps = np.transpose(maps, [0, 2, 3, 1])\n    min_size = 0  # cfg.KRCNN.INFERENCE_MIN_SIZE\n    num_keypoints = maps.shape[3]\n    xy_preds = np.zeros((len(rois), 3, num_keypoints), dtype=np.float32)\n    end_scores = np.zeros((len(rois), num_keypoints), dtype=np.float32)\n    for i in range(len(rois)):\n        if min_size > 0:\n            roi_map_width = int(np.maximum(widths_ceil[i], min_size))\n            roi_map_height = int(np.maximum(heights_ceil[i], min_size))\n        else:\n            # cv2.resize expects integer sizes\n            roi_map_width = int(widths_ceil[i])\n            roi_map_height = int(heights_ceil[i])\n        width_correction = widths[i] / roi_map_width\n        height_correction = heights[i] / roi_map_height\n        roi_map = cv2.resize(\n            maps[i], (roi_map_width, roi_map_height), interpolation=cv2.INTER_CUBIC\n        )\n        # Bring back to CHW\n        roi_map = np.transpose(roi_map, [2, 0, 1])\n        # roi_map_probs = scores_to_probs(roi_map.copy())\n        w = roi_map.shape[2]\n        pos = roi_map.reshape(num_keypoints, -1).argmax(axis=1)\n        x_int = pos % w\n        y_int = (pos - x_int) // w\n        # assert (roi_map_probs[k, y_int, x_int] ==\n        #         roi_map_probs[k, :, :].max())\n        x = (x_int + 0.5) * width_correction\n        y = (y_int + 0.5) * height_correction\n        xy_preds[i, 0, :] = x + offset_x[i]\n        xy_preds[i, 1, :] = y + offset_y[i]\n        xy_preds[i, 2, :] = 1\n        end_scores[i, :] =
roi_map[np.arange(num_keypoints), y_int, x_int]\n\n    return np.transpose(xy_preds, [0, 2, 1]), end_scores\n\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.keypoint import PersonKeypoints\n\n\nclass Keypointer(object):\n    \"\"\"\n    Projects a set of keypoint heatmaps in an image onto the locations\n    specified by the bounding boxes\n    \"\"\"\n\n    def __init__(self, padding=0):\n        self.padding = padding\n\n    def __call__(self, masks, boxes):\n        # TODO do this properly\n        if isinstance(boxes, BoxList):\n            boxes = [boxes]\n        assert len(boxes) == 1\n\n        result, scores = heatmaps_to_keypoints(\n            masks.detach().cpu().numpy(), boxes[0].bbox.cpu().numpy()\n        )\n        return torch.from_numpy(result).to(masks.device), torch.as_tensor(scores, device=masks.device)\n\n\ndef make_roi_keypoint_post_processor(cfg):\n    keypointer = Keypointer()\n    keypoint_post_processor = KeypointPostProcessor(keypointer)\n    return keypoint_post_processor\n"
  },
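The argmax decoding performed per ROI in `heatmaps_to_keypoints` can be sketched in isolation. Below is a minimal NumPy version (the `decode_heatmap` helper is hypothetical and skips the OpenCV resize step), using the same Heckbert `c = d + 0.5` discrete-to-continuous convention:

```python
import numpy as np

def decode_heatmap(heatmap, roi):
    """Find the peak of each KxHxW heatmap and map it back into image
    coordinates; hypothetical simplification of heatmaps_to_keypoints."""
    x0, y0, x1, y1 = roi
    k, h, w = heatmap.shape
    # flattened argmax per keypoint channel, then split into (x, y)
    pos = heatmap.reshape(k, -1).argmax(axis=1)
    x_int = pos % w
    y_int = pos // w
    # cell center (d + 0.5), scaled to the ROI, then offset into the image
    x = (x_int + 0.5) * (x1 - x0) / w + x0
    y = (y_int + 0.5) * (y1 - y0) / h + y0
    return np.stack([x, y], axis=1)

# one keypoint type, 4x4 heatmap, peak at (row 2, col 3), ROI of size 8x8
hm = np.zeros((1, 4, 4), dtype=np.float32)
hm[0, 2, 3] = 1.0
print(decode_heatmap(hm, (0.0, 0.0, 8.0, 8.0)))  # -> [[7. 5.]]
```

Unlike the full implementation, this does not resize the heatmap to the ceiled ROI size first, so the quantization of the predicted location is coarser.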
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/keypoint_head.py",
    "content": "import torch\n\nfrom .roi_keypoint_feature_extractors import make_roi_keypoint_feature_extractor\nfrom .roi_keypoint_predictors import make_roi_keypoint_predictor\nfrom .inference import make_roi_keypoint_post_processor\nfrom .loss import make_roi_keypoint_loss_evaluator\n\n\nclass ROIKeypointHead(torch.nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(ROIKeypointHead, self).__init__()\n        self.cfg = cfg.clone()\n        self.feature_extractor = make_roi_keypoint_feature_extractor(cfg, in_channels)\n        self.predictor = make_roi_keypoint_predictor(\n            cfg, self.feature_extractor.out_channels)\n        self.post_processor = make_roi_keypoint_post_processor(cfg)\n        self.loss_evaluator = make_roi_keypoint_loss_evaluator(cfg)\n\n    def forward(self, features, proposals, targets=None):\n        \"\"\"\n        Arguments:\n            features (list[Tensor]): feature-maps from possibly several levels\n            proposals (list[BoxList]): proposal boxes\n            targets (list[BoxList], optional): the ground-truth targets.\n\n        Returns:\n            x (Tensor): the result of the feature extractor\n            proposals (list[BoxList]): during training, the original proposals\n                are returned. During testing, the predicted boxlists are returned\n                with the `mask` field set\n            losses (dict[Tensor]): During training, returns the losses for the\n                head. 
During testing, returns an empty dict.\n        \"\"\"\n        if self.training:\n            with torch.no_grad():\n                proposals = self.loss_evaluator.subsample(proposals, targets)\n\n        x = self.feature_extractor(features, proposals)\n        kp_logits = self.predictor(x)\n\n        if not self.training:\n            result = self.post_processor(kp_logits, proposals)\n            return x, result, {}\n\n        loss_kp = self.loss_evaluator(proposals, kp_logits)\n\n        return x, proposals, dict(loss_kp=loss_kp)\n\n\ndef build_roi_keypoint_head(cfg, in_channels):\n    return ROIKeypointHead(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/loss.py",
    "content": "import torch\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\n\nfrom maskrcnn_benchmark.modeling.balanced_positive_negative_sampler import (\n    BalancedPositiveNegativeSampler,\n)\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.modeling.utils import cat\nfrom maskrcnn_benchmark.layers import smooth_l1_loss\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\n\nfrom maskrcnn_benchmark.structures.keypoint import keypoints_to_heat_map\n\n\ndef project_keypoints_to_heatmap(keypoints, proposals, discretization_size):\n    proposals = proposals.convert(\"xyxy\")\n    return keypoints_to_heat_map(\n        keypoints.keypoints, proposals.bbox, discretization_size\n    )\n\n\ndef cat_boxlist_with_keypoints(boxlists):\n    assert all(boxlist.has_field(\"keypoints\") for boxlist in boxlists)\n\n    kp = [boxlist.get_field(\"keypoints\").keypoints for boxlist in boxlists]\n    kp = cat(kp, 0)\n\n    fields = boxlists[0].get_fields()\n    fields = [field for field in fields if field != \"keypoints\"]\n\n    boxlists = [boxlist.copy_with_fields(fields) for boxlist in boxlists]\n    boxlists = cat_boxlist(boxlists)\n    boxlists.add_field(\"keypoints\", kp)\n    return boxlists\n\n\ndef _within_box(points, boxes):\n    \"\"\"Validate which keypoints are contained inside a given box.\n    points: NxKx2\n    boxes: Nx4\n    output: NxK\n    \"\"\"\n    x_within = (points[..., 0] >= boxes[:, 0, None]) & (\n        points[..., 0] <= boxes[:, 2, None]\n    )\n    y_within = (points[..., 1] >= boxes[:, 1, None]) & (\n        points[..., 1] <= boxes[:, 3, None]\n    )\n    return x_within & y_within\n\n\nclass KeypointRCNNLossComputation(object):\n    def __init__(self, proposal_matcher, fg_bg_sampler, discretization_size):\n        \"\"\"\n        Arguments:\n            proposal_matcher (Matcher)\n            fg_bg_sampler (BalancedPositiveNegativeSampler)\n    
        discretization_size (int)\n        \"\"\"\n        self.proposal_matcher = proposal_matcher\n        self.fg_bg_sampler = fg_bg_sampler\n        self.discretization_size = discretization_size\n\n    def match_targets_to_proposals(self, proposal, target):\n        match_quality_matrix = boxlist_iou(target, proposal)\n        matched_idxs = self.proposal_matcher(match_quality_matrix)\n        # Keypoint RCNN needs \"labels\" and \"keypoints\" fields for creating the targets\n        target = target.copy_with_fields([\"labels\", \"keypoints\"])\n        # get the corresponding GT targets for each proposal\n        # NB: need to clamp the indices because we can have a single\n        # GT in the image, and matched_idxs can be -2, which goes\n        # out of bounds\n        matched_targets = target[matched_idxs.clamp(min=0)]\n        matched_targets.add_field(\"matched_idxs\", matched_idxs)\n        return matched_targets\n\n    def prepare_targets(self, proposals, targets):\n        labels = []\n        keypoints = []\n        for proposals_per_image, targets_per_image in zip(proposals, targets):\n            matched_targets = self.match_targets_to_proposals(\n                proposals_per_image, targets_per_image\n            )\n            matched_idxs = matched_targets.get_field(\"matched_idxs\")\n\n            labels_per_image = matched_targets.get_field(\"labels\")\n            labels_per_image = labels_per_image.to(dtype=torch.int64)\n\n            # this can probably be removed, but is left here for clarity\n            # and completeness\n            # TODO check if this is the right one, as BELOW_THRESHOLD\n            neg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD\n            labels_per_image[neg_inds] = 0\n\n            keypoints_per_image = matched_targets.get_field(\"keypoints\")\n            within_box = _within_box(\n                keypoints_per_image.keypoints, matched_targets.bbox\n            )\n            vis_kp =
keypoints_per_image.keypoints[..., 2] > 0\n            is_visible = (within_box & vis_kp).sum(1) > 0\n\n            labels_per_image[~is_visible] = -1\n\n            labels.append(labels_per_image)\n            keypoints.append(keypoints_per_image)\n\n        return labels, keypoints\n\n    def subsample(self, proposals, targets):\n        \"\"\"\n        This method performs the positive/negative sampling, and returns\n        the sampled proposals.\n        Note: this function keeps state (the sampled proposals are cached\n        in self._proposals).\n\n        Arguments:\n            proposals (list[BoxList])\n            targets (list[BoxList])\n        \"\"\"\n\n        labels, keypoints = self.prepare_targets(proposals, targets)\n        sampled_pos_inds, sampled_neg_inds = self.fg_bg_sampler(labels)\n\n        proposals = list(proposals)\n        # add corresponding label and keypoint information to the bounding boxes\n        for labels_per_image, keypoints_per_image, proposals_per_image in zip(\n            labels, keypoints, proposals\n        ):\n            proposals_per_image.add_field(\"labels\", labels_per_image)\n            proposals_per_image.add_field(\"keypoints\", keypoints_per_image)\n\n        # distribute sampled proposals, that were obtained on all feature maps\n        # concatenated via the fg_bg_sampler, into individual feature map levels\n        for img_idx, (pos_inds_img, neg_inds_img) in enumerate(\n            zip(sampled_pos_inds, sampled_neg_inds)\n        ):\n            img_sampled_inds = torch.nonzero(pos_inds_img).squeeze(1)\n            proposals_per_image = proposals[img_idx][img_sampled_inds]\n            proposals[img_idx] = proposals_per_image\n\n        self._proposals = proposals\n        return proposals\n\n    def __call__(self, proposals, keypoint_logits):\n        heatmaps = []\n        valid = []\n        for proposals_per_image in proposals:\n            kp = proposals_per_image.get_field(\"keypoints\")\n            heatmaps_per_image, valid_per_image =
project_keypoints_to_heatmap(\n                kp, proposals_per_image, self.discretization_size\n            )\n            heatmaps.append(heatmaps_per_image.view(-1))\n            valid.append(valid_per_image.view(-1))\n\n        keypoint_targets = cat(heatmaps, dim=0)\n        valid = cat(valid, dim=0).to(dtype=torch.uint8)\n        valid = torch.nonzero(valid).squeeze(1)\n\n        # torch.mean (in cross_entropy) doesn't\n        # accept empty tensors, so handle it separately\n        if keypoint_targets.numel() == 0 or len(valid) == 0:\n            return keypoint_logits.sum() * 0\n\n        N, K, H, W = keypoint_logits.shape\n        keypoint_logits = keypoint_logits.view(N * K, H * W)\n\n        keypoint_loss = F.cross_entropy(keypoint_logits[valid], keypoint_targets[valid])\n        return keypoint_loss\n\n\ndef make_roi_keypoint_loss_evaluator(cfg):\n    matcher = Matcher(\n        cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD,\n        cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD,\n        allow_low_quality_matches=False,\n    )\n    fg_bg_sampler = BalancedPositiveNegativeSampler(\n        cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION\n    )\n    resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.RESOLUTION\n    loss_evaluator = KeypointRCNNLossComputation(matcher, fg_bg_sampler, resolution)\n    return loss_evaluator\n"
  },
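The loss in `KeypointRCNNLossComputation.__call__` treats each of the `N*K` heatmaps as a softmax classification over its `H*W` spatial cells, with the target being the flattened index of the ground-truth cell. A hypothetical NumPy re-derivation of that cross-entropy (names are illustrative, not part of the codebase):

```python
import numpy as np

def keypoint_ce_loss(logits, targets, valid):
    # logits: (N*K, H*W) flattened heatmap scores, as produced by
    # keypoint_logits.view(N * K, H * W); targets: GT cell index per
    # (roi, keypoint) pair; valid: indices of visible keypoints
    logits = logits[valid]
    targets = targets[valid]
    # numerically stable log-softmax over the spatial cells
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # negative log-likelihood of the target cell, averaged over samples
    return -log_probs[np.arange(len(targets)), targets].mean()

# uniform logits: the loss is log(H*W) regardless of which cell is the target
loss = keypoint_ce_loss(np.zeros((4, 16)), np.array([0, 5, 9, 15]), np.arange(4))
```

Casting keypoint localization as classification over heatmap cells (rather than regressing coordinates) is the standard Mask R-CNN keypoint formulation that the `F.cross_entropy` call implements.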
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/roi_keypoint_feature_extractors.py",
    "content": "from torch import nn\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.poolers import Pooler\n\nfrom maskrcnn_benchmark.layers import Conv2d\n\n\n@registry.ROI_KEYPOINT_FEATURE_EXTRACTORS.register(\"KeypointRCNNFeatureExtractor\")\nclass KeypointRCNNFeatureExtractor(nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(KeypointRCNNFeatureExtractor, self).__init__()\n\n        resolution = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION\n        scales = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SCALES\n        sampling_ratio = cfg.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO\n        pooler = Pooler(\n            output_size=(resolution, resolution),\n            scales=scales,\n            sampling_ratio=sampling_ratio,\n        )\n        self.pooler = pooler\n\n        input_features = in_channels\n        layers = cfg.MODEL.ROI_KEYPOINT_HEAD.CONV_LAYERS\n        next_feature = input_features\n        self.blocks = []\n        for layer_idx, layer_features in enumerate(layers, 1):\n            layer_name = \"conv_fcn{}\".format(layer_idx)\n            module = Conv2d(next_feature, layer_features, 3, stride=1, padding=1)\n            nn.init.kaiming_normal_(module.weight, mode=\"fan_out\", nonlinearity=\"relu\")\n            nn.init.constant_(module.bias, 0)\n            self.add_module(layer_name, module)\n            next_feature = layer_features\n            self.blocks.append(layer_name)\n        self.out_channels = layer_features\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n        for layer_name in self.blocks:\n            x = F.relu(getattr(self, layer_name)(x))\n        return x\n\n\ndef make_roi_keypoint_feature_extractor(cfg, in_channels):\n    func = registry.ROI_KEYPOINT_FEATURE_EXTRACTORS[\n        cfg.MODEL.ROI_KEYPOINT_HEAD.FEATURE_EXTRACTOR\n    ]\n    return func(cfg, in_channels)\n"
  },
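Each layer stacked in `KeypointRCNNFeatureExtractor` is a 3x3, stride-1, padding-1 convolution, which leaves the pooled spatial resolution unchanged. The standard output-size formula makes this easy to verify (plain-Python sketch; the `14` pooled resolution is an assumed example value):

```python
def conv2d_out_size(in_size, kernel=3, stride=1, padding=1):
    """Spatial output size of a Conv2d: floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# with the extractor's 3x3/stride-1/padding-1 settings, a 14x14 pooled
# map stays 14x14 through any number of stacked conv_fcn layers
size = 14  # hypothetical POOLER_RESOLUTION
for _ in range(8):
    size = conv2d_out_size(size)
print(size)  # -> 14
```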
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/keypoint_head/roi_keypoint_predictors.py",
    "content": "from torch import nn\n\nfrom maskrcnn_benchmark import layers\nfrom maskrcnn_benchmark.modeling import registry\n\n\n@registry.ROI_KEYPOINT_PREDICTOR.register(\"KeypointRCNNPredictor\")\nclass KeypointRCNNPredictor(nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(KeypointRCNNPredictor, self).__init__()\n        input_features = in_channels\n        num_keypoints = cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_CLASSES\n        deconv_kernel = 4\n        self.kps_score_lowres = layers.ConvTranspose2d(\n            input_features,\n            num_keypoints,\n            deconv_kernel,\n            stride=2,\n            padding=deconv_kernel // 2 - 1,\n        )\n        nn.init.kaiming_normal_(\n            self.kps_score_lowres.weight, mode=\"fan_out\", nonlinearity=\"relu\"\n        )\n        nn.init.constant_(self.kps_score_lowres.bias, 0)\n        self.up_scale = 2\n        self.out_channels = num_keypoints\n\n    def forward(self, x):\n        x = self.kps_score_lowres(x)\n        x = layers.interpolate(\n            x, scale_factor=self.up_scale, mode=\"bilinear\", align_corners=False\n        )\n        return x\n\n\ndef make_roi_keypoint_predictor(cfg, in_channels):\n    func = registry.ROI_KEYPOINT_PREDICTOR[cfg.MODEL.ROI_KEYPOINT_HEAD.PREDICTOR]\n    return func(cfg, in_channels)\n"
  },
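`KeypointRCNNPredictor` upsamples twice: the transposed convolution (kernel 4, stride 2, padding `4 // 2 - 1 = 1`) doubles the spatial size, and the bilinear `interpolate` with `up_scale = 2` doubles it again, for 4x overall. The deconv arithmetic, sketched with an assumed 14x14 input:

```python
def deconv_out_size(in_size, kernel=4, stride=2, padding=1):
    """Spatial output size of a ConvTranspose2d (no output_padding):
    (in - 1) * stride - 2 * padding + kernel."""
    return (in_size - 1) * stride - 2 * padding + kernel

pooled = 14                       # hypothetical pooled resolution
lowres = deconv_out_size(pooled)  # the deconv doubles: 28
final = 2 * lowres                # bilinear up_scale=2 doubles again: 56
print(lowres, final)  # -> 28 56
```

The kernel-4/stride-2/padding-1 combination is a common choice precisely because it yields an exact 2x upsampling for any input size.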
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/inference.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom maskrcnn_benchmark.layers.misc import interpolate\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\n\n# TODO check if want to return a single BoxList or a composite\n# object\nclass MaskPostProcessor(nn.Module):\n    \"\"\"\n    From the results of the CNN, post process the masks\n    by taking the mask corresponding to the class with max\n    probability (which are of fixed size and directly output\n    by the CNN) and return the masks in the mask field of the BoxList.\n\n    If a masker object is passed, it will additionally\n    project the masks in the image according to the locations in boxes,\n    \"\"\"\n\n    def __init__(self, masker=None):\n        super(MaskPostProcessor, self).__init__()\n        self.masker = masker\n\n    def forward(self, x, boxes):\n        \"\"\"\n        Arguments:\n            x (Tensor): the mask logits\n            boxes (list[BoxList]): bounding boxes that are used as\n                reference, one for ech image\n\n        Returns:\n            results (list[BoxList]): one BoxList for each image, containing\n                the extra field mask\n        \"\"\"\n        mask_prob = x.sigmoid()\n\n        # select masks coresponding to the predicted classes\n        num_masks = x.shape[0]\n        labels = [bbox.get_field(\"labels\") for bbox in boxes]\n        labels = torch.cat(labels)\n        index = torch.arange(num_masks, device=labels.device)\n        mask_prob = mask_prob[index, labels][:, None]\n\n        boxes_per_image = [len(box) for box in boxes]\n        mask_prob = mask_prob.split(boxes_per_image, dim=0)\n\n        if self.masker:\n            mask_prob = self.masker(mask_prob, boxes)\n\n        results = []\n        for prob, box in zip(mask_prob, boxes):\n            bbox = BoxList(box.bbox, box.size, mode=\"xyxy\")\n            for field in 
box.fields():\n                bbox.add_field(field, box.get_field(field))\n            bbox.add_field(\"mask\", prob)\n            results.append(bbox)\n\n        return results\n\n\nclass MaskPostProcessorCOCOFormat(MaskPostProcessor):\n    \"\"\"\n    From the results of the CNN, post process the results\n    so that the masks are pasted in the image, and\n    additionally convert the results to COCO format.\n    \"\"\"\n\n    def forward(self, x, boxes):\n        import pycocotools.mask as mask_util\n        import numpy as np\n\n        results = super(MaskPostProcessorCOCOFormat, self).forward(x, boxes)\n        for result in results:\n            masks = result.get_field(\"mask\").cpu()\n            rles = [\n                mask_util.encode(np.array(mask[0, :, :, np.newaxis], order=\"F\"))[0]\n                for mask in masks\n            ]\n            for rle in rles:\n                rle[\"counts\"] = rle[\"counts\"].decode(\"utf-8\")\n            result.add_field(\"mask\", rles)\n        return results\n\n\n# the next two functions should be merged inside Masker\n# but are kept here for the moment while we need them\n# temporarily for paste_mask_in_image\ndef expand_boxes(boxes, scale):\n    w_half = (boxes[:, 2] - boxes[:, 0]) * .5\n    h_half = (boxes[:, 3] - boxes[:, 1]) * .5\n    x_c = (boxes[:, 2] + boxes[:, 0]) * .5\n    y_c = (boxes[:, 3] + boxes[:, 1]) * .5\n\n    w_half *= scale\n    h_half *= scale\n\n    boxes_exp = torch.zeros_like(boxes)\n    boxes_exp[:, 0] = x_c - w_half\n    boxes_exp[:, 2] = x_c + w_half\n    boxes_exp[:, 1] = y_c - h_half\n    boxes_exp[:, 3] = y_c + h_half\n    return boxes_exp\n\n\ndef expand_masks(mask, padding):\n    N = mask.shape[0]\n    M = mask.shape[-1]\n    pad2 = 2 * padding\n    scale = float(M + pad2) / M\n    padded_mask = mask.new_zeros((N, 1, M + pad2, M + pad2))\n    if padding > 0:\n        padded_mask[:, :, padding:-padding, padding:-padding] = mask\n    else:\n        # a -0 slice end would select nothing, so copy directly\n        padded_mask[:] = mask\n    return padded_mask, scale\n\n\ndef paste_mask_in_image(mask, box,
im_h, im_w, thresh=0.5, padding=1):\n    padded_mask, scale = expand_masks(mask[None], padding=padding)\n    mask = padded_mask[0, 0]\n    box = expand_boxes(box[None], scale)[0]\n    box = box.to(dtype=torch.int32)\n\n    TO_REMOVE = 1\n    w = int(box[2] - box[0] + TO_REMOVE)\n    h = int(box[3] - box[1] + TO_REMOVE)\n    w = max(w, 1)\n    h = max(h, 1)\n\n    # Set shape to [batchxCxHxW]\n    mask = mask.expand((1, 1, -1, -1))\n\n    # Resize mask\n    mask = mask.to(torch.float32)\n    mask = interpolate(mask, size=(h, w), mode='bilinear', align_corners=False)\n    mask = mask[0][0]\n\n    if thresh >= 0:\n        mask = mask > thresh\n    else:\n        # for visualization and debugging, we also\n        # allow it to return an unmodified mask\n        mask = (mask * 255).to(torch.uint8)\n\n    im_mask = torch.zeros((im_h, im_w), dtype=torch.uint8)\n    x_0 = max(box[0], 0)\n    x_1 = min(box[2] + 1, im_w)\n    y_0 = max(box[1], 0)\n    y_1 = min(box[3] + 1, im_h)\n\n    im_mask[y_0:y_1, x_0:x_1] = mask[\n        (y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0])\n    ]\n    return im_mask\n\n\nclass Masker(object):\n    \"\"\"\n    Projects a set of masks in an image on the locations\n    specified by the bounding boxes\n    \"\"\"\n\n    def __init__(self, threshold=0.5, padding=1):\n        self.threshold = threshold\n        self.padding = padding\n\n    def forward_single_image(self, masks, boxes):\n        boxes = boxes.convert(\"xyxy\")\n        im_w, im_h = boxes.size\n        res = [\n            paste_mask_in_image(mask[0], box, im_h, im_w, self.threshold, self.padding)\n            for mask, box in zip(masks, boxes.bbox)\n        ]\n        if len(res) > 0:\n            res = torch.stack(res, dim=0)[:, None]\n        else:\n            res = masks.new_empty((0, 1, masks.shape[-2], masks.shape[-1]))\n        return res\n\n    def __call__(self, masks, boxes):\n        if isinstance(boxes, BoxList):\n            boxes = [boxes]\n\n      
  # Make some sanity check\n        assert len(boxes) == len(masks), \"Masks and boxes should have the same length.\"\n\n        # TODO:  Is this JIT compatible?\n        # If not we should make it compatible.\n        results = []\n        for mask, box in zip(masks, boxes):\n            assert mask.shape[0] == len(box), \"Number of objects should be the same.\"\n            result = self.forward_single_image(mask, box)\n            results.append(result)\n        return results\n\n\ndef make_roi_mask_post_processor(cfg):\n    if cfg.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS:\n        mask_threshold = cfg.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS_THRESHOLD\n        masker = Masker(threshold=mask_threshold, padding=1)\n    else:\n        masker = None\n    mask_post_processor = MaskPostProcessor(masker)\n    return mask_post_processor\n"
  },
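The `expand_boxes` / `expand_masks` pair grows both the mask and its box by the same factor, so the padded mask border maps consistently back into image space. The box half of that operation, reduced to a single box in plain Python (a hypothetical scalar stand-in for the tensor version):

```python
def expand_box(box, scale):
    """Grow a single (x0, y0, x1, y1) box about its center by `scale`,
    mirroring the center/half-extent arithmetic of expand_boxes."""
    x0, y0, x1, y1 = box
    w_half = (x1 - x0) * 0.5 * scale
    h_half = (y1 - y0) * 0.5 * scale
    x_c = (x1 + x0) * 0.5
    y_c = (y1 + y0) * 0.5
    return (x_c - w_half, y_c - h_half, x_c + w_half, y_c + h_half)

# a 10x20 box scaled by 1.5 keeps its center at (10, 20)
print(expand_box((5.0, 10.0, 15.0, 30.0), 1.5))  # -> (2.5, 5.0, 17.5, 35.0)
```

With `scale = (M + 2 * padding) / M` from `expand_masks`, the enlarged box covers exactly the padded mask, which is what lets `paste_mask_in_image` resize and paste without a border offset.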
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/loss.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.layers import smooth_l1_loss\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.modeling.utils import cat\n\n\ndef project_masks_on_boxes(segmentation_masks, proposals, discretization_size):\n    \"\"\"\n    Given segmentation masks and the bounding boxes corresponding\n    to the location of the masks in the image, this function\n    crops and resizes the masks in the position defined by the\n    boxes. This prepares the masks for them to be fed to the\n    loss computation as the targets.\n\n    Arguments:\n        segmentation_masks: an instance of SegmentationMask\n        proposals: an instance of BoxList\n    \"\"\"\n    masks = []\n    M = discretization_size\n    device = proposals.bbox.device\n    proposals = proposals.convert(\"xyxy\")\n    assert segmentation_masks.size == proposals.size, \"{}, {}\".format(\n        segmentation_masks, proposals\n    )\n\n    # FIXME: CPU computation bottleneck, this should be parallelized\n    proposals = proposals.bbox.to(torch.device(\"cpu\"))\n    for segmentation_mask, proposal in zip(segmentation_masks, proposals):\n        # crop the masks, resize them to the desired resolution and\n        # then convert them to the tensor representation.\n        cropped_mask = segmentation_mask.crop(proposal)\n        scaled_mask = cropped_mask.resize((M, M))\n        mask = scaled_mask.get_mask_tensor()\n        masks.append(mask)\n    if len(masks) == 0:\n        return torch.empty(0, dtype=torch.float32, device=device)\n    return torch.stack(masks, dim=0).to(device, dtype=torch.float32)\n\n\nclass MaskRCNNLossComputation(object):\n    def __init__(self, proposal_matcher, discretization_size):\n        \"\"\"\n        Arguments:\n            proposal_matcher 
(Matcher)\n            discretization_size (int)\n        \"\"\"\n        self.proposal_matcher = proposal_matcher\n        self.discretization_size = discretization_size\n\n    def match_targets_to_proposals(self, proposal, target):\n        match_quality_matrix = boxlist_iou(target, proposal)\n        matched_idxs = self.proposal_matcher(match_quality_matrix)\n        # Mask RCNN needs \"labels\" and \"masks\" fields for creating the targets\n        target = target.copy_with_fields([\"labels\", \"masks\"])\n        # get the corresponding GT targets for each proposal\n        # NB: need to clamp the indices because we can have a single\n        # GT in the image, and matched_idxs can be -2, which goes\n        # out of bounds\n        matched_targets = target[matched_idxs.clamp(min=0)]\n        matched_targets.add_field(\"matched_idxs\", matched_idxs)\n        return matched_targets\n\n    def prepare_targets(self, proposals, targets):\n        labels = []\n        masks = []\n        for proposals_per_image, targets_per_image in zip(proposals, targets):\n            matched_targets = self.match_targets_to_proposals(\n                proposals_per_image, targets_per_image\n            )\n            matched_idxs = matched_targets.get_field(\"matched_idxs\")\n\n            labels_per_image = matched_targets.get_field(\"labels\")\n            labels_per_image = labels_per_image.to(dtype=torch.int64)\n\n            # this can probably be removed, but is left here for clarity\n            # and completeness\n            neg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD\n            labels_per_image[neg_inds] = 0\n\n            # mask scores are only computed on positive samples\n            positive_inds = torch.nonzero(labels_per_image > 0).squeeze(1)\n\n            segmentation_masks = matched_targets.get_field(\"masks\")\n            segmentation_masks = segmentation_masks[positive_inds]\n\n            positive_proposals =
proposals_per_image[positive_inds]\n\n            masks_per_image = project_masks_on_boxes(\n                segmentation_masks, positive_proposals, self.discretization_size\n            )\n\n            labels.append(labels_per_image)\n            masks.append(masks_per_image)\n\n        return labels, masks\n\n    def __call__(self, proposals, mask_logits, targets):\n        \"\"\"\n        Arguments:\n            proposals (list[BoxList])\n            mask_logits (Tensor)\n            targets (list[BoxList])\n\n        Return:\n            mask_loss (Tensor): scalar tensor containing the loss\n        \"\"\"\n        labels, mask_targets = self.prepare_targets(proposals, targets)\n\n        labels = cat(labels, dim=0)\n        mask_targets = cat(mask_targets, dim=0)\n\n        positive_inds = torch.nonzero(labels > 0).squeeze(1)\n        labels_pos = labels[positive_inds]\n\n        # torch.mean (in binary_cross_entropy_with_logits) doesn't\n        # accept empty tensors, so handle it separately\n        if mask_targets.numel() == 0:\n            return mask_logits.sum() * 0\n\n        mask_loss = F.binary_cross_entropy_with_logits(\n            mask_logits[positive_inds, labels_pos], mask_targets\n        )\n        return mask_loss\n\n\ndef make_roi_mask_loss_evaluator(cfg):\n    matcher = Matcher(\n        cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD,\n        cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD,\n        allow_low_quality_matches=False,\n    )\n\n    loss_evaluator = MaskRCNNLossComputation(\n        matcher, cfg.MODEL.ROI_MASK_HEAD.RESOLUTION\n    )\n\n    return loss_evaluator\n"
  },
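In `MaskRCNNLossComputation.__call__`, each positive sample contributes only the predicted mask channel of its ground-truth class to the loss. A hypothetical NumPy sketch of that gather-then-BCE step (`mask_targets` is assumed already aligned with the positive samples, as `prepare_targets` produces):

```python
import numpy as np

def mask_bce_loss(mask_logits, labels, mask_targets):
    # mask_logits: (N, num_classes, M, M); labels: (N,) matched class per
    # proposal; mask_targets: (P, M, M) projected GT masks, one per positive
    positive = np.nonzero(labels > 0)[0]
    if mask_targets.size == 0:
        # mirrors the `mask_logits.sum() * 0` empty-tensor guard
        return 0.0
    # gather the GT-class channel for each positive sample: (P, M, M)
    logits = mask_logits[positive, labels[positive]]
    probs = 1.0 / (1.0 + np.exp(-logits))
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    t = mask_targets
    # element-wise binary cross-entropy, averaged over all mask pixels
    return float(-(t * np.log(probs) + (1 - t) * np.log(1 - probs)).mean())
```

The per-class channel selection is what makes Mask R-CNN's mask branch class-specific: classes do not compete for mask pixels, unlike a softmax over classes.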
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/mask_head.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nfrom torch import nn\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\nfrom .roi_mask_feature_extractors import make_roi_mask_feature_extractor\nfrom .roi_mask_predictors import make_roi_mask_predictor\nfrom .inference import make_roi_mask_post_processor\nfrom .loss import make_roi_mask_loss_evaluator\n\n\ndef keep_only_positive_boxes(boxes):\n    \"\"\"\n    Given a set of BoxList containing the `labels` field,\n    return a set of BoxList for which `labels > 0`.\n\n    Arguments:\n        boxes (list of BoxList)\n    \"\"\"\n    assert isinstance(boxes, (list, tuple))\n    assert isinstance(boxes[0], BoxList)\n    assert boxes[0].has_field(\"labels\")\n    positive_boxes = []\n    positive_inds = []\n    num_boxes = 0\n    for boxes_per_image in boxes:\n        labels = boxes_per_image.get_field(\"labels\")\n        inds_mask = labels > 0\n        inds = inds_mask.nonzero().squeeze(1)\n        positive_boxes.append(boxes_per_image[inds])\n        positive_inds.append(inds_mask)\n    return positive_boxes, positive_inds\n\n\nclass ROIMaskHead(torch.nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(ROIMaskHead, self).__init__()\n        self.cfg = cfg.clone()\n        self.feature_extractor = make_roi_mask_feature_extractor(cfg, in_channels)\n        self.predictor = make_roi_mask_predictor(\n            cfg, self.feature_extractor.out_channels)\n        self.post_processor = make_roi_mask_post_processor(cfg)\n        self.loss_evaluator = make_roi_mask_loss_evaluator(cfg)\n\n    def forward(self, features, proposals, targets=None):\n        \"\"\"\n        Arguments:\n            features (list[Tensor]): feature-maps from possibly several levels\n            proposals (list[BoxList]): proposal boxes\n            targets (list[BoxList], optional): the ground-truth targets.\n\n        Returns:\n            x (Tensor): the 
result of the feature extractor\n            proposals (list[BoxList]): during training, the original proposals\n                are returned. During testing, the predicted boxlists are returned\n                with the `mask` field set\n            losses (dict[Tensor]): During training, returns the losses for the\n                head. During testing, returns an empty dict.\n        \"\"\"\n\n        if self.training:\n            # during training, only focus on positive boxes\n            all_proposals = proposals\n            proposals, positive_inds = keep_only_positive_boxes(proposals)\n        if self.training and self.cfg.MODEL.ROI_MASK_HEAD.SHARE_BOX_FEATURE_EXTRACTOR:\n            x = features\n            x = x[torch.cat(positive_inds, dim=0)]\n        else:\n            x = self.feature_extractor(features, proposals)\n        mask_logits = self.predictor(x)\n\n        if not self.training:\n            result = self.post_processor(mask_logits, proposals)\n            return x, result, {}\n\n        loss_mask = self.loss_evaluator(proposals, mask_logits, targets)\n\n        return x, all_proposals, dict(loss_mask=loss_mask)\n\n\ndef build_roi_mask_head(cfg, in_channels):\n    return ROIMaskHead(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_feature_extractors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom torch import nn\nfrom torch.nn import functional as F\n\nfrom ..box_head.roi_box_feature_extractors import ResNet50Conv5ROIFeatureExtractor\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.poolers import Pooler\nfrom maskrcnn_benchmark.modeling.make_layers import make_conv3x3\n\n\nregistry.ROI_MASK_FEATURE_EXTRACTORS.register(\n    \"ResNet50Conv5ROIFeatureExtractor\", ResNet50Conv5ROIFeatureExtractor\n)\n\n\n@registry.ROI_MASK_FEATURE_EXTRACTORS.register(\"MaskRCNNFPNFeatureExtractor\")\nclass MaskRCNNFPNFeatureExtractor(nn.Module):\n    \"\"\"\n    Heads for FPN for classification\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        \"\"\"\n        Arguments:\n            num_classes (int): number of output classes\n            input_size (int): number of channels of the input once it's flattened\n            representation_size (int): size of the intermediate representation\n        \"\"\"\n        super(MaskRCNNFPNFeatureExtractor, self).__init__()\n\n        resolution = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION\n        scales = cfg.MODEL.ROI_MASK_HEAD.POOLER_SCALES\n        sampling_ratio = cfg.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO\n        pooler = Pooler(\n            output_size=(resolution, resolution),\n            scales=scales,\n            sampling_ratio=sampling_ratio,\n        )\n        input_size = in_channels\n        self.pooler = pooler\n\n        use_gn = cfg.MODEL.ROI_MASK_HEAD.USE_GN\n        layers = cfg.MODEL.ROI_MASK_HEAD.CONV_LAYERS\n        dilation = cfg.MODEL.ROI_MASK_HEAD.DILATION\n\n        next_feature = input_size\n        self.blocks = []\n        for layer_idx, layer_features in enumerate(layers, 1):\n            layer_name = \"mask_fcn{}\".format(layer_idx)\n            module = make_conv3x3(\n                next_feature, layer_features,\n                dilation=dilation, stride=1, 
use_gn=use_gn\n            )\n            self.add_module(layer_name, module)\n            next_feature = layer_features\n            self.blocks.append(layer_name)\n        self.out_channels = layer_features\n\n    def forward(self, x, proposals):\n        x = self.pooler(x, proposals)\n\n        for layer_name in self.blocks:\n            x = F.relu(getattr(self, layer_name)(x))\n\n        return x\n\n\ndef make_roi_mask_feature_extractor(cfg, in_channels):\n    func = registry.ROI_MASK_FEATURE_EXTRACTORS[\n        cfg.MODEL.ROI_MASK_HEAD.FEATURE_EXTRACTOR\n    ]\n    return func(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/mask_head/roi_mask_predictors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom torch import nn\nfrom torch.nn import functional as F\n\nfrom maskrcnn_benchmark.layers import Conv2d\nfrom maskrcnn_benchmark.layers import ConvTranspose2d\nfrom maskrcnn_benchmark.modeling import registry\n\n\n@registry.ROI_MASK_PREDICTOR.register(\"MaskRCNNC4Predictor\")\nclass MaskRCNNC4Predictor(nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(MaskRCNNC4Predictor, self).__init__()\n        num_classes = cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES\n        dim_reduced = cfg.MODEL.ROI_MASK_HEAD.CONV_LAYERS[-1]\n        num_inputs = in_channels\n\n        self.conv5_mask = ConvTranspose2d(num_inputs, dim_reduced, 2, 2, 0)\n        self.mask_fcn_logits = Conv2d(dim_reduced, num_classes, 1, 1, 0)\n\n        for name, param in self.named_parameters():\n            if \"bias\" in name:\n                nn.init.constant_(param, 0)\n            elif \"weight\" in name:\n                # Caffe2 implementation uses MSRAFill, which in fact\n                # corresponds to kaiming_normal_ in PyTorch\n                nn.init.kaiming_normal_(param, mode=\"fan_out\", nonlinearity=\"relu\")\n\n    def forward(self, x):\n        x = F.relu(self.conv5_mask(x))\n        return self.mask_fcn_logits(x)\n\n\n@registry.ROI_MASK_PREDICTOR.register(\"MaskRCNNConv1x1Predictor\")\nclass MaskRCNNConv1x1Predictor(nn.Module):\n    def __init__(self, cfg, in_channels):\n        super(MaskRCNNConv1x1Predictor, self).__init__()\n        num_classes = cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES\n        num_inputs = in_channels\n\n        self.mask_fcn_logits = Conv2d(num_inputs, num_classes, 1, 1, 0)\n\n        for name, param in self.named_parameters():\n            if \"bias\" in name:\n                nn.init.constant_(param, 0)\n            elif \"weight\" in name:\n                # Caffe2 implementation uses MSRAFill, which in fact\n                # corresponds to kaiming_normal_ in 
PyTorch\n                nn.init.kaiming_normal_(param, mode=\"fan_out\", nonlinearity=\"relu\")\n\n    def forward(self, x):\n        return self.mask_fcn_logits(x)\n\n\ndef make_roi_mask_predictor(cfg, in_channels):\n    func = registry.ROI_MASK_PREDICTOR[cfg.MODEL.ROI_MASK_HEAD.PREDICTOR]\n    return func(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/roi_heads/roi_heads.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nfrom .box_head.box_head import build_roi_box_head\nfrom .mask_head.mask_head import build_roi_mask_head\nfrom .keypoint_head.keypoint_head import build_roi_keypoint_head\n\n\nclass CombinedROIHeads(torch.nn.ModuleDict):\n    \"\"\"\n    Combines a set of individual heads (for box prediction or masks) into a single\n    head.\n    \"\"\"\n\n    def __init__(self, cfg, heads):\n        super(CombinedROIHeads, self).__init__(heads)\n        self.cfg = cfg.clone()\n        if cfg.MODEL.MASK_ON and cfg.MODEL.ROI_MASK_HEAD.SHARE_BOX_FEATURE_EXTRACTOR:\n            self.mask.feature_extractor = self.box.feature_extractor\n        if cfg.MODEL.KEYPOINT_ON and cfg.MODEL.ROI_KEYPOINT_HEAD.SHARE_BOX_FEATURE_EXTRACTOR:\n            self.keypoint.feature_extractor = self.box.feature_extractor\n\n    def forward(self, features, proposals, targets=None):\n        losses = {}\n        # TODO rename x to roi_box_features, if it doesn't increase memory consumption\n        x, detections, loss_box = self.box(features, proposals, targets)\n        losses.update(loss_box)\n        if self.cfg.MODEL.MASK_ON:\n            mask_features = features\n            # optimization: during training, if we share the feature extractor between\n            # the box and the mask heads, then we can reuse the features already computed\n            if (\n                self.training\n                and self.cfg.MODEL.ROI_MASK_HEAD.SHARE_BOX_FEATURE_EXTRACTOR\n            ):\n                mask_features = x\n            # During training, self.box() will return the unaltered proposals as \"detections\"\n            # this makes the API consistent during training and testing\n            x, detections, loss_mask = self.mask(mask_features, detections, targets)\n            losses.update(loss_mask)\n\n        if self.cfg.MODEL.KEYPOINT_ON:\n            keypoint_features = features\n            # 
optimization: during training, if we share the feature extractor between\n            # the box and the keypoint heads, then we can reuse the features already computed\n            if (\n                self.training\n                and self.cfg.MODEL.ROI_KEYPOINT_HEAD.SHARE_BOX_FEATURE_EXTRACTOR\n            ):\n                keypoint_features = x\n            # During training, self.box() will return the unaltered proposals as \"detections\"\n            # this makes the API consistent during training and testing\n            x, detections, loss_keypoint = self.keypoint(keypoint_features, detections, targets)\n            losses.update(loss_keypoint)\n        return x, detections, losses\n\n\ndef build_roi_heads(cfg, in_channels):\n    # individually create the heads, that will be combined together\n    # afterwards\n    roi_heads = []\n    if cfg.MODEL.RETINANET_ON:\n        return []\n\n    if not cfg.MODEL.RPN_ONLY:\n        roi_heads.append((\"box\", build_roi_box_head(cfg, in_channels)))\n    if cfg.MODEL.MASK_ON:\n        roi_heads.append((\"mask\", build_roi_mask_head(cfg, in_channels)))\n    if cfg.MODEL.KEYPOINT_ON:\n        roi_heads.append((\"keypoint\", build_roi_keypoint_head(cfg, in_channels)))\n\n    # combine individual heads in a single module\n    if roi_heads:\n        roi_heads = CombinedROIHeads(cfg, roi_heads)\n\n    return roi_heads\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n# from .rpn import build_rpn\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/anchor_generator.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport math\n\nimport numpy as np\nimport torch\nfrom torch import nn\n\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\n\n\nclass BufferList(nn.Module):\n    \"\"\"\n    Similar to nn.ParameterList, but for buffers\n    \"\"\"\n\n    def __init__(self, buffers=None):\n        super(BufferList, self).__init__()\n        if buffers is not None:\n            self.extend(buffers)\n\n    def extend(self, buffers):\n        offset = len(self)\n        for i, buffer in enumerate(buffers):\n            self.register_buffer(str(offset + i), buffer)\n        return self\n\n    def __len__(self):\n        return len(self._buffers)\n\n    def __iter__(self):\n        return iter(self._buffers.values())\n\n\nclass AnchorGenerator(nn.Module):\n    \"\"\"\n    For a set of image sizes and feature maps, computes a set\n    of anchors\n    \"\"\"\n\n    def __init__(\n        self,\n        sizes=(128, 256, 512),\n        aspect_ratios=(0.5, 1.0, 2.0),\n        anchor_strides=(8, 16, 32),\n        straddle_thresh=0,\n    ):\n        super(AnchorGenerator, self).__init__()\n\n        if len(anchor_strides) == 1:\n            anchor_stride = anchor_strides[0]\n            cell_anchors = [\n                generate_anchors(anchor_stride, sizes, aspect_ratios).float()\n            ]\n        else:\n            if len(anchor_strides) != len(sizes):\n                raise RuntimeError(\"FPN should have #anchor_strides == #sizes\")\n\n            cell_anchors = [\n                generate_anchors(\n                    anchor_stride,\n                    size if isinstance(size, (tuple, list)) else (size,),\n                    aspect_ratios\n                ).float()\n                for anchor_stride, size in zip(anchor_strides, sizes)\n            ]\n        self.strides = anchor_strides\n        self.cell_anchors = BufferList(cell_anchors)\n        self.straddle_thresh = 
straddle_thresh\n\n    def num_anchors_per_location(self):\n        return [len(cell_anchors) for cell_anchors in self.cell_anchors]\n\n    def grid_anchors(self, grid_sizes):\n        anchors = []\n        for size, stride, base_anchors in zip(\n            grid_sizes, self.strides, self.cell_anchors\n        ):\n            grid_height, grid_width = size\n            device = base_anchors.device\n            shifts_x = torch.arange(\n                0, grid_width * stride, step=stride, dtype=torch.float32, device=device\n            )\n            shifts_y = torch.arange(\n                0, grid_height * stride, step=stride, dtype=torch.float32, device=device\n            )\n            shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)\n            shift_x = shift_x.reshape(-1)\n            shift_y = shift_y.reshape(-1)\n            shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1)\n\n            anchors.append(\n                (shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4)\n            )\n\n        return anchors\n\n    def add_visibility_to(self, boxlist):\n        image_width, image_height = boxlist.size\n        anchors = boxlist.bbox\n        if self.straddle_thresh >= 0:\n            inds_inside = (\n                (anchors[..., 0] >= -self.straddle_thresh)\n                & (anchors[..., 1] >= -self.straddle_thresh)\n                & (anchors[..., 2] < image_width + self.straddle_thresh)\n                & (anchors[..., 3] < image_height + self.straddle_thresh)\n            )\n        else:\n            device = anchors.device\n            inds_inside = torch.ones(anchors.shape[0], dtype=torch.uint8, device=device)\n        boxlist.add_field(\"visibility\", inds_inside)\n\n    def forward(self, image_list, feature_maps):\n        grid_sizes = [feature_map.shape[-2:] for feature_map in feature_maps]\n        anchors_over_all_feature_maps = self.grid_anchors(grid_sizes)\n        anchors = []\n        for i, 
(image_height, image_width) in enumerate(image_list.image_sizes):\n            anchors_in_image = []\n            for anchors_per_feature_map in anchors_over_all_feature_maps:\n                boxlist = BoxList(\n                    anchors_per_feature_map, (image_width, image_height), mode=\"xyxy\"\n                )\n                self.add_visibility_to(boxlist)\n                anchors_in_image.append(boxlist)\n            anchors.append(anchors_in_image)\n        return anchors\n\n\ndef make_anchor_generator(config):\n    anchor_sizes = config.MODEL.RPN.ANCHOR_SIZES\n    aspect_ratios = config.MODEL.RPN.ASPECT_RATIOS\n    anchor_stride = config.MODEL.RPN.ANCHOR_STRIDE\n    straddle_thresh = config.MODEL.RPN.STRADDLE_THRESH\n\n    if config.MODEL.RPN.USE_FPN:\n        assert len(anchor_stride) == len(\n            anchor_sizes\n        ), \"FPN should have len(ANCHOR_STRIDE) == len(ANCHOR_SIZES)\"\n    else:\n        assert len(anchor_stride) == 1, \"Non-FPN should have a single ANCHOR_STRIDE\"\n    anchor_generator = AnchorGenerator(\n        anchor_sizes, aspect_ratios, anchor_stride, straddle_thresh\n    )\n    return anchor_generator\n\n\ndef make_anchor_generator_retinanet(config):\n    anchor_sizes = config.MODEL.RETINANET.ANCHOR_SIZES\n    aspect_ratios = config.MODEL.RETINANET.ASPECT_RATIOS\n    anchor_strides = config.MODEL.RETINANET.ANCHOR_STRIDES\n    straddle_thresh = config.MODEL.RETINANET.STRADDLE_THRESH\n    octave = config.MODEL.RETINANET.OCTAVE\n    scales_per_octave = config.MODEL.RETINANET.SCALES_PER_OCTAVE\n\n    assert len(anchor_strides) == len(anchor_sizes), \"Only support FPN now\"\n    new_anchor_sizes = []\n    for size in anchor_sizes:\n        per_layer_anchor_sizes = []\n        for scale_per_octave in range(scales_per_octave):\n            octave_scale = octave ** (scale_per_octave / float(scales_per_octave))\n            per_layer_anchor_sizes.append(octave_scale * size)\n        
new_anchor_sizes.append(tuple(per_layer_anchor_sizes))\n\n    anchor_generator = AnchorGenerator(\n        tuple(new_anchor_sizes), aspect_ratios, anchor_strides, straddle_thresh\n    )\n    return anchor_generator\n\n# Copyright (c) 2017-present, Facebook, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n##############################################################################\n#\n# Based on:\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n\n\n# Verify that we compute the same anchors as Shaoqing's matlab implementation:\n#\n#    >> load output/rpn_cachedir/faster_rcnn_VOC2007_ZF_stage1_rpn/anchors.mat\n#    >> anchors\n#\n#    anchors =\n#\n#       -83   -39   100    56\n#      -175   -87   192   104\n#      -359  -183   376   200\n#       -55   -55    72    72\n#      -119  -119   136   136\n#      -247  -247   264   264\n#       -35   -79    52    96\n#       -79  -167    96   184\n#      -167  -343   184   360\n\n# array([[ -83.,  -39.,  100.,   56.],\n#        [-175.,  -87.,  192.,  104.],\n#        [-359., -183.,  376.,  200.],\n#        [ -55.,  -55.,   72.,   72.],\n#        [-119., -119.,  136.,  136.],\n#        [-247., -247.,  264.,  264.],\n#        [ -35.,  -79.,   52.,   96.],\n#        [ -79., -167.,   96.,  
184.],\n#        [-167., -343.,  184.,  360.]])\n\n\ndef generate_anchors(\n    stride=16, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)\n):\n    \"\"\"Generates a matrix of anchor boxes in (x1, y1, x2, y2) format. Anchors\n    are centered on stride / 2, have (approximate) sqrt areas of the specified\n    sizes, and aspect ratios as given.\n    \"\"\"\n    # use the builtin float: the np.float alias is deprecated and was removed\n    # in NumPy 1.24\n    return _generate_anchors(\n        stride,\n        np.array(sizes, dtype=float) / stride,\n        np.array(aspect_ratios, dtype=float),\n    )\n\n\ndef _generate_anchors(base_size, scales, aspect_ratios):\n    \"\"\"Generate anchor (reference) windows by enumerating aspect ratios X\n    scales wrt a reference (0, 0, base_size - 1, base_size - 1) window.\n    \"\"\"\n    anchor = np.array([1, 1, base_size, base_size], dtype=float) - 1\n    anchors = _ratio_enum(anchor, aspect_ratios)\n    anchors = np.vstack(\n        [_scale_enum(anchors[i, :], scales) for i in range(anchors.shape[0])]\n    )\n    return torch.from_numpy(anchors)\n\n\ndef _whctrs(anchor):\n    \"\"\"Return width, height, x center, and y center for an anchor (window).\"\"\"\n    w = anchor[2] - anchor[0] + 1\n    h = anchor[3] - anchor[1] + 1\n    x_ctr = anchor[0] + 0.5 * (w - 1)\n    y_ctr = anchor[1] + 0.5 * (h - 1)\n    return w, h, x_ctr, y_ctr\n\n\ndef _mkanchors(ws, hs, x_ctr, y_ctr):\n    \"\"\"Given a vector of widths (ws) and heights (hs) around a center\n    (x_ctr, y_ctr), output a set of anchors (windows).\n    \"\"\"\n    ws = ws[:, np.newaxis]\n    hs = hs[:, np.newaxis]\n    anchors = np.hstack(\n        (\n            x_ctr - 0.5 * (ws - 1),\n            y_ctr - 0.5 * (hs - 1),\n            x_ctr + 0.5 * (ws - 1),\n            y_ctr + 0.5 * (hs - 1),\n        )\n    )\n    return anchors\n\n\ndef _ratio_enum(anchor, ratios):\n    \"\"\"Enumerate a set of anchors for each aspect ratio wrt an anchor.\"\"\"\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    size = w * h\n    size_ratios = size / ratios\n    ws = 
np.round(np.sqrt(size_ratios))\n    hs = np.round(ws * ratios)\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n\n\ndef _scale_enum(anchor, scales):\n    \"\"\"Enumerate a set of anchors for each scale wrt an anchor.\"\"\"\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    ws = w * scales\n    hs = h * scales\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/fcos/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/fcos/fcos.py",
    "content": "import math\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom .inference import make_fcos_postprocessor\nfrom .loss import make_fcos_loss_evaluator\n\nfrom maskrcnn_benchmark.layers import Scale\n\n\nclass FCOSHead(torch.nn.Module):\n    def __init__(self, cfg, in_channels):\n        \"\"\"\n        Arguments:\n            in_channels (int): number of channels of the input feature\n        \"\"\"\n        super(FCOSHead, self).__init__()\n        # TODO: Implement the sigmoid version first.\n        num_classes = cfg.MODEL.FCOS.NUM_CLASSES - 1\n\n        cls_tower = []\n        bbox_tower = []\n        for i in range(cfg.MODEL.FCOS.NUM_CONVS):\n            cls_tower.append(\n                nn.Conv2d(\n                    in_channels,\n                    in_channels,\n                    kernel_size=3,\n                    stride=1,\n                    padding=1\n                )\n            )\n            cls_tower.append(nn.GroupNorm(32, in_channels))\n            cls_tower.append(nn.ReLU())\n            bbox_tower.append(\n                nn.Conv2d(\n                    in_channels,\n                    in_channels,\n                    kernel_size=3,\n                    stride=1,\n                    padding=1\n                )\n            )\n            bbox_tower.append(nn.GroupNorm(32, in_channels))\n            bbox_tower.append(nn.ReLU())\n\n        self.add_module('cls_tower', nn.Sequential(*cls_tower))\n        self.add_module('bbox_tower', nn.Sequential(*bbox_tower))\n        self.dense_points = cfg.MODEL.FCOS.DENSE_POINTS\n        self.cls_logits = nn.Conv2d(\n            in_channels, num_classes * self.dense_points, kernel_size=3, stride=1,\n            padding=1\n        )\n        self.bbox_pred = nn.Conv2d(\n            in_channels, 4 * self.dense_points, kernel_size=3, stride=1,\n            padding=1\n        )\n        self.centerness = nn.Conv2d(\n            in_channels, 1 * self.dense_points, 
kernel_size=3, stride=1,\n            padding=1\n        )\n\n        # initialization\n        for modules in [self.cls_tower, self.bbox_tower,\n                        self.cls_logits, self.bbox_pred,\n                        self.centerness]:\n            for l in modules.modules():\n                if isinstance(l, nn.Conv2d):\n                    torch.nn.init.normal_(l.weight, std=0.01)\n                    torch.nn.init.constant_(l.bias, 0)\n\n        # initialize the bias for focal loss\n        prior_prob = cfg.MODEL.FCOS.PRIOR_PROB\n        bias_value = -math.log((1 - prior_prob) / prior_prob)\n        torch.nn.init.constant_(self.cls_logits.bias, bias_value)\n\n        self.scales = nn.ModuleList([Scale(init_value=1.0) for _ in range(5)])\n\n    def forward(self, x):\n        logits = []\n        bbox_reg = []\n        centerness = []\n        for l, feature in enumerate(x):\n            cls_tower = self.cls_tower(feature)\n            logits.append(self.cls_logits(cls_tower))\n            centerness.append(self.centerness(cls_tower))\n            bbox_reg.append(torch.exp(self.scales[l](\n                self.bbox_pred(self.bbox_tower(feature))\n            )))\n        return logits, bbox_reg, centerness\n\n\nclass FCOSModule(torch.nn.Module):\n    \"\"\"\n    Module for FCOS computation. Takes feature maps from the backbone and\n    FCOS outputs and losses. 
Only tested on FPN for now.\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(FCOSModule, self).__init__()\n\n        head = FCOSHead(cfg, in_channels)\n\n        box_selector_test = make_fcos_postprocessor(cfg)\n\n        loss_evaluator = make_fcos_loss_evaluator(cfg)\n        self.head = head\n        self.box_selector_test = box_selector_test\n        self.loss_evaluator = loss_evaluator\n        self.fpn_strides = cfg.MODEL.FCOS.FPN_STRIDES\n        self.dense_points = cfg.MODEL.FCOS.DENSE_POINTS\n\n    def forward(self, images, features, targets=None):\n        \"\"\"\n        Arguments:\n            images (ImageList): images for which we want to compute the predictions\n            features (list[Tensor]): features computed from the images that are\n                used for computing the predictions. Each tensor in the list\n                corresponds to a different feature level\n            targets (list[BoxList]): ground-truth boxes present in the image (optional)\n\n        Returns:\n            boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per\n                image.\n            losses (dict[Tensor]): the losses for the model during training. 
During\n                testing, it is an empty dict.\n        \"\"\"\n        box_cls, box_regression, centerness = self.head(features)\n        locations = self.compute_locations(features)\n\n        if self.training:\n            return self._forward_train(\n                locations, box_cls,\n                box_regression,\n                centerness, targets\n            )\n        else:\n            return self._forward_test(\n                locations, box_cls, box_regression,\n                centerness, images.image_sizes\n            )\n\n    def _forward_train(self, locations, box_cls, box_regression, centerness, targets):\n        loss_box_cls, loss_box_reg, loss_centerness = self.loss_evaluator(\n            locations, box_cls, box_regression, centerness, targets\n        )\n        losses = {\n            \"loss_cls\": loss_box_cls,\n            \"loss_reg\": loss_box_reg,\n            \"loss_centerness\": loss_centerness\n        }\n        return None, losses\n\n    def _forward_test(self, locations, box_cls, box_regression, centerness, image_sizes):\n        boxes = self.box_selector_test(\n            locations, box_cls, box_regression,\n            centerness, image_sizes\n        )\n        return boxes, {}\n\n    def compute_locations(self, features):\n        locations = []\n        for level, feature in enumerate(features):\n            h, w = feature.size()[-2:]\n            locations_per_level = self.compute_locations_per_level(\n                h, w, self.fpn_strides[level],\n                feature.device\n            )\n            locations.append(locations_per_level)\n        return locations\n\n    def compute_locations_per_level(self, h, w, stride, device):\n        shifts_x = torch.arange(\n            0, w * stride, step=stride,\n            dtype=torch.float32, device=device\n        )\n        shifts_y = torch.arange(\n            0, h * stride, step=stride,\n            dtype=torch.float32, device=device\n        )\n        
shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)\n        shift_x = shift_x.reshape(-1)\n        shift_y = shift_y.reshape(-1)\n        locations = torch.stack((shift_x, shift_y), dim=1) + stride // 2\n        locations = self.get_dense_locations(locations, stride, device)\n        return locations\n\n    def get_dense_locations(self, locations, stride, device):\n        if self.dense_points <= 1:\n            return locations\n        center = 0\n        step = stride // 4\n        l_t = [center - step, center - step]\n        r_t = [center + step, center - step]\n        l_b = [center - step, center + step]\n        r_b = [center + step, center + step]\n        # build the offsets on the same device as the locations, instead of\n        # assuming CUDA via torch.cuda.FloatTensor\n        if self.dense_points == 4:\n            points = torch.tensor([l_t, r_t, l_b, r_b], dtype=torch.float32, device=device)\n        elif self.dense_points == 5:\n            points = torch.tensor([l_t, r_t, [center, center], l_b, r_b], dtype=torch.float32, device=device)\n        else:\n            raise ValueError(\"dense_points only supports 1, 4 or 5\")\n        points = points.reshape(1, -1, 2)\n        locations = locations.reshape(-1, 1, 2).to(points)\n        dense_locations = points + locations\n        dense_locations = dense_locations.view(-1, 2)\n        return dense_locations\n\n\ndef build_fcos(cfg, in_channels):\n    return FCOSModule(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/fcos/inference.py",
    "content": "import torch\n\nfrom ..inference import RPNPostProcessor\nfrom ..utils import permute_and_flatten\n\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\nfrom maskrcnn_benchmark.modeling.utils import cat\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms\nfrom maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes\n\n\nclass FCOSPostProcessor(torch.nn.Module):\n    \"\"\"\n    Performs post-processing on the outputs of the RetinaNet boxes.\n    This is only used in the testing.\n    \"\"\"\n    def __init__(self, pre_nms_thresh, pre_nms_top_n, nms_thresh,\n                 fpn_post_nms_top_n, min_size, num_classes, dense_points):\n        \"\"\"\n        Arguments:\n            pre_nms_thresh (float)\n            pre_nms_top_n (int)\n            nms_thresh (float)\n            fpn_post_nms_top_n (int)\n            min_size (int)\n            num_classes (int)\n            box_coder (BoxCoder)\n        \"\"\"\n        super(FCOSPostProcessor, self).__init__()\n        self.pre_nms_thresh = pre_nms_thresh\n        self.pre_nms_top_n = pre_nms_top_n\n        self.nms_thresh = nms_thresh\n        self.fpn_post_nms_top_n = fpn_post_nms_top_n\n        self.min_size = min_size\n        self.num_classes = num_classes\n        self.dense_points = dense_points\n\n    def forward_for_single_feature_map(\n            self, locations, box_cls,\n            box_regression, centerness,\n            image_sizes):\n        \"\"\"\n        Arguments:\n            anchors: list[BoxList]\n            box_cls: tensor of size N, A * C, H, W\n            box_regression: tensor of size N, A * 4, H, W\n        \"\"\"\n        N, C, H, W = box_cls.shape\n\n        # put in the same format as locations\n        box_cls = box_cls.view(N, C, H, W).permute(0, 2, 3, 1)\n        box_cls = box_cls.reshape(N, -1, 
self.num_classes - 1).sigmoid()\n        box_regression = box_regression.view(N, self.dense_points * 4, H, W).permute(0, 2, 3, 1)\n        box_regression = box_regression.reshape(N, -1, 4)\n        centerness = centerness.view(N, self.dense_points, H, W).permute(0, 2, 3, 1)\n        centerness = centerness.reshape(N, -1).sigmoid()\n\n        candidate_inds = box_cls > self.pre_nms_thresh\n        pre_nms_top_n = candidate_inds.view(N, -1).sum(1)\n        pre_nms_top_n = pre_nms_top_n.clamp(max=self.pre_nms_top_n)\n\n        # multiply the classification scores with centerness scores\n        box_cls = box_cls * centerness[:, :, None]\n\n        results = []\n        for i in range(N):\n            per_box_cls = box_cls[i]\n            per_candidate_inds = candidate_inds[i]\n            per_box_cls = per_box_cls[per_candidate_inds]\n\n            per_candidate_nonzeros = per_candidate_inds.nonzero()\n            per_box_loc = per_candidate_nonzeros[:, 0]\n            per_class = per_candidate_nonzeros[:, 1] + 1\n\n            per_box_regression = box_regression[i]\n            per_box_regression = per_box_regression[per_box_loc]\n            per_locations = locations[per_box_loc]\n\n            per_pre_nms_top_n = pre_nms_top_n[i]\n\n            if per_candidate_inds.sum().item() > per_pre_nms_top_n.item():\n                per_box_cls, top_k_indices = \\\n                    per_box_cls.topk(per_pre_nms_top_n, sorted=False)\n                per_class = per_class[top_k_indices]\n                per_box_regression = per_box_regression[top_k_indices]\n                per_locations = per_locations[top_k_indices]\n\n            detections = torch.stack([\n                per_locations[:, 0] - per_box_regression[:, 0],\n                per_locations[:, 1] - per_box_regression[:, 1],\n                per_locations[:, 0] + per_box_regression[:, 2],\n                per_locations[:, 1] + per_box_regression[:, 3],\n            ], dim=1)\n\n            h, w = image_sizes[i]\n 
           boxlist = BoxList(detections, (int(w), int(h)), mode=\"xyxy\")\n            boxlist.add_field(\"labels\", per_class)\n            boxlist.add_field(\"scores\", per_box_cls)\n            boxlist = boxlist.clip_to_image(remove_empty=False)\n            boxlist = remove_small_boxes(boxlist, self.min_size)\n            results.append(boxlist)\n\n        return results\n\n    def forward(self, locations, box_cls, box_regression, centerness, image_sizes):\n        \"\"\"\n        Arguments:\n            anchors: list[list[BoxList]]\n            box_cls: list[tensor]\n            box_regression: list[tensor]\n            image_sizes: list[(h, w)]\n        Returns:\n            boxlists (list[BoxList]): the post-processed anchors, after\n                applying box decoding and NMS\n        \"\"\"\n        sampled_boxes = []\n        for _, (l, o, b, c) in enumerate(zip(locations, box_cls, box_regression, centerness)):\n            sampled_boxes.append(\n                self.forward_for_single_feature_map(\n                    l, o, b, c, image_sizes\n                )\n            )\n\n        boxlists = list(zip(*sampled_boxes))\n        boxlists = [cat_boxlist(boxlist) for boxlist in boxlists]\n        boxlists = self.select_over_all_levels(boxlists)\n\n        return boxlists\n\n    # TODO very similar to filter_results from PostProcessor\n    # but filter_results is per image\n    # TODO Yang: solve this issue in the future. 
No good solution\n    # right now.\n    def select_over_all_levels(self, boxlists):\n        num_images = len(boxlists)\n        results = []\n        for i in range(num_images):\n            scores = boxlists[i].get_field(\"scores\")\n            labels = boxlists[i].get_field(\"labels\")\n            boxes = boxlists[i].bbox\n            boxlist = boxlists[i]\n            result = []\n            # skip the background\n            for j in range(1, self.num_classes):\n                inds = (labels == j).nonzero().view(-1)\n\n                scores_j = scores[inds]\n                boxes_j = boxes[inds, :].view(-1, 4)\n                boxlist_for_class = BoxList(boxes_j, boxlist.size, mode=\"xyxy\")\n                boxlist_for_class.add_field(\"scores\", scores_j)\n                boxlist_for_class = boxlist_nms(\n                    boxlist_for_class, self.nms_thresh,\n                    score_field=\"scores\"\n                )\n                num_labels = len(boxlist_for_class)\n                boxlist_for_class.add_field(\n                    \"labels\", torch.full((num_labels,), j,\n                                         dtype=torch.int64,\n                                         device=scores.device)\n                )\n                result.append(boxlist_for_class)\n\n            result = cat_boxlist(result)\n            number_of_detections = len(result)\n\n            # Limit to max_per_image detections **over all classes**\n            if number_of_detections > self.fpn_post_nms_top_n > 0:\n                cls_scores = result.get_field(\"scores\")\n                image_thresh, _ = torch.kthvalue(\n                    cls_scores.cpu(),\n                    number_of_detections - self.fpn_post_nms_top_n + 1\n                )\n                keep = cls_scores >= image_thresh.item()\n                keep = torch.nonzero(keep).squeeze(1)\n                result = result[keep]\n            results.append(result)\n        return results\n\n\ndef 
make_fcos_postprocessor(config):\n    pre_nms_thresh = config.MODEL.FCOS.INFERENCE_TH\n    pre_nms_top_n = config.MODEL.FCOS.PRE_NMS_TOP_N\n    nms_thresh = config.MODEL.FCOS.NMS_TH\n    fpn_post_nms_top_n = config.TEST.DETECTIONS_PER_IMG\n    dense_points = config.MODEL.FCOS.DENSE_POINTS\n\n    box_selector = FCOSPostProcessor(\n        pre_nms_thresh=pre_nms_thresh,\n        pre_nms_top_n=pre_nms_top_n,\n        nms_thresh=nms_thresh,\n        fpn_post_nms_top_n=fpn_post_nms_top_n,\n        min_size=0,\n        num_classes=config.MODEL.FCOS.NUM_CLASSES,\n        dense_points=dense_points)\n\n    return box_selector\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/fcos/loss.py",
    "content": "\"\"\"\nThis file contains specific functions for computing losses of FCOS\nfile\n\"\"\"\n\nimport torch\nfrom torch.nn import functional as F\nfrom torch import nn\n\nfrom ..utils import concat_box_prediction_layers\nfrom maskrcnn_benchmark.layers import IOULoss\nfrom maskrcnn_benchmark.layers import SigmoidFocalLoss\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\nfrom maskrcnn_benchmark.modeling.utils import cat\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\n\n\nINF = 100000000\n\n\nclass FCOSLossComputation(object):\n    \"\"\"\n    This class computes the FCOS losses.\n    \"\"\"\n\n    def __init__(self, cfg):\n        self.cls_loss_func = SigmoidFocalLoss(\n            cfg.MODEL.FCOS.LOSS_GAMMA,\n            cfg.MODEL.FCOS.LOSS_ALPHA\n        )\n        self.center_sample = cfg.MODEL.FCOS.CENTER_SAMPLE\n        self.strides = cfg.MODEL.FCOS.FPN_STRIDES\n        self.radius = cfg.MODEL.FCOS.POS_RADIUS\n        self.loc_loss_type = cfg.MODEL.FCOS.LOC_LOSS_TYPE\n        # we make use of IOU Loss for bounding boxes regression,\n        # but we found that L1 in log scale can yield a similar performance\n        self.box_reg_loss_func = IOULoss(self.loc_loss_type)\n        self.centerness_loss_func = nn.BCEWithLogitsLoss()\n        self.dense_points = cfg.MODEL.FCOS.DENSE_POINTS\n\n    def get_sample_region(self, gt, strides, num_points_per, gt_xs, gt_ys, radius=1):\n        num_gts = gt.shape[0]\n        K = len(gt_xs)\n        gt = gt[None].expand(K, num_gts, 4)\n        center_x = (gt[..., 0] + gt[..., 2]) / 2\n        center_y = (gt[..., 1] + gt[..., 3]) / 2\n        center_gt = gt.new_zeros(gt.shape)\n        # no gt\n        if center_x[..., 0].sum() == 0:\n            return gt_xs.new_zeros(gt_xs.shape, dtype=torch.uint8)\n        beg = 0\n        for level, n_p in enumerate(num_points_per):\n            end = beg + n_p\n            stride = 
strides[level] * radius\n            xmin = center_x[beg:end] - stride\n            ymin = center_y[beg:end] - stride\n            xmax = center_x[beg:end] + stride\n            ymax = center_y[beg:end] + stride\n            # limit sample region in gt\n            center_gt[beg:end, :, 0] = torch.where(xmin > gt[beg:end, :, 0], xmin, gt[beg:end, :, 0])\n            center_gt[beg:end, :, 1] = torch.where(ymin > gt[beg:end, :, 1], ymin, gt[beg:end, :, 1])\n            center_gt[beg:end, :, 2] = torch.where(xmax > gt[beg:end, :, 2], gt[beg:end, :, 2], xmax)\n            center_gt[beg:end, :, 3] = torch.where(ymax > gt[beg:end, :, 3], gt[beg:end, :, 3], ymax)\n            beg = end\n        left = gt_xs[:, None] - center_gt[..., 0]\n        right = center_gt[..., 2] - gt_xs[:, None]\n        top = gt_ys[:, None] - center_gt[..., 1]\n        bottom = center_gt[..., 3] - gt_ys[:, None]\n        center_bbox = torch.stack((left, top, right, bottom), -1)\n        inside_gt_bbox_mask = center_bbox.min(-1)[0] > 0\n        return inside_gt_bbox_mask\n\n    def prepare_targets(self, points, targets):\n        object_sizes_of_interest = [\n            [-1, 64],\n            [64, 128],\n            [128, 256],\n            [256, 512],\n            [512, INF],\n        ]\n        expanded_object_sizes_of_interest = []\n        for l, points_per_level in enumerate(points):\n            object_sizes_of_interest_per_level = \\\n                points_per_level.new_tensor(object_sizes_of_interest[l])\n            expanded_object_sizes_of_interest.append(\n                object_sizes_of_interest_per_level[None].expand(len(points_per_level), -1)\n            )\n\n        expanded_object_sizes_of_interest = torch.cat(expanded_object_sizes_of_interest, dim=0)\n        num_points_per_level = [len(points_per_level) for points_per_level in points]\n        self.num_points_per_level = num_points_per_level\n        points_all_level = torch.cat(points, dim=0)\n        labels, reg_targets = 
self.compute_targets_for_locations(\n            points_all_level, targets, expanded_object_sizes_of_interest\n        )\n\n        for i in range(len(labels)):\n            labels[i] = torch.split(labels[i], num_points_per_level, dim=0)\n            reg_targets[i] = torch.split(reg_targets[i], num_points_per_level, dim=0)\n\n        labels_level_first = []\n        reg_targets_level_first = []\n        for level in range(len(points)):\n            labels_level_first.append(\n                torch.cat([labels_per_im[level] for labels_per_im in labels], dim=0)\n            )\n            reg_targets_level_first.append(\n                torch.cat([reg_targets_per_im[level] for reg_targets_per_im in reg_targets], dim=0)\n            )\n\n        return labels_level_first, reg_targets_level_first\n\n    def compute_targets_for_locations(self, locations, targets, object_sizes_of_interest):\n        labels = []\n        reg_targets = []\n        xs, ys = locations[:, 0], locations[:, 1]\n\n        for im_i in range(len(targets)):\n            targets_per_im = targets[im_i]\n            assert targets_per_im.mode == \"xyxy\"\n            bboxes = targets_per_im.bbox\n            labels_per_im = targets_per_im.get_field(\"labels\")\n            area = targets_per_im.area()\n\n            l = xs[:, None] - bboxes[:, 0][None]\n            t = ys[:, None] - bboxes[:, 1][None]\n            r = bboxes[:, 2][None] - xs[:, None]\n            b = bboxes[:, 3][None] - ys[:, None]\n            reg_targets_per_im = torch.stack([l, t, r, b], dim=2)\n            if self.center_sample:\n                is_in_boxes = self.get_sample_region(\n                    bboxes,\n                    self.strides,\n                    self.num_points_per_level,\n                    xs,\n                    ys,\n                    radius=self.radius)\n            else:\n                is_in_boxes = reg_targets_per_im.min(dim=2)[0] > 0\n\n            max_reg_targets_per_im = 
reg_targets_per_im.max(dim=2)[0]\n            # limit the regression range for each location\n            is_cared_in_the_level = \\\n                (max_reg_targets_per_im >= object_sizes_of_interest[:, [0]]) & \\\n                (max_reg_targets_per_im <= object_sizes_of_interest[:, [1]])\n\n            locations_to_gt_area = area[None].repeat(len(locations), 1)\n            locations_to_gt_area[is_in_boxes == 0] = INF\n            locations_to_gt_area[is_cared_in_the_level == 0] = INF\n\n            # if there is still more than one object for a location,\n            # we choose the one with minimal area\n            locations_to_min_area, locations_to_gt_inds = locations_to_gt_area.min(dim=1)\n\n            reg_targets_per_im = reg_targets_per_im[range(len(locations)), locations_to_gt_inds]\n            labels_per_im = labels_per_im[locations_to_gt_inds]\n            labels_per_im[locations_to_min_area == INF] = 0\n\n            labels.append(labels_per_im)\n            reg_targets.append(reg_targets_per_im)\n\n        return labels, reg_targets\n\n    def compute_centerness_targets(self, reg_targets):\n        left_right = reg_targets[:, [0, 2]]\n        top_bottom = reg_targets[:, [1, 3]]\n        centerness = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * \\\n                      (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])\n        return torch.sqrt(centerness)\n\n    def __call__(self, locations, box_cls, box_regression, centerness, targets):\n        \"\"\"\n        Arguments:\n            locations (list[Tensor])\n            box_cls (list[Tensor])\n            box_regression (list[Tensor])\n            centerness (list[Tensor])\n            targets (list[BoxList])\n\n        Returns:\n            cls_loss (Tensor)\n            reg_loss (Tensor)\n            centerness_loss (Tensor)\n        \"\"\"\n        N = box_cls[0].size(0)\n        num_classes = box_cls[0].size(1) // self.dense_points\n        labels, reg_targets = 
self.prepare_targets(locations, targets)\n\n        box_cls_flatten = []\n        box_regression_flatten = []\n        centerness_flatten = []\n        labels_flatten = []\n        reg_targets_flatten = []\n        for l in range(len(labels)):\n            box_cls_flatten.append(box_cls[l].permute(0, 2, 3, 1).reshape(-1, num_classes))\n            box_regression_flatten.append(box_regression[l].permute(0, 2, 3, 1).reshape(-1, 4))\n            labels_flatten.append(labels[l].reshape(-1))\n            reg_targets_flatten.append(reg_targets[l].reshape(-1, 4))\n            centerness_flatten.append(centerness[l].permute(0, 2, 3, 1).reshape(-1))\n\n        box_cls_flatten = torch.cat(box_cls_flatten, dim=0)\n        box_regression_flatten = torch.cat(box_regression_flatten, dim=0)\n        centerness_flatten = torch.cat(centerness_flatten, dim=0)\n        labels_flatten = torch.cat(labels_flatten, dim=0)\n        reg_targets_flatten = torch.cat(reg_targets_flatten, dim=0)\n        pos_inds = torch.nonzero(labels_flatten > 0).squeeze(1)\n        cls_loss = self.cls_loss_func(\n            box_cls_flatten,\n            labels_flatten.int()\n        ) / (pos_inds.numel() + N)  # add N to avoid division by zero\n\n        box_regression_flatten = box_regression_flatten[pos_inds]\n        reg_targets_flatten = reg_targets_flatten[pos_inds]\n        centerness_flatten = centerness_flatten[pos_inds]\n\n        if pos_inds.numel() > 0:\n            centerness_targets = self.compute_centerness_targets(reg_targets_flatten)\n            reg_loss = self.box_reg_loss_func(\n                box_regression_flatten,\n                reg_targets_flatten,\n                centerness_targets,\n            )\n            centerness_loss = self.centerness_loss_func(\n                centerness_flatten,\n                centerness_targets\n            )\n        else:\n            reg_loss = box_regression_flatten.sum()\n            centerness_loss = centerness_flatten.sum()\n\n        
return cls_loss, reg_loss, centerness_loss\n\n\ndef make_fcos_loss_evaluator(cfg):\n    loss_evaluator = FCOSLossComputation(cfg)\n    return loss_evaluator\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/inference.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms\nfrom maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes\n\nfrom ..utils import cat\nfrom .utils import permute_and_flatten\n\nclass RPNPostProcessor(torch.nn.Module):\n    \"\"\"\n    Performs post-processing on the outputs of the RPN boxes, before feeding the\n    proposals to the heads\n    \"\"\"\n\n    def __init__(\n        self,\n        pre_nms_top_n,\n        post_nms_top_n,\n        nms_thresh,\n        min_size,\n        box_coder=None,\n        fpn_post_nms_top_n=None,\n    ):\n        \"\"\"\n        Arguments:\n            pre_nms_top_n (int)\n            post_nms_top_n (int)\n            nms_thresh (float)\n            min_size (int)\n            box_coder (BoxCoder)\n            fpn_post_nms_top_n (int)\n        \"\"\"\n        super(RPNPostProcessor, self).__init__()\n        self.pre_nms_top_n = pre_nms_top_n\n        self.post_nms_top_n = post_nms_top_n\n        self.nms_thresh = nms_thresh\n        self.min_size = min_size\n\n        if box_coder is None:\n            box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))\n        self.box_coder = box_coder\n\n        if fpn_post_nms_top_n is None:\n            fpn_post_nms_top_n = post_nms_top_n\n        self.fpn_post_nms_top_n = fpn_post_nms_top_n\n\n    def add_gt_proposals(self, proposals, targets):\n        \"\"\"\n        Arguments:\n            proposals: list[BoxList]\n            targets: list[BoxList]\n        \"\"\"\n        # Get the device we're operating on\n        device = proposals[0].bbox.device\n\n        gt_boxes = [target.copy_with_fields([]) for target in targets]\n\n        # later cat of bbox requires 
all fields to be present for all bbox\n        # so we need to add a dummy for objectness that's missing\n        for gt_box in gt_boxes:\n            gt_box.add_field(\"objectness\", torch.ones(len(gt_box), device=device))\n\n        proposals = [\n            cat_boxlist((proposal, gt_box))\n            for proposal, gt_box in zip(proposals, gt_boxes)\n        ]\n\n        return proposals\n\n    def forward_for_single_feature_map(self, anchors, objectness, box_regression):\n        \"\"\"\n        Arguments:\n            anchors: list[BoxList]\n            objectness: tensor of size N, A, H, W\n            box_regression: tensor of size N, A * 4, H, W\n        \"\"\"\n        device = objectness.device\n        N, A, H, W = objectness.shape\n\n        # put in the same format as anchors\n        objectness = permute_and_flatten(objectness, N, A, 1, H, W).view(N, -1)\n        objectness = objectness.sigmoid()\n\n        box_regression = permute_and_flatten(box_regression, N, A, 4, H, W)\n\n        num_anchors = A * H * W\n\n        pre_nms_top_n = min(self.pre_nms_top_n, num_anchors)\n        objectness, topk_idx = objectness.topk(pre_nms_top_n, dim=1, sorted=True)\n\n        batch_idx = torch.arange(N, device=device)[:, None]\n        box_regression = box_regression[batch_idx, topk_idx]\n\n        image_shapes = [box.size for box in anchors]\n        concat_anchors = torch.cat([a.bbox for a in anchors], dim=0)\n        concat_anchors = concat_anchors.reshape(N, -1, 4)[batch_idx, topk_idx]\n\n        proposals = self.box_coder.decode(\n            box_regression.view(-1, 4), concat_anchors.view(-1, 4)\n        )\n\n        proposals = proposals.view(N, -1, 4)\n\n        result = []\n        for proposal, score, im_shape in zip(proposals, objectness, image_shapes):\n            boxlist = BoxList(proposal, im_shape, mode=\"xyxy\")\n            boxlist.add_field(\"objectness\", score)\n            boxlist = boxlist.clip_to_image(remove_empty=False)\n            
boxlist = remove_small_boxes(boxlist, self.min_size)\n            boxlist = boxlist_nms(\n                boxlist,\n                self.nms_thresh,\n                max_proposals=self.post_nms_top_n,\n                score_field=\"objectness\",\n            )\n            result.append(boxlist)\n        return result\n\n    def forward(self, anchors, objectness, box_regression, targets=None):\n        \"\"\"\n        Arguments:\n            anchors: list[list[BoxList]]\n            objectness: list[tensor]\n            box_regression: list[tensor]\n\n        Returns:\n            boxlists (list[BoxList]): the post-processed anchors, after\n                applying box decoding and NMS\n        \"\"\"\n        sampled_boxes = []\n        num_levels = len(objectness)\n        anchors = list(zip(*anchors))\n        for a, o, b in zip(anchors, objectness, box_regression):\n            sampled_boxes.append(self.forward_for_single_feature_map(a, o, b))\n\n        boxlists = list(zip(*sampled_boxes))\n        boxlists = [cat_boxlist(boxlist) for boxlist in boxlists]\n\n        if num_levels > 1:\n            boxlists = self.select_over_all_levels(boxlists)\n\n        # append ground-truth bboxes to proposals\n        if self.training and targets is not None:\n            boxlists = self.add_gt_proposals(boxlists, targets)\n\n        return boxlists\n\n    def select_over_all_levels(self, boxlists):\n        num_images = len(boxlists)\n        # different behavior during training and during testing:\n        # during training, post_nms_top_n is over *all* the proposals combined, while\n        # during testing, it is over the proposals for each image\n        # TODO resolve this difference and make it consistent. 
It should be per image,\n        # and not per batch\n        if self.training:\n            objectness = torch.cat(\n                [boxlist.get_field(\"objectness\") for boxlist in boxlists], dim=0\n            )\n            box_sizes = [len(boxlist) for boxlist in boxlists]\n            post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))\n            _, inds_sorted = torch.topk(objectness, post_nms_top_n, dim=0, sorted=True)\n            inds_mask = torch.zeros_like(objectness, dtype=torch.uint8)\n            inds_mask[inds_sorted] = 1\n            inds_mask = inds_mask.split(box_sizes)\n            for i in range(num_images):\n                boxlists[i] = boxlists[i][inds_mask[i]]\n        else:\n            for i in range(num_images):\n                objectness = boxlists[i].get_field(\"objectness\")\n                post_nms_top_n = min(self.fpn_post_nms_top_n, len(objectness))\n                _, inds_sorted = torch.topk(\n                    objectness, post_nms_top_n, dim=0, sorted=True\n                )\n                boxlists[i] = boxlists[i][inds_sorted]\n        return boxlists\n\n\ndef make_rpn_postprocessor(config, rpn_box_coder, is_train):\n    fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN\n    if not is_train:\n        fpn_post_nms_top_n = config.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST\n\n    pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TRAIN\n    post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TRAIN\n    if not is_train:\n        pre_nms_top_n = config.MODEL.RPN.PRE_NMS_TOP_N_TEST\n        post_nms_top_n = config.MODEL.RPN.POST_NMS_TOP_N_TEST\n    nms_thresh = config.MODEL.RPN.NMS_THRESH\n    min_size = config.MODEL.RPN.MIN_SIZE\n    box_selector = RPNPostProcessor(\n        pre_nms_top_n=pre_nms_top_n,\n        post_nms_top_n=post_nms_top_n,\n        nms_thresh=nms_thresh,\n        min_size=min_size,\n        box_coder=rpn_box_coder,\n        fpn_post_nms_top_n=fpn_post_nms_top_n,\n    )\n    return box_selector\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/loss.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nThis file contains specific functions for computing losses on the RPN\nfile\n\"\"\"\n\nimport torch\nfrom torch.nn import functional as F\n\nfrom .utils import concat_box_prediction_layers\n\nfrom ..balanced_positive_negative_sampler import BalancedPositiveNegativeSampler\nfrom ..utils import cat\n\nfrom maskrcnn_benchmark.layers import smooth_l1_loss\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\n\n\nclass RPNLossComputation(object):\n    \"\"\"\n    This class computes the RPN loss.\n    \"\"\"\n\n    def __init__(self, proposal_matcher, fg_bg_sampler, box_coder,\n                 generate_labels_func):\n        \"\"\"\n        Arguments:\n            proposal_matcher (Matcher)\n            fg_bg_sampler (BalancedPositiveNegativeSampler)\n            box_coder (BoxCoder)\n        \"\"\"\n        # self.target_preparator = target_preparator\n        self.proposal_matcher = proposal_matcher\n        self.fg_bg_sampler = fg_bg_sampler\n        self.box_coder = box_coder\n        self.copied_fields = []\n        self.generate_labels_func = generate_labels_func\n        self.discard_cases = ['not_visibility', 'between_thresholds']\n\n    def match_targets_to_anchors(self, anchor, target, copied_fields=[]):\n        match_quality_matrix = boxlist_iou(target, anchor)\n        matched_idxs = self.proposal_matcher(match_quality_matrix)\n        # RPN doesn't need any fields from target\n        # for creating the labels, so clear them all\n        target = target.copy_with_fields(copied_fields)\n        # get the targets corresponding GT for each anchor\n        # NB: need to clamp the indices because we can have a single\n        # GT in the image, and matched_idxs can be -2, which goes\n        # out of bounds\n        
matched_targets = target[matched_idxs.clamp(min=0)]\n        matched_targets.add_field(\"matched_idxs\", matched_idxs)\n        return matched_targets\n\n    def prepare_targets(self, anchors, targets):\n        labels = []\n        regression_targets = []\n        for anchors_per_image, targets_per_image in zip(anchors, targets):\n            matched_targets = self.match_targets_to_anchors(\n                anchors_per_image, targets_per_image, self.copied_fields\n            )\n\n            matched_idxs = matched_targets.get_field(\"matched_idxs\")\n            labels_per_image = self.generate_labels_func(matched_targets)\n            labels_per_image = labels_per_image.to(dtype=torch.float32)\n\n            # Background (negative examples)\n            bg_indices = matched_idxs == Matcher.BELOW_LOW_THRESHOLD\n            labels_per_image[bg_indices] = 0\n\n            # discard anchors that go out of the boundaries of the image\n            if \"not_visibility\" in self.discard_cases:\n                labels_per_image[~anchors_per_image.get_field(\"visibility\")] = -1\n\n            # discard indices that are between thresholds\n            if \"between_thresholds\" in self.discard_cases:\n                inds_to_discard = matched_idxs == Matcher.BETWEEN_THRESHOLDS\n                labels_per_image[inds_to_discard] = -1\n\n            # compute regression targets\n            regression_targets_per_image = self.box_coder.encode(\n                matched_targets.bbox, anchors_per_image.bbox\n            )\n\n            labels.append(labels_per_image)\n            regression_targets.append(regression_targets_per_image)\n\n        return labels, regression_targets\n\n\n    def __call__(self, anchors, objectness, box_regression, targets):\n        \"\"\"\n        Arguments:\n            anchors (list[BoxList])\n            objectness (list[Tensor])\n            box_regression (list[Tensor])\n            targets (list[BoxList])\n\n        Returns:\n            
objectness_loss (Tensor)\n            box_loss (Tensor)\n        \"\"\"\n        anchors = [cat_boxlist(anchors_per_image) for anchors_per_image in anchors]\n        labels, regression_targets = self.prepare_targets(anchors, targets)\n        sampled_pos_inds, sampled_neg_inds = self.fg_bg_sampler(labels)\n        sampled_pos_inds = torch.nonzero(torch.cat(sampled_pos_inds, dim=0)).squeeze(1)\n        sampled_neg_inds = torch.nonzero(torch.cat(sampled_neg_inds, dim=0)).squeeze(1)\n\n        sampled_inds = torch.cat([sampled_pos_inds, sampled_neg_inds], dim=0)\n\n        objectness, box_regression = \\\n                concat_box_prediction_layers(objectness, box_regression)\n\n        objectness = objectness.squeeze()\n\n        labels = torch.cat(labels, dim=0)\n        regression_targets = torch.cat(regression_targets, dim=0)\n\n        box_loss = smooth_l1_loss(\n            box_regression[sampled_pos_inds],\n            regression_targets[sampled_pos_inds],\n            beta=1.0 / 9,\n            size_average=False,\n        ) / (sampled_inds.numel())\n\n        objectness_loss = F.binary_cross_entropy_with_logits(\n            objectness[sampled_inds], labels[sampled_inds]\n        )\n\n        return objectness_loss, box_loss\n\n\n# This function should be overwritten in RetinaNet\ndef generate_rpn_labels(matched_targets):\n    matched_idxs = matched_targets.get_field(\"matched_idxs\")\n    labels_per_image = matched_idxs >= 0\n    return labels_per_image\n\n\ndef make_rpn_loss_evaluator(cfg, box_coder):\n    matcher = Matcher(\n        cfg.MODEL.RPN.FG_IOU_THRESHOLD,\n        cfg.MODEL.RPN.BG_IOU_THRESHOLD,\n        allow_low_quality_matches=True,\n    )\n\n    fg_bg_sampler = BalancedPositiveNegativeSampler(\n        cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE, cfg.MODEL.RPN.POSITIVE_FRACTION\n    )\n\n    loss_evaluator = RPNLossComputation(\n        matcher,\n        fg_bg_sampler,\n        box_coder,\n        generate_rpn_labels\n    )\n    return loss_evaluator\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/retinanet/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/retinanet/inference.py",
    "content": "import torch\n\nfrom ..inference import RPNPostProcessor\nfrom ..utils import permute_and_flatten\n\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\nfrom maskrcnn_benchmark.modeling.utils import cat\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_nms\nfrom maskrcnn_benchmark.structures.boxlist_ops import remove_small_boxes\n\n\nclass RetinaNetPostProcessor(RPNPostProcessor):\n    \"\"\"\n    Performs post-processing on the outputs of the RetinaNet boxes.\n    This is only used in the testing.\n    \"\"\"\n    def __init__(\n        self,\n        pre_nms_thresh,\n        pre_nms_top_n,\n        nms_thresh,\n        fpn_post_nms_top_n,\n        min_size,\n        num_classes,\n        box_coder=None,\n    ):\n        \"\"\"\n        Arguments:\n            pre_nms_thresh (float)\n            pre_nms_top_n (int)\n            nms_thresh (float)\n            fpn_post_nms_top_n (int)\n            min_size (int)\n            num_classes (int)\n            box_coder (BoxCoder)\n        \"\"\"\n        super(RetinaNetPostProcessor, self).__init__(\n            pre_nms_thresh, 0, nms_thresh, min_size\n        )\n        self.pre_nms_thresh = pre_nms_thresh\n        self.pre_nms_top_n = pre_nms_top_n\n        self.nms_thresh = nms_thresh\n        self.fpn_post_nms_top_n = fpn_post_nms_top_n\n        self.min_size = min_size\n        self.num_classes = num_classes\n\n        if box_coder is None:\n            box_coder = BoxCoder(weights=(10., 10., 5., 5.))\n        self.box_coder = box_coder\n \n    def add_gt_proposals(self, proposals, targets):\n        \"\"\"\n        This function is not used in RetinaNet\n        \"\"\"\n        pass\n\n    def forward_for_single_feature_map(\n            self, anchors, box_cls, box_regression):\n        \"\"\"\n        Arguments:\n            anchors: 
list[BoxList]\n            box_cls: tensor of size N, A * C, H, W\n            box_regression: tensor of size N, A * 4, H, W\n        \"\"\"\n        device = box_cls.device\n        N, _, H, W = box_cls.shape\n        A = box_regression.size(1) // 4\n        C = box_cls.size(1) // A\n\n        # put in the same format as anchors\n        box_cls = permute_and_flatten(box_cls, N, A, C, H, W)\n        box_cls = box_cls.sigmoid()\n\n        box_regression = permute_and_flatten(box_regression, N, A, 4, H, W)\n        box_regression = box_regression.reshape(N, -1, 4)\n\n        num_anchors = A * H * W\n\n        candidate_inds = box_cls > self.pre_nms_thresh\n\n        pre_nms_top_n = candidate_inds.view(N, -1).sum(1)\n        pre_nms_top_n = pre_nms_top_n.clamp(max=self.pre_nms_top_n)\n\n        results = []\n        for per_box_cls, per_box_regression, per_pre_nms_top_n, \\\n        per_candidate_inds, per_anchors in zip(\n            box_cls,\n            box_regression,\n            pre_nms_top_n,\n            candidate_inds,\n            anchors):\n\n            # Sort and select TopN\n            # TODO most of this can be made out of the loop for\n            # all images. \n            # TODO:Yang: Not easy to do. Because the numbers of detections are\n            # different in each image. Therefore, this part needs to be done\n            # per image. 
\n            per_box_cls = per_box_cls[per_candidate_inds]\n \n            per_box_cls, top_k_indices = \\\n                    per_box_cls.topk(per_pre_nms_top_n, sorted=False)\n\n            per_candidate_nonzeros = \\\n                    per_candidate_inds.nonzero()[top_k_indices, :]\n\n            per_box_loc = per_candidate_nonzeros[:, 0]\n            per_class = per_candidate_nonzeros[:, 1]\n            per_class += 1\n\n            detections = self.box_coder.decode(\n                per_box_regression[per_box_loc, :].view(-1, 4),\n                per_anchors.bbox[per_box_loc, :].view(-1, 4)\n            )\n\n            boxlist = BoxList(detections, per_anchors.size, mode=\"xyxy\")\n            boxlist.add_field(\"labels\", per_class)\n            boxlist.add_field(\"scores\", per_box_cls)\n            boxlist = boxlist.clip_to_image(remove_empty=False)\n            boxlist = remove_small_boxes(boxlist, self.min_size)\n            results.append(boxlist)\n\n        return results\n\n    # TODO very similar to filter_results from PostProcessor\n    # but filter_results is per image\n    # TODO Yang: solve this issue in the future. 
No good solution\n    # right now.\n    def select_over_all_levels(self, boxlists):\n        num_images = len(boxlists)\n        results = []\n        for i in range(num_images):\n            scores = boxlists[i].get_field(\"scores\")\n            labels = boxlists[i].get_field(\"labels\")\n            boxes = boxlists[i].bbox\n            boxlist = boxlists[i]\n            result = []\n            # skip the background\n            for j in range(1, self.num_classes):\n                inds = (labels == j).nonzero().view(-1)\n\n                scores_j = scores[inds]\n                boxes_j = boxes[inds, :].view(-1, 4)\n                boxlist_for_class = BoxList(boxes_j, boxlist.size, mode=\"xyxy\")\n                boxlist_for_class.add_field(\"scores\", scores_j)\n                boxlist_for_class = boxlist_nms(\n                    boxlist_for_class, self.nms_thresh,\n                    score_field=\"scores\"\n                )\n                num_labels = len(boxlist_for_class)\n                boxlist_for_class.add_field(\n                    \"labels\", torch.full((num_labels,), j,\n                                         dtype=torch.int64,\n                                         device=scores.device)\n                )\n                result.append(boxlist_for_class)\n\n            result = cat_boxlist(result)\n            number_of_detections = len(result)\n\n            # Limit to max_per_image detections **over all classes**\n            if number_of_detections > self.fpn_post_nms_top_n > 0:\n                cls_scores = result.get_field(\"scores\")\n                image_thresh, _ = torch.kthvalue(\n                    cls_scores.cpu(),\n                    number_of_detections - self.fpn_post_nms_top_n + 1\n                )\n                keep = cls_scores >= image_thresh.item()\n                keep = torch.nonzero(keep).squeeze(1)\n                result = result[keep]\n            results.append(result)\n        return results\n\n\ndef 
make_retinanet_postprocessor(config, rpn_box_coder, is_train):\n    pre_nms_thresh = config.MODEL.RETINANET.INFERENCE_TH\n    pre_nms_top_n = config.MODEL.RETINANET.PRE_NMS_TOP_N\n    nms_thresh = config.MODEL.RETINANET.NMS_TH\n    fpn_post_nms_top_n = config.TEST.DETECTIONS_PER_IMG\n    min_size = 0\n\n    box_selector = RetinaNetPostProcessor(\n        pre_nms_thresh=pre_nms_thresh,\n        pre_nms_top_n=pre_nms_top_n,\n        nms_thresh=nms_thresh,\n        fpn_post_nms_top_n=fpn_post_nms_top_n,\n        min_size=min_size,\n        num_classes=config.MODEL.RETINANET.NUM_CLASSES,\n        box_coder=rpn_box_coder,\n    )\n\n    return box_selector\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/retinanet/loss.py",
    "content": "\"\"\"\nThis file contains specific functions for computing losses on the RetinaNet\nfile\n\"\"\"\n\nimport torch\nfrom torch.nn import functional as F\n\nfrom ..utils import concat_box_prediction_layers\n\nfrom maskrcnn_benchmark.layers import smooth_l1_loss\nfrom maskrcnn_benchmark.layers import SigmoidFocalLoss\nfrom maskrcnn_benchmark.modeling.matcher import Matcher\nfrom maskrcnn_benchmark.modeling.utils import cat\nfrom maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou\nfrom maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist\nfrom maskrcnn_benchmark.modeling.rpn.loss import RPNLossComputation\n\nclass RetinaNetLossComputation(RPNLossComputation):\n    \"\"\"\n    This class computes the RetinaNet loss.\n    \"\"\"\n\n    def __init__(self, proposal_matcher, box_coder,\n                 generate_labels_func,\n                 sigmoid_focal_loss,\n                 bbox_reg_beta=0.11,\n                 regress_norm=1.0):\n        \"\"\"\n        Arguments:\n            proposal_matcher (Matcher)\n            box_coder (BoxCoder)\n        \"\"\"\n        self.proposal_matcher = proposal_matcher\n        self.box_coder = box_coder\n        self.box_cls_loss_func = sigmoid_focal_loss\n        self.bbox_reg_beta = bbox_reg_beta\n        self.copied_fields = ['labels']\n        self.generate_labels_func = generate_labels_func\n        self.discard_cases = ['between_thresholds']\n        self.regress_norm = regress_norm\n\n    def __call__(self, anchors, box_cls, box_regression, targets):\n        \"\"\"\n        Arguments:\n            anchors (list[BoxList])\n            box_cls (list[Tensor])\n            box_regression (list[Tensor])\n            targets (list[BoxList])\n\n        Returns:\n            retinanet_cls_loss (Tensor)\n            retinanet_regression_loss (Tensor\n        \"\"\"\n        anchors = [cat_boxlist(anchors_per_image) for anchors_per_image in anchors]\n        labels, regression_targets = 
self.prepare_targets(anchors, targets)\n\n        N = len(labels)\n        box_cls, box_regression = \\\n                concat_box_prediction_layers(box_cls, box_regression)\n\n        labels = torch.cat(labels, dim=0)\n        regression_targets = torch.cat(regression_targets, dim=0)\n        pos_inds = torch.nonzero(labels > 0).squeeze(1)\n\n        retinanet_regression_loss = smooth_l1_loss(\n            box_regression[pos_inds],\n            regression_targets[pos_inds],\n            beta=self.bbox_reg_beta,\n            size_average=False,\n        ) / (max(1, pos_inds.numel() * self.regress_norm))\n\n        labels = labels.int()\n\n        retinanet_cls_loss = self.box_cls_loss_func(\n            box_cls,\n            labels\n        ) / (pos_inds.numel() + N)\n\n        return retinanet_cls_loss, retinanet_regression_loss\n\n\ndef generate_retinanet_labels(matched_targets):\n    labels_per_image = matched_targets.get_field(\"labels\")\n    return labels_per_image\n\n\ndef make_retinanet_loss_evaluator(cfg, box_coder):\n    matcher = Matcher(\n        cfg.MODEL.RETINANET.FG_IOU_THRESHOLD,\n        cfg.MODEL.RETINANET.BG_IOU_THRESHOLD,\n        allow_low_quality_matches=True,\n    )\n    sigmoid_focal_loss = SigmoidFocalLoss(\n        cfg.MODEL.RETINANET.LOSS_GAMMA,\n        cfg.MODEL.RETINANET.LOSS_ALPHA\n    )\n\n    loss_evaluator = RetinaNetLossComputation(\n        matcher,\n        box_coder,\n        generate_retinanet_labels,\n        sigmoid_focal_loss,\n        bbox_reg_beta = cfg.MODEL.RETINANET.BBOX_REG_BETA,\n        regress_norm = cfg.MODEL.RETINANET.BBOX_REG_WEIGHT,\n    )\n    return loss_evaluator\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/retinanet/retinanet.py",
    "content": "import math\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom .inference import  make_retinanet_postprocessor\nfrom .loss import make_retinanet_loss_evaluator\nfrom ..anchor_generator import make_anchor_generator_retinanet\n\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\n\n\nclass RetinaNetHead(torch.nn.Module):\n    \"\"\"\n    Adds a RetinNet head with classification and regression heads\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        \"\"\"\n        Arguments:\n            in_channels (int): number of channels of the input feature\n            num_anchors (int): number of anchors to be predicted\n        \"\"\"\n        super(RetinaNetHead, self).__init__()\n        # TODO: Implement the sigmoid version first.\n        num_classes = cfg.MODEL.RETINANET.NUM_CLASSES - 1\n        num_anchors = len(cfg.MODEL.RETINANET.ASPECT_RATIOS) \\\n                        * cfg.MODEL.RETINANET.SCALES_PER_OCTAVE\n\n        cls_tower = []\n        bbox_tower = []\n        for i in range(cfg.MODEL.RETINANET.NUM_CONVS):\n            cls_tower.append(\n                nn.Conv2d(\n                    in_channels,\n                    in_channels,\n                    kernel_size=3,\n                    stride=1,\n                    padding=1\n                )\n            )\n            cls_tower.append(nn.ReLU())\n            bbox_tower.append(\n                nn.Conv2d(\n                    in_channels,\n                    in_channels,\n                    kernel_size=3,\n                    stride=1,\n                    padding=1\n                )\n            )\n            bbox_tower.append(nn.ReLU())\n\n        self.add_module('cls_tower', nn.Sequential(*cls_tower))\n        self.add_module('bbox_tower', nn.Sequential(*bbox_tower))\n        self.cls_logits = nn.Conv2d(\n            in_channels, num_anchors * num_classes, kernel_size=3, stride=1,\n            padding=1\n        )\n        
self.bbox_pred = nn.Conv2d(\n            in_channels,  num_anchors * 4, kernel_size=3, stride=1,\n            padding=1\n        )\n\n        # Initialization\n        for modules in [self.cls_tower, self.bbox_tower, self.cls_logits,\n                  self.bbox_pred]:\n            for l in modules.modules():\n                if isinstance(l, nn.Conv2d):\n                    torch.nn.init.normal_(l.weight, std=0.01)\n                    torch.nn.init.constant_(l.bias, 0)\n\n\n        # retinanet_bias_init\n        prior_prob = cfg.MODEL.RETINANET.PRIOR_PROB\n        bias_value = -math.log((1 - prior_prob) / prior_prob)\n        torch.nn.init.constant_(self.cls_logits.bias, bias_value)\n\n    def forward(self, x):\n        logits = []\n        bbox_reg = []\n        for feature in x:\n            logits.append(self.cls_logits(self.cls_tower(feature)))\n            bbox_reg.append(self.bbox_pred(self.bbox_tower(feature)))\n        return logits, bbox_reg\n\n\nclass RetinaNetModule(torch.nn.Module):\n    \"\"\"\n    Module for RetinaNet computation. Takes feature maps from the backbone and\n    RetinaNet outputs and losses. 
Only tested with FPN for now.\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(RetinaNetModule, self).__init__()\n\n        self.cfg = cfg.clone()\n\n        anchor_generator = make_anchor_generator_retinanet(cfg)\n        head = RetinaNetHead(cfg, in_channels)\n        box_coder = BoxCoder(weights=(10., 10., 5., 5.))\n\n        box_selector_test = make_retinanet_postprocessor(cfg, box_coder, is_train=False)\n\n        loss_evaluator = make_retinanet_loss_evaluator(cfg, box_coder)\n\n        self.anchor_generator = anchor_generator\n        self.head = head\n        self.box_selector_test = box_selector_test\n        self.loss_evaluator = loss_evaluator\n\n    def forward(self, images, features, targets=None):\n        \"\"\"\n        Arguments:\n            images (ImageList): images for which we want to compute the predictions\n            features (list[Tensor]): features computed from the images that are\n                used for computing the predictions. Each tensor in the list\n                corresponds to a different feature level\n            targets (list[BoxList]): ground-truth boxes present in the image (optional)\n\n        Returns:\n            boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per\n                image.\n            losses (dict[Tensor]): the losses for the model during training. 
During\n                testing, it is an empty dict.\n        \"\"\"\n        box_cls, box_regression = self.head(features)\n        anchors = self.anchor_generator(images, features)\n\n        if self.training:\n            return self._forward_train(anchors, box_cls, box_regression, targets)\n        else:\n            return self._forward_test(anchors, box_cls, box_regression)\n\n    def _forward_train(self, anchors, box_cls, box_regression, targets):\n        loss_box_cls, loss_box_reg = self.loss_evaluator(\n            anchors, box_cls, box_regression, targets\n        )\n        losses = {\n            \"loss_retina_cls\": loss_box_cls,\n            \"loss_retina_reg\": loss_box_reg,\n        }\n        return anchors, losses\n\n    def _forward_test(self, anchors, box_cls, box_regression):\n        boxes = self.box_selector_test(anchors, box_cls, box_regression)\n        return boxes, {}\n\n\ndef build_retinanet(cfg, in_channels):\n    return RetinaNetModule(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/rpn.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\n\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\nfrom maskrcnn_benchmark.modeling.rpn.retinanet.retinanet import build_retinanet\nfrom maskrcnn_benchmark.modeling.rpn.fcos.fcos import build_fcos\nfrom .loss import make_rpn_loss_evaluator\nfrom .anchor_generator import make_anchor_generator\nfrom .inference import make_rpn_postprocessor\n\n\nclass RPNHeadConvRegressor(nn.Module):\n    \"\"\"\n    A simple RPN Head for classification and bbox regression\n    \"\"\"\n\n    def __init__(self, cfg, in_channels, num_anchors):\n        \"\"\"\n        Arguments:\n            cfg              : config\n            in_channels (int): number of channels of the input feature\n            num_anchors (int): number of anchors to be predicted\n        \"\"\"\n        super(RPNHeadConvRegressor, self).__init__()\n        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)\n        self.bbox_pred = nn.Conv2d(\n            in_channels, num_anchors * 4, kernel_size=1, stride=1\n        )\n\n        for l in [self.cls_logits, self.bbox_pred]:\n            torch.nn.init.normal_(l.weight, std=0.01)\n            torch.nn.init.constant_(l.bias, 0)\n\n    def forward(self, x):\n        assert isinstance(x, (list, tuple))\n        logits = [self.cls_logits(y) for y in x]\n        bbox_reg = [self.bbox_pred(y) for y in x]\n\n        return logits, bbox_reg\n\n\nclass RPNHeadFeatureSingleConv(nn.Module):\n    \"\"\"\n    Adds a simple RPN Head with one conv to extract the feature\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        \"\"\"\n        Arguments:\n            cfg              : config\n            in_channels (int): number of channels of the input feature\n        \"\"\"\n        super(RPNHeadFeatureSingleConv, 
self).__init__()\n        self.conv = nn.Conv2d(\n            in_channels, in_channels, kernel_size=3, stride=1, padding=1\n        )\n\n        for l in [self.conv]:\n            torch.nn.init.normal_(l.weight, std=0.01)\n            torch.nn.init.constant_(l.bias, 0)\n\n        self.out_channels = in_channels\n\n    def forward(self, x):\n        assert isinstance(x, (list, tuple))\n        x = [F.relu(self.conv(z)) for z in x]\n\n        return x\n\n\n@registry.RPN_HEADS.register(\"SingleConvRPNHead\")\nclass RPNHead(nn.Module):\n    \"\"\"\n    Adds a simple RPN Head with classification and regression heads\n    \"\"\"\n\n    def __init__(self, cfg, in_channels, num_anchors):\n        \"\"\"\n        Arguments:\n            cfg              : config\n            in_channels (int): number of channels of the input feature\n            num_anchors (int): number of anchors to be predicted\n        \"\"\"\n        super(RPNHead, self).__init__()\n        self.conv = nn.Conv2d(\n            in_channels, in_channels, kernel_size=3, stride=1, padding=1\n        )\n        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)\n        self.bbox_pred = nn.Conv2d(\n            in_channels, num_anchors * 4, kernel_size=1, stride=1\n        )\n\n        for l in [self.conv, self.cls_logits, self.bbox_pred]:\n            torch.nn.init.normal_(l.weight, std=0.01)\n            torch.nn.init.constant_(l.bias, 0)\n\n    def forward(self, x):\n        logits = []\n        bbox_reg = []\n        for feature in x:\n            t = F.relu(self.conv(feature))\n            logits.append(self.cls_logits(t))\n            bbox_reg.append(self.bbox_pred(t))\n        return logits, bbox_reg\n\n\nclass RPNModule(torch.nn.Module):\n    \"\"\"\n    Module for RPN computation. Takes feature maps from the backbone and computes RPN\n    proposals and losses. 
Works for both FPN and non-FPN.\n    \"\"\"\n\n    def __init__(self, cfg, in_channels):\n        super(RPNModule, self).__init__()\n\n        self.cfg = cfg.clone()\n\n        anchor_generator = make_anchor_generator(cfg)\n\n        rpn_head = registry.RPN_HEADS[cfg.MODEL.RPN.RPN_HEAD]\n        head = rpn_head(\n            cfg, in_channels, anchor_generator.num_anchors_per_location()[0]\n        )\n\n        rpn_box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))\n\n        box_selector_train = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=True)\n        box_selector_test = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=False)\n\n        loss_evaluator = make_rpn_loss_evaluator(cfg, rpn_box_coder)\n\n        self.anchor_generator = anchor_generator\n        self.head = head\n        self.box_selector_train = box_selector_train\n        self.box_selector_test = box_selector_test\n        self.loss_evaluator = loss_evaluator\n\n    def forward(self, images, features, targets=None):\n        \"\"\"\n        Arguments:\n            images (ImageList): images for which we want to compute the predictions\n            features (list[Tensor]): features computed from the images that are\n                used for computing the predictions. Each tensor in the list\n                corresponds to a different feature level\n            targets (list[BoxList]): ground-truth boxes present in the image (optional)\n\n        Returns:\n            boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per\n                image.\n            losses (dict[Tensor]): the losses for the model during training. 
During\n                testing, it is an empty dict.\n        \"\"\"\n        objectness, rpn_box_regression = self.head(features)\n        anchors = self.anchor_generator(images, features)\n\n        if self.training:\n            return self._forward_train(anchors, objectness, rpn_box_regression, targets)\n        else:\n            return self._forward_test(anchors, objectness, rpn_box_regression)\n\n    def _forward_train(self, anchors, objectness, rpn_box_regression, targets):\n        if self.cfg.MODEL.RPN_ONLY:\n            # When training an RPN-only model, the loss is determined by the\n            # predicted objectness and rpn_box_regression values and there is\n            # no need to transform the anchors into predicted boxes; this is an\n            # optimization that avoids the unnecessary transformation.\n            boxes = anchors\n        else:\n            # For end-to-end models, anchors must be transformed into boxes and\n            # sampled into a training batch.\n            with torch.no_grad():\n                boxes = self.box_selector_train(\n                    anchors, objectness, rpn_box_regression, targets\n                )\n        loss_objectness, loss_rpn_box_reg = self.loss_evaluator(\n            anchors, objectness, rpn_box_regression, targets\n        )\n        losses = {\n            \"loss_objectness\": loss_objectness,\n            \"loss_rpn_box_reg\": loss_rpn_box_reg,\n        }\n        return boxes, losses\n\n    def _forward_test(self, anchors, objectness, rpn_box_regression):\n        boxes = self.box_selector_test(anchors, objectness, rpn_box_regression)\n        if self.cfg.MODEL.RPN_ONLY:\n            # For end-to-end models, the RPN proposals are an intermediate state\n            # and we don't bother to sort them in decreasing score order. 
For RPN-only\n            # models, the proposals are the final output and we return them in\n            # high-to-low confidence order.\n            inds = [\n                box.get_field(\"objectness\").sort(descending=True)[1] for box in boxes\n            ]\n            boxes = [box[ind] for box, ind in zip(boxes, inds)]\n        return boxes, {}\n\n\ndef build_rpn(cfg, in_channels):\n    \"\"\"\n    Builds the proposal network selected by the config: FCOS, RetinaNet, or the\n    default RPN.\n    \"\"\"\n    if cfg.MODEL.FCOS_ON:\n        return build_fcos(cfg, in_channels)\n    if cfg.MODEL.RETINANET_ON:\n        return build_retinanet(cfg, in_channels)\n\n    return RPNModule(cfg, in_channels)\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/rpn/utils.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nUtility functions minipulating the prediction layers\n\"\"\"\n\nfrom ..utils import cat\n\nimport torch\n\ndef permute_and_flatten(layer, N, A, C, H, W):\n    layer = layer.view(N, -1, C, H, W)\n    layer = layer.permute(0, 3, 4, 1, 2)\n    layer = layer.reshape(N, -1, C)\n    return layer\n\n\ndef concat_box_prediction_layers(box_cls, box_regression):\n    box_cls_flattened = []\n    box_regression_flattened = []\n    # for each feature level, permute the outputs to make them be in the\n    # same format as the labels. Note that the labels are computed for\n    # all feature levels concatenated, so we keep the same representation\n    # for the objectness and the box_regression\n    for box_cls_per_level, box_regression_per_level in zip(\n        box_cls, box_regression\n    ):\n        N, AxC, H, W = box_cls_per_level.shape\n        Ax4 = box_regression_per_level.shape[1]\n        A = Ax4 // 4\n        C = AxC // A\n        box_cls_per_level = permute_and_flatten(\n            box_cls_per_level, N, A, C, H, W\n        )\n        box_cls_flattened.append(box_cls_per_level)\n\n        box_regression_per_level = permute_and_flatten(\n            box_regression_per_level, N, A, 4, H, W\n        )\n        box_regression_flattened.append(box_regression_per_level)\n    # concatenate on the first dimension (representing the feature levels), to\n    # take into account the way the labels were generated (with all feature maps\n    # being concatenated as well)\n    box_cls = cat(box_cls_flattened, dim=1).reshape(-1, C)\n    box_regression = cat(box_regression_flattened, dim=1).reshape(-1, 4)\n    return box_cls, box_regression\n"
  },
  {
    "path": "maskrcnn_benchmark/modeling/utils.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\"\"\"\nMiscellaneous utility functions\n\"\"\"\n\nimport torch\n\n\ndef cat(tensors, dim=0):\n    \"\"\"\n    Efficient version of torch.cat that avoids a copy if there is only a single element in a list\n    \"\"\"\n    assert isinstance(tensors, (list, tuple))\n    if len(tensors) == 1:\n        return tensors[0]\n    return torch.cat(tensors, dim)\n"
  },
  {
    "path": "maskrcnn_benchmark/solver/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom .build import make_optimizer\nfrom .build import make_lr_scheduler\nfrom .lr_scheduler import WarmupMultiStepLR\n"
  },
  {
    "path": "maskrcnn_benchmark/solver/build.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nfrom .lr_scheduler import WarmupMultiStepLR\n\n\ndef make_optimizer(cfg, model):\n    params = []\n    for key, value in model.named_parameters():\n        if not value.requires_grad:\n            continue\n        lr = cfg.SOLVER.BASE_LR\n        weight_decay = cfg.SOLVER.WEIGHT_DECAY\n        if \"bias\" in key:\n            lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BIAS_LR_FACTOR\n            weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS\n        params += [{\"params\": [value], \"lr\": lr, \"weight_decay\": weight_decay}]\n\n    optimizer = torch.optim.SGD(params, lr, momentum=cfg.SOLVER.MOMENTUM)\n    return optimizer\n\n\ndef make_lr_scheduler(cfg, optimizer):\n    return WarmupMultiStepLR(\n        optimizer,\n        cfg.SOLVER.STEPS,\n        cfg.SOLVER.GAMMA,\n        warmup_factor=cfg.SOLVER.WARMUP_FACTOR,\n        warmup_iters=cfg.SOLVER.WARMUP_ITERS,\n        warmup_method=cfg.SOLVER.WARMUP_METHOD,\n    )\n"
  },
  {
    "path": "maskrcnn_benchmark/solver/lr_scheduler.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom bisect import bisect_right\n\nimport torch\n\n\n# FIXME ideally this would be achieved with a CombinedLRScheduler,\n# separating MultiStepLR with WarmupLR\n# but the current LRScheduler design doesn't allow it\nclass WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):\n    def __init__(\n        self,\n        optimizer,\n        milestones,\n        gamma=0.1,\n        warmup_factor=1.0 / 3,\n        warmup_iters=500,\n        warmup_method=\"linear\",\n        last_epoch=-1,\n    ):\n        if not list(milestones) == sorted(milestones):\n            raise ValueError(\n                \"Milestones should be a list of\" \" increasing integers. Got {}\",\n                milestones,\n            )\n\n        if warmup_method not in (\"constant\", \"linear\"):\n            raise ValueError(\n                \"Only 'constant' or 'linear' warmup_method accepted\"\n                \"got {}\".format(warmup_method)\n            )\n        self.milestones = milestones\n        self.gamma = gamma\n        self.warmup_factor = warmup_factor\n        self.warmup_iters = warmup_iters\n        self.warmup_method = warmup_method\n        super(WarmupMultiStepLR, self).__init__(optimizer, last_epoch)\n\n    def get_lr(self):\n        warmup_factor = 1\n        if self.last_epoch < self.warmup_iters:\n            if self.warmup_method == \"constant\":\n                warmup_factor = self.warmup_factor\n            elif self.warmup_method == \"linear\":\n                alpha = float(self.last_epoch) / self.warmup_iters\n                warmup_factor = self.warmup_factor * (1 - alpha) + alpha\n        return [\n            base_lr\n            * warmup_factor\n            * self.gamma ** bisect_right(self.milestones, self.last_epoch)\n            for base_lr in self.base_lrs\n        ]\n"
  },
  {
    "path": "maskrcnn_benchmark/structures/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/structures/bounding_box.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\n# transpose\nFLIP_LEFT_RIGHT = 0\nFLIP_TOP_BOTTOM = 1\n\n\nclass BoxList(object):\n    \"\"\"\n    This class represents a set of bounding boxes.\n    The bounding boxes are represented as a Nx4 Tensor.\n    In order to uniquely determine the bounding boxes with respect\n    to an image, we also store the corresponding image dimensions.\n    They can contain extra information that is specific to each bounding box, such as\n    labels.\n    \"\"\"\n\n    def __init__(self, bbox, image_size, mode=\"xyxy\"):\n        device = bbox.device if isinstance(bbox, torch.Tensor) else torch.device(\"cpu\")\n        bbox = torch.as_tensor(bbox, dtype=torch.float32, device=device)\n        if bbox.ndimension() != 2:\n            raise ValueError(\n                \"bbox should have 2 dimensions, got {}\".format(bbox.ndimension())\n            )\n        if bbox.size(-1) != 4:\n            raise ValueError(\n                \"last dimension of bbox should have a \"\n                \"size of 4, got {}\".format(bbox.size(-1))\n            )\n        if mode not in (\"xyxy\", \"xywh\"):\n            raise ValueError(\"mode should be 'xyxy' or 'xywh'\")\n\n        self.bbox = bbox\n        self.size = image_size  # (image_width, image_height)\n        self.mode = mode\n        self.extra_fields = {}\n\n    def add_field(self, field, field_data):\n        self.extra_fields[field] = field_data\n\n    def get_field(self, field):\n        return self.extra_fields[field]\n\n    def has_field(self, field):\n        return field in self.extra_fields\n\n    def fields(self):\n        return list(self.extra_fields.keys())\n\n    def _copy_extra_fields(self, bbox):\n        for k, v in bbox.extra_fields.items():\n            self.extra_fields[k] = v\n\n    def convert(self, mode):\n        if mode not in (\"xyxy\", \"xywh\"):\n            raise ValueError(\"mode should be 'xyxy' or 
'xywh'\")\n        if mode == self.mode:\n            return self\n        # we only have two modes, so don't need to check\n        # self.mode\n        xmin, ymin, xmax, ymax = self._split_into_xyxy()\n        if mode == \"xyxy\":\n            bbox = torch.cat((xmin, ymin, xmax, ymax), dim=-1)\n            bbox = BoxList(bbox, self.size, mode=mode)\n        else:\n            TO_REMOVE = 1\n            bbox = torch.cat(\n                (xmin, ymin, xmax - xmin + TO_REMOVE, ymax - ymin + TO_REMOVE), dim=-1\n            )\n            bbox = BoxList(bbox, self.size, mode=mode)\n        bbox._copy_extra_fields(self)\n        return bbox\n\n    def _split_into_xyxy(self):\n        if self.mode == \"xyxy\":\n            xmin, ymin, xmax, ymax = self.bbox.split(1, dim=-1)\n            return xmin, ymin, xmax, ymax\n        elif self.mode == \"xywh\":\n            TO_REMOVE = 1\n            xmin, ymin, w, h = self.bbox.split(1, dim=-1)\n            return (\n                xmin,\n                ymin,\n                xmin + (w - TO_REMOVE).clamp(min=0),\n                ymin + (h - TO_REMOVE).clamp(min=0),\n            )\n        else:\n            raise RuntimeError(\"Should not be here\")\n\n    def resize(self, size, *args, **kwargs):\n        \"\"\"\n        Returns a resized copy of this bounding box\n\n        :param size: The requested size in pixels, as a 2-tuple:\n            (width, height).\n        \"\"\"\n\n        ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(size, self.size))\n        if ratios[0] == ratios[1]:\n            ratio = ratios[0]\n            scaled_box = self.bbox * ratio\n            bbox = BoxList(scaled_box, size, mode=self.mode)\n            # bbox._copy_extra_fields(self)\n            for k, v in self.extra_fields.items():\n                if not isinstance(v, torch.Tensor):\n                    v = v.resize(size, *args, **kwargs)\n                bbox.add_field(k, v)\n            return bbox\n\n        ratio_width, 
ratio_height = ratios\n        xmin, ymin, xmax, ymax = self._split_into_xyxy()\n        scaled_xmin = xmin * ratio_width\n        scaled_xmax = xmax * ratio_width\n        scaled_ymin = ymin * ratio_height\n        scaled_ymax = ymax * ratio_height\n        scaled_box = torch.cat(\n            (scaled_xmin, scaled_ymin, scaled_xmax, scaled_ymax), dim=-1\n        )\n        bbox = BoxList(scaled_box, size, mode=\"xyxy\")\n        # bbox._copy_extra_fields(self)\n        for k, v in self.extra_fields.items():\n            if not isinstance(v, torch.Tensor):\n                v = v.resize(size, *args, **kwargs)\n            bbox.add_field(k, v)\n\n        return bbox.convert(self.mode)\n\n    def transpose(self, method):\n        \"\"\"\n        Transpose bounding box (flip or rotate in 90 degree steps)\n        :param method: One of :py:attr:`PIL.Image.FLIP_LEFT_RIGHT`,\n          :py:attr:`PIL.Image.FLIP_TOP_BOTTOM`, :py:attr:`PIL.Image.ROTATE_90`,\n          :py:attr:`PIL.Image.ROTATE_180`, :py:attr:`PIL.Image.ROTATE_270`,\n          :py:attr:`PIL.Image.TRANSPOSE` or :py:attr:`PIL.Image.TRANSVERSE`.\n        \"\"\"\n        if method not in (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM):\n            raise NotImplementedError(\n                \"Only FLIP_LEFT_RIGHT and FLIP_TOP_BOTTOM implemented\"\n            )\n\n        image_width, image_height = self.size\n        xmin, ymin, xmax, ymax = self._split_into_xyxy()\n        if method == FLIP_LEFT_RIGHT:\n            TO_REMOVE = 1\n            transposed_xmin = image_width - xmax - TO_REMOVE\n            transposed_xmax = image_width - xmin - TO_REMOVE\n            transposed_ymin = ymin\n            transposed_ymax = ymax\n        elif method == FLIP_TOP_BOTTOM:\n            transposed_xmin = xmin\n            transposed_xmax = xmax\n            transposed_ymin = image_height - ymax\n            transposed_ymax = image_height - ymin\n\n        transposed_boxes = torch.cat(\n            (transposed_xmin, transposed_ymin, 
transposed_xmax, transposed_ymax), dim=-1\n        )\n        bbox = BoxList(transposed_boxes, self.size, mode=\"xyxy\")\n        # bbox._copy_extra_fields(self)\n        for k, v in self.extra_fields.items():\n            if not isinstance(v, torch.Tensor):\n                v = v.transpose(method)\n            bbox.add_field(k, v)\n        return bbox.convert(self.mode)\n\n    def crop(self, box):\n        \"\"\"\n        Crops a rectangular region from this bounding box. The box is a\n        4-tuple defining the left, upper, right, and lower pixel\n        coordinate.\n        \"\"\"\n        xmin, ymin, xmax, ymax = self._split_into_xyxy()\n        w, h = box[2] - box[0], box[3] - box[1]\n        cropped_xmin = (xmin - box[0]).clamp(min=0, max=w)\n        cropped_ymin = (ymin - box[1]).clamp(min=0, max=h)\n        cropped_xmax = (xmax - box[0]).clamp(min=0, max=w)\n        cropped_ymax = (ymax - box[1]).clamp(min=0, max=h)\n\n        # TODO should I filter empty boxes here?\n        if False:\n            is_empty = (cropped_xmin == cropped_xmax) | (cropped_ymin == cropped_ymax)\n\n        cropped_box = torch.cat(\n            (cropped_xmin, cropped_ymin, cropped_xmax, cropped_ymax), dim=-1\n        )\n        bbox = BoxList(cropped_box, (w, h), mode=\"xyxy\")\n        # bbox._copy_extra_fields(self)\n        for k, v in self.extra_fields.items():\n            if not isinstance(v, torch.Tensor):\n                v = v.crop(box)\n            bbox.add_field(k, v)\n        return bbox.convert(self.mode)\n\n    # Tensor-like methods\n\n    def to(self, device):\n        bbox = BoxList(self.bbox.to(device), self.size, self.mode)\n        for k, v in self.extra_fields.items():\n            if hasattr(v, \"to\"):\n                v = v.to(device)\n            bbox.add_field(k, v)\n        return bbox\n\n    def __getitem__(self, item):\n        bbox = BoxList(self.bbox[item], self.size, self.mode)\n        for k, v in self.extra_fields.items():\n            bbox.add_field(k, v[item])\n        return bbox\n\n    def __len__(self):\n        return self.bbox.shape[0]\n\n    def clip_to_image(self, remove_empty=True):\n        TO_REMOVE = 1\n        self.bbox[:, 0].clamp_(min=0, max=self.size[0] - TO_REMOVE)\n        self.bbox[:, 1].clamp_(min=0, max=self.size[1] - TO_REMOVE)\n        self.bbox[:, 2].clamp_(min=0, max=self.size[0] - TO_REMOVE)\n        self.bbox[:, 3].clamp_(min=0, max=self.size[1] - TO_REMOVE)\n        if remove_empty:\n            box = self.bbox\n            keep = (box[:, 3] > box[:, 1]) & (box[:, 2] > box[:, 0])\n            return self[keep]\n        return self\n\n    def area(self):\n        box = self.bbox\n        if self.mode == \"xyxy\":\n            TO_REMOVE = 1\n            area = (box[:, 2] - box[:, 0] + TO_REMOVE) * (box[:, 3] - box[:, 1] + TO_REMOVE)\n        elif self.mode == \"xywh\":\n            area = box[:, 2] * box[:, 3]\n        else:\n            raise RuntimeError(\"Should not be here\")\n\n        return area\n\n    def copy_with_fields(self, fields, skip_missing=False):\n        bbox = BoxList(self.bbox, self.size, self.mode)\n        if not isinstance(fields, (list, tuple)):\n            fields = [fields]\n        for field in fields:\n            if self.has_field(field):\n                bbox.add_field(field, self.get_field(field))\n            elif not skip_missing:\n                raise KeyError(\"Field '{}' not found in {}\".format(field, self))\n        return bbox\n\n    def __repr__(self):\n        s = self.__class__.__name__ + \"(\"\n        s += \"num_boxes={}, \".format(len(self))\n        s += \"image_width={}, \".format(self.size[0])\n        s += \"image_height={}, \".format(self.size[1])\n        s += \"mode={})\".format(self.mode)\n        return s\n\n\nif __name__ == \"__main__\":\n    bbox = BoxList([[0, 0, 10, 10], [0, 0, 5, 5]], (10, 10))\n    s_bbox = bbox.resize((5, 5))\n    print(s_bbox)\n    print(s_bbox.bbox)\n\n    t_bbox = bbox.transpose(0)\n    print(t_bbox)\n    print(t_bbox.bbox)\n"
  },
  {
    "path": "maskrcnn_benchmark/structures/boxlist_ops.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nfrom .bounding_box import BoxList\n\nfrom maskrcnn_benchmark.layers import nms as _box_nms\n\n\ndef boxlist_nms(boxlist, nms_thresh, max_proposals=-1, score_field=\"scores\"):\n    \"\"\"\n    Performs non-maximum suppression on a boxlist, with scores specified\n    in a boxlist field via score_field.\n\n    Arguments:\n        boxlist(BoxList)\n        nms_thresh (float)\n        max_proposals (int): if > 0, then only the top max_proposals are kept\n            after non-maximum suppression\n        score_field (str)\n    \"\"\"\n    if nms_thresh <= 0:\n        return boxlist\n    mode = boxlist.mode\n    boxlist = boxlist.convert(\"xyxy\")\n    boxes = boxlist.bbox\n    score = boxlist.get_field(score_field)\n    keep = _box_nms(boxes, score, nms_thresh)\n    if max_proposals > 0:\n        keep = keep[: max_proposals]\n    boxlist = boxlist[keep]\n    return boxlist.convert(mode)\n\n\ndef remove_small_boxes(boxlist, min_size):\n    \"\"\"\n    Only keep boxes with both sides >= min_size\n\n    Arguments:\n        boxlist (Boxlist)\n        min_size (int)\n    \"\"\"\n    # TODO maybe add an API for querying the ws / hs\n    xywh_boxes = boxlist.convert(\"xywh\").bbox\n    _, _, ws, hs = xywh_boxes.unbind(dim=1)\n    keep = (\n        (ws >= min_size) & (hs >= min_size)\n    ).nonzero().squeeze(1)\n    return boxlist[keep]\n\n\n# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py\n# with slight modifications\ndef boxlist_iou(boxlist1, boxlist2):\n    \"\"\"Compute the intersection over union of two set of boxes.\n    The box order must be (xmin, ymin, xmax, ymax).\n\n    Arguments:\n      box1: (BoxList) bounding boxes, sized [N,4].\n      box2: (BoxList) bounding boxes, sized [M,4].\n\n    Returns:\n      (tensor) iou, sized [N,M].\n\n    Reference:\n      
https://github.com/chainer/chainercv/blob/master/chainercv/utils/bbox/bbox_iou.py\n    \"\"\"\n    if boxlist1.size != boxlist2.size:\n        raise RuntimeError(\n                \"boxlists should have same image size, got {}, {}\".format(boxlist1, boxlist2))\n\n    N = len(boxlist1)\n    M = len(boxlist2)\n\n    area1 = boxlist1.area()\n    area2 = boxlist2.area()\n\n    box1, box2 = boxlist1.bbox, boxlist2.bbox\n\n    lt = torch.max(box1[:, None, :2], box2[:, :2])  # [N,M,2]\n    rb = torch.min(box1[:, None, 2:], box2[:, 2:])  # [N,M,2]\n\n    TO_REMOVE = 1\n\n    wh = (rb - lt + TO_REMOVE).clamp(min=0)  # [N,M,2]\n    inter = wh[:, :, 0] * wh[:, :, 1]  # [N,M]\n\n    iou = inter / (area1[:, None] + area2 - inter)\n    return iou\n\n\n# TODO redundant, remove\ndef _cat(tensors, dim=0):\n    \"\"\"\n    Efficient version of torch.cat that avoids a copy if there is only a single element in a list\n    \"\"\"\n    assert isinstance(tensors, (list, tuple))\n    if len(tensors) == 1:\n        return tensors[0]\n    return torch.cat(tensors, dim)\n\n\ndef cat_boxlist(bboxes):\n    \"\"\"\n    Concatenates a list of BoxList (having the same image size) into a\n    single BoxList\n\n    Arguments:\n        bboxes (list[BoxList])\n    \"\"\"\n    assert isinstance(bboxes, (list, tuple))\n    assert all(isinstance(bbox, BoxList) for bbox in bboxes)\n\n    size = bboxes[0].size\n    assert all(bbox.size == size for bbox in bboxes)\n\n    mode = bboxes[0].mode\n    assert all(bbox.mode == mode for bbox in bboxes)\n\n    fields = set(bboxes[0].fields())\n    assert all(set(bbox.fields()) == fields for bbox in bboxes)\n\n    cat_boxes = BoxList(_cat([bbox.bbox for bbox in bboxes], dim=0), size, mode)\n\n    for field in fields:\n        data = _cat([bbox.get_field(field) for bbox in bboxes], dim=0)\n        cat_boxes.add_field(field, data)\n\n    return cat_boxes\n"
  },
  {
    "path": "maskrcnn_benchmark/structures/image_list.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom __future__ import division\n\nimport torch\n\n\nclass ImageList(object):\n    \"\"\"\n    Structure that holds a list of images (of possibly\n    varying sizes) as a single tensor.\n    This works by padding the images to the same size,\n    and storing in a field the original sizes of each image\n    \"\"\"\n\n    def __init__(self, tensors, image_sizes):\n        \"\"\"\n        Arguments:\n            tensors (tensor)\n            image_sizes (list[tuple[int, int]])\n        \"\"\"\n        self.tensors = tensors\n        self.image_sizes = image_sizes\n\n    def to(self, *args, **kwargs):\n        cast_tensor = self.tensors.to(*args, **kwargs)\n        return ImageList(cast_tensor, self.image_sizes)\n\n\ndef to_image_list(tensors, size_divisible=0):\n    \"\"\"\n    tensors can be an ImageList, a torch.Tensor or\n    an iterable of Tensors. It can't be a numpy array.\n    When tensors is an iterable of Tensors, it pads\n    the Tensors with zeros so that they have the same\n    shape\n    \"\"\"\n    if isinstance(tensors, torch.Tensor) and size_divisible > 0:\n        tensors = [tensors]\n\n    if isinstance(tensors, ImageList):\n        return tensors\n    elif isinstance(tensors, torch.Tensor):\n        # single tensor shape can be inferred\n        if tensors.dim() == 3:\n            tensors = tensors[None]\n        assert tensors.dim() == 4\n        image_sizes = [tensor.shape[-2:] for tensor in tensors]\n        return ImageList(tensors, image_sizes)\n    elif isinstance(tensors, (tuple, list)):\n        max_size = tuple(max(s) for s in zip(*[img.shape for img in tensors]))\n\n        # TODO Ideally, just remove this and let me model handle arbitrary\n        # input sizs\n        if size_divisible > 0:\n            import math\n\n            stride = size_divisible\n            max_size = list(max_size)\n            max_size[1] = int(math.ceil(max_size[1] / 
stride) * stride)\n            max_size[2] = int(math.ceil(max_size[2] / stride) * stride)\n            max_size = tuple(max_size)\n\n        batch_shape = (len(tensors),) + max_size\n        batched_imgs = tensors[0].new(*batch_shape).zero_()\n        for img, pad_img in zip(tensors, batched_imgs):\n            pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n\n        image_sizes = [im.shape[-2:] for im in tensors]\n\n        return ImageList(batched_imgs, image_sizes)\n    else:\n        raise TypeError(\"Unsupported type for to_image_list: {}\".format(type(tensors)))\n"
  },
  {
    "path": "maskrcnn_benchmark/structures/keypoint.py",
    "content": "import torch\n\n\n# transpose\nFLIP_LEFT_RIGHT = 0\nFLIP_TOP_BOTTOM = 1\n\nclass Keypoints(object):\n    def __init__(self, keypoints, size, mode=None):\n        # FIXME remove check once we have better integration with device\n        # in my version this would consistently return a CPU tensor\n        device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device('cpu')\n        keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device)\n        num_keypoints = keypoints.shape[0]\n        if num_keypoints:\n            keypoints = keypoints.view(num_keypoints, -1, 3)\n        \n        # TODO should I split them?\n        # self.visibility = keypoints[..., 2]\n        self.keypoints = keypoints# [..., :2]\n\n        self.size = size\n        self.mode = mode\n        self.extra_fields = {}\n\n    def crop(self, box):\n        raise NotImplementedError()\n\n    def resize(self, size, *args, **kwargs):\n        ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(size, self.size))\n        ratio_w, ratio_h = ratios\n        resized_data = self.keypoints.clone()\n        resized_data[..., 0] *= ratio_w\n        resized_data[..., 1] *= ratio_h\n        keypoints = type(self)(resized_data, size, self.mode)\n        for k, v in self.extra_fields.items():\n            keypoints.add_field(k, v)\n        return keypoints\n\n    def transpose(self, method):\n        if method not in (FLIP_LEFT_RIGHT,):\n            raise NotImplementedError(\n                    \"Only FLIP_LEFT_RIGHT implemented\")\n\n        flip_inds = type(self).FLIP_INDS\n        flipped_data = self.keypoints[:, flip_inds]\n        width = self.size[0]\n        TO_REMOVE = 1\n        # Flip x coordinates\n        flipped_data[..., 0] = width - flipped_data[..., 0] - TO_REMOVE\n\n        # Maintain COCO convention that if visibility == 0, then x, y = 0\n        inds = flipped_data[..., 2] == 0\n        flipped_data[inds] = 0\n\n        
keypoints = type(self)(flipped_data, self.size, self.mode)\n        for k, v in self.extra_fields.items():\n            keypoints.add_field(k, v)\n        return keypoints\n\n    def to(self, *args, **kwargs):\n        keypoints = type(self)(self.keypoints.to(*args, **kwargs), self.size, self.mode)\n        for k, v in self.extra_fields.items():\n            if hasattr(v, \"to\"):\n                v = v.to(*args, **kwargs)\n            keypoints.add_field(k, v)\n        return keypoints\n\n    def __getitem__(self, item):\n        keypoints = type(self)(self.keypoints[item], self.size, self.mode)\n        for k, v in self.extra_fields.items():\n            keypoints.add_field(k, v[item])\n        return keypoints\n\n    def add_field(self, field, field_data):\n        self.extra_fields[field] = field_data\n\n    def get_field(self, field):\n        return self.extra_fields[field]\n\n    def __repr__(self):\n        s = self.__class__.__name__ + '('\n        s += 'num_instances={}, '.format(len(self.keypoints))\n        s += 'image_width={}, '.format(self.size[0])\n        s += 'image_height={})'.format(self.size[1])\n        return s\n\n\ndef _create_flip_indices(names, flip_map):\n    full_flip_map = flip_map.copy()\n    full_flip_map.update({v: k for k, v in flip_map.items()})\n    flipped_names = [i if i not in full_flip_map else full_flip_map[i] for i in names]\n    flip_indices = [names.index(i) for i in flipped_names]\n    return torch.tensor(flip_indices)\n\n\nclass PersonKeypoints(Keypoints):\n    NAMES = [\n        'nose',\n        'left_eye',\n        'right_eye',\n        'left_ear',\n        'right_ear',\n        'left_shoulder',\n        'right_shoulder',\n        'left_elbow',\n        'right_elbow',\n        'left_wrist',\n        'right_wrist',\n        'left_hip',\n        'right_hip',\n        'left_knee',\n        'right_knee',\n        'left_ankle',\n        'right_ankle'\n    ]\n    FLIP_MAP = {\n        'left_eye': 'right_eye',\n        
'left_ear': 'right_ear',\n        'left_shoulder': 'right_shoulder',\n        'left_elbow': 'right_elbow',\n        'left_wrist': 'right_wrist',\n        'left_hip': 'right_hip',\n        'left_knee': 'right_knee',\n        'left_ankle': 'right_ankle'\n    }\n\n\n# TODO this doesn't look great\nPersonKeypoints.FLIP_INDS = _create_flip_indices(PersonKeypoints.NAMES, PersonKeypoints.FLIP_MAP)\ndef kp_connections(keypoints):\n    kp_lines = [\n        [keypoints.index('left_eye'), keypoints.index('right_eye')],\n        [keypoints.index('left_eye'), keypoints.index('nose')],\n        [keypoints.index('right_eye'), keypoints.index('nose')],\n        [keypoints.index('right_eye'), keypoints.index('right_ear')],\n        [keypoints.index('left_eye'), keypoints.index('left_ear')],\n        [keypoints.index('right_shoulder'), keypoints.index('right_elbow')],\n        [keypoints.index('right_elbow'), keypoints.index('right_wrist')],\n        [keypoints.index('left_shoulder'), keypoints.index('left_elbow')],\n        [keypoints.index('left_elbow'), keypoints.index('left_wrist')],\n        [keypoints.index('right_hip'), keypoints.index('right_knee')],\n        [keypoints.index('right_knee'), keypoints.index('right_ankle')],\n        [keypoints.index('left_hip'), keypoints.index('left_knee')],\n        [keypoints.index('left_knee'), keypoints.index('left_ankle')],\n        [keypoints.index('right_shoulder'), keypoints.index('left_shoulder')],\n        [keypoints.index('right_hip'), keypoints.index('left_hip')],\n    ]\n    return kp_lines\nPersonKeypoints.CONNECTIONS = kp_connections(PersonKeypoints.NAMES)\n\n\n# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop)\ndef keypoints_to_heat_map(keypoints, rois, heatmap_size):\n    if rois.numel() == 0:\n        return rois.new().long(), rois.new().long()\n    offset_x = rois[:, 0]\n    offset_y = rois[:, 1]\n    scale_x = heatmap_size / (rois[:, 2] - rois[:, 0])\n    scale_y = heatmap_size / 
(rois[:, 3] - rois[:, 1])\n\n    offset_x = offset_x[:, None]\n    offset_y = offset_y[:, None]\n    scale_x = scale_x[:, None]\n    scale_y = scale_y[:, None]\n\n    x = keypoints[..., 0]\n    y = keypoints[..., 1]\n\n    x_boundary_inds = x == rois[:, 2][:, None]\n    y_boundary_inds = y == rois[:, 3][:, None]\n\n    x = (x - offset_x) * scale_x\n    x = x.floor().long()\n    y = (y - offset_y) * scale_y\n    y = y.floor().long()\n    \n    x[x_boundary_inds] = heatmap_size - 1\n    y[y_boundary_inds] = heatmap_size - 1\n\n    valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size)\n    vis = keypoints[..., 2] > 0\n    valid = (valid_loc & vis).long()\n\n    lin_ind = y * heatmap_size + x\n    heatmaps = lin_ind * valid\n\n    return heatmaps, valid\n"
  },
  {
    "path": "maskrcnn_benchmark/structures/segmentation_mask.py",
    "content": "import cv2\n\nimport torch\nimport numpy as np\nfrom maskrcnn_benchmark.layers.misc import interpolate\n\nimport pycocotools.mask as mask_utils\n\n# transpose\nFLIP_LEFT_RIGHT = 0\nFLIP_TOP_BOTTOM = 1\n\n\n\"\"\" ABSTRACT\nSegmentations come in either:\n1) Binary masks\n2) Polygons\n\nBinary masks can be represented in a contiguous array\nand operations can be carried out more efficiently,\ntherefore BinaryMaskList handles them together.\n\nPolygons are handled separately for each instance,\nby PolygonInstance and instances are handled by\nPolygonList.\n\nSegmentationList is supposed to represent both,\ntherefore it wraps the functions of BinaryMaskList\nand PolygonList to make it transparent.\n\"\"\"\n\n\nclass BinaryMaskList(object):\n    \"\"\"\n    This class handles binary masks for all objects in the image\n    \"\"\"\n\n    def __init__(self, masks, size):\n        \"\"\"\n            Arguments:\n                masks: Either torch.tensor of [num_instances, H, W]\n                    or list of torch.tensors of [H, W] with num_instances elems,\n                    or RLE (Run Length Encoding) - interpreted as list of dicts,\n                    or BinaryMaskList.\n                size: absolute image size, width first\n\n            After initialization, a hard copy will be made, to leave the\n            initializing source data intact.\n        \"\"\"\n\n        if isinstance(masks, torch.Tensor):\n            # The raw data representation is passed as argument\n            masks = masks.clone()\n        elif isinstance(masks, (list, tuple)):\n            if isinstance(masks[0], torch.Tensor):\n                masks = torch.stack(masks, dim=2).clone()\n            elif isinstance(masks[0], dict) and \"count\" in masks[0]:\n                # RLE interpretation\n\n                masks = mask_utils\n            else:\n                RuntimeError(\n                    \"Type of `masks[0]` could not be interpreted: %s\" % type(masks)\n         
       )\n        elif isinstance(masks, BinaryMaskList):\n            # just hard copy the BinaryMaskList instance's underlying data\n            masks = masks.masks.clone()\n        else:\n            RuntimeError(\n                \"Type of `masks` argument could not be interpreted:%s\" % type(masks)\n            )\n\n        if len(masks.shape) == 2:\n            # if only a single instance mask is passed\n            masks = masks[None]\n\n        assert len(masks.shape) == 3\n        assert masks.shape[1] == size[1], \"%s != %s\" % (masks.shape[1], size[1])\n        assert masks.shape[2] == size[0], \"%s != %s\" % (masks.shape[2], size[0])\n\n        self.masks = masks\n        self.size = tuple(size)\n\n    def transpose(self, method):\n        dim = 1 if method == FLIP_TOP_BOTTOM else 2\n        flipped_masks = self.masks.flip(dim)\n        return BinaryMaskList(flipped_masks, self.size)\n\n    def crop(self, box):\n        assert isinstance(box, (list, tuple, torch.Tensor)), str(type(box))\n        # box is assumed to be xyxy\n        current_width, current_height = self.size\n        xmin, ymin, xmax, ymax = [round(float(b)) for b in box]\n\n        assert xmin <= xmax and ymin <= ymax, str(box)\n        xmin = min(max(xmin, 0), current_width - 1)\n        ymin = min(max(ymin, 0), current_height - 1)\n\n        xmax = min(max(xmax, 0), current_width)\n        ymax = min(max(ymax, 0), current_height)\n\n        xmax = max(xmax, xmin + 1)\n        ymax = max(ymax, ymin + 1)\n\n        width, height = xmax - xmin, ymax - ymin\n        cropped_masks = self.masks[:, ymin:ymax, xmin:xmax]\n        cropped_size = width, height\n        return BinaryMaskList(cropped_masks, cropped_size)\n\n    def resize(self, size):\n        try:\n            iter(size)\n        except TypeError:\n            assert isinstance(size, (int, float))\n            size = size, size\n        width, height = map(int, size)\n\n        assert width > 0\n        assert height > 0\n\n      
  # Height comes first here!\n        resized_masks = torch.nn.functional.interpolate(\n            input=self.masks[None].float(),\n            size=(height, width),\n            mode=\"bilinear\",\n            align_corners=False,\n        )[0].type_as(self.masks)\n        resized_size = width, height\n        return BinaryMaskList(resized_masks, resized_size)\n\n    def convert_to_polygon(self):\n        contours = self._findContours()\n        return PolygonList(contours, self.size)\n\n    def to(self, *args, **kwargs):\n        return self\n\n    def _findContours(self):\n        contours = []\n        masks = self.masks.detach().numpy()\n        for mask in masks:\n            mask = cv2.UMat(mask)\n            contour, hierarchy = cv2.findContours(\n                mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1\n            )\n\n            reshaped_contour = []\n            for entity in contour:\n                assert len(entity.shape) == 3\n                assert entity.shape[1] == 1, \"Hierarchical contours are not allowed\"\n                reshaped_contour.append(entity.reshape(-1).tolist())\n            contours.append(reshaped_contour)\n        return contours\n\n    def __len__(self):\n        return len(self.masks)\n\n    def __getitem__(self, index):\n        # Probably it can cause some overhead\n        # but preserves consistency\n        masks = self.masks[index].clone()\n        return BinaryMaskList(masks, self.size)\n\n    def __iter__(self):\n        return iter(self.masks)\n\n    def __repr__(self):\n        s = self.__class__.__name__ + \"(\"\n        s += \"num_instances={}, \".format(len(self.masks))\n        s += \"image_width={}, \".format(self.size[0])\n        s += \"image_height={})\".format(self.size[1])\n        return s\n\n\nclass PolygonInstance(object):\n    \"\"\"\n    This class holds a set of polygons that represents a single instance\n    of an object mask. 
The object can be represented as a set of\n    polygons\n    \"\"\"\n\n    def __init__(self, polygons, size):\n        \"\"\"\n            Arguments:\n                polygons: a list of lists of numbers.\n                The first level refers to all the polygons that compose the\n                object, and the second level to the polygon coordinates.\n        \"\"\"\n        if isinstance(polygons, (list, tuple)):\n            valid_polygons = []\n            for p in polygons:\n                p = torch.as_tensor(p, dtype=torch.float32)\n                if len(p) >= 6:  # 3 * 2 coordinates\n                    valid_polygons.append(p)\n            polygons = valid_polygons\n\n        elif isinstance(polygons, PolygonInstance):\n            polygons = [p.clone() for p in polygons.polygons]\n        else:\n            raise RuntimeError(\n                \"Type of argument `polygons` is not allowed: %s\" % (type(polygons))\n            )\n\n        \"\"\" This crashes the training way too many times...\n        for p in polygons:\n            assert p[::2].min() >= 0\n            assert p[::2].max() < size[0]\n            assert p[1::2].min() >= 0\n            assert p[1::2].max() < size[1]\n        \"\"\"\n\n        self.polygons = polygons\n        self.size = tuple(size)\n\n    def transpose(self, method):\n        if method not in (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM):\n            raise NotImplementedError(\n                \"Only FLIP_LEFT_RIGHT and FLIP_TOP_BOTTOM implemented\"\n            )\n\n        flipped_polygons = []\n        width, height = self.size\n        if method == FLIP_LEFT_RIGHT:\n            dim = width\n            idx = 0\n        elif method == FLIP_TOP_BOTTOM:\n            dim = height\n            idx = 1\n\n        for poly in self.polygons:\n            p = poly.clone()\n            TO_REMOVE = 1\n            p[idx::2] = dim - poly[idx::2] - TO_REMOVE\n            flipped_polygons.append(p)\n\n        return PolygonInstance(flipped_polygons, 
size=self.size)\n\n    def crop(self, box):\n        assert isinstance(box, (list, tuple, torch.Tensor)), str(type(box))\n\n        # box is assumed to be xyxy\n        current_width, current_height = self.size\n        xmin, ymin, xmax, ymax = map(float, box)\n\n        assert xmin <= xmax and ymin <= ymax, str(box)\n        xmin = min(max(xmin, 0), current_width - 1)\n        ymin = min(max(ymin, 0), current_height - 1)\n\n        xmax = min(max(xmax, 0), current_width)\n        ymax = min(max(ymax, 0), current_height)\n\n        xmax = max(xmax, xmin + 1)\n        ymax = max(ymax, ymin + 1)\n\n        w, h = xmax - xmin, ymax - ymin\n\n        cropped_polygons = []\n        for poly in self.polygons:\n            p = poly.clone()\n            p[0::2] = p[0::2] - xmin  # .clamp(min=0, max=w)\n            p[1::2] = p[1::2] - ymin  # .clamp(min=0, max=h)\n            cropped_polygons.append(p)\n\n        return PolygonInstance(cropped_polygons, size=(w, h))\n\n    def resize(self, size):\n        try:\n            iter(size)\n        except TypeError:\n            assert isinstance(size, (int, float))\n            size = size, size\n\n        ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(size, self.size))\n\n        if ratios[0] == ratios[1]:\n            ratio = ratios[0]\n            scaled_polys = [p * ratio for p in self.polygons]\n            return PolygonInstance(scaled_polys, size)\n\n        ratio_w, ratio_h = ratios\n        scaled_polygons = []\n        for poly in self.polygons:\n            p = poly.clone()\n            p[0::2] *= ratio_w\n            p[1::2] *= ratio_h\n            scaled_polygons.append(p)\n\n        return PolygonInstance(scaled_polygons, size=size)\n\n    def convert_to_binarymask(self):\n        width, height = self.size\n        # formatting for COCO PythonAPI\n        polygons = [p.numpy() for p in self.polygons]\n        rles = mask_utils.frPyObjects(polygons, height, width)\n        rle = 
mask_utils.merge(rles)\n        mask = mask_utils.decode(rle)\n        mask = torch.from_numpy(mask)\n        return mask\n\n    def __len__(self):\n        return len(self.polygons)\n\n    def __repr__(self):\n        s = self.__class__.__name__ + \"(\"\n        s += \"num_groups={}, \".format(len(self.polygons))\n        s += \"image_width={}, \".format(self.size[0])\n        s += \"image_height={})\".format(self.size[1])\n        return s\n\n\nclass PolygonList(object):\n    \"\"\"\n    This class handles PolygonInstances for all objects in the image\n    \"\"\"\n\n    def __init__(self, polygons, size):\n        \"\"\"\n        Arguments:\n            polygons:\n                a list of list of lists of numbers. The first\n                level of the list corresponds to individual instances,\n                the second level to all the polygons that compose the\n                object, and the third level to the polygon coordinates.\n\n                OR\n\n                a list of PolygonInstances.\n\n                OR\n\n                a PolygonList\n\n            size: absolute image size\n\n        \"\"\"\n        if isinstance(polygons, (list, tuple)):\n            if len(polygons) == 0:\n                polygons = [[[]]]\n            if isinstance(polygons[0], (list, tuple)):\n                assert isinstance(polygons[0][0], (list, tuple)), str(\n                    type(polygons[0][0])\n                )\n            else:\n                assert isinstance(polygons[0], PolygonInstance), str(type(polygons[0]))\n\n        elif isinstance(polygons, PolygonList):\n            size = polygons.size\n            polygons = polygons.polygons\n\n        else:\n            raise RuntimeError(\n                \"Type of argument `polygons` is not allowed: %s\" % (type(polygons))\n            )\n\n        assert isinstance(size, (list, tuple)), str(type(size))\n\n        self.polygons = []\n        for p in polygons:\n            p = PolygonInstance(p, size)\n       
     if len(p) > 0:\n                self.polygons.append(p)\n\n        self.size = tuple(size)\n\n    def transpose(self, method):\n        if method not in (FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM):\n            raise NotImplementedError(\n                \"Only FLIP_LEFT_RIGHT and FLIP_TOP_BOTTOM implemented\"\n            )\n\n        flipped_polygons = []\n        for polygon in self.polygons:\n            flipped_polygons.append(polygon.transpose(method))\n\n        return PolygonList(flipped_polygons, size=self.size)\n\n    def crop(self, box):\n        w, h = box[2] - box[0], box[3] - box[1]\n        cropped_polygons = []\n        for polygon in self.polygons:\n            cropped_polygons.append(polygon.crop(box))\n\n        cropped_size = w, h\n        return PolygonList(cropped_polygons, cropped_size)\n\n    def resize(self, size):\n        resized_polygons = []\n        for polygon in self.polygons:\n            resized_polygons.append(polygon.resize(size))\n\n        resized_size = size\n        return PolygonList(resized_polygons, resized_size)\n\n    def to(self, *args, **kwargs):\n        return self\n\n    def convert_to_binarymask(self):\n        if len(self) > 0:\n            masks = torch.stack([p.convert_to_binarymask() for p in self.polygons])\n        else:\n            size = self.size\n            masks = torch.empty([0, size[1], size[0]], dtype=torch.uint8)\n\n        return BinaryMaskList(masks, size=self.size)\n\n    def __len__(self):\n        return len(self.polygons)\n\n    def __getitem__(self, item):\n        if isinstance(item, int):\n            selected_polygons = [self.polygons[item]]\n        elif isinstance(item, slice):\n            selected_polygons = self.polygons[item]\n        else:\n            # advanced indexing on a single dimension\n            selected_polygons = []\n            if isinstance(item, torch.Tensor) and item.dtype == torch.uint8:\n                item = item.nonzero()\n                item = item.squeeze(1) if 
item.numel() > 0 else item\n                item = item.tolist()\n            for i in item:\n                selected_polygons.append(self.polygons[i])\n        return PolygonList(selected_polygons, size=self.size)\n\n    def __iter__(self):\n        return iter(self.polygons)\n\n    def __repr__(self):\n        s = self.__class__.__name__ + \"(\"\n        s += \"num_instances={}, \".format(len(self.polygons))\n        s += \"image_width={}, \".format(self.size[0])\n        s += \"image_height={})\".format(self.size[1])\n        return s\n\n\nclass SegmentationMask(object):\n\n    \"\"\"\n    This class stores the segmentations for all objects in the image.\n    It wraps BinaryMaskList and PolygonList conveniently.\n    \"\"\"\n\n    def __init__(self, instances, size, mode=\"poly\"):\n        \"\"\"\n        Arguments:\n            instances: two types\n                (1) polygon\n                (2) binary mask\n            size: (width, height)\n            mode: 'poly', 'mask'. if mode is 'mask', convert mask of any format to binary mask\n        \"\"\"\n\n        assert isinstance(size, (list, tuple))\n        assert len(size) == 2\n        if isinstance(size[0], torch.Tensor):\n            assert isinstance(size[1], torch.Tensor)\n            size = size[0].item(), size[1].item()\n\n        assert isinstance(size[0], (int, float))\n        assert isinstance(size[1], (int, float))\n\n        if mode == \"poly\":\n            self.instances = PolygonList(instances, size)\n        elif mode == \"mask\":\n            self.instances = BinaryMaskList(instances, size)\n        else:\n            raise NotImplementedError(\"Unknown mode: %s\" % str(mode))\n\n        self.mode = mode\n        self.size = tuple(size)\n\n    def transpose(self, method):\n        flipped_instances = self.instances.transpose(method)\n        return SegmentationMask(flipped_instances, self.size, self.mode)\n\n    def crop(self, box):\n        cropped_instances = 
self.instances.crop(box)\n        cropped_size = cropped_instances.size\n        return SegmentationMask(cropped_instances, cropped_size, self.mode)\n\n    def resize(self, size, *args, **kwargs):\n        resized_instances = self.instances.resize(size)\n        resized_size = size\n        return SegmentationMask(resized_instances, resized_size, self.mode)\n\n    def to(self, *args, **kwargs):\n        return self\n\n    def convert(self, mode):\n        if mode == self.mode:\n            return self\n\n        if mode == \"poly\":\n            converted_instances = self.instances.convert_to_polygon()\n        elif mode == \"mask\":\n            converted_instances = self.instances.convert_to_binarymask()\n        else:\n            raise NotImplementedError(\"Unknown mode: %s\" % str(mode))\n\n        return SegmentationMask(converted_instances, self.size, mode)\n\n    def get_mask_tensor(self):\n        instances = self.instances\n        if self.mode == \"poly\":\n            instances = instances.convert_to_binarymask()\n        # If there is only 1 instance\n        return instances.masks.squeeze(0)\n\n    def __len__(self):\n        return len(self.instances)\n\n    def __getitem__(self, item):\n        selected_instances = self.instances.__getitem__(item)\n        return SegmentationMask(selected_instances, self.size, self.mode)\n\n    def __iter__(self):\n        self.iter_idx = 0\n        return self\n\n    def __next__(self):\n        if self.iter_idx < self.__len__():\n            next_segmentation = self.__getitem__(self.iter_idx)\n            self.iter_idx += 1\n            return next_segmentation\n        raise StopIteration\n\n    def __repr__(self):\n        s = self.__class__.__name__ + \"(\"\n        s += \"num_instances={}, \".format(len(self.instances))\n        s += \"image_width={}, \".format(self.size[0])\n        s += \"image_height={}, \".format(self.size[1])\n        s += \"mode={})\".format(self.mode)\n        return s\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/README.md",
    "content": "# Utility functions\n\nThis folder contain utility functions that are not used in the\ncore library, but are useful for building models or training\ncode using the config system.\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/__init__.py",
    "content": ""
  },
  {
    "path": "maskrcnn_benchmark/utils/c2_model_loading.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport logging\nimport pickle\nfrom collections import OrderedDict\n\nimport torch\n\nfrom maskrcnn_benchmark.utils.model_serialization import load_state_dict\nfrom maskrcnn_benchmark.utils.registry import Registry\n\n\ndef _rename_basic_resnet_weights(layer_keys):\n    layer_keys = [k.replace(\"_\", \".\") for k in layer_keys]\n    layer_keys = [k.replace(\".w\", \".weight\") for k in layer_keys]\n    layer_keys = [k.replace(\".bn\", \"_bn\") for k in layer_keys]\n    layer_keys = [k.replace(\".b\", \".bias\") for k in layer_keys]\n    layer_keys = [k.replace(\"_bn.s\", \"_bn.scale\") for k in layer_keys]\n    layer_keys = [k.replace(\".biasranch\", \".branch\") for k in layer_keys]\n    layer_keys = [k.replace(\"bbox.pred\", \"bbox_pred\") for k in layer_keys]\n    layer_keys = [k.replace(\"cls.score\", \"cls_score\") for k in layer_keys]\n    layer_keys = [k.replace(\"res.conv1_\", \"conv1_\") for k in layer_keys]\n\n    # RPN / Faster RCNN\n    layer_keys = [k.replace(\".biasbox\", \".bbox\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv.rpn\", \"rpn.conv\") for k in layer_keys]\n    layer_keys = [k.replace(\"rpn.bbox.pred\", \"rpn.bbox_pred\") for k in layer_keys]\n    layer_keys = [k.replace(\"rpn.cls.logits\", \"rpn.cls_logits\") for k in layer_keys]\n\n    # Affine-Channel -> BatchNorm enaming\n    layer_keys = [k.replace(\"_bn.scale\", \"_bn.weight\") for k in layer_keys]\n\n    # Make torchvision-compatible\n    layer_keys = [k.replace(\"conv1_bn.\", \"bn1.\") for k in layer_keys]\n\n    layer_keys = [k.replace(\"res2.\", \"layer1.\") for k in layer_keys]\n    layer_keys = [k.replace(\"res3.\", \"layer2.\") for k in layer_keys]\n    layer_keys = [k.replace(\"res4.\", \"layer3.\") for k in layer_keys]\n    layer_keys = [k.replace(\"res5.\", \"layer4.\") for k in layer_keys]\n\n    layer_keys = [k.replace(\".branch2a.\", \".conv1.\") for k in layer_keys]\n    
layer_keys = [k.replace(\".branch2a_bn.\", \".bn1.\") for k in layer_keys]\n    layer_keys = [k.replace(\".branch2b.\", \".conv2.\") for k in layer_keys]\n    layer_keys = [k.replace(\".branch2b_bn.\", \".bn2.\") for k in layer_keys]\n    layer_keys = [k.replace(\".branch2c.\", \".conv3.\") for k in layer_keys]\n    layer_keys = [k.replace(\".branch2c_bn.\", \".bn3.\") for k in layer_keys]\n\n    layer_keys = [k.replace(\".branch1.\", \".downsample.0.\") for k in layer_keys]\n    layer_keys = [k.replace(\".branch1_bn.\", \".downsample.1.\") for k in layer_keys]\n\n    # GroupNorm\n    layer_keys = [k.replace(\"conv1.gn.s\", \"bn1.weight\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv1.gn.bias\", \"bn1.bias\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv2.gn.s\", \"bn2.weight\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv2.gn.bias\", \"bn2.bias\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv3.gn.s\", \"bn3.weight\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv3.gn.bias\", \"bn3.bias\") for k in layer_keys]\n    layer_keys = [k.replace(\"downsample.0.gn.s\", \"downsample.1.weight\") \\\n        for k in layer_keys]\n    layer_keys = [k.replace(\"downsample.0.gn.bias\", \"downsample.1.bias\") \\\n        for k in layer_keys]\n\n    return layer_keys\n\ndef _rename_fpn_weights(layer_keys, stage_names):\n    for mapped_idx, stage_name in enumerate(stage_names, 1):\n        suffix = \"\"\n        if mapped_idx < 4:\n            suffix = \".lateral\"\n        layer_keys = [\n            k.replace(\"fpn.inner.layer{}.sum{}\".format(stage_name, suffix), \"fpn_inner{}\".format(mapped_idx)) for k in layer_keys\n        ]\n        layer_keys = [k.replace(\"fpn.layer{}.sum\".format(stage_name), \"fpn_layer{}\".format(mapped_idx)) for k in layer_keys]\n\n\n    layer_keys = [k.replace(\"rpn.conv.fpn2\", \"rpn.conv\") for k in layer_keys]\n    layer_keys = [k.replace(\"rpn.bbox_pred.fpn2\", \"rpn.bbox_pred\") for k in 
layer_keys]\n    layer_keys = [\n        k.replace(\"rpn.cls_logits.fpn2\", \"rpn.cls_logits\") for k in layer_keys\n    ]\n\n    return layer_keys\n\n\ndef _rename_weights_for_resnet(weights, stage_names):\n    original_keys = sorted(weights.keys())\n    layer_keys = sorted(weights.keys())\n\n    # for X-101, rename output to fc1000 to avoid conflicts afterwards\n    layer_keys = [k if k != \"pred_b\" else \"fc1000_b\" for k in layer_keys]\n    layer_keys = [k if k != \"pred_w\" else \"fc1000_w\" for k in layer_keys]\n\n    # performs basic renaming: _ -> . , etc\n    layer_keys = _rename_basic_resnet_weights(layer_keys)\n\n    # FPN\n    layer_keys = _rename_fpn_weights(layer_keys, stage_names)\n\n    # Mask R-CNN\n    layer_keys = [k.replace(\"mask.fcn.logits\", \"mask_fcn_logits\") for k in layer_keys]\n    layer_keys = [k.replace(\".[mask].fcn\", \"mask_fcn\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv5.mask\", \"conv5_mask\") for k in layer_keys]\n\n    # Keypoint R-CNN\n    layer_keys = [k.replace(\"kps.score.lowres\", \"kps_score_lowres\") for k in layer_keys]\n    layer_keys = [k.replace(\"kps.score\", \"kps_score\") for k in layer_keys]\n    layer_keys = [k.replace(\"conv.fcn\", \"conv_fcn\") for k in layer_keys]\n\n    # Rename for our RPN structure\n    layer_keys = [k.replace(\"rpn.\", \"rpn.head.\") for k in layer_keys]\n\n    key_map = {k: v for k, v in zip(original_keys, layer_keys)}\n\n    logger = logging.getLogger(__name__)\n    logger.info(\"Remapping C2 weights\")\n    max_c2_key_size = max([len(k) for k in original_keys if \"_momentum\" not in k])\n\n    new_weights = OrderedDict()\n    for k in original_keys:\n        v = weights[k]\n        if \"_momentum\" in k:\n            continue\n        # if 'fc1000' in k:\n        #     continue\n        w = torch.from_numpy(v)\n        # if \"bn\" in k:\n        #     w = w.view(1, -1, 1, 1)\n        logger.info(\"C2 name: {: <{}} mapped name: {}\".format(k, max_c2_key_size, 
key_map[k]))\n        new_weights[key_map[k]] = w\n\n    return new_weights\n\n\ndef _load_c2_pickled_weights(file_path):\n    with open(file_path, \"rb\") as f:\n        if torch._six.PY3:\n            data = pickle.load(f, encoding=\"latin1\")\n        else:\n            data = pickle.load(f)\n    if \"blobs\" in data:\n        weights = data[\"blobs\"]\n    else:\n        weights = data\n    return weights\n\n\n_C2_STAGE_NAMES = {\n    \"R-50\": [\"1.2\", \"2.3\", \"3.5\", \"4.2\"],\n    \"R-101\": [\"1.2\", \"2.3\", \"3.22\", \"4.2\"],\n    \"R-152\": [\"1.2\", \"2.7\", \"3.35\", \"4.2\"],\n}\n\nC2_FORMAT_LOADER = Registry()\n\n\n@C2_FORMAT_LOADER.register(\"R-50-C4\")\n@C2_FORMAT_LOADER.register(\"R-50-C5\")\n@C2_FORMAT_LOADER.register(\"R-101-C4\")\n@C2_FORMAT_LOADER.register(\"R-101-C5\")\n@C2_FORMAT_LOADER.register(\"R-50-FPN\")\n@C2_FORMAT_LOADER.register(\"R-50-FPN-RETINANET\")\n@C2_FORMAT_LOADER.register(\"R-101-FPN\")\n@C2_FORMAT_LOADER.register(\"R-101-FPN-RETINANET\")\n@C2_FORMAT_LOADER.register(\"R-152-FPN\")\ndef load_resnet_c2_format(cfg, f):\n    state_dict = _load_c2_pickled_weights(f)\n    conv_body = cfg.MODEL.BACKBONE.CONV_BODY\n    arch = conv_body.replace(\"-C4\", \"\").replace(\"-C5\", \"\").replace(\"-FPN\", \"\")\n    arch = arch.replace(\"-RETINANET\", \"\")\n    stages = _C2_STAGE_NAMES[arch]\n    state_dict = _rename_weights_for_resnet(state_dict, stages)\n    return dict(model=state_dict)\n\n\ndef load_c2_format(cfg, f):\n    return C2_FORMAT_LOADER[cfg.MODEL.BACKBONE.CONV_BODY](cfg, f)\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/checkpoint.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport logging\nimport os\n\nimport torch\n\nfrom maskrcnn_benchmark.utils.model_serialization import load_state_dict\nfrom maskrcnn_benchmark.utils.c2_model_loading import load_c2_format\nfrom maskrcnn_benchmark.utils.imports import import_file\nfrom maskrcnn_benchmark.utils.model_zoo import cache_url\n\n\nclass Checkpointer(object):\n    def __init__(\n        self,\n        model,\n        optimizer=None,\n        scheduler=None,\n        save_dir=\"\",\n        save_to_disk=None,\n        logger=None,\n    ):\n        self.model = model\n        self.optimizer = optimizer\n        self.scheduler = scheduler\n        self.save_dir = save_dir\n        self.save_to_disk = save_to_disk\n        if logger is None:\n            logger = logging.getLogger(__name__)\n        self.logger = logger\n\n    def save(self, name, **kwargs):\n        if not self.save_dir:\n            return\n\n        if not self.save_to_disk:\n            return\n\n        data = {}\n        data[\"model\"] = self.model.state_dict()\n        if self.optimizer is not None:\n            data[\"optimizer\"] = self.optimizer.state_dict()\n        if self.scheduler is not None:\n            data[\"scheduler\"] = self.scheduler.state_dict()\n        data.update(kwargs)\n\n        save_file = os.path.join(self.save_dir, \"{}.pth\".format(name))\n        self.logger.info(\"Saving checkpoint to {}\".format(save_file))\n        torch.save(data, save_file)\n        self.tag_last_checkpoint(save_file)\n\n    def load(self, f=None):\n        if self.has_checkpoint():\n            # override argument with existing checkpoint\n            f = self.get_checkpoint_file()\n        if not f:\n            # no checkpoint could be found\n            self.logger.info(\"No checkpoint found. 
Initializing model from scratch\")\n            return {}\n        self.logger.info(\"Loading checkpoint from {}\".format(f))\n        checkpoint = self._load_file(f)\n        self._load_model(checkpoint)\n        if \"optimizer\" in checkpoint and self.optimizer:\n            self.logger.info(\"Loading optimizer from {}\".format(f))\n            self.optimizer.load_state_dict(checkpoint.pop(\"optimizer\"))\n        if \"scheduler\" in checkpoint and self.scheduler:\n            self.logger.info(\"Loading scheduler from {}\".format(f))\n            self.scheduler.load_state_dict(checkpoint.pop(\"scheduler\"))\n\n        # return any further checkpoint data\n        return checkpoint\n\n    def has_checkpoint(self):\n        save_file = os.path.join(self.save_dir, \"last_checkpoint\")\n        return os.path.exists(save_file)\n\n    def get_checkpoint_file(self):\n        save_file = os.path.join(self.save_dir, \"last_checkpoint\")\n        try:\n            with open(save_file, \"r\") as f:\n                last_saved = f.read()\n                last_saved = last_saved.strip()\n        except IOError:\n            # if file doesn't exist, maybe because it has just been\n            # deleted by a separate process\n            last_saved = \"\"\n        return last_saved\n\n    def tag_last_checkpoint(self, last_filename):\n        save_file = os.path.join(self.save_dir, \"last_checkpoint\")\n        with open(save_file, \"w\") as f:\n            f.write(last_filename)\n\n    def _load_file(self, f):\n        return torch.load(f, map_location=torch.device(\"cpu\"))\n\n    def _load_model(self, checkpoint):\n        load_state_dict(self.model, checkpoint.pop(\"model\"))\n\n\nclass DetectronCheckpointer(Checkpointer):\n    def __init__(\n        self,\n        cfg,\n        model,\n        optimizer=None,\n        scheduler=None,\n        save_dir=\"\",\n        save_to_disk=None,\n        logger=None,\n    ):\n        super(DetectronCheckpointer, self).__init__(\n    
        model, optimizer, scheduler, save_dir, save_to_disk, logger\n        )\n        self.cfg = cfg.clone()\n\n    def _load_file(self, f):\n        # catalog lookup\n        if f.startswith(\"catalog://\"):\n            paths_catalog = import_file(\n                \"maskrcnn_benchmark.config.paths_catalog\", self.cfg.PATHS_CATALOG, True\n            )\n            catalog_f = paths_catalog.ModelCatalog.get(f[len(\"catalog://\") :])\n            self.logger.info(\"{} points to {}\".format(f, catalog_f))\n            f = catalog_f\n        # download url files\n        if f.startswith(\"http\"):\n            # if the file is a url path, download it and cache it\n            cached_f = cache_url(f)\n            self.logger.info(\"url {} cached in {}\".format(f, cached_f))\n            f = cached_f\n        # convert Caffe2 checkpoint from pkl\n        if f.endswith(\".pkl\"):\n            return load_c2_format(self.cfg, f)\n        # load native detectron.pytorch checkpoint\n        loaded = super(DetectronCheckpointer, self)._load_file(f)\n        if \"model\" not in loaded:\n            loaded = dict(model=loaded)\n        return loaded\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/collect_env.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport PIL\n\nfrom torch.utils.collect_env import get_pretty_env_info\n\n\ndef get_pil_version():\n    return \"\\n        Pillow ({})\".format(PIL.__version__)\n\n\ndef collect_env_info():\n    env_str = get_pretty_env_info()\n    env_str += get_pil_version()\n    return env_str\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/comm.py",
    "content": "\"\"\"\nThis file contains primitives for multi-gpu communication.\nThis is useful when doing distributed training.\n\"\"\"\n\nimport pickle\nimport time\n\nimport torch\nimport torch.distributed as dist\n\n\ndef get_world_size():\n    if not dist.is_available():\n        return 1\n    if not dist.is_initialized():\n        return 1\n    return dist.get_world_size()\n\n\ndef get_rank():\n    if not dist.is_available():\n        return 0\n    if not dist.is_initialized():\n        return 0\n    return dist.get_rank()\n\n\ndef is_main_process():\n    return get_rank() == 0\n\n\ndef synchronize():\n    \"\"\"\n    Helper function to synchronize (barrier) among all processes when\n    using distributed training\n    \"\"\"\n    if not dist.is_available():\n        return\n    if not dist.is_initialized():\n        return\n    world_size = dist.get_world_size()\n    if world_size == 1:\n        return\n    dist.barrier()\n\n\ndef all_gather(data):\n    \"\"\"\n    Run all_gather on arbitrary picklable data (not necessarily tensors)\n    Args:\n        data: any picklable object\n    Returns:\n        list[data]: list of data gathered from each rank\n    \"\"\"\n    world_size = get_world_size()\n    if world_size == 1:\n        return [data]\n\n    # serialized to a Tensor\n    buffer = pickle.dumps(data)\n    storage = torch.ByteStorage.from_buffer(buffer)\n    tensor = torch.ByteTensor(storage).to(\"cuda\")\n\n    # obtain Tensor size of each rank\n    local_size = torch.IntTensor([tensor.numel()]).to(\"cuda\")\n    size_list = [torch.IntTensor([0]).to(\"cuda\") for _ in range(world_size)]\n    dist.all_gather(size_list, local_size)\n    size_list = [int(size.item()) for size in size_list]\n    max_size = max(size_list)\n\n    # receiving Tensor from all ranks\n    # we pad the tensor because torch all_gather does not support\n    # gathering tensors of different shapes\n    tensor_list = []\n    for _ in size_list:\n        
tensor_list.append(torch.ByteTensor(size=(max_size,)).to(\"cuda\"))\n    if local_size != max_size:\n        padding = torch.ByteTensor(size=(max_size - local_size,)).to(\"cuda\")\n        tensor = torch.cat((tensor, padding), dim=0)\n    dist.all_gather(tensor_list, tensor)\n\n    data_list = []\n    for size, tensor in zip(size_list, tensor_list):\n        buffer = tensor.cpu().numpy().tobytes()[:size]\n        data_list.append(pickle.loads(buffer))\n\n    return data_list\n\n\ndef reduce_dict(input_dict, average=True):\n    \"\"\"\n    Args:\n        input_dict (dict): all the values will be reduced\n        average (bool): whether to do average or sum\n    Reduce the values in the dictionary from all processes so that process with rank\n    0 has the averaged results. Returns a dict with the same fields as\n    input_dict, after reduction.\n    \"\"\"\n    world_size = get_world_size()\n    if world_size < 2:\n        return input_dict\n    with torch.no_grad():\n        names = []\n        values = []\n        # sort the keys so that they are consistent across processes\n        for k in sorted(input_dict.keys()):\n            names.append(k)\n            values.append(input_dict[k])\n        values = torch.stack(values, dim=0)\n        dist.reduce(values, dst=0)\n        if dist.get_rank() == 0 and average:\n            # only main process gets accumulated, so only divide by\n            # world_size in this case\n            values /= world_size\n        reduced_dict = {k: v for k, v in zip(names, values)}\n    return reduced_dict\n\n\ndef is_pytorch_1_1_0_or_later():\n    return [int(_) for _ in torch.__version__.split(\".\")[:3]] >= [1, 1, 0]\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/cv2_util.py",
    "content": "\"\"\"\nModule for cv2 utility functions and maintaining version compatibility\nbetween 3.x and 4.x\n\"\"\"\nimport cv2\n\n\ndef findContours(*args, **kwargs):\n    \"\"\"\n    Wraps cv2.findContours to maintain compatiblity between versions\n    3 and 4\n\n    Returns:\n        contours, hierarchy\n    \"\"\"\n    if cv2.__version__.startswith('4'):\n        contours, hierarchy = cv2.findContours(*args, **kwargs)\n    elif cv2.__version__.startswith('3'):\n        _, contours, hierarchy = cv2.findContours(*args, **kwargs)\n    else:\n        raise AssertionError(\n            'cv2 must be either version 3 or 4 to call this method')\n\n    return contours, hierarchy\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/env.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport os\n\nfrom maskrcnn_benchmark.utils.imports import import_file\n\n\ndef setup_environment():\n    \"\"\"Perform environment setup work. The default setup is a no-op, but this\n    function allows the user to specify a Python source file that performs\n    custom setup work that may be necessary to their computing environment.\n    \"\"\"\n    custom_module_path = os.environ.get(\"TORCH_DETECTRON_ENV_MODULE\")\n    if custom_module_path:\n        setup_custom_environment(custom_module_path)\n    else:\n        # The default setup is a no-op\n        pass\n\n\ndef setup_custom_environment(custom_module_path):\n    \"\"\"Load custom environment setup from a Python source file and run the setup\n    function.\n    \"\"\"\n    module = import_file(\"maskrcnn_benchmark.utils.env.custom_module\", custom_module_path)\n    assert hasattr(module, \"setup_environment\") and callable(\n        module.setup_environment\n    ), (\n        \"Custom environment module defined in {} does not have the \"\n        \"required callable attribute 'setup_environment'.\"\n    ).format(\n        custom_module_path\n    )\n    module.setup_environment()\n\n\n# Force environment setup when this module is imported\nsetup_environment()\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/imports.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport torch\n\nif torch._six.PY3:\n    import importlib\n    import importlib.util\n    import sys\n\n\n    # from https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa\n    def import_file(module_name, file_path, make_importable=False):\n        spec = importlib.util.spec_from_file_location(module_name, file_path)\n        module = importlib.util.module_from_spec(spec)\n        spec.loader.exec_module(module)\n        if make_importable:\n            sys.modules[module_name] = module\n        return module\nelse:\n    import imp\n\n    def import_file(module_name, file_path, make_importable=None):\n        module = imp.load_source(module_name, file_path)\n        return module\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/logger.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport logging\nimport os\nimport sys\n\n\ndef setup_logger(name, save_dir, distributed_rank, filename=\"log.txt\"):\n    logger = logging.getLogger(name)\n    logger.setLevel(logging.DEBUG)\n    # don't log results for the non-master process\n    if distributed_rank > 0:\n        return logger\n    ch = logging.StreamHandler(stream=sys.stdout)\n    ch.setLevel(logging.DEBUG)\n    formatter = logging.Formatter(\"%(asctime)s %(name)s %(levelname)s: %(message)s\")\n    ch.setFormatter(formatter)\n    logger.addHandler(ch)\n\n    if save_dir:\n        fh = logging.FileHandler(os.path.join(save_dir, filename))\n        fh.setLevel(logging.DEBUG)\n        fh.setFormatter(formatter)\n        logger.addHandler(fh)\n\n    return logger\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/metric_logger.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom collections import defaultdict\nfrom collections import deque\n\nimport torch\n\n\nclass SmoothedValue(object):\n    \"\"\"Track a series of values and provide access to smoothed values over a\n    window or the global series average.\n    \"\"\"\n\n    def __init__(self, window_size=20):\n        self.deque = deque(maxlen=window_size)\n        self.series = []\n        self.total = 0.0\n        self.count = 0\n\n    def update(self, value):\n        self.deque.append(value)\n        self.series.append(value)\n        self.count += 1\n        self.total += value\n\n    @property\n    def median(self):\n        d = torch.tensor(list(self.deque))\n        return d.median().item()\n\n    @property\n    def avg(self):\n        d = torch.tensor(list(self.deque))\n        return d.mean().item()\n\n    @property\n    def global_avg(self):\n        return self.total / self.count\n\n\nclass MetricLogger(object):\n    def __init__(self, delimiter=\"\\t\"):\n        self.meters = defaultdict(SmoothedValue)\n        self.delimiter = delimiter\n\n    def update(self, **kwargs):\n        for k, v in kwargs.items():\n            if isinstance(v, torch.Tensor):\n                v = v.item()\n            assert isinstance(v, (float, int))\n            self.meters[k].update(v)\n\n    def __getattr__(self, attr):\n        if attr in self.meters:\n            return self.meters[attr]\n        if attr in self.__dict__:\n            return self.__dict__[attr]\n        raise AttributeError(\"'{}' object has no attribute '{}'\".format(\n                    type(self).__name__, attr))\n\n    def __str__(self):\n        loss_str = []\n        for name, meter in self.meters.items():\n            loss_str.append(\n                \"{}: {:.4f} ({:.4f})\".format(name, meter.median, meter.global_avg)\n            )\n        return self.delimiter.join(loss_str)\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/miscellaneous.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport errno\nimport os\n\n\ndef mkdir(path):\n    try:\n        os.makedirs(path)\n    except OSError as e:\n        if e.errno != errno.EEXIST:\n            raise\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/model_serialization.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom collections import OrderedDict\nimport logging\n\nimport torch\n\nfrom maskrcnn_benchmark.utils.imports import import_file\n\n\ndef align_and_update_state_dicts(model_state_dict, loaded_state_dict):\n    \"\"\"\n    Strategy: suppose that the models that we will create will have prefixes appended\n    to each of its keys, for example due to an extra level of nesting that the original\n    pre-trained weights from ImageNet won't contain. For example, model.state_dict()\n    might return backbone[0].body.res2.conv1.weight, while the pre-trained model contains\n    res2.conv1.weight. We thus want to match both parameters together.\n    For that, we look for each model weight, look among all loaded keys if there is one\n    that is a suffix of the current weight name, and use it if that's the case.\n    If multiple matches exist, take the one with longest size\n    of the corresponding name. For example, for the same model as before, the pretrained\n    weight file can contain both res2.conv1.weight, as well as conv1.weight. 
In this case,\n    we want to match backbone[0].body.conv1.weight to conv1.weight, and\n    backbone[0].body.res2.conv1.weight to res2.conv1.weight.\n    \"\"\"\n    current_keys = sorted(list(model_state_dict.keys()))\n    loaded_keys = sorted(list(loaded_state_dict.keys()))\n    # get a matrix of string matches, where each (i, j) entry corresponds to the size of the\n    # loaded_key string, if it matches\n    match_matrix = [\n        len(j) if i.endswith(j) else 0 for i in current_keys for j in loaded_keys\n    ]\n    match_matrix = torch.as_tensor(match_matrix).view(\n        len(current_keys), len(loaded_keys)\n    )\n    max_match_size, idxs = match_matrix.max(1)\n    # remove indices that correspond to no-match\n    idxs[max_match_size == 0] = -1\n\n    # used for logging\n    max_size = max([len(key) for key in current_keys]) if current_keys else 1\n    max_size_loaded = max([len(key) for key in loaded_keys]) if loaded_keys else 1\n    log_str_template = \"{: <{}} loaded from {: <{}} of shape {}\"\n    logger = logging.getLogger(__name__)\n    for idx_new, idx_old in enumerate(idxs.tolist()):\n        if idx_old == -1:\n            continue\n        key = current_keys[idx_new]\n        key_old = loaded_keys[idx_old]\n        model_state_dict[key] = loaded_state_dict[key_old]\n        logger.info(\n            log_str_template.format(\n                key,\n                max_size,\n                key_old,\n                max_size_loaded,\n                tuple(loaded_state_dict[key_old].shape),\n            )\n        )\n\n\ndef strip_prefix_if_present(state_dict, prefix):\n    keys = sorted(state_dict.keys())\n    if not all(key.startswith(prefix) for key in keys):\n        return state_dict\n    stripped_state_dict = OrderedDict()\n    for key, value in state_dict.items():\n        stripped_state_dict[key.replace(prefix, \"\")] = value\n    return stripped_state_dict\n\n\ndef load_state_dict(model, loaded_state_dict):\n    model_state_dict = 
model.state_dict()\n    # if the state_dict comes from a model that was wrapped in a\n    # DataParallel or DistributedDataParallel during serialization,\n    # remove the \"module\" prefix before performing the matching\n    loaded_state_dict = strip_prefix_if_present(loaded_state_dict, prefix=\"module.\")\n    align_and_update_state_dicts(model_state_dict, loaded_state_dict)\n\n    # use strict loading\n    model.load_state_dict(model_state_dict)\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/model_zoo.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport os\nimport sys\n\ntry:\n    from torch.utils.model_zoo import _download_url_to_file\n    from torch.utils.model_zoo import urlparse\n    from torch.utils.model_zoo import HASH_REGEX\nexcept:\n    from torch.hub import _download_url_to_file\n    from torch.hub import urlparse\n    from torch.hub import HASH_REGEX\n\nfrom maskrcnn_benchmark.utils.comm import is_main_process\nfrom maskrcnn_benchmark.utils.comm import synchronize\n\n\n# very similar to https://github.com/pytorch/pytorch/blob/master/torch/utils/model_zoo.py\n# but with a few improvements and modifications\ndef cache_url(url, model_dir=None, progress=True):\n    r\"\"\"Loads the Torch serialized object at the given URL.\n    If the object is already present in `model_dir`, it's deserialized and\n    returned. The filename part of the URL should follow the naming convention\n    ``filename-<sha256>.ext`` where ``<sha256>`` is the first eight or more\n    digits of the SHA256 hash of the contents of the file. The hash is used to\n    ensure unique names and to verify the contents of the file.\n    The default value of `model_dir` is ``$TORCH_HOME/models`` where\n    ``$TORCH_HOME`` defaults to ``~/.torch``. 
The default directory can be\n    overridden with the ``$TORCH_MODEL_ZOO`` environment variable.\n    Args:\n        url (string): URL of the object to download\n        model_dir (string, optional): directory in which to save the object\n        progress (bool, optional): whether or not to display a progress bar to stderr\n    Example:\n        >>> cached_file = maskrcnn_benchmark.utils.model_zoo.cache_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')\n    \"\"\"\n    if model_dir is None:\n        torch_home = os.path.expanduser(os.getenv('TORCH_HOME', '~/.torch'))\n        model_dir = os.getenv('TORCH_MODEL_ZOO', os.path.join(torch_home, 'models'))\n    if not os.path.exists(model_dir):\n        os.makedirs(model_dir)\n    parts = urlparse(url)\n    if parts.fragment != \"\":\n        filename = parts.fragment\n    else:\n        filename = os.path.basename(parts.path)\n    if filename == \"model_final.pkl\":\n        # workaround as pre-trained Caffe2 models from Detectron have all the same filename\n        # so make the full path the filename by replacing / with _\n        filename = parts.path.replace(\"/\", \"_\")\n    cached_file = os.path.join(model_dir, filename)\n    if not os.path.exists(cached_file) and is_main_process():\n        sys.stderr.write('Downloading: \"{}\" to {}\\n'.format(url, cached_file))\n        hash_prefix = HASH_REGEX.search(filename)\n        if hash_prefix is not None:\n            hash_prefix = hash_prefix.group(1)\n            # workaround: Caffe2 models don't have a hash, but follow the R-50 convention,\n            # which matches the hash PyTorch uses. So we skip the hash matching\n            # if the hash_prefix is less than 6 characters\n            if len(hash_prefix) < 6:\n                hash_prefix = None\n        _download_url_to_file(url, cached_file, hash_prefix, progress=progress)\n    synchronize()\n    return cached_file\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/registry.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\n\ndef _register_generic(module_dict, module_name, module):\n    assert module_name not in module_dict\n    module_dict[module_name] = module\n\n\nclass Registry(dict):\n    '''\n    A helper class for managing registering modules, it extends a dictionary\n    and provides a register functions.\n\n    Eg. creeting a registry:\n        some_registry = Registry({\"default\": default_module})\n\n    There're two ways of registering new modules:\n    1): normal way is just calling register function:\n        def foo():\n            ...\n        some_registry.register(\"foo_module\", foo)\n    2): used as decorator when declaring the module:\n        @some_registry.register(\"foo_module\")\n        @some_registry.register(\"foo_modeul_nickname\")\n        def foo():\n            ...\n\n    Access of module is just like using a dictionary, eg:\n        f = some_registry[\"foo_modeul\"]\n    '''\n    def __init__(self, *args, **kwargs):\n        super(Registry, self).__init__(*args, **kwargs)\n\n    def register(self, module_name, module=None):\n        # used as function call\n        if module is not None:\n            _register_generic(self, module_name, module)\n            return\n\n        # used as decorator\n        def register_fn(fn):\n            _register_generic(self, module_name, fn)\n            return fn\n\n        return register_fn\n"
  },
  {
    "path": "maskrcnn_benchmark/utils/timer.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\n\nimport time\nimport datetime\n\n\nclass Timer(object):\n    def __init__(self):\n        self.reset()\n\n    @property\n    def average_time(self):\n        return self.total_time / self.calls if self.calls > 0 else 0.0\n\n    def tic(self):\n        # using time.time instead of time.clock because time time.clock\n        # does not normalize for multithreading\n        self.start_time = time.time()\n\n    def toc(self, average=True):\n        self.add(time.time() - self.start_time)\n        if average:\n            return self.average_time\n        else:\n            return self.diff\n\n    def add(self, time_diff):\n        self.diff = time_diff\n        self.total_time += self.diff\n        self.calls += 1\n\n    def reset(self):\n        self.total_time = 0.0\n        self.calls = 0\n        self.start_time = 0.0\n        self.diff = 0.0\n\n    def avg_time_str(self):\n        time_str = str(datetime.timedelta(seconds=self.average_time))\n        return time_str\n\n\ndef get_time_str(time_diff):\n    time_str = str(datetime.timedelta(seconds=time_diff))\n    return time_str\n"
  },
  {
    "path": "requirements.txt",
    "content": "ninja\nyacs\ncython\nmatplotlib\ntqdm\n"
  },
  {
    "path": "setup.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n#!/usr/bin/env python\n\nimport glob\nimport os\n\nimport torch\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom torch.utils.cpp_extension import CUDA_HOME\nfrom torch.utils.cpp_extension import CppExtension\nfrom torch.utils.cpp_extension import CUDAExtension\n\nrequirements = [\"torch\", \"torchvision\"]\n\n\ndef get_extensions():\n    this_dir = os.path.dirname(os.path.abspath(__file__))\n    extensions_dir = os.path.join(this_dir, \"maskrcnn_benchmark\", \"csrc\")\n\n    main_file = glob.glob(os.path.join(extensions_dir, \"*.cpp\"))\n    source_cpu = glob.glob(os.path.join(extensions_dir, \"cpu\", \"*.cpp\"))\n    source_cuda = glob.glob(os.path.join(extensions_dir, \"cuda\", \"*.cu\"))\n\n    sources = main_file + source_cpu\n    extension = CppExtension\n\n    extra_compile_args = {\"cxx\": []}\n    define_macros = []\n\n    if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv(\"FORCE_CUDA\", \"0\") == \"1\":\n        extension = CUDAExtension\n        sources += source_cuda\n        define_macros += [(\"WITH_CUDA\", None)]\n        extra_compile_args[\"nvcc\"] = [\n            \"-DCUDA_HAS_FP16=1\",\n            \"-D__CUDA_NO_HALF_OPERATORS__\",\n            \"-D__CUDA_NO_HALF_CONVERSIONS__\",\n            \"-D__CUDA_NO_HALF2_OPERATORS__\",\n        ]\n\n    sources = [os.path.join(extensions_dir, s) for s in sources]\n\n    include_dirs = [extensions_dir]\n\n    ext_modules = [\n        extension(\n            \"maskrcnn_benchmark._C\",\n            sources,\n            include_dirs=include_dirs,\n            define_macros=define_macros,\n            extra_compile_args=extra_compile_args,\n        )\n    ]\n\n    return ext_modules\n\n\nsetup(\n    name=\"maskrcnn_benchmark\",\n    version=\"0.1\",\n    author=\"fmassa\",\n    url=\"https://github.com/facebookresearch/maskrcnn-benchmark\",\n    description=\"object detection in 
pytorch\",\n    packages=find_packages(exclude=(\"configs\", \"tests\",)),\n    # install_requires=requirements,\n    ext_modules=get_extensions(),\n    cmdclass={\"build_ext\": torch.utils.cpp_extension.BuildExtension},\n)\n"
  },
  {
    "path": "tests/checkpoint.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nfrom collections import OrderedDict\nimport os\nfrom tempfile import TemporaryDirectory\nimport unittest\n\nimport torch\nfrom torch import nn\n\nfrom maskrcnn_benchmark.utils.model_serialization import load_state_dict\nfrom maskrcnn_benchmark.utils.checkpoint import Checkpointer\n\n\nclass TestCheckpointer(unittest.TestCase):\n    def create_model(self):\n        return nn.Sequential(nn.Linear(2, 3), nn.Linear(3, 1))\n\n    def create_complex_model(self):\n        m = nn.Module()\n        m.block1 = nn.Module()\n        m.block1.layer1 = nn.Linear(2, 3)\n        m.layer2 = nn.Linear(3, 2)\n        m.res = nn.Module()\n        m.res.layer2 = nn.Linear(3, 2)\n\n        state_dict = OrderedDict()\n        state_dict[\"layer1.weight\"] = torch.rand(3, 2)\n        state_dict[\"layer1.bias\"] = torch.rand(3)\n        state_dict[\"layer2.weight\"] = torch.rand(2, 3)\n        state_dict[\"layer2.bias\"] = torch.rand(2)\n        state_dict[\"res.layer2.weight\"] = torch.rand(2, 3)\n        state_dict[\"res.layer2.bias\"] = torch.rand(2)\n\n        return m, state_dict\n\n    def test_from_last_checkpoint_model(self):\n        # test that loading works even if they differ by a prefix\n        for trained_model, fresh_model in [\n            (self.create_model(), self.create_model()),\n            (nn.DataParallel(self.create_model()), self.create_model()),\n            (self.create_model(), nn.DataParallel(self.create_model())),\n            (\n                nn.DataParallel(self.create_model()),\n                nn.DataParallel(self.create_model()),\n            ),\n        ]:\n\n            with TemporaryDirectory() as f:\n                checkpointer = Checkpointer(\n                    trained_model, save_dir=f, save_to_disk=True\n                )\n                checkpointer.save(\"checkpoint_file\")\n\n                # in the same folder\n                fresh_checkpointer = 
Checkpointer(fresh_model, save_dir=f)\n                self.assertTrue(fresh_checkpointer.has_checkpoint())\n                self.assertEqual(\n                    fresh_checkpointer.get_checkpoint_file(),\n                    os.path.join(f, \"checkpoint_file.pth\"),\n                )\n                _ = fresh_checkpointer.load()\n\n            for trained_p, loaded_p in zip(\n                trained_model.parameters(), fresh_model.parameters()\n            ):\n                # different tensor references\n                self.assertFalse(id(trained_p) == id(loaded_p))\n                # same content\n                self.assertTrue(trained_p.equal(loaded_p))\n\n    def test_from_name_file_model(self):\n        # test that loading works even if they differ by a prefix\n        for trained_model, fresh_model in [\n            (self.create_model(), self.create_model()),\n            (nn.DataParallel(self.create_model()), self.create_model()),\n            (self.create_model(), nn.DataParallel(self.create_model())),\n            (\n                nn.DataParallel(self.create_model()),\n                nn.DataParallel(self.create_model()),\n            ),\n        ]:\n            with TemporaryDirectory() as f:\n                checkpointer = Checkpointer(\n                    trained_model, save_dir=f, save_to_disk=True\n                )\n                checkpointer.save(\"checkpoint_file\")\n\n                # on different folders\n                with TemporaryDirectory() as g:\n                    fresh_checkpointer = Checkpointer(fresh_model, save_dir=g)\n                    self.assertFalse(fresh_checkpointer.has_checkpoint())\n                    self.assertEqual(fresh_checkpointer.get_checkpoint_file(), \"\")\n                    _ = fresh_checkpointer.load(os.path.join(f, \"checkpoint_file.pth\"))\n\n            for trained_p, loaded_p in zip(\n                trained_model.parameters(), fresh_model.parameters()\n            ):\n                # 
different tensor references\n                self.assertFalse(id(trained_p) == id(loaded_p))\n                # same content\n                self.assertTrue(trained_p.equal(loaded_p))\n\n    def test_complex_model_loaded(self):\n        for add_data_parallel in [False, True]:\n            model, state_dict = self.create_complex_model()\n            if add_data_parallel:\n                model = nn.DataParallel(model)\n\n            load_state_dict(model, state_dict)\n            for loaded, stored in zip(model.state_dict().values(), state_dict.values()):\n                # different tensor references\n                self.assertFalse(id(loaded) == id(stored))\n                # same content\n                self.assertTrue(loaded.equal(stored))\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/env_tests/env.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport os\n\n\ndef get_config_root_path():\n    ''' Path to configs for unit tests '''\n    # cur_file_dir is root/tests/env_tests\n    cur_file_dir = os.path.dirname(os.path.abspath(os.path.realpath(__file__)))\n    ret = os.path.dirname(os.path.dirname(cur_file_dir))\n    ret = os.path.join(ret, \"configs\")\n    return ret\n"
  },
  {
    "path": "tests/test_backbones.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport copy\nimport torch\n# import modules to to register backbones\nfrom maskrcnn_benchmark.modeling.backbone import build_backbone # NoQA\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.config import cfg as g_cfg\nfrom utils import load_config\n\n\n# overwrite configs if specified, otherwise default config is used\nBACKBONE_CFGS = {\n    \"R-50-FPN\": \"e2e_faster_rcnn_R_50_FPN_1x.yaml\",\n    \"R-101-FPN\": \"e2e_faster_rcnn_R_101_FPN_1x.yaml\",\n    \"R-152-FPN\": \"e2e_faster_rcnn_R_101_FPN_1x.yaml\",\n    \"R-50-FPN-RETINANET\": \"retinanet/retinanet_R-50-FPN_1x.yaml\",\n    \"R-101-FPN-RETINANET\": \"retinanet/retinanet_R-101-FPN_1x.yaml\",\n}\n\n\nclass TestBackbones(unittest.TestCase):\n    def test_build_backbones(self):\n        ''' Make sure backbones run '''\n\n        self.assertGreater(len(registry.BACKBONES), 0)\n\n        for name, backbone_builder in registry.BACKBONES.items():\n            print('Testing {}...'.format(name))\n            if name in BACKBONE_CFGS:\n                cfg = load_config(BACKBONE_CFGS[name])\n            else:\n                # Use default config if config file is not specified\n                cfg = copy.deepcopy(g_cfg)\n            backbone = backbone_builder(cfg)\n\n            # make sures the backbone has `out_channels`\n            self.assertIsNotNone(\n                getattr(backbone, 'out_channels', None),\n                'Need to provide out_channels for backbone {}'.format(name)\n            )\n\n            N, C_in, H, W = 2, 3, 224, 256\n            input = torch.rand([N, C_in, H, W], dtype=torch.float32)\n            out = backbone(input)\n            for cur_out in out:\n                self.assertEqual(\n                    cur_out.shape[:2],\n                    torch.Size([N, backbone.out_channels])\n                )\n\n\nif __name__ == \"__main__\":\n    
unittest.main()\n"
  },
  {
    "path": "tests/test_box_coder.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\n\nimport numpy as np\nimport torch\nfrom maskrcnn_benchmark.modeling.box_coder import BoxCoder\n\n\nclass TestBoxCoder(unittest.TestCase):\n    def test_box_decoder(self):\n        \"\"\" Match unit test UtilsBoxesTest.TestBboxTransformRandom in\n            caffe2/operators/generate_proposals_op_util_boxes_test.cc\n        \"\"\"\n        box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))\n        bbox = torch.from_numpy(\n            np.array(\n                [\n                    175.62031555,\n                    20.91103172,\n                    253.352005,\n                    155.0145874,\n                    169.24636841,\n                    4.85241556,\n                    228.8605957,\n                    105.02092743,\n                    181.77426147,\n                    199.82876587,\n                    192.88427734,\n                    214.0255127,\n                    174.36262512,\n                    186.75761414,\n                    296.19091797,\n                    231.27906799,\n                    22.73153877,\n                    92.02596283,\n                    135.5695343,\n                    208.80291748,\n                ]\n            )\n            .astype(np.float32)\n            .reshape(-1, 4)\n        )\n\n        deltas = torch.from_numpy(\n            np.array(\n                [\n                    0.47861834,\n                    0.13992102,\n                    0.14961673,\n                    0.71495209,\n                    0.29915856,\n                    -0.35664671,\n                    0.89018666,\n                    0.70815367,\n                    -0.03852064,\n                    0.44466892,\n                    0.49492538,\n                    0.71409376,\n                    0.28052918,\n                    0.02184832,\n                    0.65289006,\n                    1.05060139,\n                
    -0.38172557,\n                    -0.08533806,\n                    -0.60335309,\n                    0.79052375,\n                ]\n            )\n            .astype(np.float32)\n            .reshape(-1, 4)\n        )\n\n        gt_bbox = (\n            np.array(\n                [\n                    206.949539,\n                    -30.715202,\n                    297.387665,\n                    244.448486,\n                    143.871216,\n                    -83.342888,\n                    290.502289,\n                    121.053398,\n                    177.430283,\n                    198.666245,\n                    196.295273,\n                    228.703079,\n                    152.251892,\n                    145.431564,\n                    387.215454,\n                    274.594238,\n                    5.062420,\n                    11.040955,\n                    66.328903,\n                    269.686218,\n                ]\n            )\n            .astype(np.float32)\n            .reshape(-1, 4)\n        )\n\n        results = box_coder.decode(deltas, bbox)\n\n        np.testing.assert_allclose(results.detach().numpy(), gt_bbox, atol=1e-4)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_configs.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport glob\nimport os\nimport utils\n\n\nclass TestConfigs(unittest.TestCase):\n    def test_configs_load(self):\n        ''' Make sure configs are loadable '''\n\n        cfg_root_path = utils.get_config_root_path()\n        files = glob.glob(\n            os.path.join(cfg_root_path, \"./**/*.yaml\"), recursive=True)\n        self.assertGreater(len(files), 0)\n\n        for fn in files:\n            print('Loading {}...'.format(fn))\n            utils.load_config_from_file(fn)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_data_samplers.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport itertools\nimport random\nimport unittest\n\nfrom torch.utils.data.sampler import BatchSampler\nfrom torch.utils.data.sampler import Sampler\nfrom torch.utils.data.sampler import SequentialSampler\nfrom torch.utils.data.sampler import RandomSampler\n\nfrom maskrcnn_benchmark.data.samplers import GroupedBatchSampler\nfrom maskrcnn_benchmark.data.samplers import IterationBasedBatchSampler\n\n\nclass SubsetSampler(Sampler):\n    def __init__(self, indices):\n        self.indices = indices\n\n    def __iter__(self):\n        return iter(self.indices)\n\n    def __len__(self):\n        return len(self.indices)\n\n\nclass TestGroupedBatchSampler(unittest.TestCase):\n    def test_respect_order_simple(self):\n        drop_uneven = False\n        dataset = [i for i in range(40)]\n        group_ids = [i // 10 for i in dataset]\n        sampler = SequentialSampler(dataset)\n        for batch_size in [1, 3, 5, 6]:\n            batch_sampler = GroupedBatchSampler(\n                sampler, group_ids, batch_size, drop_uneven\n            )\n            result = list(batch_sampler)\n            merged_result = list(itertools.chain.from_iterable(result))\n            self.assertEqual(merged_result, dataset)\n\n    def test_respect_order(self):\n        drop_uneven = False\n        dataset = [i for i in range(10)]\n        group_ids = [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]\n        sampler = SequentialSampler(dataset)\n\n        expected = [\n            [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]],\n            [[0, 1, 3], [2, 4, 5], [6, 9], [7, 8]],\n            [[0, 1, 3, 6], [2, 4, 5, 7], [8], [9]],\n        ]\n\n        for idx, batch_size in enumerate([1, 3, 4]):\n            batch_sampler = GroupedBatchSampler(\n                sampler, group_ids, batch_size, drop_uneven\n            )\n            result = list(batch_sampler)\n            self.assertEqual(result, expected[idx])\n\n 
   def test_respect_order_drop_uneven(self):\n        batch_size = 3\n        drop_uneven = True\n        dataset = [i for i in range(10)]\n        group_ids = [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]\n        sampler = SequentialSampler(dataset)\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n\n        result = list(batch_sampler)\n\n        expected = [[0, 1, 3], [2, 4, 5]]\n        self.assertEqual(result, expected)\n\n    def test_subset_sampler(self):\n        batch_size = 3\n        drop_uneven = False\n        dataset = [i for i in range(10)]\n        group_ids = [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]\n        sampler = SubsetSampler([0, 3, 5, 6, 7, 8])\n\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n        result = list(batch_sampler)\n\n        expected = [[0, 3, 6], [5, 7, 8]]\n        self.assertEqual(result, expected)\n\n    def test_permute_subset_sampler(self):\n        batch_size = 3\n        drop_uneven = False\n        dataset = [i for i in range(10)]\n        group_ids = [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]\n        sampler = SubsetSampler([5, 0, 6, 1, 3, 8])\n\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n        result = list(batch_sampler)\n\n        expected = [[5, 8], [0, 6, 1], [3]]\n        self.assertEqual(result, expected)\n\n    def test_permute_subset_sampler_drop_uneven(self):\n        batch_size = 3\n        drop_uneven = True\n        dataset = [i for i in range(10)]\n        group_ids = [0, 0, 1, 0, 1, 1, 0, 1, 1, 0]\n        sampler = SubsetSampler([5, 0, 6, 1, 3, 8])\n\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n        result = list(batch_sampler)\n\n        expected = [[0, 6, 1]]\n        self.assertEqual(result, expected)\n\n    def test_len(self):\n        batch_size = 3\n        drop_uneven = True\n        dataset = [i for i in range(10)]\n        group_ids = 
[random.randint(0, 1) for _ in dataset]\n        sampler = RandomSampler(dataset)\n\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n        result = list(batch_sampler)\n        self.assertEqual(len(result), len(batch_sampler))\n        self.assertEqual(len(result), len(batch_sampler))\n\n        batch_sampler = GroupedBatchSampler(sampler, group_ids, batch_size, drop_uneven)\n        batch_sampler_len = len(batch_sampler)\n        result = list(batch_sampler)\n        self.assertEqual(len(result), batch_sampler_len)\n        self.assertEqual(len(result), len(batch_sampler))\n\n\nclass TestIterationBasedBatchSampler(unittest.TestCase):\n    def test_number_of_iters_and_elements(self):\n        for batch_size in [2, 3, 4]:\n            for num_iterations in [4, 10, 20]:\n                for drop_last in [False, True]:\n                    dataset = [i for i in range(10)]\n                    sampler = SequentialSampler(dataset)\n                    batch_sampler = BatchSampler(\n                        sampler, batch_size, drop_last=drop_last\n                    )\n\n                    iter_sampler = IterationBasedBatchSampler(\n                        batch_sampler, num_iterations\n                    )\n                    assert len(iter_sampler) == num_iterations\n                    for i, batch in enumerate(iter_sampler):\n                        start = (i % len(batch_sampler)) * batch_size\n                        end = min(start + batch_size, len(dataset))\n                        expected = [x for x in range(start, end)]\n                        self.assertEqual(batch, expected)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_detectors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport glob\nimport os\nimport copy\nimport torch\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.structures.image_list import to_image_list\nimport utils\n\n\nCONFIG_FILES = [\n    # bbox\n    \"e2e_faster_rcnn_R_50_C4_1x.yaml\",\n    \"e2e_faster_rcnn_R_50_FPN_1x.yaml\",\n    \"e2e_faster_rcnn_fbnet.yaml\",\n\n    # mask\n    \"e2e_mask_rcnn_R_50_C4_1x.yaml\",\n    \"e2e_mask_rcnn_R_50_FPN_1x.yaml\",\n    \"e2e_mask_rcnn_fbnet.yaml\",\n\n    # keypoints\n    # TODO: fail to run for random model due to empty head input\n    # \"e2e_keypoint_rcnn_R_50_FPN_1x.yaml\",\n\n    # gn\n    \"gn_baselines/e2e_faster_rcnn_R_50_FPN_1x_gn.yaml\",\n    # TODO: fail to run for random model due to empty head input\n    # \"gn_baselines/e2e_mask_rcnn_R_50_FPN_Xconv1fc_1x_gn.yaml\",\n\t\n    # retinanet\n    \"retinanet/retinanet_R-50-FPN_1x.yaml\",\n\n    # rpn only\n    \"rpn_R_50_C4_1x.yaml\",\n    \"rpn_R_50_FPN_1x.yaml\",\n]\n\nEXCLUDED_FOLDERS = [\n    \"caffe2\",\n    \"quick_schedules\",\n    \"pascal_voc\",\n    \"cityscapes\",\n]\n\n\nTEST_CUDA = torch.cuda.is_available()\n\n\ndef get_config_files(file_list, exclude_folders):\n    cfg_root_path = utils.get_config_root_path()\n    if file_list is not None:\n        files = [os.path.join(cfg_root_path, x) for x in file_list]\n    else:\n        files = glob.glob(\n            os.path.join(cfg_root_path, \"./**/*.yaml\"), recursive=True)\n\n    def _contains(path, exclude_dirs):\n        return any(x in path for x in exclude_dirs)\n\n    if exclude_folders is not None:\n        files = [x for x in files if not _contains(x, exclude_folders)]\n\n    return files\n\n\ndef create_model(cfg, device):\n    cfg = copy.deepcopy(cfg)\n    cfg.freeze()\n    model = build_detection_model(cfg)\n    model = model.to(device)\n    return model\n\n\ndef create_random_input(cfg, 
device):\n    ret = []\n    for x in cfg.INPUT.MIN_SIZE_TRAIN:\n        ret.append(torch.rand(3, x, int(x * 1.2)))\n    ret = to_image_list(ret, cfg.DATALOADER.SIZE_DIVISIBILITY)\n    ret = ret.to(device)\n    return ret\n\n\ndef _test_build_detectors(self, device):\n    ''' Make sure models build '''\n\n    cfg_files = get_config_files(None, EXCLUDED_FOLDERS)\n    self.assertGreater(len(cfg_files), 0)\n\n    for cfg_file in cfg_files:\n        with self.subTest(cfg_file=cfg_file):\n            print('Testing {}...'.format(cfg_file))\n            cfg = utils.load_config_from_file(cfg_file)\n            create_model(cfg, device)\n\n\ndef _test_run_selected_detectors(self, cfg_files, device):\n    ''' Make sure models build and run '''\n    self.assertGreater(len(cfg_files), 0)\n\n    for cfg_file in cfg_files:\n        with self.subTest(cfg_file=cfg_file):\n            print('Testing {}...'.format(cfg_file))\n            cfg = utils.load_config_from_file(cfg_file)\n            cfg.MODEL.RPN.POST_NMS_TOP_N_TEST = 10\n            cfg.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST = 10\n            model = create_model(cfg, device)\n            inputs = create_random_input(cfg, device)\n            model.eval()\n            output = model(inputs)\n            self.assertEqual(len(output), len(inputs.image_sizes))\n\n\nclass TestDetectors(unittest.TestCase):\n    def test_build_detectors(self):\n        ''' Make sure models build '''\n        _test_build_detectors(self, \"cpu\")\n\n    @unittest.skipIf(not TEST_CUDA, \"no CUDA detected\")\n    def test_build_detectors_cuda(self):\n        ''' Make sure models build on gpu'''\n        _test_build_detectors(self, \"cuda\")\n\n    def test_run_selected_detectors(self):\n        ''' Make sure models build and run '''\n        # run on selected models\n        cfg_files = get_config_files(CONFIG_FILES, None)\n        # cfg_files = get_config_files(None, EXCLUDED_FOLDERS)\n        _test_run_selected_detectors(self, cfg_files, \"cpu\")\n\n 
   @unittest.skipIf(not TEST_CUDA, \"no CUDA detected\")\n    def test_run_selected_detectors_cuda(self):\n        ''' Make sure models build and run on cuda '''\n        # run on selected models\n        cfg_files = get_config_files(CONFIG_FILES, None)\n        # cfg_files = get_config_files(None, EXCLUDED_FOLDERS)\n        _test_run_selected_detectors(self, cfg_files, \"cuda\")\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_fbnet.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\n\nimport numpy as np\nimport torch\nimport maskrcnn_benchmark.modeling.backbone.fbnet_builder as fbnet_builder\n\n\nTEST_CUDA = torch.cuda.is_available()\n\n\ndef _test_primitive(self, device, op_name, op_func, N, C_in, C_out, expand, stride):\n    op = op_func(C_in, C_out, expand, stride).to(device)\n    input = torch.rand([N, C_in, 7, 7], dtype=torch.float32).to(device)\n    output = op(input)\n    self.assertEqual(\n        output.shape[:2], torch.Size([N, C_out]),\n        'Primitive {} failed for shape {}.'.format(op_name, input.shape)\n    )\n\n\nclass TestFBNetBuilder(unittest.TestCase):\n    def test_identity(self):\n        id_op = fbnet_builder.Identity(20, 20, 1)\n        input = torch.rand([10, 20, 7, 7], dtype=torch.float32)\n        output = id_op(input)\n        np.testing.assert_array_equal(np.array(input), np.array(output))\n\n        id_op = fbnet_builder.Identity(20, 40, 2)\n        input = torch.rand([10, 20, 7, 7], dtype=torch.float32)\n        output = id_op(input)\n        np.testing.assert_array_equal(output.shape, [10, 40, 4, 4])\n\n    def test_primitives(self):\n        ''' Make sures the primitives runs '''\n        for op_name, op_func in fbnet_builder.PRIMITIVES.items():\n            print('Testing {}'.format(op_name))\n\n            _test_primitive(\n                self, \"cpu\",\n                op_name, op_func,\n                N=20, C_in=16, C_out=32, expand=4, stride=1\n            )\n\n    @unittest.skipIf(not TEST_CUDA, \"no CUDA detected\")\n    def test_primitives_cuda(self):\n        ''' Make sures the primitives runs on cuda '''\n        for op_name, op_func in fbnet_builder.PRIMITIVES.items():\n            print('Testing {}'.format(op_name))\n\n            _test_primitive(\n                self, \"cuda\",\n                op_name, op_func,\n                N=20, C_in=16, C_out=32, expand=4, stride=1\n            
)\n\n    def test_primitives_empty_batch(self):\n        ''' Make sures the primitives runs '''\n        for op_name, op_func in fbnet_builder.PRIMITIVES.items():\n            print('Testing {}'.format(op_name))\n\n            # test empty batch size\n            _test_primitive(\n                self, \"cpu\",\n                op_name, op_func,\n                N=0, C_in=16, C_out=32, expand=4, stride=1\n            )\n\n    @unittest.skipIf(not TEST_CUDA, \"no CUDA detected\")\n    def test_primitives_cuda_empty_batch(self):\n        ''' Make sures the primitives runs '''\n        for op_name, op_func in fbnet_builder.PRIMITIVES.items():\n            print('Testing {}'.format(op_name))\n\n            # test empty batch size\n            _test_primitive(\n                self, \"cuda\",\n                op_name, op_func,\n                N=0, C_in=16, C_out=32, expand=4, stride=1\n            )\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_feature_extractors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport copy\nimport torch\n# import modules to to register feature extractors\nfrom maskrcnn_benchmark.modeling.backbone import build_backbone # NoQA\nfrom maskrcnn_benchmark.modeling.roi_heads.roi_heads import build_roi_heads # NoQA\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.structures.bounding_box import BoxList\nfrom maskrcnn_benchmark.config import cfg as g_cfg\nfrom utils import load_config\n\n# overwrite configs if specified, otherwise default config is used\nFEATURE_EXTRACTORS_CFGS = {\n}\n\n# overwrite configs if specified, otherwise default config is used\nFEATURE_EXTRACTORS_INPUT_CHANNELS = {\n    # in_channels was not used, load through config\n    \"ResNet50Conv5ROIFeatureExtractor\": 1024,\n}\n\n\ndef _test_feature_extractors(\n    self, extractors, overwrite_cfgs, overwrite_in_channels\n):\n    ''' Make sure roi box feature extractors run '''\n\n    self.assertGreater(len(extractors), 0)\n\n    in_channels_default = 64\n\n    for name, builder in extractors.items():\n        print('Testing {}...'.format(name))\n        if name in overwrite_cfgs:\n            cfg = load_config(overwrite_cfgs[name])\n        else:\n            # Use default config if config file is not specified\n            cfg = copy.deepcopy(g_cfg)\n\n        in_channels = overwrite_in_channels.get(\n            name, in_channels_default)\n\n        fe = builder(cfg, in_channels)\n        self.assertIsNotNone(\n            getattr(fe, 'out_channels', None),\n            'Need to provide out_channels for feature extractor {}'.format(name)\n        )\n\n        N, C_in, H, W = 2, in_channels, 24, 32\n        input = torch.rand([N, C_in, H, W], dtype=torch.float32)\n        bboxes = [[1, 1, 10, 10], [5, 5, 8, 8], [2, 2, 3, 4]]\n        img_size = [384, 512]\n        box_list = BoxList(bboxes, img_size, \"xyxy\")\n        out = fe([input], [box_list] 
* N)\n        self.assertEqual(\n            out.shape[:2],\n            torch.Size([N * len(bboxes), fe.out_channels])\n        )\n\n\nclass TestFeatureExtractors(unittest.TestCase):\n    def test_roi_box_feature_extractors(self):\n        ''' Make sure roi box feature extractors run '''\n        _test_feature_extractors(\n            self,\n            registry.ROI_BOX_FEATURE_EXTRACTORS,\n            FEATURE_EXTRACTORS_CFGS,\n            FEATURE_EXTRACTORS_INPUT_CHANNELS,\n        )\n\n    def test_roi_keypoints_feature_extractors(self):\n        ''' Make sure roi keypoints feature extractors run '''\n        _test_feature_extractors(\n            self,\n            registry.ROI_KEYPOINT_FEATURE_EXTRACTORS,\n            FEATURE_EXTRACTORS_CFGS,\n            FEATURE_EXTRACTORS_INPUT_CHANNELS,\n        )\n\n    def test_roi_mask_feature_extractors(self):\n        ''' Make sure roi mask feature extractors run '''\n        _test_feature_extractors(\n            self,\n            registry.ROI_MASK_FEATURE_EXTRACTORS,\n            FEATURE_EXTRACTORS_CFGS,\n            FEATURE_EXTRACTORS_INPUT_CHANNELS,\n        )\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_metric_logger.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport unittest\n\nfrom maskrcnn_benchmark.utils.metric_logger import MetricLogger\n\n\nclass TestMetricLogger(unittest.TestCase):\n    def test_update(self):\n        meter = MetricLogger()\n        for i in range(10):\n            meter.update(metric=float(i))\n        \n        m = meter.meters[\"metric\"]\n        self.assertEqual(m.count, 10)\n        self.assertEqual(m.total, 45)\n        self.assertEqual(m.median, 4)\n        self.assertEqual(m.avg, 4.5)\n\n    def test_no_attr(self):\n        meter = MetricLogger()\n        _ = meter.meters\n        _ = meter.delimiter\n        def broken():\n            _ = meter.not_existent\n        self.assertRaises(AttributeError, broken)\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_nms.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\n\nimport numpy as np\nimport torch\nfrom maskrcnn_benchmark.layers import nms as box_nms\n\n\nclass TestNMS(unittest.TestCase):\n    def test_nms_cpu(self):\n        \"\"\" Match unit test UtilsNMSTest.TestNMS in\n            caffe2/operators/generate_proposals_op_util_nms_test.cc\n        \"\"\"\n\n        inputs = (\n            np.array(\n                [\n                    10,\n                    10,\n                    50,\n                    60,\n                    0.5,\n                    11,\n                    12,\n                    48,\n                    60,\n                    0.7,\n                    8,\n                    9,\n                    40,\n                    50,\n                    0.6,\n                    100,\n                    100,\n                    150,\n                    140,\n                    0.9,\n                    99,\n                    110,\n                    155,\n                    139,\n                    0.8,\n                ]\n            )\n            .astype(np.float32)\n            .reshape(-1, 5)\n        )\n\n        boxes = torch.from_numpy(inputs[:, :4])\n        scores = torch.from_numpy(inputs[:, 4])\n        test_thresh = [0.1, 0.3, 0.5, 0.8, 0.9]\n        gt_indices = [[1, 3], [1, 3], [1, 3], [1, 2, 3, 4], [0, 1, 2, 3, 4]]\n\n        for thresh, gt_index in zip(test_thresh, gt_indices):\n            keep_indices = box_nms(boxes, scores, thresh)\n            keep_indices = np.sort(keep_indices)\n            np.testing.assert_array_equal(keep_indices, np.array(gt_index))\n\n    def test_nms1_cpu(self):\n        \"\"\" Match unit test UtilsNMSTest.TestNMS1 in\n            caffe2/operators/generate_proposals_op_util_nms_test.cc\n        \"\"\"\n\n        boxes = torch.from_numpy(\n            np.array(\n                [\n                    [350.9821, 161.8200, 369.9685, 
205.2372],\n                    [250.5236, 154.2844, 274.1773, 204.9810],\n                    [471.4920, 160.4118, 496.0094, 213.4244],\n                    [352.0421, 164.5933, 366.4458, 205.9624],\n                    [166.0765, 169.7707, 183.0102, 232.6606],\n                    [252.3000, 183.1449, 269.6541, 210.6747],\n                    [469.7862, 162.0192, 482.1673, 187.0053],\n                    [168.4862, 174.2567, 181.7437, 232.9379],\n                    [470.3290, 162.3442, 496.4272, 214.6296],\n                    [251.0450, 155.5911, 272.2693, 203.3675],\n                    [252.0326, 154.7950, 273.7404, 195.3671],\n                    [351.7479, 161.9567, 370.6432, 204.3047],\n                    [496.3306, 161.7157, 515.0573, 210.7200],\n                    [471.0749, 162.6143, 485.3374, 207.3448],\n                    [250.9745, 160.7633, 264.1924, 206.8350],\n                    [470.4792, 169.0351, 487.1934, 220.2984],\n                    [474.4227, 161.9546, 513.1018, 215.5193],\n                    [251.9428, 184.1950, 262.6937, 207.6416],\n                    [252.6623, 175.0252, 269.8806, 213.7584],\n                    [260.9884, 157.0351, 288.3554, 206.6027],\n                    [251.3629, 164.5101, 263.2179, 202.4203],\n                    [471.8361, 190.8142, 485.6812, 220.8586],\n                    [248.6243, 156.9628, 264.3355, 199.2767],\n                    [495.1643, 158.0483, 512.6261, 184.4192],\n                    [376.8718, 168.0144, 387.3584, 201.3210],\n                    [122.9191, 160.7433, 172.5612, 231.3837],\n                    [350.3857, 175.8806, 366.2500, 205.4329],\n                    [115.2958, 162.7822, 161.9776, 229.6147],\n                    [168.4375, 177.4041, 180.8028, 232.4551],\n                    [169.7939, 184.4330, 181.4767, 232.1220],\n                    [347.7536, 175.9356, 355.8637, 197.5586],\n                    [495.5434, 164.6059, 516.4031, 207.7053],\n                    [172.1216, 
194.6033, 183.1217, 235.2653],\n                    [264.2654, 181.5540, 288.4626, 214.0170],\n                    [111.7971, 183.7748, 137.3745, 225.9724],\n                    [253.4919, 186.3945, 280.8694, 210.0731],\n                    [165.5334, 169.7344, 185.9159, 232.8514],\n                    [348.3662, 184.5187, 354.9081, 201.4038],\n                    [164.6562, 162.5724, 186.3108, 233.5010],\n                    [113.2999, 186.8410, 135.8841, 219.7642],\n                    [117.0282, 179.8009, 142.5375, 221.0736],\n                    [462.1312, 161.1004, 495.3576, 217.2208],\n                    [462.5800, 159.9310, 501.2937, 224.1655],\n                    [503.5242, 170.0733, 518.3792, 209.0113],\n                    [250.3658, 195.5925, 260.6523, 212.4679],\n                    [108.8287, 163.6994, 146.3642, 229.7261],\n                    [256.7617, 187.3123, 288.8407, 211.2013],\n                    [161.2781, 167.4801, 186.3751, 232.7133],\n                    [115.3760, 177.5859, 163.3512, 236.9660],\n                    [248.9077, 188.0919, 264.8579, 207.9718],\n                    [108.1349, 160.7851, 143.6370, 229.6243],\n                    [465.0900, 156.7555, 490.3561, 213.5704],\n                    [107.5338, 173.4323, 141.0704, 235.2910],\n                ]\n            ).astype(np.float32)\n        )\n        scores = torch.from_numpy(\n            np.array(\n                [\n                    0.1919,\n                    0.3293,\n                    0.0860,\n                    0.1600,\n                    0.1885,\n                    0.4297,\n                    0.0974,\n                    0.2711,\n                    0.1483,\n                    0.1173,\n                    0.1034,\n                    0.2915,\n                    0.1993,\n                    0.0677,\n                    0.3217,\n                    0.0966,\n                    0.0526,\n                    0.5675,\n                    0.3130,\n               
     0.1592,\n                    0.1353,\n                    0.0634,\n                    0.1557,\n                    0.1512,\n                    0.0699,\n                    0.0545,\n                    0.2692,\n                    0.1143,\n                    0.0572,\n                    0.1990,\n                    0.0558,\n                    0.1500,\n                    0.2214,\n                    0.1878,\n                    0.2501,\n                    0.1343,\n                    0.0809,\n                    0.1266,\n                    0.0743,\n                    0.0896,\n                    0.0781,\n                    0.0983,\n                    0.0557,\n                    0.0623,\n                    0.5808,\n                    0.3090,\n                    0.1050,\n                    0.0524,\n                    0.0513,\n                    0.4501,\n                    0.4167,\n                    0.0623,\n                    0.1749,\n                ]\n            ).astype(np.float32)\n        )\n\n        gt_indices = np.array(\n            [\n                1,\n                6,\n                7,\n                8,\n                11,\n                12,\n                13,\n                14,\n                17,\n                18,\n                19,\n                21,\n                23,\n                24,\n                25,\n                26,\n                30,\n                32,\n                33,\n                34,\n                35,\n                37,\n                43,\n                44,\n                47,\n                50,\n            ]\n        )\n        keep_indices = box_nms(boxes, scores, 0.5)\n        keep_indices = np.sort(keep_indices)\n\n        np.testing.assert_array_equal(keep_indices, gt_indices)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_predictors.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport copy\nimport torch\n# import modules to register predictors\nfrom maskrcnn_benchmark.modeling.backbone import build_backbone # NoQA\nfrom maskrcnn_benchmark.modeling.roi_heads.roi_heads import build_roi_heads # NoQA\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.config import cfg as g_cfg\nfrom utils import load_config\n\n\n# overwrite configs if specified, otherwise default config is used\nPREDICTOR_CFGS = {\n}\n\n# overwrite configs if specified, otherwise default config is used\nPREDICTOR_INPUT_CHANNELS = {\n}\n\n\ndef _test_predictors(\n    self, predictors, overwrite_cfgs, overwrite_in_channels,\n    hwsize,\n):\n    ''' Make sure predictors run '''\n\n    self.assertGreater(len(predictors), 0)\n\n    in_channels_default = 64\n\n    for name, builder in predictors.items():\n        print('Testing {}...'.format(name))\n        if name in overwrite_cfgs:\n            cfg = load_config(overwrite_cfgs[name])\n        else:\n            # Use default config if config file is not specified\n            cfg = copy.deepcopy(g_cfg)\n\n        in_channels = overwrite_in_channels.get(\n            name, in_channels_default)\n\n        fe = builder(cfg, in_channels)\n\n        N, C_in, H, W = 2, in_channels, hwsize, hwsize\n        input = torch.rand([N, C_in, H, W], dtype=torch.float32)\n        out = fe(input)\n        yield input, out, cfg\n\n\nclass TestPredictors(unittest.TestCase):\n    def test_roi_box_predictors(self):\n        ''' Make sure roi box predictors run '''\n        for cur_in, cur_out, cur_cfg in _test_predictors(\n            self,\n            registry.ROI_BOX_PREDICTOR,\n            PREDICTOR_CFGS,\n            PREDICTOR_INPUT_CHANNELS,\n            hwsize=1,\n        ):\n            self.assertEqual(len(cur_out), 2)\n            scores, bbox_deltas = cur_out[0], cur_out[1]\n            self.assertEqual(\n 
               scores.shape[1], cur_cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES)\n            self.assertEqual(scores.shape[0], cur_in.shape[0])\n            self.assertEqual(scores.shape[0], bbox_deltas.shape[0])\n            self.assertEqual(scores.shape[1] * 4, bbox_deltas.shape[1])\n\n    def test_roi_keypoints_predictors(self):\n        ''' Make sure roi keypoint predictors run '''\n        for cur_in, cur_out, cur_cfg in _test_predictors(\n            self,\n            registry.ROI_KEYPOINT_PREDICTOR,\n            PREDICTOR_CFGS,\n            PREDICTOR_INPUT_CHANNELS,\n            hwsize=14,\n        ):\n            self.assertEqual(cur_out.shape[0], cur_in.shape[0])\n            self.assertEqual(\n                cur_out.shape[1], cur_cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_CLASSES)\n\n    def test_roi_mask_predictors(self):\n        ''' Make sure roi mask predictors run '''\n        for cur_in, cur_out, cur_cfg in _test_predictors(\n            self,\n            registry.ROI_MASK_PREDICTOR,\n            PREDICTOR_CFGS,\n            PREDICTOR_INPUT_CHANNELS,\n            hwsize=14,\n        ):\n            self.assertEqual(cur_out.shape[0], cur_in.shape[0])\n            self.assertEqual(\n                cur_out.shape[1], cur_cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_rpn_heads.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\nimport unittest\nimport copy\nimport torch\n# import modules to register rpn heads\nfrom maskrcnn_benchmark.modeling.backbone import build_backbone # NoQA\nfrom maskrcnn_benchmark.modeling.rpn.rpn import build_rpn # NoQA\nfrom maskrcnn_benchmark.modeling import registry\nfrom maskrcnn_benchmark.config import cfg as g_cfg\nfrom utils import load_config\n\n\n# overwrite configs if specified, otherwise default config is used\nRPN_CFGS = {\n}\n\n\nclass TestRPNHeads(unittest.TestCase):\n    def test_build_rpn_heads(self):\n        ''' Make sure rpn heads run '''\n\n        self.assertGreater(len(registry.RPN_HEADS), 0)\n\n        in_channels = 64\n        num_anchors = 10\n\n        for name, builder in registry.RPN_HEADS.items():\n            print('Testing {}...'.format(name))\n            if name in RPN_CFGS:\n                cfg = load_config(RPN_CFGS[name])\n            else:\n                # Use default config if config file is not specified\n                cfg = copy.deepcopy(g_cfg)\n\n            rpn = builder(cfg, in_channels, num_anchors)\n\n            N, C_in, H, W = 2, in_channels, 24, 32\n            input = torch.rand([N, C_in, H, W], dtype=torch.float32)\n            LAYERS = 3\n            out = rpn([input] * LAYERS)\n            self.assertEqual(len(out), 2)\n            logits, bbox_reg = out\n            for idx in range(LAYERS):\n                self.assertEqual(\n                    logits[idx].shape,\n                    torch.Size([\n                        input.shape[0], num_anchors,\n                        input.shape[2], input.shape[3],\n                    ])\n                )\n                self.assertEqual(\n                    bbox_reg[idx].shape,\n                    torch.Size([\n                        logits[idx].shape[0], num_anchors * 4,\n                        logits[idx].shape[2], logits[idx].shape[3],\n                    ]),\n  
              )\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "tests/test_segmentation_mask.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nimport unittest\nimport torch\nfrom maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask\n\n\nclass TestSegmentationMask(unittest.TestCase):\n    def __init__(self, method_name='runTest'):\n        super(TestSegmentationMask, self).__init__(method_name)\n        poly = [[[423.0, 306.5, 406.5, 277.0, 400.0, 271.5, 389.5, 277.0,\n                  387.5, 292.0, 384.5, 295.0, 374.5, 220.0, 378.5, 210.0,\n                  391.0, 200.5, 404.0, 199.5, 414.0, 203.5, 425.5, 221.0,\n                  438.5, 297.0, 423.0, 306.5],\n                 [100, 100,     200, 100,     200, 200,     100, 200],\n                ]]\n        width = 640\n        height = 480\n        size = width, height\n\n        self.P = SegmentationMask(poly, size, 'poly')\n        self.M = SegmentationMask(poly, size, 'poly').convert('mask')\n\n\n    def L1(self, A, B):\n        diff = A.get_mask_tensor() - B.get_mask_tensor()\n        diff = torch.sum(torch.abs(diff.float())).item()\n        return diff\n\n\n    def test_convert(self):\n        M_hat = self.M.convert('poly').convert('mask')\n        P_hat = self.P.convert('mask').convert('poly')\n\n        diff_mask = self.L1(self.M, M_hat)\n        diff_poly = self.L1(self.P, P_hat)\n        self.assertTrue(diff_mask == diff_poly)\n        self.assertTrue(diff_mask <= 8169.)\n        self.assertTrue(diff_poly <= 8169.)\n\n\n    def test_crop(self):\n        box = [400, 250, 500, 300] # xyxy\n        diff = self.L1(self.M.crop(box), self.P.crop(box))\n        self.assertTrue(diff <= 1.)\n\n\n    def test_resize(self):\n        new_size = 50, 25\n        M_hat = self.M.resize(new_size)\n        P_hat = self.P.resize(new_size)\n        diff = self.L1(M_hat, P_hat)\n\n        self.assertTrue(self.M.size == self.P.size)\n        self.assertTrue(M_hat.size == P_hat.size)\n        self.assertTrue(self.M.size != M_hat.size)\n        
self.assertTrue(diff <= 255.)\n\n\n    def test_transpose(self):\n        FLIP_LEFT_RIGHT = 0\n        FLIP_TOP_BOTTOM = 1\n        diff_hor = self.L1(self.M.transpose(FLIP_LEFT_RIGHT),\n                           self.P.transpose(FLIP_LEFT_RIGHT))\n\n        diff_ver = self.L1(self.M.transpose(FLIP_TOP_BOTTOM),\n                           self.P.transpose(FLIP_TOP_BOTTOM))\n\n        self.assertTrue(diff_hor <= 53250.)\n        self.assertTrue(diff_ver <= 42494.)\n\n\nif __name__ == \"__main__\":\n\n    unittest.main()\n"
  },
  {
    "path": "tests/utils.py",
    "content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\n# Set up custom environment before nearly anything else is imported\n# NOTE: this should be the first import (do not reorder)\nfrom maskrcnn_benchmark.utils.env import setup_environment  # noqa F401 isort:skip\nimport env_tests.env as env_tests\n\nimport os\nimport copy\n\nfrom maskrcnn_benchmark.config import cfg as g_cfg\n\n\ndef get_config_root_path():\n    return env_tests.get_config_root_path()\n\n\ndef load_config(rel_path):\n    ''' Load config from file path specified as path relative to config_root '''\n    cfg_path = os.path.join(env_tests.get_config_root_path(), rel_path)\n    return load_config_from_file(cfg_path)\n\n\ndef load_config_from_file(file_path):\n    ''' Load config from file path specified as absolute path '''\n    ret = copy.deepcopy(g_cfg)\n    ret.merge_from_file(file_path)\n    return ret\n"
  },
  {
    "path": "tools/cityscapes/convert_cityscapes_to_coco.py",
    "content": "#!/usr/bin/env python\n\n# Copyright (c) 2017-present, Facebook, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n##############################################################################\n\n# This file is copied from https://github.com/facebookresearch/Detectron/tree/master/tools\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport argparse\nimport h5py\nimport json\nimport os\nimport scipy.misc\nimport sys\n\nimport cityscapesscripts.evaluation.instances2dict_with_polygons as cs\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description='Convert dataset')\n    parser.add_argument(\n        '--dataset', help=\"cocostuff, cityscapes\", default=None, type=str)\n    parser.add_argument(\n        '--outdir', help=\"output dir for json files\", default=None, type=str)\n    parser.add_argument(\n        '--datadir', help=\"data dir for annotations to be converted\",\n        default=None, type=str)\n    if len(sys.argv) == 1:\n        parser.print_help()\n        sys.exit(1)\n    return parser.parse_args()\n\n\ndef poly_to_box(poly):\n    \"\"\"Convert a polygon into a tight bounding box.\"\"\"\n    x0 = min(min(p[::2]) for p in poly)\n    x1 = max(max(p[::2]) for p in poly)\n    y0 = min(min(p[1::2]) for p in poly)\n    y1 = max(max(p[1::2]) for p in poly)\n    box_from_poly = [x0, y0, x1, y1]\n\n    return 
box_from_poly\n\ndef xyxy_to_xywh(xyxy_box):\n    xmin, ymin, xmax, ymax = xyxy_box\n    TO_REMOVE = 1\n    xywh_box = (xmin, ymin, xmax - xmin + TO_REMOVE, ymax - ymin + TO_REMOVE)\n    return xywh_box\n\n\ndef convert_coco_stuff_mat(data_dir, out_dir):\n    \"\"\"Convert to png and save json with path. This currently only contains\n    the segmentation labels for objects+stuff in cocostuff - if we need to\n    combine with other labels from original COCO that will be a TODO.\"\"\"\n    sets = ['train', 'val']\n    categories = []\n    json_name = 'coco_stuff_%s.json'\n    ann_dict = {}\n    for data_set in sets:\n        file_list = os.path.join(data_dir, '%s.txt')\n        images = []\n        with open(file_list % data_set) as f:\n            for img_id, img_name in enumerate(f):\n                img_name = img_name.replace('coco', 'COCO').strip('\\n')\n                image = {}\n                mat_file = os.path.join(\n                    data_dir, 'annotations/%s.mat' % img_name)\n                data = h5py.File(mat_file, 'r')\n                labelMap = data.get('S')\n                if len(categories) == 0:\n                    labelNames = data.get('names')\n                    for idx, n in enumerate(labelNames):\n                        categories.append(\n                            {\"id\": idx, \"name\": ''.join(chr(i) for i in data[\n                                n[0]])})\n                    ann_dict['categories'] = categories\n                scipy.misc.imsave(\n                    os.path.join(data_dir, img_name + '.png'), labelMap)\n                image['width'] = labelMap.shape[0]\n                image['height'] = labelMap.shape[1]\n                image['file_name'] = img_name\n                image['seg_file_name'] = img_name\n                image['id'] = img_id\n                images.append(image)\n        ann_dict['images'] = images\n        print(\"Num images: %s\" % len(images))\n        with open(os.path.join(out_dir, json_name % 
data_set), 'w') as outfile:\n            outfile.write(json.dumps(ann_dict))\n\n\n# for Cityscapes\ndef getLabelID(self, instID):\n    if (instID < 1000):\n        return instID\n    else:\n        return int(instID / 1000)\n\n\ndef convert_cityscapes_instance_only(\n        data_dir, out_dir):\n    \"\"\"Convert from cityscapes format to COCO instance seg format - polygons\"\"\"\n    sets = [\n        'gtFine_val',\n        'gtFine_train',\n        'gtFine_test',\n\n        # 'gtCoarse_train',\n        # 'gtCoarse_val',\n        # 'gtCoarse_train_extra'\n    ]\n    ann_dirs = [\n        'gtFine_trainvaltest/gtFine/val',\n        'gtFine_trainvaltest/gtFine/train',\n        'gtFine_trainvaltest/gtFine/test',\n\n        # 'gtCoarse/train',\n        # 'gtCoarse/train_extra',\n        # 'gtCoarse/val'\n    ]\n    json_name = 'instancesonly_filtered_%s.json'\n    ends_in = '%s_polygons.json'\n    img_id = 0\n    ann_id = 0\n    cat_id = 1\n    category_dict = {}\n\n    category_instancesonly = [\n        'person',\n        'rider',\n        'car',\n        'truck',\n        'bus',\n        'train',\n        'motorcycle',\n        'bicycle',\n    ]\n\n    for data_set, ann_dir in zip(sets, ann_dirs):\n        print('Starting %s' % data_set)\n        ann_dict = {}\n        images = []\n        annotations = []\n        ann_dir = os.path.join(data_dir, ann_dir)\n\n        for root, _, files in os.walk(ann_dir):\n            for filename in files:\n                if filename.endswith(ends_in % data_set.split('_')[0]):\n                    if len(images) % 50 == 0:\n                        print(\"Processed %s images, %s annotations\" % (\n                            len(images), len(annotations)))\n                    json_ann = json.load(open(os.path.join(root, filename)))\n                    image = {}\n                    image['id'] = img_id\n                    img_id += 1\n\n                    image['width'] = json_ann['imgWidth']\n                    
image['height'] = json_ann['imgHeight']\n                    image['file_name'] = filename[:-len(\n                        ends_in % data_set.split('_')[0])] + 'leftImg8bit.png'\n                    image['seg_file_name'] = filename[:-len(\n                        ends_in % data_set.split('_')[0])] + \\\n                        '%s_instanceIds.png' % data_set.split('_')[0]\n                    images.append(image)\n\n                    fullname = os.path.join(root, image['seg_file_name'])\n                    objects = cs.instances2dict_with_polygons(\n                        [fullname], verbose=False)[fullname]\n\n                    for object_cls in objects:\n                        if object_cls not in category_instancesonly:\n                            continue  # skip non-instance categories\n\n                        for obj in objects[object_cls]:\n                            if obj['contours'] == []:\n                                print('Warning: empty contours.')\n                                continue  # skip non-instance categories\n\n                            len_p = [len(p) for p in obj['contours']]\n                            if min(len_p) <= 4:\n                                print('Warning: invalid contours.')\n                                continue  # skip non-instance categories\n\n                            ann = {}\n                            ann['id'] = ann_id\n                            ann_id += 1\n                            ann['image_id'] = image['id']\n                            ann['segmentation'] = obj['contours']\n\n                            if object_cls not in category_dict:\n                                category_dict[object_cls] = cat_id\n                                cat_id += 1\n                            ann['category_id'] = category_dict[object_cls]\n                            ann['iscrowd'] = 0\n                            ann['area'] = obj['pixelCount']\n                            \n                  
          xyxy_box = poly_to_box(ann['segmentation'])\n                            xywh_box = xyxy_to_xywh(xyxy_box)\n                            ann['bbox'] = xywh_box\n\n                            annotations.append(ann)\n\n        ann_dict['images'] = images\n        categories = [{\"id\": category_dict[name], \"name\": name} for name in\n                      category_dict]\n        ann_dict['categories'] = categories\n        ann_dict['annotations'] = annotations\n        print(\"Num categories: %s\" % len(categories))\n        print(\"Num images: %s\" % len(images))\n        print(\"Num annotations: %s\" % len(annotations))\n        with open(os.path.join(out_dir, json_name % data_set), 'w') as outfile:\n            outfile.write(json.dumps(ann_dict))\n\n\nif __name__ == '__main__':\n    args = parse_args()\n    if args.dataset == \"cityscapes_instance_only\":\n        convert_cityscapes_instance_only(args.datadir, args.outdir)\n    elif args.dataset == \"cocostuff\":\n        convert_coco_stuff_mat(args.datadir, args.outdir)\n    else:\n        print(\"Dataset not supported: %s\" % args.dataset)\n"
  },
  {
    "path": "tools/cityscapes/instances2dict_with_polygons.py",
    "content": "#!/usr/bin/python\n#\n# Convert instances from png files to a dictionary\n# This file was created according to https://github.com/facebookresearch/Detectron/issues/111\n\nfrom __future__ import print_function, absolute_import, division\nimport os, sys\n\nsys.path.append( os.path.normpath( os.path.join( os.path.dirname( __file__ ) , '..' , 'helpers' ) ) )\nfrom csHelpers import *\n\n# Cityscapes imports\nfrom cityscapesscripts.evaluation.instance import *\nfrom cityscapesscripts.helpers.csHelpers import *\nimport cv2\nfrom maskrcnn_benchmark.utils import cv2_util\n\n\ndef instances2dict_with_polygons(imageFileList, verbose=False):\n    imgCount     = 0\n    instanceDict = {}\n\n    if not isinstance(imageFileList, list):\n        imageFileList = [imageFileList]\n\n    if verbose:\n        print(\"Processing {} images...\".format(len(imageFileList)))\n\n    for imageFileName in imageFileList:\n        # Load image\n        img = Image.open(imageFileName)\n\n        # Image as numpy array\n        imgNp = np.array(img)\n\n        # Initialize label categories\n        instances = {}\n        for label in labels:\n            instances[label.name] = []\n\n        # Loop through all instance ids in instance image\n        for instanceId in np.unique(imgNp):\n            if instanceId < 1000:\n                continue\n            instanceObj = Instance(imgNp, instanceId)\n            instanceObj_dict = instanceObj.toDict()\n\n            #instances[id2label[instanceObj.labelID].name].append(instanceObj.toDict())\n            if id2label[instanceObj.labelID].hasInstances:\n                mask = (imgNp == instanceId).astype(np.uint8)\n                contour, hier = cv2_util.findContours(\n                    mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)\n\n                polygons = [c.reshape(-1).tolist() for c in contour]\n                instanceObj_dict['contours'] = polygons\n\n            
instances[id2label[instanceObj.labelID].name].append(instanceObj_dict)\n\n        imgKey = os.path.abspath(imageFileName)\n        instanceDict[imgKey] = instances\n        imgCount += 1\n\n        if verbose:\n            print(\"\\rImages Processed: {}\".format(imgCount), end=' ')\n            sys.stdout.flush()\n\n    if verbose:\n        print(\"\")\n\n    return instanceDict\n\ndef main(argv):\n    fileList = []\n    if (len(argv) > 2):\n        for arg in argv:\n            if (\"png\" in arg):\n                fileList.append(arg)\n    instances2dict_with_polygons(fileList, True)\n\nif __name__ == \"__main__\":\n    main(sys.argv[1:])\n"
  },
  {
    "path": "tools/remove_solver_states.py",
    "content": "# Set up custom environment before nearly anything else is imported\n# NOTE: this should be the first import (do not reorder)\nfrom maskrcnn_benchmark.utils.env import setup_environment  # noqa F401 isort:skip\nimport argparse\nimport os\nimport torch\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Remove the solver states stored in a trained model\")\n    parser.add_argument(\n        \"model\",\n        default=\"models/FCOS_R_50_FPN_1x.pth\",\n        help=\"path to the input model file\",\n    )\n\n    args = parser.parse_args()\n\n    model = torch.load(args.model)\n    del model[\"optimizer\"]\n    del model[\"scheduler\"]\n\n    filename_wo_ext, ext = os.path.splitext(args.model)\n    output_file = filename_wo_ext + \"_wo_solver_states\" + ext\n    torch.save(model, output_file)\n    print(\"Done. The model without solver states is saved to {}\".format(output_file))\n\nif __name__ == \"__main__\":\n    main()\n\n"
  },
  {
    "path": "tools/test_net.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n# Set up custom environment before nearly anything else is imported\n# NOTE: this should be the first import (do not reorder)\nfrom maskrcnn_benchmark.utils.env import setup_environment  # noqa F401 isort:skip\n\nimport argparse\nimport os\n\nimport torch\nfrom maskrcnn_benchmark.config import cfg\nfrom maskrcnn_benchmark.data import make_data_loader\nfrom maskrcnn_benchmark.engine.inference import inference\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer\nfrom maskrcnn_benchmark.utils.collect_env import collect_env_info\nfrom maskrcnn_benchmark.utils.comm import synchronize, get_rank\nfrom maskrcnn_benchmark.utils.logger import setup_logger\nfrom maskrcnn_benchmark.utils.miscellaneous import mkdir\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"PyTorch Object Detection Inference\")\n    parser.add_argument(\n        \"--config-file\",\n        default=\"/private/home/fmassa/github/detectron.pytorch_v2/configs/e2e_faster_rcnn_R_50_C4_1x_caffe2.yaml\",\n        metavar=\"FILE\",\n        help=\"path to config file\",\n    )\n    parser.add_argument(\"--local_rank\", type=int, default=0)\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n\n    args = parser.parse_args()\n\n    num_gpus = int(os.environ[\"WORLD_SIZE\"]) if \"WORLD_SIZE\" in os.environ else 1\n    distributed = num_gpus > 1\n\n    if distributed:\n        torch.cuda.set_device(args.local_rank)\n        torch.distributed.init_process_group(\n            backend=\"nccl\", init_method=\"env://\"\n        )\n        synchronize()\n\n    cfg.merge_from_file(args.config_file)\n    cfg.merge_from_list(args.opts)\n    cfg.freeze()\n\n    save_dir = \"\"\n    logger = 
setup_logger(\"maskrcnn_benchmark\", save_dir, get_rank())\n    logger.info(\"Using {} GPUs\".format(num_gpus))\n    logger.info(cfg)\n\n    logger.info(\"Collecting env info (might take some time)\")\n    logger.info(\"\\n\" + collect_env_info())\n\n    model = build_detection_model(cfg)\n    model.to(cfg.MODEL.DEVICE)\n\n    output_dir = cfg.OUTPUT_DIR\n    checkpointer = DetectronCheckpointer(cfg, model, save_dir=output_dir)\n    _ = checkpointer.load(cfg.MODEL.WEIGHT)\n\n    iou_types = (\"bbox\",)\n    if cfg.MODEL.MASK_ON:\n        iou_types = iou_types + (\"segm\",)\n    if cfg.MODEL.KEYPOINT_ON:\n        iou_types = iou_types + (\"keypoints\",)\n    output_folders = [None] * len(cfg.DATASETS.TEST)\n    dataset_names = cfg.DATASETS.TEST\n    if cfg.OUTPUT_DIR:\n        for idx, dataset_name in enumerate(dataset_names):\n            output_folder = os.path.join(cfg.OUTPUT_DIR, \"inference\", dataset_name)\n            mkdir(output_folder)\n            output_folders[idx] = output_folder\n    data_loaders_val = make_data_loader(cfg, is_train=False, is_distributed=distributed)\n    for output_folder, dataset_name, data_loader_val in zip(output_folders, dataset_names, data_loaders_val):\n        inference(\n            model,\n            data_loader_val,\n            dataset_name=dataset_name,\n            iou_types=iou_types,\n            box_only=False if cfg.MODEL.FCOS_ON or cfg.MODEL.RETINANET_ON else cfg.MODEL.RPN_ONLY,\n            device=cfg.MODEL.DEVICE,\n            expected_results=cfg.TEST.EXPECTED_RESULTS,\n            expected_results_sigma_tol=cfg.TEST.EXPECTED_RESULTS_SIGMA_TOL,\n            output_folder=output_folder,\n        )\n        synchronize()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "tools/train_net.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\nr\"\"\"\nBasic training script for PyTorch\n\"\"\"\n\n# Set up custom environment before nearly anything else is imported\n# NOTE: this should be the first import (do not reorder)\nfrom maskrcnn_benchmark.utils.env import setup_environment  # noqa F401 isort:skip\n\nimport argparse\nimport os\n\nimport torch\nfrom maskrcnn_benchmark.config import cfg\nfrom maskrcnn_benchmark.data import make_data_loader\nfrom maskrcnn_benchmark.solver import make_lr_scheduler\nfrom maskrcnn_benchmark.solver import make_optimizer\nfrom maskrcnn_benchmark.engine.inference import inference\nfrom maskrcnn_benchmark.engine.trainer import do_train\nfrom maskrcnn_benchmark.modeling.detector import build_detection_model\nfrom maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer\nfrom maskrcnn_benchmark.utils.collect_env import collect_env_info\nfrom maskrcnn_benchmark.utils.comm import synchronize, \\\n    get_rank, is_pytorch_1_1_0_or_later\nfrom maskrcnn_benchmark.utils.imports import import_file\nfrom maskrcnn_benchmark.utils.logger import setup_logger\nfrom maskrcnn_benchmark.utils.miscellaneous import mkdir\n\n\ndef train(cfg, local_rank, distributed):\n    model = build_detection_model(cfg)\n    device = torch.device(cfg.MODEL.DEVICE)\n    model.to(device)\n\n    if cfg.MODEL.USE_SYNCBN:\n        assert is_pytorch_1_1_0_or_later(), \\\n            \"SyncBatchNorm is only available in pytorch >= 1.1.0\"\n        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)\n\n    optimizer = make_optimizer(cfg, model)\n    scheduler = make_lr_scheduler(cfg, optimizer)\n\n    if distributed:\n        model = torch.nn.parallel.DistributedDataParallel(\n            model, device_ids=[local_rank], output_device=local_rank,\n            # this should be removed if we update BatchNorm stats\n            broadcast_buffers=False,\n        )\n\n    arguments = {}\n    arguments[\"iteration\"] = 
0\n\n    output_dir = cfg.OUTPUT_DIR\n\n    save_to_disk = get_rank() == 0\n    checkpointer = DetectronCheckpointer(\n        cfg, model, optimizer, scheduler, output_dir, save_to_disk\n    )\n    extra_checkpoint_data = checkpointer.load(cfg.MODEL.WEIGHT)\n    arguments.update(extra_checkpoint_data)\n\n    data_loader = make_data_loader(\n        cfg,\n        is_train=True,\n        is_distributed=distributed,\n        start_iter=arguments[\"iteration\"],\n    )\n\n    checkpoint_period = cfg.SOLVER.CHECKPOINT_PERIOD\n\n    do_train(\n        model,\n        data_loader,\n        optimizer,\n        scheduler,\n        checkpointer,\n        device,\n        checkpoint_period,\n        arguments,\n    )\n\n    return model\n\n\ndef run_test(cfg, model, distributed):\n    if distributed:\n        model = model.module\n    torch.cuda.empty_cache()  # TODO check if it helps\n    iou_types = (\"bbox\",)\n    if cfg.MODEL.MASK_ON:\n        iou_types = iou_types + (\"segm\",)\n    if cfg.MODEL.KEYPOINT_ON:\n        iou_types = iou_types + (\"keypoints\",)\n    output_folders = [None] * len(cfg.DATASETS.TEST)\n    dataset_names = cfg.DATASETS.TEST\n    if cfg.OUTPUT_DIR:\n        for idx, dataset_name in enumerate(dataset_names):\n            output_folder = os.path.join(cfg.OUTPUT_DIR, \"inference\", dataset_name)\n            mkdir(output_folder)\n            output_folders[idx] = output_folder\n    data_loaders_val = make_data_loader(cfg, is_train=False, is_distributed=distributed)\n    for output_folder, dataset_name, data_loader_val in zip(output_folders, dataset_names, data_loaders_val):\n        inference(\n            model,\n            data_loader_val,\n            dataset_name=dataset_name,\n            iou_types=iou_types,\n            box_only=False if cfg.MODEL.FCOS_ON or cfg.MODEL.RETINANET_ON else cfg.MODEL.RPN_ONLY,\n            device=cfg.MODEL.DEVICE,\n            expected_results=cfg.TEST.EXPECTED_RESULTS,\n            
expected_results_sigma_tol=cfg.TEST.EXPECTED_RESULTS_SIGMA_TOL,\n            output_folder=output_folder,\n        )\n        synchronize()\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"PyTorch Object Detection Training\")\n    parser.add_argument(\n        \"--config-file\",\n        default=\"\",\n        metavar=\"FILE\",\n        help=\"path to config file\",\n        type=str,\n    )\n    parser.add_argument(\"--local_rank\", type=int, default=0)\n    parser.add_argument(\n        \"--skip-test\",\n        dest=\"skip_test\",\n        help=\"Do not test the final model\",\n        action=\"store_true\",\n    )\n    parser.add_argument(\n        \"opts\",\n        help=\"Modify config options using the command-line\",\n        default=None,\n        nargs=argparse.REMAINDER,\n    )\n\n    args = parser.parse_args()\n\n    num_gpus = int(os.environ[\"WORLD_SIZE\"]) if \"WORLD_SIZE\" in os.environ else 1\n    args.distributed = num_gpus > 1\n\n    if args.distributed:\n        torch.cuda.set_device(args.local_rank)\n        torch.distributed.init_process_group(\n            backend=\"nccl\", init_method=\"env://\"\n        )\n        synchronize()\n\n    cfg.merge_from_file(args.config_file)\n    cfg.merge_from_list(args.opts)\n    cfg.freeze()\n\n    output_dir = cfg.OUTPUT_DIR\n    if output_dir:\n        mkdir(output_dir)\n\n    logger = setup_logger(\"maskrcnn_benchmark\", output_dir, get_rank())\n    logger.info(\"Using {} GPUs\".format(num_gpus))\n    logger.info(args)\n\n    logger.info(\"Collecting env info (might take some time)\")\n    logger.info(\"\\n\" + collect_env_info())\n\n    logger.info(\"Loaded configuration file {}\".format(args.config_file))\n    with open(args.config_file, \"r\") as cf:\n        config_str = \"\\n\" + cf.read()\n        logger.info(config_str)\n    logger.info(\"Running with config:\\n{}\".format(cfg))\n\n    model = train(cfg, args.local_rank, args.distributed)\n\n    if not args.skip_test:\n        
run_test(cfg, model, args.distributed)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  }
]