Full Code of clin1223/VLDet for AI

Repository: clin1223/VLDet
Branch: main
Commit: 172c93519f79
Files: 967
Total size: 5.6 MB
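
The metadata above pins an exact commit. As a sketch (assuming the repository is publicly hosted on GitHub under `clin1223/VLDet`), the same snapshot could be reproduced locally with:

```shell
# Clone the repository and check out the commit listed above.
# git accepts the abbreviated commit hash shown in the metadata.
git clone https://github.com/clin1223/VLDet.git
cd VLDet
git checkout 172c93519f79
```

Checking out the hash directly (rather than the `main` branch tip) guarantees the working tree matches this listing even if the branch has since moved.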

Directory structure:
gitextract_64k4_27x/

├── .gitignore
├── LICENSE
├── README.md
├── configs/
│   ├── Base-C2_L_R5021k_640b64_4x.yaml
│   ├── Base_OVCOCO_C4_1x.yaml
│   ├── BoxSup-C2_Lbase_CLIP_R5021k_640b64.yaml
│   ├── BoxSup-C2_Lbase_CLIP_SwinB_896b32.yaml
│   ├── BoxSup_OVCOCO_CLIP_R50_1x.yaml
│   ├── VLDet_LbaseCCcap_CLIP_R5021k_640b64_2x_ft4x_caption.yaml
│   ├── VLDet_LbaseI_CLIP_SwinB_896b32_2x_ft4x_caption.yaml
│   └── VLDet_OVCOCO_CLIP_R50_1x_caption.yaml
├── demo.py
├── detectron2/
│   ├── .circleci/
│   │   ├── config.yml
│   │   └── import-tests.sh
│   ├── .clang-format
│   ├── .flake8
│   ├── GETTING_STARTED.md
│   ├── INSTALL.md
│   ├── LICENSE
│   ├── MODEL_ZOO.md
│   ├── README.md
│   ├── configs/
│   │   ├── Base-RCNN-C4.yaml
│   │   ├── Base-RCNN-DilatedC5.yaml
│   │   ├── Base-RCNN-FPN.yaml
│   │   ├── Base-RetinaNet.yaml
│   │   ├── COCO-Detection/
│   │   │   ├── fast_rcnn_R_50_FPN_1x.yaml
│   │   │   ├── faster_rcnn_R_101_C4_3x.yaml
│   │   │   ├── faster_rcnn_R_101_DC5_3x.yaml
│   │   │   ├── faster_rcnn_R_101_FPN_3x.yaml
│   │   │   ├── faster_rcnn_R_50_C4_1x.yaml
│   │   │   ├── faster_rcnn_R_50_C4_3x.yaml
│   │   │   ├── faster_rcnn_R_50_DC5_1x.yaml
│   │   │   ├── faster_rcnn_R_50_DC5_3x.yaml
│   │   │   ├── faster_rcnn_R_50_FPN_1x.yaml
│   │   │   ├── faster_rcnn_R_50_FPN_3x.yaml
│   │   │   ├── faster_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   ├── fcos_R_50_FPN_1x.py
│   │   │   ├── retinanet_R_101_FPN_3x.yaml
│   │   │   ├── retinanet_R_50_FPN_1x.py
│   │   │   ├── retinanet_R_50_FPN_1x.yaml
│   │   │   ├── retinanet_R_50_FPN_3x.yaml
│   │   │   ├── rpn_R_50_C4_1x.yaml
│   │   │   └── rpn_R_50_FPN_1x.yaml
│   │   ├── COCO-InstanceSegmentation/
│   │   │   ├── mask_rcnn_R_101_C4_3x.yaml
│   │   │   ├── mask_rcnn_R_101_DC5_3x.yaml
│   │   │   ├── mask_rcnn_R_101_FPN_3x.yaml
│   │   │   ├── mask_rcnn_R_50_C4_1x.py
│   │   │   ├── mask_rcnn_R_50_C4_1x.yaml
│   │   │   ├── mask_rcnn_R_50_C4_3x.yaml
│   │   │   ├── mask_rcnn_R_50_DC5_1x.yaml
│   │   │   ├── mask_rcnn_R_50_DC5_3x.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x.py
│   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x_giou.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_3x.yaml
│   │   │   ├── mask_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   ├── mask_rcnn_regnetx_4gf_dds_fpn_1x.py
│   │   │   └── mask_rcnn_regnety_4gf_dds_fpn_1x.py
│   │   ├── COCO-Keypoints/
│   │   │   ├── Base-Keypoint-RCNN-FPN.yaml
│   │   │   ├── keypoint_rcnn_R_101_FPN_3x.yaml
│   │   │   ├── keypoint_rcnn_R_50_FPN_1x.py
│   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   ├── keypoint_rcnn_R_50_FPN_3x.yaml
│   │   │   └── keypoint_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   ├── COCO-PanopticSegmentation/
│   │   │   ├── Base-Panoptic-FPN.yaml
│   │   │   ├── panoptic_fpn_R_101_3x.yaml
│   │   │   ├── panoptic_fpn_R_50_1x.py
│   │   │   ├── panoptic_fpn_R_50_1x.yaml
│   │   │   └── panoptic_fpn_R_50_3x.yaml
│   │   ├── Cityscapes/
│   │   │   └── mask_rcnn_R_50_FPN.yaml
│   │   ├── Detectron1-Comparisons/
│   │   │   ├── README.md
│   │   │   ├── faster_rcnn_R_50_FPN_noaug_1x.yaml
│   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   └── mask_rcnn_R_50_FPN_noaug_1x.yaml
│   │   ├── LVISv0.5-InstanceSegmentation/
│   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   ├── LVISv1-InstanceSegmentation/
│   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   ├── Misc/
│   │   │   ├── cascade_mask_rcnn_R_50_FPN_1x.yaml
│   │   │   ├── cascade_mask_rcnn_R_50_FPN_3x.yaml
│   │   │   ├── cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x_cls_agnostic.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   ├── mask_rcnn_R_50_FPN_3x_syncbn.yaml
│   │   │   ├── mmdet_mask_rcnn_R_50_FPN_1x.py
│   │   │   ├── panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml
│   │   │   ├── scratch_mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_gn.yaml
│   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml
│   │   │   ├── semantic_R_50_FPN_1x.yaml
│   │   │   └── torchvision_imagenet_R_50.py
│   │   ├── PascalVOC-Detection/
│   │   │   ├── faster_rcnn_R_50_C4.yaml
│   │   │   └── faster_rcnn_R_50_FPN.yaml
│   │   ├── common/
│   │   │   ├── README.md
│   │   │   ├── coco_schedule.py
│   │   │   ├── data/
│   │   │   │   ├── coco.py
│   │   │   │   ├── coco_keypoint.py
│   │   │   │   ├── coco_panoptic_separated.py
│   │   │   │   └── constants.py
│   │   │   ├── optim.py
│   │   │   └── train.py
│   │   ├── new_baselines/
│   │   │   ├── mask_rcnn_R_101_FPN_100ep_LSJ.py
│   │   │   ├── mask_rcnn_R_101_FPN_200ep_LSJ.py
│   │   │   ├── mask_rcnn_R_101_FPN_400ep_LSJ.py
│   │   │   ├── mask_rcnn_R_50_FPN_100ep_LSJ.py
│   │   │   ├── mask_rcnn_R_50_FPN_200ep_LSJ.py
│   │   │   ├── mask_rcnn_R_50_FPN_400ep_LSJ.py
│   │   │   ├── mask_rcnn_R_50_FPN_50ep_LSJ.py
│   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py
│   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py
│   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
│   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ.py
│   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py
│   │   │   └── mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
│   │   └── quick_schedules/
│   │       ├── README.md
│   │       ├── cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │       ├── cascade_mask_rcnn_R_50_FPN_instant_test.yaml
│   │       ├── fast_rcnn_R_50_FPN_inference_acc_test.yaml
│   │       ├── fast_rcnn_R_50_FPN_instant_test.yaml
│   │       ├── keypoint_rcnn_R_50_FPN_inference_acc_test.yaml
│   │       ├── keypoint_rcnn_R_50_FPN_instant_test.yaml
│   │       ├── keypoint_rcnn_R_50_FPN_normalized_training_acc_test.yaml
│   │       ├── keypoint_rcnn_R_50_FPN_training_acc_test.yaml
│   │       ├── mask_rcnn_R_50_C4_GCV_instant_test.yaml
│   │       ├── mask_rcnn_R_50_C4_inference_acc_test.yaml
│   │       ├── mask_rcnn_R_50_C4_instant_test.yaml
│   │       ├── mask_rcnn_R_50_C4_training_acc_test.yaml
│   │       ├── mask_rcnn_R_50_DC5_inference_acc_test.yaml
│   │       ├── mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │       ├── mask_rcnn_R_50_FPN_instant_test.yaml
│   │       ├── mask_rcnn_R_50_FPN_pred_boxes_training_acc_test.yaml
│   │       ├── mask_rcnn_R_50_FPN_training_acc_test.yaml
│   │       ├── panoptic_fpn_R_50_inference_acc_test.yaml
│   │       ├── panoptic_fpn_R_50_instant_test.yaml
│   │       ├── panoptic_fpn_R_50_training_acc_test.yaml
│   │       ├── retinanet_R_50_FPN_inference_acc_test.yaml
│   │       ├── retinanet_R_50_FPN_instant_test.yaml
│   │       ├── rpn_R_50_FPN_inference_acc_test.yaml
│   │       ├── rpn_R_50_FPN_instant_test.yaml
│   │       ├── semantic_R_50_FPN_inference_acc_test.yaml
│   │       ├── semantic_R_50_FPN_instant_test.yaml
│   │       └── semantic_R_50_FPN_training_acc_test.yaml
│   ├── datasets/
│   │   ├── README.md
│   │   ├── prepare_ade20k_sem_seg.py
│   │   ├── prepare_cocofied_lvis.py
│   │   ├── prepare_for_tests.sh
│   │   └── prepare_panoptic_fpn.py
│   ├── demo/
│   │   ├── README.md
│   │   ├── demo.py
│   │   └── predictor.py
│   ├── detectron2/
│   │   ├── __init__.py
│   │   ├── checkpoint/
│   │   │   ├── __init__.py
│   │   │   ├── c2_model_loading.py
│   │   │   ├── catalog.py
│   │   │   └── detection_checkpoint.py
│   │   ├── config/
│   │   │   ├── __init__.py
│   │   │   ├── compat.py
│   │   │   ├── config.py
│   │   │   ├── defaults.py
│   │   │   ├── instantiate.py
│   │   │   └── lazy.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── benchmark.py
│   │   │   ├── build.py
│   │   │   ├── catalog.py
│   │   │   ├── common.py
│   │   │   ├── dataset_mapper.py
│   │   │   ├── datasets/
│   │   │   │   ├── README.md
│   │   │   │   ├── __init__.py
│   │   │   │   ├── builtin.py
│   │   │   │   ├── builtin_meta.py
│   │   │   │   ├── cityscapes.py
│   │   │   │   ├── cityscapes_panoptic.py
│   │   │   │   ├── coco.py
│   │   │   │   ├── coco_panoptic.py
│   │   │   │   ├── lvis.py
│   │   │   │   ├── lvis_v0_5_categories.py
│   │   │   │   ├── lvis_v1_categories.py
│   │   │   │   ├── lvis_v1_category_image_count.py
│   │   │   │   ├── pascal_voc.py
│   │   │   │   └── register_coco.py
│   │   │   ├── detection_utils.py
│   │   │   ├── samplers/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── distributed_sampler.py
│   │   │   │   └── grouped_batch_sampler.py
│   │   │   └── transforms/
│   │   │       ├── __init__.py
│   │   │       ├── augmentation.py
│   │   │       ├── augmentation_impl.py
│   │   │       └── transform.py
│   │   ├── engine/
│   │   │   ├── __init__.py
│   │   │   ├── defaults.py
│   │   │   ├── hooks.py
│   │   │   ├── launch.py
│   │   │   └── train_loop.py
│   │   ├── evaluation/
│   │   │   ├── __init__.py
│   │   │   ├── cityscapes_evaluation.py
│   │   │   ├── coco_evaluation.py
│   │   │   ├── evaluator.py
│   │   │   ├── fast_eval_api.py
│   │   │   ├── lvis_evaluation.py
│   │   │   ├── panoptic_evaluation.py
│   │   │   ├── pascal_voc_evaluation.py
│   │   │   ├── rotated_coco_evaluation.py
│   │   │   ├── sem_seg_evaluation.py
│   │   │   └── testing.py
│   │   ├── export/
│   │   │   ├── README.md
│   │   │   ├── __init__.py
│   │   │   ├── api.py
│   │   │   ├── c10.py
│   │   │   ├── caffe2_export.py
│   │   │   ├── caffe2_inference.py
│   │   │   ├── caffe2_modeling.py
│   │   │   ├── caffe2_patch.py
│   │   │   ├── flatten.py
│   │   │   ├── shared.py
│   │   │   ├── torchscript.py
│   │   │   └── torchscript_patch.py
│   │   ├── layers/
│   │   │   ├── __init__.py
│   │   │   ├── aspp.py
│   │   │   ├── batch_norm.py
│   │   │   ├── blocks.py
│   │   │   ├── csrc/
│   │   │   │   ├── README.md
│   │   │   │   ├── ROIAlignRotated/
│   │   │   │   │   ├── ROIAlignRotated.h
│   │   │   │   │   ├── ROIAlignRotated_cpu.cpp
│   │   │   │   │   └── ROIAlignRotated_cuda.cu
│   │   │   │   ├── box_iou_rotated/
│   │   │   │   │   ├── box_iou_rotated.h
│   │   │   │   │   ├── box_iou_rotated_cpu.cpp
│   │   │   │   │   ├── box_iou_rotated_cuda.cu
│   │   │   │   │   └── box_iou_rotated_utils.h
│   │   │   │   ├── cocoeval/
│   │   │   │   │   ├── cocoeval.cpp
│   │   │   │   │   └── cocoeval.h
│   │   │   │   ├── cuda_version.cu
│   │   │   │   ├── deformable/
│   │   │   │   │   ├── deform_conv.h
│   │   │   │   │   ├── deform_conv_cuda.cu
│   │   │   │   │   └── deform_conv_cuda_kernel.cu
│   │   │   │   ├── nms_rotated/
│   │   │   │   │   ├── nms_rotated.h
│   │   │   │   │   ├── nms_rotated_cpu.cpp
│   │   │   │   │   └── nms_rotated_cuda.cu
│   │   │   │   └── vision.cpp
│   │   │   ├── deform_conv.py
│   │   │   ├── losses.py
│   │   │   ├── mask_ops.py
│   │   │   ├── nms.py
│   │   │   ├── roi_align.py
│   │   │   ├── roi_align_rotated.py
│   │   │   ├── rotated_boxes.py
│   │   │   ├── shape_spec.py
│   │   │   └── wrappers.py
│   │   ├── model_zoo/
│   │   │   ├── __init__.py
│   │   │   ├── configs/
│   │   │   │   ├── Base-RCNN-C4.yaml
│   │   │   │   ├── Base-RCNN-DilatedC5.yaml
│   │   │   │   ├── Base-RCNN-FPN.yaml
│   │   │   │   ├── Base-RetinaNet.yaml
│   │   │   │   ├── COCO-Detection/
│   │   │   │   │   ├── fast_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── faster_rcnn_R_101_C4_3x.yaml
│   │   │   │   │   ├── faster_rcnn_R_101_DC5_3x.yaml
│   │   │   │   │   ├── faster_rcnn_R_101_FPN_3x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_C4_1x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_C4_3x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_DC5_1x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_DC5_3x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── faster_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   ├── faster_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   │   │   ├── fcos_R_50_FPN_1x.py
│   │   │   │   │   ├── retinanet_R_101_FPN_3x.yaml
│   │   │   │   │   ├── retinanet_R_50_FPN_1x.py
│   │   │   │   │   ├── retinanet_R_50_FPN_1x.yaml
│   │   │   │   │   ├── retinanet_R_50_FPN_3x.yaml
│   │   │   │   │   ├── rpn_R_50_C4_1x.yaml
│   │   │   │   │   └── rpn_R_50_FPN_1x.yaml
│   │   │   │   ├── COCO-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_C4_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_101_DC5_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_C4_1x.py
│   │   │   │   │   ├── mask_rcnn_R_50_C4_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_C4_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_DC5_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_DC5_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_giou.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_fpn_1x.py
│   │   │   │   │   └── mask_rcnn_regnety_4gf_dds_fpn_1x.py
│   │   │   │   ├── COCO-Keypoints/
│   │   │   │   │   ├── Base-Keypoint-RCNN-FPN.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_101_FPN_3x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   └── keypoint_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   │   ├── COCO-PanopticSegmentation/
│   │   │   │   │   ├── Base-Panoptic-FPN.yaml
│   │   │   │   │   ├── panoptic_fpn_R_101_3x.yaml
│   │   │   │   │   ├── panoptic_fpn_R_50_1x.py
│   │   │   │   │   ├── panoptic_fpn_R_50_1x.yaml
│   │   │   │   │   └── panoptic_fpn_R_50_3x.yaml
│   │   │   │   ├── Cityscapes/
│   │   │   │   │   └── mask_rcnn_R_50_FPN.yaml
│   │   │   │   ├── Detectron1-Comparisons/
│   │   │   │   │   ├── README.md
│   │   │   │   │   ├── faster_rcnn_R_50_FPN_noaug_1x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_R_50_FPN_noaug_1x.yaml
│   │   │   │   ├── LVISv0.5-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   │   │   ├── LVISv1-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   │   │   ├── Misc/
│   │   │   │   │   ├── cascade_mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── cascade_mask_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   ├── cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_cls_agnostic.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_syncbn.yaml
│   │   │   │   │   ├── mmdet_mask_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_gn.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml
│   │   │   │   │   ├── semantic_R_50_FPN_1x.yaml
│   │   │   │   │   └── torchvision_imagenet_R_50.py
│   │   │   │   ├── PascalVOC-Detection/
│   │   │   │   │   ├── faster_rcnn_R_50_C4.yaml
│   │   │   │   │   └── faster_rcnn_R_50_FPN.yaml
│   │   │   │   ├── common/
│   │   │   │   │   ├── README.md
│   │   │   │   │   ├── coco_schedule.py
│   │   │   │   │   ├── data/
│   │   │   │   │   │   ├── coco.py
│   │   │   │   │   │   ├── coco_keypoint.py
│   │   │   │   │   │   ├── coco_panoptic_separated.py
│   │   │   │   │   │   └── constants.py
│   │   │   │   │   ├── optim.py
│   │   │   │   │   └── train.py
│   │   │   │   ├── new_baselines/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_50ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py
│   │   │   │   │   └── mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
│   │   │   │   └── quick_schedules/
│   │   │   │       ├── README.md
│   │   │   │       ├── cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── cascade_mask_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── fast_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── fast_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_normalized_training_acc_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_GCV_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_DC5_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_pred_boxes_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_inference_acc_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_instant_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_training_acc_test.yaml
│   │   │   │       ├── retinanet_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── retinanet_R_50_FPN_instant_test.yaml
│   │   │   │       ├── rpn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── rpn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── semantic_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── semantic_R_50_FPN_instant_test.yaml
│   │   │   │       └── semantic_R_50_FPN_training_acc_test.yaml
│   │   │   └── model_zoo.py
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── anchor_generator.py
│   │   │   ├── backbone/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── backbone.py
│   │   │   │   ├── build.py
│   │   │   │   ├── fpn.py
│   │   │   │   ├── mvit.py
│   │   │   │   ├── regnet.py
│   │   │   │   ├── resnet.py
│   │   │   │   ├── swin.py
│   │   │   │   ├── utils.py
│   │   │   │   └── vit.py
│   │   │   ├── box_regression.py
│   │   │   ├── matcher.py
│   │   │   ├── meta_arch/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── dense_detector.py
│   │   │   │   ├── fcos.py
│   │   │   │   ├── panoptic_fpn.py
│   │   │   │   ├── rcnn.py
│   │   │   │   ├── retinanet.py
│   │   │   │   └── semantic_seg.py
│   │   │   ├── mmdet_wrapper.py
│   │   │   ├── poolers.py
│   │   │   ├── postprocessing.py
│   │   │   ├── proposal_generator/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── proposal_utils.py
│   │   │   │   ├── rpn.py
│   │   │   │   └── rrpn.py
│   │   │   ├── roi_heads/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── box_head.py
│   │   │   │   ├── cascade_rcnn.py
│   │   │   │   ├── fast_rcnn.py
│   │   │   │   ├── keypoint_head.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── roi_heads.py
│   │   │   │   └── rotated_fast_rcnn.py
│   │   │   ├── sampling.py
│   │   │   └── test_time_augmentation.py
│   │   ├── projects/
│   │   │   ├── README.md
│   │   │   └── __init__.py
│   │   ├── solver/
│   │   │   ├── __init__.py
│   │   │   ├── build.py
│   │   │   └── lr_scheduler.py
│   │   ├── structures/
│   │   │   ├── __init__.py
│   │   │   ├── boxes.py
│   │   │   ├── image_list.py
│   │   │   ├── instances.py
│   │   │   ├── keypoints.py
│   │   │   ├── masks.py
│   │   │   └── rotated_boxes.py
│   │   ├── tracking/
│   │   │   ├── __init__.py
│   │   │   ├── base_tracker.py
│   │   │   ├── bbox_iou_tracker.py
│   │   │   ├── hungarian_tracker.py
│   │   │   ├── iou_weighted_hungarian_bbox_iou_tracker.py
│   │   │   ├── utils.py
│   │   │   └── vanilla_hungarian_bbox_iou_tracker.py
│   │   └── utils/
│   │       ├── README.md
│   │       ├── __init__.py
│   │       ├── analysis.py
│   │       ├── collect_env.py
│   │       ├── colormap.py
│   │       ├── comm.py
│   │       ├── develop.py
│   │       ├── env.py
│   │       ├── events.py
│   │       ├── file_io.py
│   │       ├── logger.py
│   │       ├── memory.py
│   │       ├── registry.py
│   │       ├── serialize.py
│   │       ├── testing.py
│   │       ├── video_visualizer.py
│   │       └── visualizer.py
│   ├── dev/
│   │   ├── README.md
│   │   ├── linter.sh
│   │   ├── packaging/
│   │   │   ├── README.md
│   │   │   ├── build_all_wheels.sh
│   │   │   ├── build_wheel.sh
│   │   │   ├── gen_install_table.py
│   │   │   ├── gen_wheel_index.sh
│   │   │   └── pkg_helpers.bash
│   │   ├── parse_results.sh
│   │   ├── run_inference_tests.sh
│   │   └── run_instant_tests.sh
│   ├── docker/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── deploy.Dockerfile
│   │   └── docker-compose.yml
│   ├── docs/
│   │   ├── .gitignore
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── _static/
│   │   │   └── css/
│   │   │       └── custom.css
│   │   ├── conf.py
│   │   ├── index.rst
│   │   ├── modules/
│   │   │   ├── checkpoint.rst
│   │   │   ├── config.rst
│   │   │   ├── data.rst
│   │   │   ├── data_transforms.rst
│   │   │   ├── engine.rst
│   │   │   ├── evaluation.rst
│   │   │   ├── export.rst
│   │   │   ├── fvcore.rst
│   │   │   ├── index.rst
│   │   │   ├── layers.rst
│   │   │   ├── model_zoo.rst
│   │   │   ├── modeling.rst
│   │   │   ├── solver.rst
│   │   │   ├── structures.rst
│   │   │   └── utils.rst
│   │   ├── notes/
│   │   │   ├── benchmarks.md
│   │   │   ├── changelog.md
│   │   │   ├── compatibility.md
│   │   │   ├── contributing.md
│   │   │   └── index.rst
│   │   ├── requirements.txt
│   │   └── tutorials/
│   │       ├── README.md
│   │       ├── augmentation.md
│   │       ├── builtin_datasets.md
│   │       ├── configs.md
│   │       ├── data_loading.md
│   │       ├── datasets.md
│   │       ├── deployment.md
│   │       ├── evaluation.md
│   │       ├── extend.md
│   │       ├── getting_started.md
│   │       ├── index.rst
│   │       ├── install.md
│   │       ├── lazyconfigs.md
│   │       ├── models.md
│   │       ├── training.md
│   │       └── write-models.md
│   ├── projects/
│   │   ├── DeepLab/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   └── Cityscapes-SemanticSegmentation/
│   │   │   │       ├── Base-DeepLabV3-OS16-Semantic.yaml
│   │   │   │       ├── deeplab_v3_R_103_os16_mg124_poly_90k_bs16.yaml
│   │   │   │       └── deeplab_v3_plus_R_103_os16_mg124_poly_90k_bs16.yaml
│   │   │   ├── deeplab/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build_solver.py
│   │   │   │   ├── config.py
│   │   │   │   ├── loss.py
│   │   │   │   ├── lr_scheduler.py
│   │   │   │   ├── resnet.py
│   │   │   │   └── semantic_seg.py
│   │   │   └── train_net.py
│   │   ├── DensePose/
│   │   │   ├── README.md
│   │   │   ├── apply_net.py
│   │   │   ├── configs/
│   │   │   │   ├── Base-DensePose-RCNN-FPN.yaml
│   │   │   │   ├── HRNet/
│   │   │   │   │   ├── densepose_rcnn_HRFPN_HRNet_w32_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_HRFPN_HRNet_w40_s1x.yaml
│   │   │   │   │   └── densepose_rcnn_HRFPN_HRNet_w48_s1x.yaml
│   │   │   │   ├── cse/
│   │   │   │   │   ├── Base-DensePose-RCNN-FPN-Human.yaml
│   │   │   │   │   ├── Base-DensePose-RCNN-FPN.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_CA_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_CA_finetune_4k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_i2m_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_m2m_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_4k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_maskonly_24k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_chimps_finetune_4k.yaml
│   │   │   │   │   └── densepose_rcnn_R_50_FPN_soft_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x_legacy.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x_legacy.yaml
│   │   │   │   ├── evolution/
│   │   │   │   │   ├── Base-RCNN-FPN-Atop10P_CA.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_coarsesegm.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_finesegm.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform.yaml
│   │   │   │   │   └── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uv.yaml
│   │   │   │   └── quick_schedules/
│   │   │   │       ├── cse/
│   │   │   │       │   ├── densepose_rcnn_R_50_FPN_DL_instant_test.yaml
│   │   │   │       │   └── densepose_rcnn_R_50_FPN_soft_animals_finetune_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_HRFPN_HRNet_w32_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_DL_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_TTA_inference_acc_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_WC1_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_WC2_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       └── densepose_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   ├── densepose/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── converters/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── base.py
│   │   │   │   │   ├── builtin.py
│   │   │   │   │   ├── chart_output_hflip.py
│   │   │   │   │   ├── chart_output_to_chart_result.py
│   │   │   │   │   ├── hflip.py
│   │   │   │   │   ├── segm_to_mask.py
│   │   │   │   │   ├── to_chart_result.py
│   │   │   │   │   └── to_mask.py
│   │   │   │   ├── data/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── build.py
│   │   │   │   │   ├── combined_loader.py
│   │   │   │   │   ├── dataset_mapper.py
│   │   │   │   │   ├── datasets/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── builtin.py
│   │   │   │   │   │   ├── chimpnsee.py
│   │   │   │   │   │   ├── coco.py
│   │   │   │   │   │   ├── dataset_type.py
│   │   │   │   │   │   └── lvis.py
│   │   │   │   │   ├── image_list_dataset.py
│   │   │   │   │   ├── inference_based_loader.py
│   │   │   │   │   ├── meshes/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── builtin.py
│   │   │   │   │   │   └── catalog.py
│   │   │   │   │   ├── samplers/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── densepose_base.py
│   │   │   │   │   │   ├── densepose_confidence_based.py
│   │   │   │   │   │   ├── densepose_cse_base.py
│   │   │   │   │   │   ├── densepose_cse_confidence_based.py
│   │   │   │   │   │   ├── densepose_cse_uniform.py
│   │   │   │   │   │   ├── densepose_uniform.py
│   │   │   │   │   │   ├── mask_from_densepose.py
│   │   │   │   │   │   └── prediction_to_gt.py
│   │   │   │   │   ├── transform/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   └── image.py
│   │   │   │   │   ├── utils.py
│   │   │   │   │   └── video/
│   │   │   │   │       ├── __init__.py
│   │   │   │   │       ├── frame_selector.py
│   │   │   │   │       └── video_keyframe_dataset.py
│   │   │   │   ├── engine/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   └── trainer.py
│   │   │   │   ├── evaluation/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── d2_evaluator_adapter.py
│   │   │   │   │   ├── densepose_coco_evaluation.py
│   │   │   │   │   ├── evaluator.py
│   │   │   │   │   ├── mesh_alignment_evaluator.py
│   │   │   │   │   └── tensor_storage.py
│   │   │   │   ├── modeling/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── build.py
│   │   │   │   │   ├── confidence.py
│   │   │   │   │   ├── cse/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── embedder.py
│   │   │   │   │   │   ├── utils.py
│   │   │   │   │   │   ├── vertex_direct_embedder.py
│   │   │   │   │   │   └── vertex_feature_embedder.py
│   │   │   │   │   ├── densepose_checkpoint.py
│   │   │   │   │   ├── filter.py
│   │   │   │   │   ├── hrfpn.py
│   │   │   │   │   ├── hrnet.py
│   │   │   │   │   ├── inference.py
│   │   │   │   │   ├── losses/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── chart.py
│   │   │   │   │   │   ├── chart_with_confidences.py
│   │   │   │   │   │   ├── cse.py
│   │   │   │   │   │   ├── cycle_pix2shape.py
│   │   │   │   │   │   ├── cycle_shape2shape.py
│   │   │   │   │   │   ├── embed.py
│   │   │   │   │   │   ├── embed_utils.py
│   │   │   │   │   │   ├── mask.py
│   │   │   │   │   │   ├── mask_or_segm.py
│   │   │   │   │   │   ├── registry.py
│   │   │   │   │   │   ├── segm.py
│   │   │   │   │   │   ├── soft_embed.py
│   │   │   │   │   │   └── utils.py
│   │   │   │   │   ├── predictors/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── chart.py
│   │   │   │   │   │   ├── chart_confidence.py
│   │   │   │   │   │   ├── chart_with_confidence.py
│   │   │   │   │   │   ├── cse.py
│   │   │   │   │   │   ├── cse_confidence.py
│   │   │   │   │   │   ├── cse_with_confidence.py
│   │   │   │   │   │   └── registry.py
│   │   │   │   │   ├── roi_heads/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── deeplab.py
│   │   │   │   │   │   ├── registry.py
│   │   │   │   │   │   ├── roi_head.py
│   │   │   │   │   │   └── v1convx.py
│   │   │   │   │   ├── test_time_augmentation.py
│   │   │   │   │   └── utils.py
│   │   │   │   ├── structures/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── chart.py
│   │   │   │   │   ├── chart_confidence.py
│   │   │   │   │   ├── chart_result.py
│   │   │   │   │   ├── cse.py
│   │   │   │   │   ├── cse_confidence.py
│   │   │   │   │   ├── data_relative.py
│   │   │   │   │   ├── list.py
│   │   │   │   │   ├── mesh.py
│   │   │   │   │   └── transform_data.py
│   │   │   │   ├── utils/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── dbhelper.py
│   │   │   │   │   ├── logger.py
│   │   │   │   │   └── transform.py
│   │   │   │   └── vis/
│   │   │   │       ├── __init__.py
│   │   │   │       ├── base.py
│   │   │   │       ├── bounding_box.py
│   │   │   │       ├── densepose_data_points.py
│   │   │   │       ├── densepose_outputs_iuv.py
│   │   │   │       ├── densepose_outputs_vertex.py
│   │   │   │       ├── densepose_results.py
│   │   │   │       ├── densepose_results_textures.py
│   │   │   │       └── extractor.py
│   │   │   ├── dev/
│   │   │   │   ├── README.md
│   │   │   │   ├── run_inference_tests.sh
│   │   │   │   └── run_instant_tests.sh
│   │   │   ├── doc/
│   │   │   │   ├── BOOTSTRAPPING_PIPELINE.md
│   │   │   │   ├── DENSEPOSE_CSE.md
│   │   │   │   ├── DENSEPOSE_DATASETS.md
│   │   │   │   ├── DENSEPOSE_IUV.md
│   │   │   │   ├── GETTING_STARTED.md
│   │   │   │   ├── RELEASE_2020_04.md
│   │   │   │   ├── RELEASE_2021_03.md
│   │   │   │   ├── RELEASE_2021_06.md
│   │   │   │   ├── TOOL_APPLY_NET.md
│   │   │   │   └── TOOL_QUERY_DB.md
│   │   │   ├── query_db.py
│   │   │   ├── setup.py
│   │   │   ├── tests/
│   │   │   │   ├── common.py
│   │   │   │   ├── test_chart_based_annotations_accumulator.py
│   │   │   │   ├── test_combine_data_loader.py
│   │   │   │   ├── test_cse_annotations_accumulator.py
│   │   │   │   ├── test_dataset_loaded_annotations.py
│   │   │   │   ├── test_frame_selector.py
│   │   │   │   ├── test_image_list_dataset.py
│   │   │   │   ├── test_image_resize_transform.py
│   │   │   │   ├── test_model_e2e.py
│   │   │   │   ├── test_setup.py
│   │   │   │   ├── test_structures.py
│   │   │   │   ├── test_tensor_storage.py
│   │   │   │   └── test_video_keyframe_dataset.py
│   │   │   └── train_net.py
│   │   ├── MViTv2/
│   │   │   ├── README.md
│   │   │   └── configs/
│   │   │       ├── cascade_mask_rcnn_mvitv2_b_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_b_in21k_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_h_in21k_lsj_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_s_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_t_3x.py
│   │   │       ├── common/
│   │   │       │   ├── coco_loader.py
│   │   │       │   └── coco_loader_lsj.py
│   │   │       └── mask_rcnn_mvitv2_t_3x.py
│   │   ├── Panoptic-DeepLab/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── COCO-PanopticSegmentation/
│   │   │   │   │   └── panoptic_deeplab_R_52_os16_mg124_poly_200k_bs64_crop_640_640_coco_dsconv.yaml
│   │   │   │   └── Cityscapes-PanopticSegmentation/
│   │   │   │       ├── Base-PanopticDeepLab-OS16.yaml
│   │   │   │       ├── panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml
│   │   │   │       └── panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024_dsconv.yaml
│   │   │   ├── panoptic_deeplab/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── dataset_mapper.py
│   │   │   │   ├── panoptic_seg.py
│   │   │   │   ├── post_processing.py
│   │   │   │   └── target_generator.py
│   │   │   └── train_net.py
│   │   ├── PointRend/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── InstanceSegmentation/
│   │   │   │   │   ├── Base-Implicit-PointRend.yaml
│   │   │   │   │   ├── Base-PointRend-RCNN-FPN.yaml
│   │   │   │   │   ├── implicit_pointrend_R_50_FPN_1x_coco.yaml
│   │   │   │   │   ├── implicit_pointrend_R_50_FPN_3x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_101_FPN_3x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_1x_cityscapes.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_1x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_3x_coco.yaml
│   │   │   │   │   └── pointrend_rcnn_X_101_32x8d_FPN_3x_coco.yaml
│   │   │   │   └── SemanticSegmentation/
│   │   │   │       ├── Base-PointRend-Semantic-FPN.yaml
│   │   │   │       └── pointrend_semantic_R_101_FPN_1x_cityscapes.yaml
│   │   │   ├── point_rend/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── color_augmentation.py
│   │   │   │   ├── config.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── point_features.py
│   │   │   │   ├── point_head.py
│   │   │   │   ├── roi_heads.py
│   │   │   │   └── semantic_seg.py
│   │   │   └── train_net.py
│   │   ├── PointSup/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── implicit_pointrend_R_50_FPN_3x_point_sup_point_aug_coco.yaml
│   │   │   │   ├── mask_rcnn_R_50_FPN_3x_point_sup_coco.yaml
│   │   │   │   └── mask_rcnn_R_50_FPN_3x_point_sup_point_aug_coco.yaml
│   │   │   ├── point_sup/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── dataset_mapper.py
│   │   │   │   ├── detection_utils.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── point_utils.py
│   │   │   │   └── register_point_annotations.py
│   │   │   ├── tools/
│   │   │   │   └── prepare_coco_point_annotations_without_masks.py
│   │   │   └── train_net.py
│   │   ├── README.md
│   │   ├── Rethinking-BatchNorm/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── mask_rcnn_BNhead.py
│   │   │   │   ├── mask_rcnn_BNhead_batch_stats.py
│   │   │   │   ├── mask_rcnn_BNhead_shuffle.py
│   │   │   │   ├── mask_rcnn_SyncBNhead.py
│   │   │   │   ├── retinanet_SyncBNhead.py
│   │   │   │   └── retinanet_SyncBNhead_SharedTraining.py
│   │   │   └── retinanet-eval-domain-specific.py
│   │   ├── TensorMask/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-TensorMask.yaml
│   │   │   │   ├── tensormask_R_50_FPN_1x.yaml
│   │   │   │   └── tensormask_R_50_FPN_6x.yaml
│   │   │   ├── setup.py
│   │   │   ├── tensormask/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── arch.py
│   │   │   │   ├── config.py
│   │   │   │   └── layers/
│   │   │   │       ├── __init__.py
│   │   │   │       ├── csrc/
│   │   │   │       │   ├── SwapAlign2Nat/
│   │   │   │       │   │   ├── SwapAlign2Nat.h
│   │   │   │       │   │   └── SwapAlign2Nat_cuda.cu
│   │   │   │       │   └── vision.cpp
│   │   │   │       └── swap_align2nat.py
│   │   │   ├── tests/
│   │   │   │   ├── __init__.py
│   │   │   │   └── test_swap_align2nat.py
│   │   │   └── train_net.py
│   │   ├── TridentNet/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-TridentNet-Fast-C4.yaml
│   │   │   │   ├── tridentnet_fast_R_101_C4_3x.yaml
│   │   │   │   ├── tridentnet_fast_R_50_C4_1x.yaml
│   │   │   │   └── tridentnet_fast_R_50_C4_3x.yaml
│   │   │   ├── train_net.py
│   │   │   └── tridentnet/
│   │   │       ├── __init__.py
│   │   │       ├── config.py
│   │   │       ├── trident_backbone.py
│   │   │       ├── trident_conv.py
│   │   │       ├── trident_rcnn.py
│   │   │       └── trident_rpn.py
│   │   └── ViTDet/
│   │       ├── README.md
│   │       └── configs/
│   │           ├── COCO/
│   │           │   ├── cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_h_in21k_36ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_b_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_h_75ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_l_100ep.py
│   │           │   ├── mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── mask_rcnn_vitdet_h_75ep.py
│   │           │   └── mask_rcnn_vitdet_l_100ep.py
│   │           ├── LVIS/
│   │           │   ├── cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_h_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_b_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_h_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_l_100ep.py
│   │           │   ├── mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── mask_rcnn_vitdet_h_100ep.py
│   │           │   └── mask_rcnn_vitdet_l_100ep.py
│   │           └── common/
│   │               └── coco_loader_lsj.py
│   ├── setup.cfg
│   ├── setup.py
│   ├── tests/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config/
│   │   │   ├── dir1/
│   │   │   │   ├── dir1_a.py
│   │   │   │   └── dir1_b.py
│   │   │   ├── root_cfg.py
│   │   │   ├── test_instantiate_config.py
│   │   │   ├── test_lazy_config.py
│   │   │   └── test_yacs_config.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── test_coco.py
│   │   │   ├── test_coco_evaluation.py
│   │   │   ├── test_dataset.py
│   │   │   ├── test_detection_utils.py
│   │   │   ├── test_rotation_transform.py
│   │   │   ├── test_sampler.py
│   │   │   └── test_transforms.py
│   │   ├── export/
│   │   │   └── test_c10.py
│   │   ├── layers/
│   │   │   ├── __init__.py
│   │   │   ├── test_blocks.py
│   │   │   ├── test_deformable.py
│   │   │   ├── test_losses.py
│   │   │   ├── test_mask_ops.py
│   │   │   ├── test_nms.py
│   │   │   ├── test_nms_rotated.py
│   │   │   ├── test_roi_align.py
│   │   │   └── test_roi_align_rotated.py
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── test_anchor_generator.py
│   │   │   ├── test_backbone.py
│   │   │   ├── test_box2box_transform.py
│   │   │   ├── test_fast_rcnn.py
│   │   │   ├── test_matcher.py
│   │   │   ├── test_mmdet.py
│   │   │   ├── test_model_e2e.py
│   │   │   ├── test_roi_heads.py
│   │   │   ├── test_roi_pooler.py
│   │   │   └── test_rpn.py
│   │   ├── structures/
│   │   │   ├── __init__.py
│   │   │   ├── test_boxes.py
│   │   │   ├── test_imagelist.py
│   │   │   ├── test_instances.py
│   │   │   ├── test_keypoints.py
│   │   │   ├── test_masks.py
│   │   │   └── test_rotated_boxes.py
│   │   ├── test_checkpoint.py
│   │   ├── test_engine.py
│   │   ├── test_events.py
│   │   ├── test_export_caffe2.py
│   │   ├── test_export_onnx.py
│   │   ├── test_export_torchscript.py
│   │   ├── test_model_analysis.py
│   │   ├── test_model_zoo.py
│   │   ├── test_packaging.py
│   │   ├── test_registry.py
│   │   ├── test_scheduler.py
│   │   ├── test_solver.py
│   │   ├── test_visualizer.py
│   │   └── tracking/
│   │       ├── __init__.py
│   │       ├── test_bbox_iou_tracker.py
│   │       ├── test_hungarian_tracker.py
│   │       ├── test_iou_weighted_hungarian_bbox_iou_tracker.py
│   │       └── test_vanilla_hungarian_bbox_iou_tracker.py
│   └── tools/
│       ├── README.md
│       ├── __init__.py
│       ├── analyze_model.py
│       ├── benchmark.py
│       ├── convert-torchvision-to-d2.py
│       ├── deploy/
│       │   ├── CMakeLists.txt
│       │   ├── README.md
│       │   ├── export_model.py
│       │   └── torchscript_mask_rcnn.cpp
│       ├── lazyconfig_train_net.py
│       ├── lightning_train_net.py
│       ├── plain_train_net.py
│       ├── train_net.py
│       ├── visualize_data.py
│       └── visualize_json_results.py
├── prepare_datasets.md
├── requirements.txt
├── tools/
│   ├── convert-thirdparty-pretrained-model-to-d2.py
│   ├── download_cc.py
│   ├── get_cc_tags.py
│   ├── get_coco_zeroshot.py
│   ├── get_lvis_cat_info.py
│   ├── get_tags_for_VLDet_concepts.py
│   └── remove_lvis_rare.py
├── train_net.py
└── vldet/
    ├── __init__.py
    ├── config.py
    ├── custom_solver.py
    ├── data/
    │   ├── custom_build_augmentation.py
    │   ├── custom_dataset_dataloader.py
    │   ├── custom_dataset_mapper.py
    │   ├── datasets/
    │   │   ├── cc.py
    │   │   ├── coco_zeroshot.py
    │   │   ├── imagenet.py
    │   │   ├── lvis_22k_categories.py
    │   │   ├── lvis_v1.py
    │   │   ├── objects365.py
    │   │   ├── oid.py
    │   │   └── register_oid.py
    │   ├── tar_dataset.py
    │   └── transforms/
    │       ├── custom_augmentation_impl.py
    │       └── custom_transform.py
    ├── evaluation/
    │   ├── custom_coco_eval.py
    │   └── oideval.py
    ├── modeling/
    │   ├── backbone/
    │   │   ├── swintransformer.py
    │   │   └── timm.py
    │   ├── debug.py
    │   ├── meta_arch/
    │   │   └── custom_rcnn.py
    │   ├── roi_heads/
    │   │   ├── res5_roi_heads.py
    │   │   ├── vldet_fast_rcnn.py
    │   │   ├── vldet_roi_heads.py
    │   │   └── zero_shot_classifier.py
    │   ├── text/
    │   │   └── text_encoder.py
    │   └── utils.py
    └── predictor.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
/CenterNet2
CenterNet2
models
configs-experimental
experiments
# output dir
index.html
data/*
slurm/
slurm
slurm-output
slurm-output/
output
instant_test_output
inference_test_output


*.png
*.diff
*.jpg
!/projects/DensePose/doc/images/*.jpg

# compilation and distribution
__pycache__
_ext
*.pyc
*.pyd
*.so
*.dll
*.egg-info/
build/
dist/
wheels/

# pytorch/python/numpy formats
*.pth
!models/*.pth
*.pkl
*.ts
model_ts*.txt

# ipython/jupyter notebooks
*.ipynb
**/.ipynb_checkpoints/

# Editor temporaries
*.swn
*.swo
*.swp
*~

# editor settings
.idea
.vscode
_darcs

# project dirs
/datasets/*
/projects/*/datasets
/snippet


================================================
FILE: LICENSE
================================================
VLDet (c) by Chuang Lin

VLDet is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

You should have received a copy of the license along with this
work. If not, see <http://creativecommons.org/licenses/by-nc-sa/4.0/>.

Attribution-NonCommercial-ShareAlike 4.0 International

=======================================================================

Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.

Using Creative Commons Public Licenses

Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.

     Considerations for licensors: Our public licenses are
     intended for use by those authorized to give the public
     permission to use material in ways otherwise restricted by
     copyright and certain other rights. Our licenses are
     irrevocable. Licensors should read and understand the terms
     and conditions of the license they choose before applying it.
     Licensors should also secure all rights necessary before
     applying our licenses so that the public can reuse the
     material as expected. Licensors should clearly mark any
     material not subject to the license. This includes other CC-
     licensed material, or material used under an exception or
     limitation to copyright. More considerations for licensors:
	wiki.creativecommons.org/Considerations_for_licensors

     Considerations for the public: By using one of our public
     licenses, a licensor grants the public permission to use the
     licensed material under specified terms and conditions. If
     the licensor's permission is not necessary for any reason--for
     example, because of any applicable exception or limitation to
     copyright--then that use is not regulated by the license. Our
     licenses grant only permissions under copyright and certain
     other rights that a licensor has authority to grant. Use of
     the licensed material may still be restricted for other
     reasons, including because others have copyright or other
     rights in the material. A licensor may make special requests,
     such as asking that all changes be marked or described.
     Although not required by our licenses, you are encouraged to
     respect those requests where reasonable. More considerations
     for the public:
	wiki.creativecommons.org/Considerations_for_licensees

=======================================================================

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Public License

By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International Public License
("Public License"). To the extent this Public License may be
interpreted as a contract, You are granted the Licensed Rights in
consideration of Your acceptance of these terms and conditions, and the
Licensor grants You such rights in consideration of benefits the
Licensor receives from making the Licensed Material available under
these terms and conditions.


Section 1 -- Definitions.

  a. Adapted Material means material subject to Copyright and Similar
     Rights that is derived from or based upon the Licensed Material
     and in which the Licensed Material is translated, altered,
     arranged, transformed, or otherwise modified in a manner requiring
     permission under the Copyright and Similar Rights held by the
     Licensor. For purposes of this Public License, where the Licensed
     Material is a musical work, performance, or sound recording,
     Adapted Material is always produced where the Licensed Material is
     synched in timed relation with a moving image.

  b. Adapter's License means the license You apply to Your Copyright
     and Similar Rights in Your contributions to Adapted Material in
     accordance with the terms and conditions of this Public License.

  c. BY-NC-SA Compatible License means a license listed at
     creativecommons.org/compatiblelicenses, approved by Creative
     Commons as essentially the equivalent of this Public License.

  d. Copyright and Similar Rights means copyright and/or similar rights
     closely related to copyright including, without limitation,
     performance, broadcast, sound recording, and Sui Generis Database
     Rights, without regard to how the rights are labeled or
     categorized. For purposes of this Public License, the rights
     specified in Section 2(b)(1)-(2) are not Copyright and Similar
     Rights.

  e. Effective Technological Measures means those measures that, in the
     absence of proper authority, may not be circumvented under laws
     fulfilling obligations under Article 11 of the WIPO Copyright
     Treaty adopted on December 20, 1996, and/or similar international
     agreements.

  f. Exceptions and Limitations means fair use, fair dealing, and/or
     any other exception or limitation to Copyright and Similar Rights
     that applies to Your use of the Licensed Material.

  g. License Elements means the license attributes listed in the name
     of a Creative Commons Public License. The License Elements of this
     Public License are Attribution, NonCommercial, and ShareAlike.

  h. Licensed Material means the artistic or literary work, database,
     or other material to which the Licensor applied this Public
     License.

  i. Licensed Rights means the rights granted to You subject to the
     terms and conditions of this Public License, which are limited to
     all Copyright and Similar Rights that apply to Your use of the
     Licensed Material and that the Licensor has authority to license.

  j. Licensor means the individual(s) or entity(ies) granting rights
     under this Public License.

  k. NonCommercial means not primarily intended for or directed towards
     commercial advantage or monetary compensation. For purposes of
     this Public License, the exchange of the Licensed Material for
     other material subject to Copyright and Similar Rights by digital
     file-sharing or similar means is NonCommercial provided there is
     no payment of monetary compensation in connection with the
     exchange.

  l. Share means to provide material to the public by any means or
     process that requires permission under the Licensed Rights, such
     as reproduction, public display, public performance, distribution,
     dissemination, communication, or importation, and to make material
     available to the public including in ways that members of the
     public may access the material from a place and at a time
     individually chosen by them.

  m. Sui Generis Database Rights means rights other than copyright
     resulting from Directive 96/9/EC of the European Parliament and of
     the Council of 11 March 1996 on the legal protection of databases,
     as amended and/or succeeded, as well as other essentially
     equivalent rights anywhere in the world.

  n. You means the individual or entity exercising the Licensed Rights
     under this Public License. Your has a corresponding meaning.


Section 2 -- Scope.

  a. License grant.

       1. Subject to the terms and conditions of this Public License,
          the Licensor hereby grants You a worldwide, royalty-free,
          non-sublicensable, non-exclusive, irrevocable license to
          exercise the Licensed Rights in the Licensed Material to:

            a. reproduce and Share the Licensed Material, in whole or
               in part, for NonCommercial purposes only; and

            b. produce, reproduce, and Share Adapted Material for
               NonCommercial purposes only.

       2. Exceptions and Limitations. For the avoidance of doubt, where
          Exceptions and Limitations apply to Your use, this Public
          License does not apply, and You do not need to comply with
          its terms and conditions.

       3. Term. The term of this Public License is specified in Section
          6(a).

       4. Media and formats; technical modifications allowed. The
          Licensor authorizes You to exercise the Licensed Rights in
          all media and formats whether now known or hereafter created,
          and to make technical modifications necessary to do so. The
          Licensor waives and/or agrees not to assert any right or
          authority to forbid You from making technical modifications
          necessary to exercise the Licensed Rights, including
          technical modifications necessary to circumvent Effective
          Technological Measures. For purposes of this Public License,
          simply making modifications authorized by this Section 2(a)
          (4) never produces Adapted Material.

       5. Downstream recipients.

            a. Offer from the Licensor -- Licensed Material. Every
               recipient of the Licensed Material automatically
               receives an offer from the Licensor to exercise the
               Licensed Rights under the terms and conditions of this
               Public License.

            b. Additional offer from the Licensor -- Adapted Material.
               Every recipient of Adapted Material from You
               automatically receives an offer from the Licensor to
               exercise the Licensed Rights in the Adapted Material
               under the conditions of the Adapter's License You apply.

            c. No downstream restrictions. You may not offer or impose
               any additional or different terms or conditions on, or
               apply any Effective Technological Measures to, the
               Licensed Material if doing so restricts exercise of the
               Licensed Rights by any recipient of the Licensed
               Material.

       6. No endorsement. Nothing in this Public License constitutes or
          may be construed as permission to assert or imply that You
          are, or that Your use of the Licensed Material is, connected
          with, or sponsored, endorsed, or granted official status by,
          the Licensor or others designated to receive attribution as
          provided in Section 3(a)(1)(A)(i).

  b. Other rights.

       1. Moral rights, such as the right of integrity, are not
          licensed under this Public License, nor are publicity,
          privacy, and/or other similar personality rights; however, to
          the extent possible, the Licensor waives and/or agrees not to
          assert any such rights held by the Licensor to the limited
          extent necessary to allow You to exercise the Licensed
          Rights, but not otherwise.

       2. Patent and trademark rights are not licensed under this
          Public License.

       3. To the extent possible, the Licensor waives any right to
          collect royalties from You for the exercise of the Licensed
          Rights, whether directly or through a collecting society
          under any voluntary or waivable statutory or compulsory
          licensing scheme. In all other cases the Licensor expressly
          reserves any right to collect such royalties, including when
          the Licensed Material is used other than for NonCommercial
          purposes.


Section 3 -- License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the
following conditions.

  a. Attribution.

       1. If You Share the Licensed Material (including in modified
          form), You must:

            a. retain the following if it is supplied by the Licensor
               with the Licensed Material:

                 i. identification of the creator(s) of the Licensed
                    Material and any others designated to receive
                    attribution, in any reasonable manner requested by
                    the Licensor (including by pseudonym if
                    designated);

                ii. a copyright notice;

               iii. a notice that refers to this Public License;

                iv. a notice that refers to the disclaimer of
                    warranties;

                 v. a URI or hyperlink to the Licensed Material to the
                    extent reasonably practicable;

            b. indicate if You modified the Licensed Material and
               retain an indication of any previous modifications; and

            c. indicate the Licensed Material is licensed under this
               Public License, and include the text of, or the URI or
               hyperlink to, this Public License.

       2. You may satisfy the conditions in Section 3(a)(1) in any
          reasonable manner based on the medium, means, and context in
          which You Share the Licensed Material. For example, it may be
          reasonable to satisfy the conditions by providing a URI or
          hyperlink to a resource that includes the required
          information.
       3. If requested by the Licensor, You must remove any of the
          information required by Section 3(a)(1)(A) to the extent
          reasonably practicable.

  b. ShareAlike.

     In addition to the conditions in Section 3(a), if You Share
     Adapted Material You produce, the following conditions also apply.

       1. The Adapter's License You apply must be a Creative Commons
          license with the same License Elements, this version or
          later, or a BY-NC-SA Compatible License.

       2. You must include the text of, or the URI or hyperlink to, the
          Adapter's License You apply. You may satisfy this condition
          in any reasonable manner based on the medium, means, and
          context in which You Share Adapted Material.

       3. You may not offer or impose any additional or different terms
          or conditions on, or apply any Effective Technological
          Measures to, Adapted Material that restrict exercise of the
          rights granted under the Adapter's License You apply.


Section 4 -- Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:

  a. for the avoidance of doubt, Section 2(a)(1) grants You the right
     to extract, reuse, reproduce, and Share all or a substantial
     portion of the contents of the database for NonCommercial purposes
     only;

  b. if You include all or a substantial portion of the database
     contents in a database in which You have Sui Generis Database
     Rights, then the database in which You have Sui Generis Database
     Rights (but not its individual contents) is Adapted Material,
     including for purposes of Section 3(b); and

  c. You must comply with the conditions in Section 3(a) if You Share
     all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.


Section 5 -- Disclaimer of Warranties and Limitation of Liability.

  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.

  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.

  c. The disclaimer of warranties and limitation of liability provided
     above shall be interpreted in a manner that, to the extent
     possible, most closely approximates an absolute disclaimer and
     waiver of all liability.


Section 6 -- Term and Termination.

  a. This Public License applies for the term of the Copyright and
     Similar Rights licensed here. However, if You fail to comply with
     this Public License, then Your rights under this Public License
     terminate automatically.

  b. Where Your right to use the Licensed Material has terminated under
     Section 6(a), it reinstates:

       1. automatically as of the date the violation is cured, provided
          it is cured within 30 days of Your discovery of the
          violation; or

       2. upon express reinstatement by the Licensor.

     For the avoidance of doubt, this Section 6(b) does not affect any
     right the Licensor may have to seek remedies for Your violations
     of this Public License.

  c. For the avoidance of doubt, the Licensor may also offer the
     Licensed Material under separate terms or conditions or stop
     distributing the Licensed Material at any time; however, doing so
     will not terminate this Public License.

  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
     License.


Section 7 -- Other Terms and Conditions.

  a. The Licensor shall not be bound by any additional or different
     terms or conditions communicated by You unless expressly agreed.

  b. Any arrangements, understandings, or agreements regarding the
     Licensed Material not stated herein are separate from and
     independent of the terms and conditions of this Public License.


Section 8 -- Interpretation.

  a. For the avoidance of doubt, this Public License does not, and
     shall not be interpreted to, reduce, limit, restrict, or impose
     conditions on any use of the Licensed Material that could lawfully
     be made without permission under this Public License.

  b. To the extent possible, if any provision of this Public License is
     deemed unenforceable, it shall be automatically reformed to the
     minimum extent necessary to make it enforceable. If the provision
     cannot be reformed, it shall be severed from this Public License
     without affecting the enforceability of the remaining terms and
     conditions.

  c. No term or condition of this Public License will be waived and no
     failure to comply consented to unless expressly agreed to by the
     Licensor.

  d. Nothing in this Public License constitutes or may be interpreted
     as a limitation upon, or waiver of, any privileges and immunities
     that apply to the Licensor or You, including from the legal
     processes of any jurisdiction or authority.

=======================================================================

Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.

Creative Commons may be contacted at creativecommons.org.

================================================
FILE: README.md
================================================
# VLDet: Learning Object-Language Alignments for Open-Vocabulary Object Detection

<p align="center"> <img src='docs/readme.jpeg' align="center" height="200px"> </p>

> [**Learning Object-Language Alignments for Open-Vocabulary Object Detection**](https://arxiv.org/abs/2211.14843),               
> Chuang Lin, Peize Sun, Yi Jiang, Ping Luo, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan, Jianfei Cai,    
> *ICLR 2023 ([https://arxiv.org/abs/2211.14843](https://arxiv.org/abs/2211.14843))*   

## Highlight

We are excited to announce that our paper was accepted to ICLR 2023! 🥳🥳🥳


## A quick explanatory video demo of VLDet


https://user-images.githubusercontent.com/6366788/218620999-1eb5c5eb-0479-4dcc-88ca-863f34de25a0.mp4


## Performance

### Open-Vocabulary on COCO

<p align="center">
<img src="https://user-images.githubusercontent.com/6366788/214261751-3007d40c-5a5d-4efd-8acd-7f6a4ea62ce3.png" width=68%>
</p>


### Open-Vocabulary on LVIS

<p align="center">
<img src="https://user-images.githubusercontent.com/6366788/214262298-ab2de22b-910a-44ba-9bc5-f0df6e4d5e14.png" width=68%>
</p>

## Installation

### Requirements
- Linux or macOS with Python ≥ 3.7
- PyTorch ≥ 1.9.
  Install PyTorch and torchvision together by following [pytorch.org](https://pytorch.org) to ensure compatibility. Note: please check that your PyTorch version matches the one required by Detectron2.
- Detectron2: follow [Detectron2 installation instructions](https://detectron2.readthedocs.io/tutorials/install.html).

### Example conda environment setup
```bash
conda create --name VLDet python=3.7 -y
conda activate VLDet
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch-lts -c nvidia

# under your working directory

git clone https://github.com/clin1223/VLDet.git
cd VLDet
cd detectron2
pip install -e .
cd ..
pip install -r requirements.txt
```

## Features
- Directly learn an open-vocabulary object detector from image-text pairs by formulating the task as a bipartite matching problem.

- State-of-the-art results on Open-vocabulary LVIS and Open-vocabulary COCO.

- Easily scale and extend the novel object vocabulary.

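The bipartite-matching formulation in the first bullet can be illustrated with a toy example: given similarity scores between region proposals and caption nouns, find the one-to-one assignment that maximizes total similarity. The sketch below uses brute force over permutations purely for illustration; VLDet solves the alignment over learned region and word embeddings with a proper assignment solver, and the names and numbers here are made up.

```python
from itertools import permutations

def best_alignment(sim):
    """Brute-force maximum-similarity bipartite matching.

    sim[i][j] is the similarity between region i and word j
    (square matrix for simplicity). Returns (assignment, score),
    where assignment[i] is the word index matched to region i.
    """
    n = len(sim)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(sim[i][perm[i]] for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score

# Toy similarities between 3 regions and 3 caption nouns.
sim = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.1],
    [0.3, 0.2, 0.7],
]
assignment, score = best_alignment(sim)
print(assignment, round(score, 2))  # prints (0, 1, 2) 2.4
```

Brute force is factorial in the number of regions, so it only works for tiny examples; in practice a polynomial-time solver (e.g. the Hungarian algorithm) is used for the same objective.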

## Benchmark evaluation and training

Please first [prepare datasets](prepare_datasets.md).

The VLDet models are finetuned from the corresponding [Box-Supervised models](https://drive.google.com/drive/folders/1ngb1mBOUvFpkcUM7D3bgIkMdUj2W5FUa?usp=sharing) (indicated by MODEL.WEIGHTS in the config files). Please train or download the Box-Supervised models and place them under VLDet_ROOT/models/ before training the VLDet models.

To train a model, run

```
python train_net.py --num-gpus 8 --config-file /path/to/config/name.yaml
``` 

To evaluate a model with trained/pretrained weights, run

```
python train_net.py --num-gpus 8 --config-file /path/to/config/name.yaml --eval-only MODEL.WEIGHTS /path/to/weight.pth
``` 

Download the trained network weights [here](https://drive.google.com/drive/folders/1ngb1mBOUvFpkcUM7D3bgIkMdUj2W5FUa?usp=sharing).

| OV_COCO  | box mAP50 | box mAP50_novel |
|----------|-----------|-----------------|
| [config_RN50](configs/VLDet_OVCOCO_CLIP_R50_1x_caption.yaml) | 45.8      | 32.0            |

| OV_LVIS       | mask mAP_all | mask mAP_novel |
| ------------- | ------------ | -------------- |
| [config_RN50](configs/VLDet_LbaseCCcap_CLIP_R5021k_640b64_2x_ft4x_caption.yaml)   | 30.1         | 21.7           |
| [config_Swin-B](configs/VLDet_LbaseI_CLIP_SwinB_896b32_2x_ft4x_caption.yaml) | 38.1         | 26.3           |
 

## Citation

If you find this project useful for your research, please use the following BibTeX entry.

```
@article{VLDet,
  title={Learning Object-Language Alignments for Open-Vocabulary Object Detection},
  author={Lin, Chuang and Sun, Peize and Jiang, Yi and Luo, Ping and Qu, Lizhen and Haffari, Gholamreza and Yuan, Zehuan and Cai, Jianfei},
  journal={arXiv preprint arXiv:2211.14843},
  year={2022}
}
```
## License

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

## Acknowledgement
This repository was built on top of [Detectron2](https://github.com/facebookresearch/detectron2), [Detic](https://github.com/facebookresearch/Detic.git), [RegionCLIP](https://github.com/microsoft/RegionCLIP.git) and [OVR-CNN](https://github.com/alirezazareian/ovr-cnn). We thank the authors for their hard work.


================================================
FILE: configs/Base-C2_L_R5021k_640b64_4x.yaml
================================================
MODEL:
  META_ARCHITECTURE: "CustomRCNN"
  MASK_ON: True
  PROPOSAL_GENERATOR:
    NAME: "CenterNet"
  WEIGHTS: "models/resnet50_miil_21k.pkl"
  BACKBONE:
    NAME: build_p67_timm_fpn_backbone
  TIMM:
    BASE_NAME: resnet50_in21k
  FPN:
    IN_FEATURES: ["layer3", "layer4", "layer5"]
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.12, 57.375]
  ROI_HEADS:
    NAME: DeticCascadeROIHeads
    IN_FEATURES: ["p3", "p4", "p5"]
    IOU_THRESHOLDS: [0.6]
    NUM_CLASSES: 1203
    SCORE_THRESH_TEST: 0.02
    NMS_THRESH_TEST: 0.5
  ROI_BOX_CASCADE_HEAD:
    IOUS: [0.6, 0.7, 0.8]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
    CLS_AGNOSTIC_BBOX_REG: True
    MULT_PROPOSAL_SCORE: True

    USE_SIGMOID_CE: True
    USE_FED_LOSS: True
  ROI_MASK_HEAD:
    NAME: "MaskRCNNConvUpsampleHead"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
    CLS_AGNOSTIC_MASK: True
  CENTERNET:
    NUM_CLASSES: 1203
    REG_WEIGHT: 1.
    NOT_NORM_REG: True
    ONLY_PROPOSAL: True
    WITH_AGN_HM: True
    INFERENCE_TH: 0.0001
    PRE_NMS_TOPK_TRAIN: 4000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TEST: 256
    NMS_TH_TRAIN: 0.9
    NMS_TH_TEST: 0.9
    POS_WEIGHT: 0.5
    NEG_WEIGHT: 0.5
    IGNORE_HIGH_FP: 0.85
DATASETS:
  TRAIN: ("lvis_v1_train",)
  TEST: ("lvis_v1_val",)
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001
  NUM_WORKERS: 8
TEST:
  DETECTIONS_PER_IMAGE: 300
SOLVER:
  LR_SCHEDULER_NAME: "WarmupCosineLR"
  CHECKPOINT_PERIOD: 1000000000
  WARMUP_ITERS: 10000
  WARMUP_FACTOR: 0.0001
  USE_CUSTOM_SOLVER: True
  OPTIMIZER: "ADAMW"
  MAX_ITER: 90000
  IMS_PER_BATCH: 64
  BASE_LR: 0.0002
  CLIP_GRADIENTS:
    ENABLED: True
INPUT:
  FORMAT: RGB
  CUSTOM_AUG: EfficientDetResizeCrop
  TRAIN_SIZE: 640
OUTPUT_DIR: "./output/VLDet/auto"
EVAL_PROPOSAL_AR: False
VERSION: 2
FP16: True

================================================
FILE: configs/Base_OVCOCO_C4_1x.yaml
================================================
MODEL:
  META_ARCHITECTURE: "CustomRCNN"
  RPN:
    PRE_NMS_TOPK_TEST: 6000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NUM_CLASSES: 65
    NAME: "CustomRes5ROIHeads"
  SHARE_PROJ_V_DIM: 2048
  SHARE_PROJ_L_DIM: 1024
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
    USE_SIGMOID_CE: True
    USE_ZEROSHOT_CLS: True
    ZEROSHOT_WEIGHT_PATH: 'datasets/coco/VLDet/coco_nouns_4764_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/coco/VLDet/coco_65_cls_emb.pth'
    IGNORE_ZERO_CATS: True
    CAT_FREQ_PATH: 'datasets/coco/zero-shot/instances_train2017_seen_2_del_cat_info.json' 
    ZEROSHOT_WEIGHT_DIM: 1024
DATASETS:
  TRAIN: ("coco_zeroshot_train_del",)
  TEST: ("coco_generalized_del_val",)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (60000, 80000)
  MAX_ITER: 90000
  CHECKPOINT_PERIOD: 10000
INPUT:
  MIN_SIZE_TRAIN: (800,)
VERSION: 2
OUTPUT_DIR: output/release_base_coco
FP16: True
TEST:
  EVAL_PERIOD: 10000

================================================
FILE: configs/BoxSup-C2_Lbase_CLIP_R5021k_640b64.yaml
================================================
_BASE_: "Base-C2_L_R5021k_640b64_4x.yaml"
MODEL:
  WITH_CAPTION: False
  ROI_BOX_HEAD:
    USE_ZEROSHOT_CLS: True
    ZEROSHOT_WEIGHT_PATH: 'datasets/cc3m/VLDet/googlecc_nouns_6250_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/cc3m/VLDet/lvis_1203_cls_emb.pth'
    ZEROSHOT_WEIGHT_DIM: 1024
  SHARE_PROJ_V_DIM: 1024
  SHARE_PROJ_L_DIM: 1024
SOLVER:
  IMS_PER_BATCH: 8
DATASETS:
  TRAIN: ("lvis_v1_train_norare",)



================================================
FILE: configs/BoxSup-C2_Lbase_CLIP_SwinB_896b32.yaml
================================================
_BASE_: "Base-C2_L_R5021k_640b64_4x.yaml"
MODEL:
  ROI_BOX_HEAD:
    USE_ZEROSHOT_CLS: True
    ZEROSHOT_WEIGHT_PATH: 'datasets/cc3m/VLDet/googlecc_nouns_6250_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/cc3m/VLDet/lvis_1203_cls_emb.pth'
    ZEROSHOT_WEIGHT_DIM: 1024
  SHARE_PROJ_V_DIM: 1024
  SHARE_PROJ_L_DIM: 1024
  WEIGHTS: "models/swin_base_patch4_window7_224_22k.pkl"
  BACKBONE:
    NAME: build_swintransformer_fpn_backbone
  SWIN:
    SIZE: B-22k
  FPN:
    IN_FEATURES: ["swin1", "swin2", "swin3"]
SOLVER:
  MAX_ITER: 180000
  IMS_PER_BATCH: 32
  BASE_LR: 0.0001
  CHECKPOINT_PERIOD: 30000
INPUT:
  TRAIN_SIZE: 896
DATASETS:
  TRAIN: ("lvis_v1_train_norare",)



================================================
FILE: configs/BoxSup_OVCOCO_CLIP_R50_1x.yaml
================================================
_BASE_: "Base_OVCOCO_C4_1x.yaml"


================================================
FILE: configs/VLDet_LbaseCCcap_CLIP_R5021k_640b64_2x_ft4x_caption.yaml
================================================
_BASE_: "Base-C2_L_R5021k_640b64_4x.yaml"
MODEL:
  WITH_CAPTION: True
  SYNC_CAPTION_BATCH: False
  ROI_BOX_HEAD:
    ADD_IMAGE_BOX: True # caption loss is added to the image-box
    USE_ZEROSHOT_CLS: True
    WS_NUM_PROPS: 32
    USE_OT: 'contrastive'
    OT_LOSS_WEIGHT: 0.01
    USE_CAPTION: True
    CAPTION_WEIGHT: 1.0
    ZEROSHOT_WEIGHT_PATH: 'datasets/cc3m/VLDet/googlecc_nouns_6250_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/cc3m/VLDet/lvis_1203_cls_emb.pth'
    ZEROSHOT_WEIGHT_DIM: 1024
  SHARE_PROJ_V_DIM: 1024
  SHARE_PROJ_L_DIM: 1024
  WEIGHTS: "models/lvis_base.pth"
SOLVER:
  MAX_ITER: 90000
  CHECKPOINT_PERIOD: 10000
  IMS_PER_BATCH: 64
  BASE_LR: 0.0001
  WARMUP_ITERS: 1000
  WARMUP_FACTOR: 0.001
DATASETS:
  TRAIN: ("lvis_v1_train_norare", "cc3m_v1_nouns_train_6250tags") 
DATALOADER:
  SAMPLER_TRAIN: "MultiDatasetSampler"
  DATASET_RATIO: [1, 4]
  USE_DIFF_BS_SIZE: True
  DATASET_BS: [8, 16]
  DATASET_INPUT_SIZE: [640, 640]
  USE_RFS: [True, False]
  DATASET_INPUT_SCALE: [[0.1, 2.0], [0.5, 1.5]]
  FILTER_EMPTY_ANNOTATIONS: False
  MULTI_DATASET_GROUPING: True
  DATASET_ANN: ['box', 'caption']
  NUM_WORKERS: 8
WITH_IMAGE_LABELS: True
FP16: True
OUTPUT_DIR: output/test
TEST:
  EVAL_PERIOD: 10000



================================================
FILE: configs/VLDet_LbaseI_CLIP_SwinB_896b32_2x_ft4x_caption.yaml
================================================
_BASE_: "Base-C2_L_R5021k_640b64_4x.yaml"
MODEL:
  WITH_CAPTION: True
  ROI_BOX_HEAD:
    ADD_IMAGE_BOX: True 
    USE_ZEROSHOT_CLS: True
    WS_NUM_PROPS: 5
    USE_OT: 'contrastive'
    OT_LOSS_WEIGHT: 0.05
    USE_CAPTION: True
    CAPTION_WEIGHT: 1.0
    ZEROSHOT_WEIGHT_PATH: 'datasets/cc3m/VLDet/googlecc_nouns_6250_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/cc3m/VLDet/lvis_1203_cls_emb.pth'
    ZEROSHOT_WEIGHT_DIM: 1024
  SHARE_PROJ_V_DIM: 1024
  SHARE_PROJ_L_DIM: 1024
  BACKBONE:
    NAME: build_swintransformer_fpn_backbone
  SWIN:
    SIZE: B-22k
  FPN:
    IN_FEATURES: ["swin1", "swin2", "swin3"]
  WEIGHTS: "models/lvis_base_swinB.pth"
SOLVER:
  MAX_ITER: 90000
  IMS_PER_BATCH: 32
  BASE_LR: 0.0001
  WARMUP_ITERS: 1000
  WARMUP_FACTOR: 0.001
DATASETS:
  TRAIN: ("lvis_v1_train_norare", "cc3m_v1_nouns_train_6250tags") 
DATALOADER:
  SAMPLER_TRAIN: "MultiDatasetSampler"
  DATASET_RATIO: [1, 4]
  USE_DIFF_BS_SIZE: True
  DATASET_BS: [4, 16]
  DATASET_INPUT_SIZE: [896, 448]
  USE_RFS: [True, False]
  DATASET_INPUT_SCALE: [[0.1, 2.0], [0.5, 1.5]]
  FILTER_EMPTY_ANNOTATIONS: False
  MULTI_DATASET_GROUPING: True
  DATASET_ANN: ['box', 'caption']
  NUM_WORKERS: 8
WITH_IMAGE_LABELS: True
OUTPUT_DIR: output/lvis_swinB


================================================
FILE: configs/VLDet_OVCOCO_CLIP_R50_1x_caption.yaml
================================================
_BASE_: "Base_OVCOCO_C4_1x.yaml"
MODEL:
  SHARE_PROJ_V_DIM: 2048
  WEIGHTS: "models/coco_base.pth"
  WITH_CAPTION: True
  SYNC_CAPTION_BATCH: False
  SHARE_PROJ_L_DIM: 1024
  ROI_HEADS:
    NUM_CLASSES: 65 
  ROI_BOX_HEAD:
    WS_NUM_PROPS: 32
    ADD_IMAGE_BOX: True
    NEG_CAP_WEIGHT: 1.0
    OT_LOSS_WEIGHT: 0.01
    USE_CAPTION: True
    USE_OT: 'contrastive'
    ZEROSHOT_WEIGHT_PATH: 'datasets/coco/VLDet/coco_nouns_4764_emb.pth' 
    DETECTION_WEIGHT_PATH: 'datasets/coco/VLDet/coco_65_cls_emb.pth'
    CAT_FREQ_PATH: 'datasets/coco/zero-shot/instances_train2017_seen_2_del_cat_info.json' 
    ZEROSHOT_WEIGHT_DIM: 1024
    CAPTION_WEIGHT: 1.0
SOLVER:
  IMS_PER_BATCH: 32
  BASE_LR: 0.02
  STEPS: (60000, 80000)
  CHECKPOINT_PERIOD: 10000
  MAX_ITER: 90000
  CLIP_GRADIENTS:
    ENABLED: True
DATASETS:
  TRAIN: ("coco_zeroshot_train_del", "coco_caption_nouns_train_4764tags",) 
  TEST: ("coco_generalized_del_val",)
INPUT:
  CUSTOM_AUG: ResizeShortestEdge
  MIN_SIZE_TRAIN_SAMPLING: range
  MIN_SIZE_TRAIN: (800, 800)
DATALOADER:
  SAMPLER_TRAIN: "MultiDatasetSampler"
  DATASET_RATIO: [1, 4]
  USE_DIFF_BS_SIZE: True
  DATASET_BS: [2, 8]
  USE_RFS: [False, False]
  DATASET_MIN_SIZES: [[800, 800], [400, 400]]
  DATASET_MAX_SIZES: [1333, 667]
  FILTER_EMPTY_ANNOTATIONS: False
  MULTI_DATASET_GROUPING: True
  DATASET_ANN: ['box', 'caption']
  NUM_WORKERS: 8
WITH_IMAGE_LABELS: True
OUTPUT_DIR: output/test
FP16: True
TEST:
  EVAL_PERIOD: 10000


================================================
FILE: demo.py
================================================
# Copyright (c) Facebook, Inc. and its affiliates.
import argparse
import glob
import multiprocessing as mp
import numpy as np
import os
import tempfile
import time
import warnings
import cv2
import tqdm
import sys
import mss

from detectron2.config import get_cfg
from detectron2.data.detection_utils import read_image
from detectron2.utils.logger import setup_logger

sys.path.insert(0, 'CenterNet2/projects/CenterNet2/')
from centernet.config import add_centernet_config
from vldet.config import add_vldet_config

from vldet.predictor import VisualizationDemo

# Fake a video capture object OpenCV style - half width, half height of first screen using MSS
class ScreenGrab:
    def __init__(self):
        self.sct = mss.mss()
        m0 = self.sct.monitors[0]
        self.monitor = {'top': 0, 'left': 0, 'width': m0['width'] // 2, 'height': m0['height'] // 2}  # mss requires integer pixel dimensions

    def read(self):
        img = np.array(self.sct.grab(self.monitor))
        nf = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
        return (True, nf)

    def isOpened(self):
        return True
    def release(self):
        return True


# constants
WINDOW_NAME = "Detic"

def setup_cfg(args):
    cfg = get_cfg()
    if args.cpu:
        cfg.MODEL.DEVICE="cpu"
    add_centernet_config(cfg)
    add_vldet_config(cfg)
    cfg.merge_from_file(args.config_file)
    cfg.merge_from_list(args.opts)
    # Set score_threshold for builtin models
    cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold
    cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold
    cfg.MODEL.ROI_BOX_HEAD.ZEROSHOT_WEIGHT_PATH = 'rand' # load later
    if not args.pred_all_class:
        cfg.MODEL.ROI_HEADS.ONE_CLASS_PER_PROPOSAL = True
    cfg.freeze()
    return cfg


def get_parser():
    parser = argparse.ArgumentParser(description="Detectron2 demo for builtin configs")
    parser.add_argument(
        "--config-file",
        default="configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml",
        metavar="FILE",
        help="path to config file",
    )
    parser.add_argument("--webcam", help="Take inputs from webcam.")
    parser.add_argument("--cpu", action='store_true', help="Use CPU only.")
    parser.add_argument("--video-input", help="Path to video file.")
    parser.add_argument(
        "--input",
        nargs="+",
        help="A list of space separated input images; "
        "or a single glob pattern such as 'directory/*.jpg'",
    )
    parser.add_argument(
        "--output",
        help="A file or directory to save output visualizations. "
        "If not given, will show output in an OpenCV window.",
    )
    parser.add_argument(
        "--vocabulary",
        default="lvis",
        choices=['lvis', 'openimages', 'objects365', 'coco', 'custom'],
        help="",
    )
    parser.add_argument(
        "--custom_vocabulary",
        default="",
        help="",
    )
    parser.add_argument("--pred_all_class", action='store_true')
    parser.add_argument(
        "--confidence-threshold",
        type=float,
        default=0.5,
        help="Minimum score for instance predictions to be shown",
    )
    parser.add_argument(
        "--opts",
        help="Modify config options using the command-line 'KEY VALUE' pairs",
        default=[],
        nargs=argparse.REMAINDER,
    )
    return parser


def test_opencv_video_format(codec, file_ext):
    with tempfile.TemporaryDirectory(prefix="video_format_test") as dir:
        filename = os.path.join(dir, "test_file" + file_ext)
        writer = cv2.VideoWriter(
            filename=filename,
            fourcc=cv2.VideoWriter_fourcc(*codec),
            fps=float(30),
            frameSize=(10, 10),
            isColor=True,
        )
        for _ in range(30):
            writer.write(np.zeros((10, 10, 3), np.uint8))
        writer.release()
        if os.path.isfile(filename):
            return True
        return False


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    args = get_parser().parse_args()
    setup_logger(name="fvcore")
    logger = setup_logger()
    logger.info("Arguments: " + str(args))

    cfg = setup_cfg(args)

    demo = VisualizationDemo(cfg, args)

    if args.input:
        if len(args.input) == 1:
            args.input = glob.glob(os.path.expanduser(args.input[0]))
            assert args.input, "The input path(s) was not found"
        for path in tqdm.tqdm(args.input, disable=not args.output):
            img = read_image(path, format="BGR")
            start_time = time.time()
            predictions, visualized_output = demo.run_on_image(img)
            logger.info(
                "{}: {} in {:.2f}s".format(
                    path,
                    "detected {} instances".format(len(predictions["instances"]))
                    if "instances" in predictions
                    else "finished",
                    time.time() - start_time,
                )
            )

            if args.output:
                if os.path.isdir(args.output):
                    assert os.path.isdir(args.output), args.output
                    out_filename = os.path.join(args.output, os.path.basename(path))
                else:
                    assert len(args.input) == 1, "Please specify a directory with args.output"
                    out_filename = args.output
                visualized_output.save(out_filename)
            else:
                cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
                cv2.imshow(WINDOW_NAME, visualized_output.get_image()[:, :, ::-1])
                if cv2.waitKey(0) == 27:
                    break  # esc to quit
    elif args.webcam:
        assert args.input is None, "Cannot have both --input and --webcam!"
        assert args.output is None, "output not yet supported with --webcam!"
        if args.webcam == "screen":
            cam = ScreenGrab()
        else:
            cam = cv2.VideoCapture(int(args.webcam))
        for vis in tqdm.tqdm(demo.run_on_video(cam)):
            cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
            cv2.imshow(WINDOW_NAME, vis)
            if cv2.waitKey(1) == 27:
                break  # esc to quit
        cam.release()
        cv2.destroyAllWindows()
    elif args.video_input:
        video = cv2.VideoCapture(args.video_input)
        width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
        frames_per_second = video.get(cv2.CAP_PROP_FPS)
        num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
        basename = os.path.basename(args.video_input)
        codec, file_ext = (
            ("x264", ".mkv") if test_opencv_video_format("x264", ".mkv") else ("mp4v", ".mp4")
        )
        if codec == "mp4v":
            warnings.warn("x264 codec not available, switching to mp4v")
        if args.output:
            if os.path.isdir(args.output):
                output_fname = os.path.join(args.output, basename)
                output_fname = os.path.splitext(output_fname)[0] + file_ext
            else:
                output_fname = args.output
            assert not os.path.isfile(output_fname), output_fname
            output_file = cv2.VideoWriter(
                filename=output_fname,
                # some installation of opencv may not support x264 (due to its license),
                # you can try other format (e.g. MPEG)
                fourcc=cv2.VideoWriter_fourcc(*codec),
                fps=float(frames_per_second),
                frameSize=(width, height),
                isColor=True,
            )
        assert os.path.isfile(args.video_input)
        for vis_frame in tqdm.tqdm(demo.run_on_video(video), total=num_frames):
            if args.output:
                output_file.write(vis_frame)
            else:
                cv2.namedWindow(basename, cv2.WINDOW_NORMAL)
                cv2.imshow(basename, vis_frame)
                if cv2.waitKey(1) == 27:
                    break  # esc to quit
        video.release()
        if args.output:
            output_file.release()
        else:
            cv2.destroyAllWindows()


================================================
FILE: detectron2/.circleci/config.yml
================================================
version: 2.1

# -------------------------------------------------------------------------------------
# Environments to run the jobs in
# -------------------------------------------------------------------------------------
cpu: &cpu
  machine:
    image: ubuntu-2004:202107-02
  resource_class: medium

gpu: &gpu
  machine:
    # NOTE: use a cuda version that's supported by all our pytorch versions
    image: ubuntu-1604-cuda-11.1:202012-01
  resource_class: gpu.nvidia.small

windows-cpu: &windows_cpu
  machine:
    resource_class: windows.medium
    image: windows-server-2019-vs2019:stable
    shell: powershell.exe

# windows-gpu: &windows_gpu
#     machine:
#       resource_class: windows.gpu.nvidia.medium
#       image: windows-server-2019-nvidia:stable

version_parameters: &version_parameters
  parameters:
    pytorch_version:
      type: string
    torchvision_version:
      type: string
    pytorch_index:
      type: string
      # use test wheels index to have access to RC wheels
      # https://download.pytorch.org/whl/test/torch_test.html
      default: "https://download.pytorch.org/whl/torch_stable.html"
    python_version:  # NOTE: only affect linux
      type: string
      default: '3.7.9'

  environment:
    PYTORCH_VERSION: << parameters.pytorch_version >>
    TORCHVISION_VERSION: << parameters.torchvision_version >>
    PYTORCH_INDEX: << parameters.pytorch_index >>
    PYTHON_VERSION: << parameters.python_version>>
    # point datasets to ~/.torch so it's cached in CI
    DETECTRON2_DATASETS: ~/.torch/datasets

# -------------------------------------------------------------------------------------
# Re-usable commands
# -------------------------------------------------------------------------------------
# install_nvidia_driver: &install_nvidia_driver
#   - run:
#       name: Install nvidia driver
#       working_directory: ~/
#       command: |
#         wget -q 'https://s3.amazonaws.com/ossci-linux/nvidia_driver/NVIDIA-Linux-x86_64-430.40.run'
#         sudo /bin/bash ./NVIDIA-Linux-x86_64-430.40.run -s --no-drm
#         nvidia-smi

add_ssh_keys: &add_ssh_keys
  # https://circleci.com/docs/2.0/add-ssh-key/
  - add_ssh_keys:
      fingerprints:
        - "e4:13:f2:22:d4:49:e8:e4:57:5a:ac:20:2f:3f:1f:ca"

install_python: &install_python
  - run:
      name: Install Python
      working_directory: ~/
      command: |
        # upgrade pyenv
        cd /opt/circleci/.pyenv/plugins/python-build/../.. && git pull && cd -
        pyenv install -s $PYTHON_VERSION
        pyenv global $PYTHON_VERSION
        python --version
        which python
        pip install --upgrade pip

setup_venv: &setup_venv
  - run:
      name: Setup Virtual Env
      working_directory: ~/
      command: |
        python -m venv ~/venv
        echo ". ~/venv/bin/activate" >> $BASH_ENV
        . ~/venv/bin/activate
        python --version
        which python
        which pip
        pip install --upgrade pip

setup_venv_win: &setup_venv_win
  - run:
      name: Setup Virtual Env for Windows
      command: |
        pip install virtualenv
        python -m virtualenv env
        .\env\Scripts\activate
        python --version
        which python
        which pip

install_linux_dep: &install_linux_dep
  - run:
      name: Install Dependencies
      command: |
        # disable crash coredump, so unittests fail fast
        sudo systemctl stop apport.service
        # install from github to get latest; install iopath first since fvcore depends on it
        pip install --progress-bar off -U 'git+https://github.com/facebookresearch/iopath'
        pip install --progress-bar off -U 'git+https://github.com/facebookresearch/fvcore'
        # Don't use pytest-xdist: cuda tests are unstable under multi-process workers.
        pip install --progress-bar off ninja opencv-python-headless pytest tensorboard pycocotools onnx
        pip install --progress-bar off torch==$PYTORCH_VERSION -f $PYTORCH_INDEX
        if [[ "$TORCHVISION_VERSION" == "master" ]]; then
          pip install git+https://github.com/pytorch/vision.git
        else
          pip install --progress-bar off torchvision==$TORCHVISION_VERSION -f $PYTORCH_INDEX
        fi

        python -c 'import torch; print("CUDA:", torch.cuda.is_available())'
        gcc --version

install_detectron2: &install_detectron2
  - run:
      name: Install Detectron2
      command: |
        # Remove first, in case it's in the CI cache
        pip uninstall -y detectron2

        pip install --progress-bar off -e .[all]
        python -m detectron2.utils.collect_env
        ./datasets/prepare_for_tests.sh

run_unittests: &run_unittests
  - run:
      name: Run Unit Tests
      command: |
        pytest -sv --durations=15 tests  # parallel causes some random failures

uninstall_tests: &uninstall_tests
  - run:
      name: Run Tests After Uninstalling
      command: |
        pip uninstall -y detectron2
        # Remove built binaries
        rm -rf build/ detectron2/*.so
        # Tests that code is importable without installation
        PYTHONPATH=. ./.circleci/import-tests.sh


# -------------------------------------------------------------------------------------
# Jobs to run
# -------------------------------------------------------------------------------------
jobs:
  linux_cpu_tests:
    <<: *cpu
    <<: *version_parameters

    working_directory: ~/detectron2

    steps:
      - checkout

      # Cache the venv directory that contains python, dependencies, and checkpoints
      # Refresh the key when dependencies should be updated (e.g. when pytorch releases)
      - restore_cache:
          keys:
            - cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210827

      - <<: *install_python
      - <<: *install_linux_dep
      - <<: *install_detectron2
      - <<: *run_unittests
      - <<: *uninstall_tests

      - save_cache:
          paths:
            - /opt/circleci/.pyenv
            - ~/.torch
          key: cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210827


  linux_gpu_tests:
    <<: *gpu
    <<: *version_parameters

    working_directory: ~/detectron2

    steps:
      - checkout

      - restore_cache:
          keys:
            - cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210827

      - <<: *install_python
      - <<: *install_linux_dep
      - <<: *install_detectron2
      - <<: *run_unittests
      - <<: *uninstall_tests

      - save_cache:
          paths:
            - /opt/circleci/.pyenv
            - ~/.torch
          key: cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210827

  windows_cpu_build:
    <<: *windows_cpu
    <<: *version_parameters
    steps:
      - <<: *add_ssh_keys
      - checkout
      - <<: *setup_venv_win

      # Cache the env directory that contains dependencies
      - restore_cache:
          keys:
            - cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210404

      - run:
          name: Install Dependencies
          command: |
            pip install certifi --ignore-installed  # required on windows to workaround some cert issue
            pip install numpy cython  # required on windows before pycocotools
            pip install opencv-python-headless pytest-xdist pycocotools tensorboard onnx
            pip install -U git+https://github.com/facebookresearch/iopath
            pip install -U git+https://github.com/facebookresearch/fvcore
            pip install torch==$env:PYTORCH_VERSION torchvision==$env:TORCHVISION_VERSION -f $env:PYTORCH_INDEX

      - save_cache:
          paths:
            - env
          key: cache-{{ arch }}-<< parameters.pytorch_version >>-{{ .Branch }}-20210404

      - <<: *install_detectron2
      # TODO: unittest fails for now

workflows:
  version: 2
  regular_test:
    jobs:
      - linux_cpu_tests:
          name: linux_cpu_tests_pytorch1.10
          pytorch_version: '1.10.0+cpu'
          torchvision_version: '0.11.1+cpu'
      - linux_gpu_tests:
          name: linux_gpu_tests_pytorch1.8
          pytorch_version: '1.8.1+cu111'
          torchvision_version: '0.9.1+cu111'
      - linux_gpu_tests:
          name: linux_gpu_tests_pytorch1.9
          pytorch_version: '1.9+cu111'
          torchvision_version: '0.10+cu111'
      - linux_gpu_tests:
          name: linux_gpu_tests_pytorch1.10
          pytorch_version: '1.10+cu111'
          torchvision_version: '0.11.1+cu111'
      - linux_gpu_tests:
          name: linux_gpu_tests_pytorch1.10_python39
          pytorch_version: '1.10+cu111'
          torchvision_version: '0.11.1+cu111'
          python_version: '3.9.6'
      - windows_cpu_build:
          pytorch_version: '1.10+cpu'
          torchvision_version: '0.11.1+cpu'


================================================
FILE: detectron2/.circleci/import-tests.sh
================================================
#!/bin/bash -e
# Copyright (c) Facebook, Inc. and its affiliates.

# Test that import works without building detectron2.

# Check that _C is not importable
python -c "from detectron2 import _C" > /dev/null 2>&1 && {
  echo "This test should be run without building detectron2."
  exit 1
}

# Check that other modules are still importable, even when _C is not importable
python -c "from detectron2 import modeling"
python -c "from detectron2 import modeling, data"
python -c "from detectron2 import evaluation, export, checkpoint"
python -c "from detectron2 import utils, engine"


================================================
FILE: detectron2/.clang-format
================================================
AccessModifierOffset: -1
AlignAfterOpenBracket: AlwaysBreak
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlinesLeft: true
AlignOperands:   false
AlignTrailingComments: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Empty
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: true
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
  AfterClass:      false
  AfterControlStatement: false
  AfterEnum:       false
  AfterFunction:   false
  AfterNamespace:  false
  AfterObjCDeclaration: false
  AfterStruct:     false
  AfterUnion:      false
  BeforeCatch:     false
  BeforeElse:      false
  IndentBraces:    false
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Attach
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: false
ColumnLimit:     80
CommentPragmas:  '^ IWYU pragma:'
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat:   false
ForEachMacros:   [ FOR_EACH, FOR_EACH_R, FOR_EACH_RANGE, ]
IncludeCategories:
  - Regex:           '^<.*\.h(pp)?>'
    Priority:        1
  - Regex:           '^<.*'
    Priority:        2
  - Regex:           '.*'
    Priority:        3
IndentCaseLabels: true
IndentWidth:     2
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: false
MacroBlockBegin: ''
MacroBlockEnd:   ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBlockIndentWidth: 2
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: false
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
ReflowComments:  true
SortIncludes:    true
SpaceAfterCStyleCast: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeParens: ControlStatements
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles:  false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard:        Cpp11
TabWidth:        8
UseTab:          Never


================================================
FILE: detectron2/.flake8
================================================
# This is an example .flake8 config, used when developing *Black* itself.
# Keep in sync with setup.cfg which is used for source packages.

[flake8]
ignore = W503, E203, E221, C901, C408, E741, C407, B017, F811, C101, EXE001, EXE002
max-line-length = 100
max-complexity = 18
select = B,C,E,F,W,T4,B9
exclude = build
per-file-ignores =
  **/__init__.py:F401,F403,E402
  **/configs/**.py:F401,E402
  configs/**.py:F401,E402
  **/tests/config/**.py:F401,E402
  tests/config/**.py:F401,E402


================================================
FILE: detectron2/GETTING_STARTED.md
================================================
## Getting Started with Detectron2

This document provides a brief introduction to the usage of the builtin command-line tools in detectron2.

For a tutorial that involves actual coding with the API,
see our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
which covers how to run inference with an
existing model, and how to train a builtin model on a custom dataset.


### Inference Demo with Pre-trained Models

1. Pick a model and its config file from
  [model zoo](MODEL_ZOO.md),
  for example, `mask_rcnn_R_50_FPN_3x.yaml`.
2. We provide `demo.py`, which can run a demo with the builtin configs. Run it with:
```
cd demo/
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
  --input input1.jpg input2.jpg \
  [--other-options]
  --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
```
The configs are made for training; therefore you need to set `MODEL.WEIGHTS` to a model from the model zoo for evaluation.
This command will run the inference and show visualizations in an OpenCV window.

For details of the command line arguments, see `demo.py -h` or look at its source code
to understand its behavior. Some common arguments are:
* To run __on your webcam__, replace `--input files` with `--webcam`.
* To run __on a video__, replace `--input files` with `--video-input video.mp4`.
* To run __on cpu__, add `MODEL.DEVICE cpu` after `--opts`.
* To save outputs to a directory (for images) or a file (for webcam or video), use `--output`.


### Training & Evaluation in Command Line

We provide two scripts, "tools/plain_train_net.py" and "tools/train_net.py",
that can train all the configs provided in detectron2. You may want to
use them as references to write your own training script.

Compared to "train_net.py", "plain_train_net.py" supports fewer default
features. It also includes fewer abstractions, which makes it easier to add
custom logic.

To train a model with "train_net.py", first
setup the corresponding datasets following
[datasets/README.md](./datasets/README.md),
then run:
```
cd tools/
./train_net.py --num-gpus 8 \
  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
```

The configs are made for 8-GPU training.
To train on 1 GPU, you may need to [change some parameters](https://arxiv.org/abs/1706.02677), e.g.:
```
./train_net.py \
  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
  --num-gpus 1 SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
```
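The 1-GPU numbers above follow the linear learning-rate scaling rule from the linked paper: scale the base LR in proportion to the total batch size. A minimal sketch (the reference values 16 and 0.02 are the defaults of the 8-GPU COCO baselines; treat them as assumptions for other configs):

```python
# Linear LR scaling rule: lr = reference_lr * batch_size / reference_batch.
REFERENCE_BATCH = 16   # SOLVER.IMS_PER_BATCH in the 8-GPU baseline configs
REFERENCE_LR = 0.02    # SOLVER.BASE_LR in those configs

def scaled_lr(ims_per_batch: int) -> float:
    """Learning rate scaled linearly with the total batch size."""
    return REFERENCE_LR * ims_per_batch / REFERENCE_BATCH

print(scaled_lr(2))  # 0.0025, the value used in the 1-GPU command above
```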

To evaluate a model's performance, use
```
./train_net.py \
  --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
  --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
```
For more options, see `./train_net.py -h`.

### Use Detectron2 APIs in Your Code

See our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
to learn how to use detectron2 APIs to:
1. run inference with an existing model
2. train a builtin model on a custom dataset

See [detectron2/projects](https://github.com/facebookresearch/detectron2/tree/main/projects)
for more ways to build your project on detectron2.
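As an illustrative sketch of that workflow, the helper below builds a predictor from a model-zoo config and its pretrained weights; the config name and the local image path `input.jpg` in the usage comment are assumptions for the example, not fixed values:

```python
def build_predictor(config_path: str, device: str = "cpu"):
    """Create a DefaultPredictor from a model-zoo config and its pretrained weights."""
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(config_path))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(config_path)
    cfg.MODEL.DEVICE = device  # "cuda" for GPU inference
    return DefaultPredictor(cfg)

# Usage (downloads the weights on first run):
# import cv2
# predictor = build_predictor("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
# outputs = predictor(cv2.imread("input.jpg"))  # BGR image, as read by OpenCV
# print(outputs["instances"].pred_classes)
```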


================================================
FILE: detectron2/INSTALL.md
================================================
## Installation

### Requirements
- Linux or macOS with Python ≥ 3.7
- PyTorch ≥ 1.8 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation.
  Install them together at [pytorch.org](https://pytorch.org) to make sure of this
- OpenCV is optional but needed by demo and visualization


### Build Detectron2 from Source

gcc & g++ ≥ 5.4 are required. [ninja](https://ninja-build.org/) is optional but recommended for a faster build.
After having them, run:
```
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
# (add --user if you don't have permission)

# Or, to install it from a local clone:
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

# On macOS, you may need to prepend the above commands with a few environment variables:
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install ...
```

To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the
old build first. You often need to rebuild detectron2 after reinstalling PyTorch.

### Install Pre-Built Detectron2 (Linux only)

Choose from this table to install [v0.6 (Oct 2021)](https://github.com/facebookresearch/detectron2/releases):

<table class="docutils"><tbody><th width="80"> CUDA </th><th valign="bottom" align="left" width="100">torch 1.10</th><th valign="bottom" align="left" width="100">torch 1.9</th><th valign="bottom" align="left" width="100">torch 1.8</th> <tr><td align="left">11.3</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
</code></pre> </details> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr><td align="left">11.1</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
</code></pre> </details> </td> </tr> <tr><td align="left">10.2</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.10/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.8/index.html
</code></pre> </details> </td> </tr> <tr><td align="left">10.1</td><td align="left"> </td> <td align="left"> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
</code></pre> </details> </td> </tr> <tr><td align="left">cpu</td><td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.10/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.9/index.html
</code></pre> </details> </td> <td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.8/index.html
</code></pre> </details> </td> </tr></tbody></table>

Note that:
1. The pre-built packages have to be used with the corresponding version of CUDA and the official PyTorch package.
   Otherwise, please build detectron2 from source.
2. New packages are released every few months. Therefore, packages may not contain the latest features in the main
   branch and may not be compatible with the main branch of a research project that uses detectron2
   (e.g. those in [projects](projects)).

### Common Installation Issues

Click each issue for its solutions:

<details>
<summary>
Undefined symbols that look like "TH..", "at::Tensor...", "torch..."
</summary>
<br/>

This usually happens when detectron2 or torchvision is not
compiled with the version of PyTorch you're running.

If the error comes from a pre-built torchvision, uninstall torchvision and PyTorch and reinstall them
following [pytorch.org](http://pytorch.org) so that the versions match.

If the error comes from a pre-built detectron2, check the [release notes](https://github.com/facebookresearch/detectron2/releases),
then uninstall and reinstall the correct pre-built detectron2 that matches your PyTorch version.

If the error comes from a detectron2 or torchvision that you built manually from source,
remove the files you built (`build/`, `**/*.so`) and rebuild, so that it can pick up the version of PyTorch currently in your environment.

If the above instructions do not resolve this problem, please provide an environment (e.g. a dockerfile) that can reproduce the issue.
</details>

<details>
<summary>
Missing torch dynamic libraries, OR segmentation fault immediately when using detectron2.
</summary>
This usually happens when detectron2 or torchvision is not
compiled with the version of PyTorch you're running. See the previous common issue for the solution.
</details>

<details>
<summary>
Undefined C++ symbols (e.g. "GLIBCXX..") or C++ symbols not found.
</summary>
<br/>
Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime.

This often happens with old anaconda.
It may help to run `conda update libgcc` to upgrade its runtime.

The fundamental solution is to avoid the mismatch, either by compiling with an older version of the C++
compiler, or by running the code with the proper C++ runtime.
To run the code with a specific C++ runtime, you can use the environment variable `LD_PRELOAD=/path/to/libstdc++.so`.

</details>

<details>
<summary>
"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available".
</summary>
<br/>
CUDA is not found when building detectron2.
You should make sure

```
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
```

prints `(True, a directory with cuda)` at the time you build detectron2.

Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config.
</details>

<details>
<summary>
"invalid device function" or "no kernel image is available for execution".
</summary>
<br/>
Two possibilities:

* You built detectron2 with one version of CUDA but are running it with a different version.

  To check whether it is the case,
  use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
  In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
  to contain cuda libraries of the same version.

  When they are inconsistent,
  you need to either install a different build of PyTorch (or build by yourself)
  to match your local CUDA installation, or install a different version of CUDA to match PyTorch.

* PyTorch/torchvision/Detectron2 is not built for the correct GPU SM architecture (aka. compute capability).

  The architectures included by PyTorch/detectron2/torchvision are listed in the "architecture flags" in the
  output of `python -m detectron2.utils.collect_env`. They must include
  the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).

  If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already.
  If not supported, you need to build them from source.

  When building detectron2/torchvision from source, they detect the GPU and build only for that device.
  This means the compiled code may not work on a different GPU.
  To recompile them for the correct architecture, remove all installed/compiled files,
  and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly.
  For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s.
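  To pick those flags programmatically, a small helper can map GPU models to their compute capabilities. The table below covers only a few common data-center GPUs (per developer.nvidia.com/cuda-gpus); extend it for your hardware:

```python
# Compute capabilities of a few common GPUs, per developer.nvidia.com/cuda-gpus.
ARCH = {"P100": "6.0", "V100": "7.0", "T4": "7.5", "A100": "8.0"}

def arch_list(*gpus: str) -> str:
    """Build a TORCH_CUDA_ARCH_LIST value covering all the given GPUs."""
    return ";".join(sorted({ARCH[g] for g in gpus}))

print(arch_list("P100", "V100"))  # 6.0;7.0 -> export TORCH_CUDA_ARCH_LIST="6.0;7.0"
```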
</details>

<details>
<summary>
Undefined CUDA symbols; Cannot open libcudart.so
</summary>
<br/>
The version of NVCC you use to build detectron2 or torchvision does
not match the version of CUDA you are running with.
This often happens when using anaconda's CUDA runtime.

Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
to contain cuda libraries of the same version.

When they are inconsistent,
you need to either install a different build of PyTorch (or build by yourself)
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.
</details>


<details>
<summary>
C++ compilation errors from NVCC / NVRTC, or "Unsupported gpu architecture"
</summary>
<br/>
A few possibilities:

1. Local CUDA/NVCC version has to match the CUDA version of your PyTorch. Both can be found in the output of `python collect_env.py`
   (download from [here](./detectron2/utils/collect_env.py)).
   When they are inconsistent, you need to either install a different build of PyTorch (or build by yourself)
   to match your local CUDA installation, or install a different version of CUDA to match PyTorch.

2. Local CUDA/NVCC version must support the SM architecture (a.k.a. compute capability) of your GPU.
   The capability of your GPU can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
   The capabilities supported by NVCC are listed [here](https://gist.github.com/ax3l/9489132).
   If your NVCC version is too old, this can be worked around by setting the environment variable
   `TORCH_CUDA_ARCH_LIST` to a lower, supported capability.

3. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions.
   See [here](https://gist.github.com/ax3l/9489132) for some valid combinations.
   Notably, CUDA<=10.1.105 doesn't support GCC>7.3.

   The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`.

</details>


<details>
<summary>
"ImportError: cannot import name '_C'".
</summary>
<br/>
Please build and install detectron2 following the instructions above.

Or, if you are running code from detectron2's root directory, `cd` to a different one.
Otherwise Python may pick up the local source instead of the code that you installed.
</details>


<details>
<summary>
Any issue on Windows.
</summary>
<br/>

Detectron2 is continuously built on Windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=main).
However, we do not provide official support for it.
PRs that improve code compatibility on Windows are welcome.
</details>

<details>
<summary>
ONNX conversion segfault after some "TraceWarning".
</summary>
<br/>
The ONNX package was compiled with a compiler that is too old.

Please build and install ONNX from its source code using a compiler
whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`).
</details>


<details>
<summary>
"library not found for -lstdc++" on older versions of macOS
</summary>
<br/>
See
[this stackoverflow answer](https://stackoverflow.com/questions/56083725/macos-build-issues-lstdc-not-found-while-building-python-package).

</details>


### Installation Inside Specific Environments

* __Colab__: see our [Colab Tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
  which has step-by-step instructions.

* __Docker__: The official [Dockerfile](docker) installs detectron2 with a few simple commands.


================================================
FILE: detectron2/LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!)  The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]


Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


================================================
FILE: detectron2/MODEL_ZOO.md
================================================
# Detectron2 Model Zoo and Baselines

## Introduction

This file documents a large collection of baselines trained
with detectron2 in Sep-Oct, 2019.
All numbers were obtained on [Big Basin](https://engineering.fb.com/data-center-engineering/introducing-big-basin-our-next-generation-ai-hardware/)
servers with 8 NVIDIA V100 GPUs & NVLink. The speed numbers are periodically updated with the latest PyTorch/CUDA/cuDNN versions.
You can access these models from code using [detectron2.model_zoo](https://detectron2.readthedocs.io/modules/model_zoo.html) APIs.

In addition to these official baseline models, you can find more models in [projects/](projects/).

#### How to Read the Tables
* The "Name" column contains a link to the config file. Models can be reproduced using `tools/train_net.py` with the corresponding yaml config file,
  or `tools/lazyconfig_train_net.py` for python config files.
* Training speed is averaged across the entire training run.
  We keep updating the speeds with the latest versions of detectron2, PyTorch, etc.,
  so they may differ from the numbers in the `metrics` file.
  Training speed for multi-machine jobs is not provided.
* Inference speed is measured by `tools/train_net.py --eval-only`, or [inference_on_dataset()](https://detectron2.readthedocs.io/modules/evaluation.html#detectron2.evaluation.inference_on_dataset),
  with batch size 1 in detectron2 directly.
  Measuring it with custom code may introduce additional overhead.
  Actual deployment in production should in general be faster than the listed inference
  speed, thanks to further optimizations.
* The *model id* column is provided for ease of reference.
  To verify downloaded file integrity, every model file on this page includes a prefix of its md5 checksum in its file name.
* Training curves and other statistics can be found in `metrics` for each model.
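
The md5-prefix convention mentioned above can be checked with a short script. This is a minimal sketch, assuming the six hex characters in names like `model_final_721ade.pkl` are the first six digits of the file's md5 checksum:

```python
import hashlib
import re

def md5_prefix_matches(path: str, data: bytes) -> bool:
    """Check that a model file's name suffix matches its md5 checksum prefix."""
    m = re.search(r"model_final_([0-9a-f]{6})\.pkl$", path)
    if m is None:
        return False  # file name does not follow the model-zoo convention
    return hashlib.md5(data).hexdigest().startswith(m.group(1))

# Example with synthetic bytes; a real check would read the downloaded .pkl.
data = b"fake model weights"
prefix = hashlib.md5(data).hexdigest()[:6]
print(md5_prefix_matches(f"model_final_{prefix}.pkl", data))  # True
```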

#### Common Settings for COCO Models
* All COCO models were trained on `train2017` and evaluated on `val2017`.
* The default settings are __not directly comparable__ with Detectron's standard settings.
  For example, our default training data augmentation uses scale jittering in addition to horizontal flipping.

  To make fair comparisons with Detectron's settings, see
  [Detectron1-Comparisons](configs/Detectron1-Comparisons/) for accuracy comparison,
  and [benchmarks](https://detectron2.readthedocs.io/notes/benchmarks.html)
  for speed comparison.
* For Faster/Mask R-CNN, we provide baselines based on __3 different backbone combinations__:
  * __FPN__: Use a ResNet+FPN backbone with standard conv and FC heads for mask and box prediction,
    respectively. It obtains the best
    speed/accuracy tradeoff, but the other two are still useful for research.
  * __C4__: Use a ResNet conv4 backbone with conv5 head. The original baseline in the Faster R-CNN paper.
  * __DC5__ (Dilated-C5): Use a ResNet conv5 backbone with dilations in conv5, and standard conv and FC heads
    for mask and box prediction, respectively.
    This is used by the Deformable ConvNet paper.
* Most models are trained with the 3x schedule (~37 COCO epochs).
  Although 1x models are heavily under-trained, we also provide some ResNet-50 models with the 1x (~12 COCO epochs)
  schedule for comparison during quick research iteration.
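
The approximate epoch counts above can be derived from the iteration-based schedules. This is a sketch assuming the usual detectron2 defaults of 90,000 iterations for the 1x schedule (270,000 for 3x) at a total batch size of 16, over the 118,287 images of `train2017`:

```python
# Convert an iteration-based training schedule to approximate COCO epochs.
TRAIN2017_IMAGES = 118_287  # number of images in COCO train2017
BATCH_SIZE = 16             # assumed default IMS_PER_BATCH

def coco_epochs(iterations: int) -> float:
    """One epoch = one pass over train2017; each iteration sees BATCH_SIZE images."""
    return iterations * BATCH_SIZE / TRAIN2017_IMAGES

print(round(coco_epochs(90_000), 1))   # 1x schedule -> ~12.2 epochs
print(round(coco_epochs(270_000), 1))  # 3x schedule -> ~36.5 epochs
```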

#### ImageNet Pretrained Models

It's common to initialize from backbone models pre-trained on ImageNet classification tasks. The following backbone models are available:

* [R-50.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-50.pkl): converted copy of [MSRA's original ResNet-50](https://github.com/KaimingHe/deep-residual-networks) model.
* [R-101.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/MSRA/R-101.pkl): converted copy of [MSRA's original ResNet-101](https://github.com/KaimingHe/deep-residual-networks) model.
* [X-101-32x8d.pkl](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/FAIR/X-101-32x8d.pkl): ResNeXt-101-32x8d model trained with Caffe2 at FB.
* [R-50.pkl (torchvision)](https://dl.fbaipublicfiles.com/detectron2/ImageNetPretrained/torchvision/R-50.pkl): converted copy of [torchvision's ResNet-50](https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet50) model.
  More details can be found in [the conversion script](tools/convert-torchvision-to-d2.py).

Note that the above models have a __different__ format from those provided in Detectron: we do not fuse BatchNorm into an affine layer.
Pretrained models in Detectron's format can still be used. For example:
* [X-152-32x8d-IN5k.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl):
  ResNeXt-152-32x8d model trained on ImageNet-5k with Caffe2 at FB (see ResNeXt paper for details on ImageNet-5k).
* [R-50-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47261647/R-50-GN.pkl):
  ResNet-50 with Group Normalization.
* [R-101-GN.pkl](https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/47592356/R-101-GN.pkl):
  ResNet-101 with Group Normalization.

These models require slightly different settings regarding normalization and architecture. See the model zoo configs for reference.

#### License

All models available for download through this document are licensed under the
[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).

### COCO Object Detection Baselines

#### Faster R-CNN:
<!--
(fb only) To update the table in vim:
1. Remove the old table: d}
2. Copy the below command to the place of the table
3. :.!bash

./gen_html_table.py --config 'COCO-Detection/faster*50*'{1x,3x}'*' 'COCO-Detection/faster*101*' --name R50-C4 R50-DC5 R50-FPN R50-C4 R50-DC5 R50-FPN R101-C4 R101-DC5 R101-FPN X101-FPN --fields lr_sched train_speed inference_speed mem box_AP
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: faster_rcnn_R_50_C4_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml">R50-C4</a></td>
<td align="center">1x</td>
<td align="center">0.551</td>
<td align="center">0.102</td>
<td align="center">4.8</td>
<td align="center">35.7</td>
<td align="center">137257644</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_C4_1x/137257644/model_final_721ade.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_C4_1x/137257644/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_DC5_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_DC5_1x.yaml">R50-DC5</a></td>
<td align="center">1x</td>
<td align="center">0.380</td>
<td align="center">0.068</td>
<td align="center">5.0</td>
<td align="center">37.3</td>
<td align="center">137847829</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_DC5_1x/137847829/model_final_51d356.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_DC5_1x/137847829/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml">R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.210</td>
<td align="center">0.038</td>
<td align="center">3.0</td>
<td align="center">37.9</td>
<td align="center">137257794</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_1x/137257794/model_final_b275ba.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_1x/137257794/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_C4_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_C4_3x.yaml">R50-C4</a></td>
<td align="center">3x</td>
<td align="center">0.543</td>
<td align="center">0.104</td>
<td align="center">4.8</td>
<td align="center">38.4</td>
<td align="center">137849393</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_C4_3x/137849393/model_final_f97cb7.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_C4_3x/137849393/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_DC5_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml">R50-DC5</a></td>
<td align="center">3x</td>
<td align="center">0.378</td>
<td align="center">0.070</td>
<td align="center">5.0</td>
<td align="center">39.0</td>
<td align="center">137849425</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_DC5_3x/137849425/model_final_68d202.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_DC5_3x/137849425/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml">R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.209</td>
<td align="center">0.038</td>
<td align="center">3.0</td>
<td align="center">40.2</td>
<td align="center">137849458</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_101_C4_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_101_C4_3x.yaml">R101-C4</a></td>
<td align="center">3x</td>
<td align="center">0.619</td>
<td align="center">0.139</td>
<td align="center">5.9</td>
<td align="center">41.1</td>
<td align="center">138204752</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_C4_3x/138204752/model_final_298dad.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_C4_3x/138204752/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_101_DC5_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_101_DC5_3x.yaml">R101-DC5</a></td>
<td align="center">3x</td>
<td align="center">0.452</td>
<td align="center">0.086</td>
<td align="center">6.1</td>
<td align="center">40.6</td>
<td align="center">138204841</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_DC5_3x/138204841/model_final_3e0943.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_DC5_3x/138204841/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_101_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml">R101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.286</td>
<td align="center">0.051</td>
<td align="center">4.1</td>
<td align="center">42.0</td>
<td align="center">137851257</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_X_101_32x8d_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml">X101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.638</td>
<td align="center">0.098</td>
<td align="center">6.7</td>
<td align="center">43.0</td>
<td align="center">139173657</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x/139173657/model_final_68b088.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x/139173657/metrics.json">metrics</a></td>
</tr>
</tbody></table>
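
The download links in the table above follow a single pattern. A small helper can rebuild them from the config path, model id, and file-name hash; this is a sketch based on the visible links, not an official detectron2 API:

```python
BASE = "https://dl.fbaipublicfiles.com/detectron2"

def model_url(config_path: str, model_id: int, file_hash: str) -> str:
    """Rebuild a model-zoo download link from a config path such as
    'configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml'."""
    rel = config_path.removeprefix("configs/").removesuffix(".yaml")
    return f"{BASE}/{rel}/{model_id}/model_final_{file_hash}.pkl"

def metrics_url(config_path: str, model_id: int) -> str:
    """The metrics.json link lives in the same directory as the model."""
    rel = config_path.removeprefix("configs/").removesuffix(".yaml")
    return f"{BASE}/{rel}/{model_id}/metrics.json"

print(model_url("configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml",
                137257644, "721ade"))
```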

#### RetinaNet:
<!--
./gen_html_table.py --config 'COCO-Detection/retina*50*' 'COCO-Detection/retina*101*' --name R50 R50 R101 --fields lr_sched train_speed inference_speed mem box_AP
-->

<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: retinanet_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml">R50</a></td>
<td align="center">1x</td>
<td align="center">0.205</td>
<td align="center">0.041</td>
<td align="center">4.1</td>
<td align="center">37.4</td>
<td align="center">190397773</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_1x/190397773/model_final_bfca0b.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_1x/190397773/metrics.json">metrics</a></td>
</tr>
<!-- ROW: retinanet_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/retinanet_R_50_FPN_3x.yaml">R50</a></td>
<td align="center">3x</td>
<td align="center">0.205</td>
<td align="center">0.041</td>
<td align="center">4.1</td>
<td align="center">38.7</td>
<td align="center">190397829</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_3x/190397829/model_final_5bd44e.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_50_FPN_3x/190397829/metrics.json">metrics</a></td>
</tr>
<!-- ROW: retinanet_R_101_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Detection/retinanet_R_101_FPN_3x.yaml">R101</a></td>
<td align="center">3x</td>
<td align="center">0.291</td>
<td align="center">0.054</td>
<td align="center">5.2</td>
<td align="center">40.4</td>
<td align="center">190397697</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/model_final_971ab9.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/metrics.json">metrics</a></td>
</tr>
</tbody></table>


#### RPN & Fast R-CNN:
<!--
./gen_html_table.py --config 'COCO-Detection/rpn*' 'COCO-Detection/fast_rcnn*' --name "RPN R50-C4" "RPN R50-FPN" "Fast R-CNN R50-FPN" --fields lr_sched train_speed inference_speed mem box_AP prop_AR
-->

<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">prop.<br/>AR</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: rpn_R_50_C4_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/rpn_R_50_C4_1x.yaml">RPN R50-C4</a></td>
<td align="center">1x</td>
<td align="center">0.130</td>
<td align="center">0.034</td>
<td align="center">1.5</td>
<td align="center"></td>
<td align="center">51.6</td>
<td align="center">137258005</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/rpn_R_50_C4_1x/137258005/model_final_450694.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/rpn_R_50_C4_1x/137258005/metrics.json">metrics</a></td>
</tr>
<!-- ROW: rpn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/rpn_R_50_FPN_1x.yaml">RPN R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.186</td>
<td align="center">0.032</td>
<td align="center">2.7</td>
<td align="center"></td>
<td align="center">58.0</td>
<td align="center">137258492</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/rpn_R_50_FPN_1x/137258492/model_final_02ce48.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/rpn_R_50_FPN_1x/137258492/metrics.json">metrics</a></td>
</tr>
<!-- ROW: fast_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml">Fast R-CNN R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.140</td>
<td align="center">0.029</td>
<td align="center">2.6</td>
<td align="center">37.8</td>
<td align="center"></td>
<td align="center">137635226</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/fast_rcnn_R_50_FPN_1x/137635226/model_final_e5f7ce.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/fast_rcnn_R_50_FPN_1x/137635226/metrics.json">metrics</a></td>
</tr>
</tbody></table>

### COCO Instance Segmentation Baselines with Mask R-CNN
<!--
./gen_html_table.py --config 'COCO-InstanceSegmentation/mask*50*'{1x,3x}'*' 'COCO-InstanceSegmentation/mask*101*' --name R50-C4 R50-DC5 R50-FPN R50-C4 R50-DC5 R50-FPN R101-C4 R101-DC5 R101-FPN X101-FPN --fields lr_sched train_speed inference_speed mem box_AP mask_AP
-->



<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_C4_1x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.yaml">R50-C4</a></td>
<td align="center">1x</td>
<td align="center">0.584</td>
<td align="center">0.110</td>
<td align="center">5.2</td>
<td align="center">36.8</td>
<td align="center">32.2</td>
<td align="center">137259246</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x/137259246/model_final_9243eb.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x/137259246/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_DC5_1x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x.yaml">R50-DC5</a></td>
<td align="center">1x</td>
<td align="center">0.471</td>
<td align="center">0.076</td>
<td align="center">6.5</td>
<td align="center">38.3</td>
<td align="center">34.2</td>
<td align="center">137260150</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x/137260150/model_final_4f86c3.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x/137260150/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml">R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.261</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">38.6</td>
<td align="center">35.2</td>
<td align="center">137260431</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/137260431/model_final_a54504.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/137260431/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_C4_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml">R50-C4</a></td>
<td align="center">3x</td>
<td align="center">0.575</td>
<td align="center">0.111</td>
<td align="center">5.2</td>
<td align="center">39.8</td>
<td align="center">34.4</td>
<td align="center">137849525</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x/137849525/model_final_4ce675.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x/137849525/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_DC5_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x.yaml">R50-DC5</a></td>
<td align="center">3x</td>
<td align="center">0.470</td>
<td align="center">0.076</td>
<td align="center">6.5</td>
<td align="center">40.0</td>
<td align="center">35.9</td>
<td align="center">137849551</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x/137849551/model_final_84107b.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x/137849551/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml">R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.261</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">41.0</td>
<td align="center">37.2</td>
<td align="center">137849600</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_C4_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x.yaml">R101-C4</a></td>
<td align="center">3x</td>
<td align="center">0.652</td>
<td align="center">0.145</td>
<td align="center">6.3</td>
<td align="center">42.6</td>
<td align="center">36.7</td>
<td align="center">138363239</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x/138363239/model_final_a2914c.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x/138363239/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_DC5_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x.yaml">R101-DC5</a></td>
<td align="center">3x</td>
<td align="center">0.545</td>
<td align="center">0.092</td>
<td align="center">7.6</td>
<td align="center">41.9</td>
<td align="center">37.3</td>
<td align="center">138363294</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x/138363294/model_final_0464b7.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x/138363294/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml">R101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.340</td>
<td align="center">0.056</td>
<td align="center">4.6</td>
<td align="center">42.9</td>
<td align="center">38.6</td>
<td align="center">138205316</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/model_final_a3ec72.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_X_101_32x8d_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml">X101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.690</td>
<td align="center">0.103</td>
<td align="center">7.2</td>
<td align="center">44.3</td>
<td align="center">39.5</td>
<td align="center">139653917</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/model_final_2d9806.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x/139653917/metrics.json">metrics</a></td>
</tr>
</tbody></table>



#### New baselines using Large-Scale Jitter and Longer Training Schedule

The following baselines of COCO Instance Segmentation with Mask R-CNN are generated
using a longer training schedule and large-scale jitter as described in Google's
[Simple Copy-Paste Data Augmentation](https://arxiv.org/pdf/2012.07177.pdf) paper. These
models are trained from scratch with random initialization. They outperform the
previous Mask R-CNN baselines above.

In the following table, one epoch consists of training on 118,000 COCO images.

<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">epochs</th>
<th valign="bottom">train<br/>time<br/>(s/im)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_FPN_100ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ.py">R50-FPN</a></td>
<td align="center">100</td>
<td align="center">0.376</td>
<td align="center">0.069</td>
<td align="center">44.6</td>
<td align="center">40.3</td>
<td align="center">42047764</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ/42047764/model_final_bb69de.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ/42047764/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_200ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py">R50-FPN</a></td>
<td align="center">200</td>
<td align="center">0.376</td>
<td align="center">0.069</td>
<td align="center">46.3</td>
<td align="center">41.7</td>
<td align="center">42047638</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ/42047638/model_final_89a8d3.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ/42047638/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_400ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ.py">R50-FPN</a></td>
<td align="center">400</td>
<td align="center">0.376</td>
<td align="center">0.069</td>
<td align="center">47.4</td>
<td align="center">42.5</td>
<td align="center">42019571</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ/42019571/model_final_14d201.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ/42019571/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_FPN_100ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ.py">R101-FPN</a></td>
<td align="center">100</td>
<td align="center">0.518</td>
<td align="center">0.073</td>
<td align="center">46.4</td>
<td align="center">41.6</td>
<td align="center">42025812</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ/42025812/model_final_4f7b58.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ/42025812/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_FPN_200ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py">R101-FPN</a></td>
<td align="center">200</td>
<td align="center">0.518</td>
<td align="center">0.073</td>
<td align="center">48.0</td>
<td align="center">43.1</td>
<td align="center">42131867</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ/42131867/model_final_0bb7ae.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ/42131867/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_FPN_400ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py">R101-FPN</a></td>
<td align="center">400</td>
<td align="center">0.518</td>
<td align="center">0.073</td>
<td align="center">48.9</td>
<td align="center">43.7</td>
<td align="center">42073830</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ/42073830/model_final_f96b26.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ/42073830/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py">regnetx_4gf_dds_FPN</a></td>
<td align="center">100</td>
<td align="center">0.474</td>
<td align="center">0.071</td>
<td align="center">46.0</td>
<td align="center">41.3</td>
<td align="center">42047771</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ/42047771/model_final_b7fbab.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ/42047771/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py">regnetx_4gf_dds_FPN</a></td>
<td align="center">200</td>
<td align="center">0.474</td>
<td align="center">0.071</td>
<td align="center">48.1</td>
<td align="center">43.1</td>
<td align="center">42132721</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ/42132721/model_final_5d87c1.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ/42132721/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py">regnetx_4gf_dds_FPN</a></td>
<td align="center">400</td>
<td align="center">0.474</td>
<td align="center">0.071</td>
<td align="center">48.6</td>
<td align="center">43.5</td>
<td align="center">42025447</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ/42025447/model_final_f1362d.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ/42025447/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ.py">regnety_4gf_dds_FPN</a></td>
<td align="center">100</td>
<td align="center">0.487</td>
<td align="center">0.073</td>
<td align="center">46.1</td>
<td align="center">41.6</td>
<td align="center">42047784</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ/42047784/model_final_6ba57e.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ/42047784/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py">regnety_4gf_dds_FPN</a></td>
<td align="center">200</td>
<td align="center">0.487</td>
<td align="center">0.072</td>
<td align="center">47.8</td>
<td align="center">43.0</td>
<td align="center">42047642</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ/42047642/model_final_27b9c1.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ/42047642/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ -->
 <tr><td align="left"><a href="configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py">regnety_4gf_dds_FPN</a></td>
<td align="center">400</td>
<td align="center">0.487</td>
<td align="center">0.072</td>
<td align="center">48.2</td>
<td align="center">43.3</td>
<td align="center">42045954</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ/42045954/model_final_ef3a80.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ/42045954/metrics.json">metrics</a></td>
</tr>
</tbody></table>

### COCO Person Keypoint Detection Baselines with Keypoint R-CNN
<!--
./gen_html_table.py --config 'COCO-Keypoints/*50*' 'COCO-Keypoints/*101*'  --name R50-FPN R50-FPN R101-FPN X101-FPN --fields lr_sched train_speed inference_speed mem box_AP keypoint_AP
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">kp.<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: keypoint_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.yaml">R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.315</td>
<td align="center">0.072</td>
<td align="center">5.0</td>
<td align="center">53.6</td>
<td align="center">64.0</td>
<td align="center">137261548</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x/137261548/model_final_04e291.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x/137261548/metrics.json">metrics</a></td>
</tr>
<!-- ROW: keypoint_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml">R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.316</td>
<td align="center">0.066</td>
<td align="center">5.0</td>
<td align="center">55.4</td>
<td align="center">65.5</td>
<td align="center">137849621</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x/137849621/model_final_a6e10b.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x/137849621/metrics.json">metrics</a></td>
</tr>
<!-- ROW: keypoint_rcnn_R_101_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml">R101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.390</td>
<td align="center">0.076</td>
<td align="center">6.1</td>
<td align="center">56.4</td>
<td align="center">66.1</td>
<td align="center">138363331</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x/138363331/model_final_997cc7.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x/138363331/metrics.json">metrics</a></td>
</tr>
<!-- ROW: keypoint_rcnn_X_101_32x8d_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x.yaml">X101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.738</td>
<td align="center">0.121</td>
<td align="center">8.7</td>
<td align="center">57.3</td>
<td align="center">66.0</td>
<td align="center">139686956</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x/139686956/model_final_5ad38f.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x/139686956/metrics.json">metrics</a></td>
</tr>
</tbody></table>

### COCO Panoptic Segmentation Baselines with Panoptic FPN
<!--
./gen_html_table.py --config 'COCO-PanopticSegmentation/*50*' 'COCO-PanopticSegmentation/*101*'  --name R50-FPN R50-FPN R101-FPN --fields lr_sched train_speed inference_speed mem box_AP mask_AP PQ
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">PQ</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: panoptic_fpn_R_50_1x -->
 <tr><td align="left"><a href="configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.yaml">R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.304</td>
<td align="center">0.053</td>
<td align="center">4.8</td>
<td align="center">37.6</td>
<td align="center">34.7</td>
<td align="center">39.4</td>
<td align="center">139514544</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x/139514544/model_final_dbfeb4.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x/139514544/metrics.json">metrics</a></td>
</tr>
<!-- ROW: panoptic_fpn_R_50_3x -->
 <tr><td align="left"><a href="configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml">R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.302</td>
<td align="center">0.053</td>
<td align="center">4.8</td>
<td align="center">40.0</td>
<td align="center">36.5</td>
<td align="center">41.5</td>
<td align="center">139514569</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x/139514569/model_final_c10459.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x/139514569/metrics.json">metrics</a></td>
</tr>
<!-- ROW: panoptic_fpn_R_101_3x -->
 <tr><td align="left"><a href="configs/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml">R101-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.392</td>
<td align="center">0.066</td>
<td align="center">6.0</td>
<td align="center">42.4</td>
<td align="center">38.5</td>
<td align="center">43.0</td>
<td align="center">139514519</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/model_final_cafdb1.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/metrics.json">metrics</a></td>
</tr>
</tbody></table>


### LVIS Instance Segmentation Baselines with Mask R-CNN

Mask R-CNN baselines on the [LVIS dataset](https://lvisdataset.org), v0.5.
These baselines are described in Table 3(c) of the [LVIS paper](https://arxiv.org/abs/1908.03195).

NOTE: the 1x schedule here has the same number of __iterations__ as the COCO 1x baselines.
This corresponds to roughly 24 epochs of LVIS v0.5 data.
The final results of these configs have large variance across different runs.
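As a back-of-the-envelope check, the iterations-to-epochs conversion can be sketched as below. The 90k-iteration / 16-images-per-batch 1x schedule matches the Base-RCNN configs in this repo; the ~57k-image LVIS v0.5 train-set size is an assumption not stated in this document.

```python
# Sketch: convert the COCO-style 1x schedule (90k iterations at 16 images
# per batch, per the Base-RCNN configs) into approximate LVIS v0.5 epochs.
iterations = 90_000
images_per_batch = 16
lvis_v05_train_images = 57_000  # assumption: approximate train-set size

images_seen = iterations * images_per_batch  # 1.44M images processed
epochs = images_seen / lvis_v05_train_images
print(f"~{epochs:.1f} epochs")  # on the order of the "roughly 24" quoted above
```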

<!--
./gen_html_table.py --config 'LVISv0.5-InstanceSegmentation/mask*50*' 'LVISv0.5-InstanceSegmentation/mask*101*' --name R50-FPN R101-FPN X101-FPN --fields lr_sched train_speed inference_speed mem box_AP mask_AP
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml">R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.292</td>
<td align="center">0.107</td>
<td align="center">7.1</td>
<td align="center">23.6</td>
<td align="center">24.4</td>
<td align="center">144219072</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/144219072/model_final_571f7c.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/144219072/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_101_FPN_1x -->
 <tr><td align="left"><a href="configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x.yaml">R101-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.371</td>
<td align="center">0.114</td>
<td align="center">7.8</td>
<td align="center">25.6</td>
<td align="center">25.9</td>
<td align="center">144219035</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x/144219035/model_final_824ab5.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x/144219035/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_X_101_32x8d_FPN_1x -->
 <tr><td align="left"><a href="configs/LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x.yaml">X101-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.712</td>
<td align="center">0.151</td>
<td align="center">10.2</td>
<td align="center">26.7</td>
<td align="center">27.1</td>
<td align="center">144219108</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x/144219108/model_final_5e3439.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x/144219108/metrics.json">metrics</a></td>
</tr>
</tbody></table>



### Cityscapes & Pascal VOC Baselines

Simple baselines for
* Mask R-CNN on Cityscapes instance segmentation (initialized from COCO pre-training, then trained on Cityscapes fine annotations only)
* Faster R-CNN on PASCAL VOC object detection (trained on VOC 2007 train+val + VOC 2012 train+val, tested on VOC 2007 using 11-point interpolated AP)
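The 11-point interpolated AP mentioned above can be sketched as follows. This is a minimal illustration of the VOC 2007 protocol, not the evaluation code used in this repo.

```python
def voc_ap_11point(recalls, precisions):
    """11-point interpolated AP (PASCAL VOC 2007 protocol): average, over
    recall thresholds 0.0, 0.1, ..., 1.0, of the maximum precision achieved
    at recall >= that threshold (0 if that recall level is never reached)."""
    ap = 0.0
    for t in (i / 10 for i in range(11)):
        # best precision among operating points whose recall clears threshold t
        p = max((p for r, p in zip(recalls, precisions) if r >= t), default=0.0)
        ap += p / 11
    return ap

# A detector reaching recall 0.5 at precision 1.0 and recall 1.0 at
# precision 0.5 averages the interpolated precision over the 11 thresholds:
print(voc_ap_11point([0.5, 1.0], [1.0, 0.5]))  # (6*1.0 + 5*0.5)/11 ~ 0.773
```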

<!--
./gen_html_table.py --config 'Cityscapes/*' 'PascalVOC-Detection/*' --name "R50-FPN, Cityscapes" "R50-C4, VOC" --fields train_speed inference_speed mem box_AP box_AP50 mask_AP
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">box<br/>AP50</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_FPN -->
 <tr><td align="left"><a href="configs/Cityscapes/mask_rcnn_R_50_FPN.yaml">R50-FPN, Cityscapes</a></td>
<td align="center">0.240</td>
<td align="center">0.078</td>
<td align="center">4.4</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">36.5</td>
<td align="center">142423278</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Cityscapes/mask_rcnn_R_50_FPN/142423278/model_final_af9cf5.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Cityscapes/mask_rcnn_R_50_FPN/142423278/metrics.json">metrics</a></td>
</tr>
<!-- ROW: faster_rcnn_R_50_C4 -->
 <tr><td align="left"><a href="configs/PascalVOC-Detection/faster_rcnn_R_50_C4.yaml">R50-C4, VOC</a></td>
<td align="center">0.537</td>
<td align="center">0.081</td>
<td align="center">4.8</td>
<td align="center">51.9</td>
<td align="center">80.3</td>
<td align="center"></td>
<td align="center">142202221</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/PascalVOC-Detection/faster_rcnn_R_50_C4/142202221/model_final_b1acc2.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/PascalVOC-Detection/faster_rcnn_R_50_C4/142202221/metrics.json">metrics</a></td>
</tr>
</tbody></table>



### Other Settings

Ablations for Deformable Conv and Cascade R-CNN:

<!--
./gen_html_table.py --config 'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml' 'Misc/*R_50_FPN_1x_dconv*' 'Misc/cascade*1x.yaml' 'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml' 'Misc/*R_50_FPN_3x_dconv*' 'Misc/cascade*3x.yaml' --name "Baseline R50-FPN" "Deformable Conv" "Cascade R-CNN" "Baseline R50-FPN" "Deformable Conv" "Cascade R-CNN"  --fields lr_sched train_speed inference_speed mem box_AP mask_AP
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml">Baseline R50-FPN</a></td>
<td align="center">1x</td>
<td align="center">0.261</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">38.6</td>
<td align="center">35.2</td>
<td align="center">137260431</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/137260431/model_final_a54504.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/137260431/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_1x_dconv_c3-c5 -->
 <tr><td align="left"><a href="configs/Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml">Deformable Conv</a></td>
<td align="center">1x</td>
<td align="center">0.342</td>
<td align="center">0.048</td>
<td align="center">3.5</td>
<td align="center">41.5</td>
<td align="center">37.5</td>
<td align="center">138602867</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5/138602867/model_final_65c703.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5/138602867/metrics.json">metrics</a></td>
</tr>
<!-- ROW: cascade_mask_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="configs/Misc/cascade_mask_rcnn_R_50_FPN_1x.yaml">Cascade R-CNN</a></td>
<td align="center">1x</td>
<td align="center">0.317</td>
<td align="center">0.052</td>
<td align="center">4.0</td>
<td align="center">42.1</td>
<td align="center">36.4</td>
<td align="center">138602847</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_1x/138602847/model_final_e9d89b.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_1x/138602847/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml">Baseline R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.261</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">41.0</td>
<td align="center">37.2</td>
<td align="center">137849600</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_3x_dconv_c3-c5 -->
 <tr><td align="left"><a href="configs/Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml">Deformable Conv</a></td>
<td align="center">3x</td>
<td align="center">0.349</td>
<td align="center">0.047</td>
<td align="center">3.5</td>
<td align="center">42.7</td>
<td align="center">38.5</td>
<td align="center">144998336</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5/144998336/model_final_821d0b.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5/144998336/metrics.json">metrics</a></td>
</tr>
<!-- ROW: cascade_mask_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml">Cascade R-CNN</a></td>
<td align="center">3x</td>
<td align="center">0.328</td>
<td align="center">0.053</td>
<td align="center">4.0</td>
<td align="center">44.3</td>
<td align="center">38.5</td>
<td align="center">144998488</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/model_final_480dd8.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_R_50_FPN_3x/144998488/metrics.json">metrics</a></td>
</tr>
</tbody></table>


Ablations for normalization methods, and a few models trained from scratch following [Rethinking ImageNet Pre-training](https://arxiv.org/abs/1811.08883).
(Note: the baseline uses a `2fc` box head while the others use a [`4conv1fc` head](https://arxiv.org/abs/1803.08494).)
<!--
./gen_html_table.py --config 'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml' 'Misc/mask*50_FPN_3x_gn.yaml' 'Misc/mask*50_FPN_3x_syncbn.yaml' 'Misc/scratch*' --name "Baseline R50-FPN" "GN" "SyncBN" "GN (from scratch)" "GN (from scratch)" "SyncBN (from scratch)" --fields lr_sched train_speed inference_speed mem box_AP mask_AP
   -->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: mask_rcnn_R_50_FPN_3x -->
 <tr><td align="left"><a href="configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml">Baseline R50-FPN</a></td>
<td align="center">3x</td>
<td align="center">0.261</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">41.0</td>
<td align="center">37.2</td>
<td align="center">137849600</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_3x_gn -->
 <tr><td align="left"><a href="configs/Misc/mask_rcnn_R_50_FPN_3x_gn.yaml">GN</a></td>
<td align="center">3x</td>
<td align="center">0.309</td>
<td align="center">0.060</td>
<td align="center">5.6</td>
<td align="center">42.6</td>
<td align="center">38.6</td>
<td align="center">138602888</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_gn/138602888/model_final_dc5d9e.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_gn/138602888/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_3x_syncbn -->
 <tr><td align="left"><a href="configs/Misc/mask_rcnn_R_50_FPN_3x_syncbn.yaml">SyncBN</a></td>
<td align="center">3x</td>
<td align="center">0.345</td>
<td align="center">0.053</td>
<td align="center">5.5</td>
<td align="center">41.9</td>
<td align="center">37.8</td>
<td align="center">169527823</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_syncbn/169527823/model_final_3b3c51.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/mask_rcnn_R_50_FPN_3x_syncbn/169527823/metrics.json">metrics</a></td>
</tr>
<!-- ROW: scratch_mask_rcnn_R_50_FPN_3x_gn -->
 <tr><td align="left"><a href="configs/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml">GN (from scratch)</a></td>
<td align="center">3x</td>
<td align="center">0.338</td>
<td align="center">0.061</td>
<td align="center">7.2</td>
<td align="center">39.9</td>
<td align="center">36.6</td>
<td align="center">138602908</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn/138602908/model_final_01ca85.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn/138602908/metrics.json">metrics</a></td>
</tr>
<!-- ROW: scratch_mask_rcnn_R_50_FPN_9x_gn -->
 <tr><td align="left"><a href="configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_gn.yaml">GN (from scratch)</a></td>
<td align="center">9x</td>
<td align="center">N/A</td>
<td align="center">0.061</td>
<td align="center">7.2</td>
<td align="center">43.7</td>
<td align="center">39.6</td>
<td align="center">183808979</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_9x_gn/183808979/model_final_da7b4c.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_9x_gn/183808979/metrics.json">metrics</a></td>
</tr>
<!-- ROW: scratch_mask_rcnn_R_50_FPN_9x_syncbn -->
 <tr><td align="left"><a href="configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml">SyncBN (from scratch)</a></td>
<td align="center">9x</td>
<td align="center">N/A</td>
<td align="center">0.055</td>
<td align="center">7.2</td>
<td align="center">43.6</td>
<td align="center">39.3</td>
<td align="center">184226666</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn/184226666/model_final_5ce33e.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn/184226666/metrics.json">metrics</a></td>
</tr>
</tbody></table>


A few very large models trained for a long time, for demo purposes. They were trained using multiple machines:

<!--
./gen_html_table.py --config 'Misc/panoptic_*dconv*' 'Misc/cascade_*152*' --name "Panoptic FPN R101" "Mask R-CNN X152" --fields inference_speed mem box_AP mask_AP PQ
# manually add TTA results
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">PQ</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: panoptic_fpn_R_101_dconv_cascade_gn_3x -->
 <tr><td align="left"><a href="configs/Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml">Panoptic FPN R101</a></td>
<td align="center">0.098</td>
<td align="center">11.4</td>
<td align="center">47.4</td>
<td align="center">41.3</td>
<td align="center">46.1</td>
<td align="center">139797668</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x/139797668/model_final_be35db.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x/139797668/metrics.json">metrics</a></td>
</tr>
<!-- ROW: cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv -->
 <tr><td align="left"><a href="configs/Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml">Mask R-CNN X152</a></td>
<td align="center">0.234</td>
<td align="center">15.1</td>
<td align="center">50.2</td>
<td align="center">44.0</td>
<td align="center"></td>
<td align="center">18131413</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv/18131413/model_0039999_e76410.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv/18131413/metrics.json">metrics</a></td>
</tr>
<!-- ROW: TTA cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv -->
 <tr><td align="left">above + test-time aug.</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">51.9</td>
<td align="center">45.9</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</tbody></table>


================================================
FILE: detectron2/README.md
================================================
<img src=".github/Detectron2-Logo-Horz.svg" width="300" >

<a href="https://opensource.facebook.com/support-ukraine">
  <img src="https://img.shields.io/badge/Support-Ukraine-FFD500?style=flat&labelColor=005BBB" alt="Support Ukraine - Help Provide Humanitarian Aid to Ukraine." />
</a>

Detectron2 is Facebook AI Research's next generation library
that provides state-of-the-art detection and segmentation algorithms.
It is the successor of
[Detectron](https://github.com/facebookresearch/Detectron/)
and [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/).
It supports a number of computer vision research projects and production applications at Facebook.

<div align="center">
  <img src="https://user-images.githubusercontent.com/1381301/66535560-d3422200-eace-11e9-9123-5535d469db19.png"/>
</div>
<br>

## Learn More about Detectron2

Explain Like I’m 5: Detectron2            |  Using Machine Learning with Detectron2
:-------------------------:|:-------------------------:
[![Explain Like I’m 5: Detectron2](https://img.youtube.com/vi/1oq1Ye7dFqc/0.jpg)](https://www.youtube.com/watch?v=1oq1Ye7dFqc)  |  [![Using Machine Learning with Detectron2](https://img.youtube.com/vi/eUSgtfK4ivk/0.jpg)](https://www.youtube.com/watch?v=eUSgtfK4ivk)

## What's New
* Includes new capabilities such as panoptic segmentation, Densepose, Cascade R-CNN, rotated bounding boxes, PointRend,
  DeepLab, ViTDet, MViTv2 etc.
* Can be used as a library to support building [research projects](projects/) on top of it.
* Models can be exported to TorchScript format or Caffe2 format for deployment.
* It [trains much faster](https://detectron2.readthedocs.io/notes/benchmarks.html).

See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/)
for more demos and to learn more about Detectron2.

## Installation

See [installation instructions](https://detectron2.readthedocs.io/tutorials/install.html).

## Getting Started

See [Getting Started with Detectron2](https://detectron2.readthedocs.io/tutorials/getting_started.html),
and the [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
to learn about basic usage.

Learn more at our [documentation](https://detectron2.readthedocs.org).
And see [projects/](projects/) for some projects that are built on top of detectron2.

## Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the [Detectron2 Model Zoo](MODEL_ZOO.md).

## License

Detectron2 is released under the [Apache 2.0 license](LICENSE).

## Citing Detectron2

If you use Detectron2 in your research or wish to refer to the baseline results published in the [Model Zoo](MODEL_ZOO.md), please use the following BibTeX entry.

```BibTeX
@misc{wu2019detectron2,
  author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
                  Wan-Yen Lo and Ross Girshick},
  title =        {Detectron2},
  howpublished = {\url{https://github.com/facebookresearch/detectron2}},
  year =         {2019}
}
```


================================================
FILE: detectron2/configs/Base-RCNN-C4.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  RPN:
    PRE_NMS_TOPK_TEST: 6000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NAME: "Res5ROIHeads"
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
VERSION: 2


================================================
FILE: detectron2/configs/Base-RCNN-DilatedC5.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  RESNETS:
    OUT_FEATURES: ["res5"]
    RES5_DILATION: 2
  RPN:
    IN_FEATURES: ["res5"]
    PRE_NMS_TOPK_TEST: 6000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NAME: "StandardROIHeads"
    IN_FEATURES: ["res5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
  ROI_MASK_HEAD:
    NAME: "MaskRCNNConvUpsampleHead"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
VERSION: 2


================================================
FILE: detectron2/configs/Base-RCNN-FPN.yaml
================================================
MODEL:
  META_ARCHITECTURE: "GeneralizedRCNN"
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]
  FPN:
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: [[32], [64], [128], [256], [512]]  # One size for each in feature map
    ASPECT_RATIOS: [[0.5, 1.0, 2.0]]  # Three aspect ratios (same for all in feature maps)
  RPN:
    IN_FEATURES: ["p2", "p3", "p4", "p5", "p6"]
    PRE_NMS_TOPK_TRAIN: 2000  # Per FPN level
    PRE_NMS_TOPK_TEST: 1000  # Per FPN level
    # Detectron1 uses 2000 proposals per-batch,
    # (See "modeling/rpn/rpn_outputs.py" for details of this legacy issue)
    # which is approximately 1000 proposals per-image since the default batch size for FPN is 2.
    POST_NMS_TOPK_TRAIN: 1000
    POST_NMS_TOPK_TEST: 1000
  ROI_HEADS:
    NAME: "StandardROIHeads"
    IN_FEATURES: ["p2", "p3", "p4", "p5"]
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
  ROI_MASK_HEAD:
    NAME: "MaskRCNNConvUpsampleHead"
    NUM_CONV: 4
    POOLER_RESOLUTION: 14
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.02
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
VERSION: 2


================================================
FILE: detectron2/configs/Base-RetinaNet.yaml
================================================
MODEL:
  META_ARCHITECTURE: "RetinaNet"
  BACKBONE:
    NAME: "build_retinanet_resnet_fpn_backbone"
  RESNETS:
    OUT_FEATURES: ["res3", "res4", "res5"]
  ANCHOR_GENERATOR:
    SIZES: !!python/object/apply:eval ["[[x, x * 2**(1.0/3), x * 2**(2.0/3) ] for x in [32, 64, 128, 256, 512 ]]"]
  FPN:
    IN_FEATURES: ["res3", "res4", "res5"]
  RETINANET:
    IOU_THRESHOLDS: [0.4, 0.5]
    IOU_LABELS: [0, -1, 1]
    SMOOTH_L1_LOSS_BETA: 0.0
DATASETS:
  TRAIN: ("coco_2017_train",)
  TEST: ("coco_2017_val",)
SOLVER:
  IMS_PER_BATCH: 16
  BASE_LR: 0.01  # Note that RetinaNet uses a different default learning rate
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
VERSION: 2
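
A note on the `ANCHOR_GENERATOR.SIZES` line above: the `!!python/object/apply:eval` YAML tag makes the loader evaluate the quoted Python expression, which expands each base size into three anchor scales spaced by `2^(1/3)` (one octave split in thirds, as in the RetinaNet paper). The expression can be reproduced in plain Python without detectron2:

```python
# Reproduce the anchor-size expression from Base-RetinaNet.yaml.
# Each base size x gets three scales spaced by a factor of 2**(1/3).
sizes = [[x, x * 2 ** (1.0 / 3), x * 2 ** (2.0 / 3)] for x in [32, 64, 128, 256, 512]]

for row in sizes:
    print([round(s, 1) for s in row])
# First row: [32, 40.3, 50.8]
```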


================================================
FILE: detectron2/configs/COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  LOAD_PROPOSALS: True
  RESNETS:
    DEPTH: 50
  PROPOSAL_GENERATOR:
    NAME: "PrecomputedProposals"
DATASETS:
  TRAIN: ("coco_2017_train",)
  PROPOSAL_FILES_TRAIN: ("detectron2://COCO-Detection/rpn_R_50_FPN_1x/137258492/coco_2017_train_box_proposals_21bc3a.pkl", )
  TEST: ("coco_2017_val",)
  PROPOSAL_FILES_TEST: ("detectron2://COCO-Detection/rpn_R_50_FPN_1x/137258492/coco_2017_val_box_proposals_ee0dad.pkl", )
DATALOADER:
  # proposals are part of the dataset_dicts, and take a lot of RAM
  NUM_WORKERS: 2
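
The `_BASE_` key used by this and the following configs makes a YAML file inherit from a parent config, with the child's keys recursively overriding the parent's. Detectron2 implements this on top of yacs (`CfgNode.merge_from_file`); the sketch below illustrates only the merge semantics with a hypothetical `deep_merge` helper, not the actual detectron2 code:

```python
def deep_merge(base: dict, child: dict) -> dict:
    """Recursively overlay `child` onto `base`, as `_BASE_` inheritance does."""
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Toy versions of Base-RCNN-FPN.yaml and a child config that sets _BASE_ to it.
base = {"MODEL": {"META_ARCHITECTURE": "GeneralizedRCNN",
                  "RPN": {"PRE_NMS_TOPK_TEST": 6000}},
        "SOLVER": {"BASE_LR": 0.02}}
child = {"MODEL": {"MASK_ON": False, "RESNETS": {"DEPTH": 50}}}

cfg = deep_merge(base, child)
# Parent keys survive; child keys are added or overridden in place.
```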


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_101_C4_3x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_101_DC5_3x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_C4_3x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_DC5_1x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  MASK_ON: False
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  PIXEL_STD: [57.375, 57.120, 58.395]
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/fcos_R_50_FPN_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.fcos import model
from ..common.train import train

dataloader.train.mapper.use_instance_mask = False
optimizer.lr = 0.01

model.backbone.bottom_up.freeze_at = 2
train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-Detection/retinanet_R_101_FPN_3x.yaml
================================================
_BASE_: "../Base-RetinaNet.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/retinanet_R_50_FPN_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.retinanet import model
from ..common.train import train

dataloader.train.mapper.use_instance_mask = False
model.backbone.bottom_up.freeze_at = 2
optimizer.lr = 0.01

train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RetinaNet.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-Detection/retinanet_R_50_FPN_3x.yaml
================================================
_BASE_: "../Base-RetinaNet.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Detection/rpn_R_50_C4_1x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  META_ARCHITECTURE: "ProposalNetwork"
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  RPN:
    PRE_NMS_TOPK_TEST: 12000
    POST_NMS_TOPK_TEST: 2000


================================================
FILE: detectron2/configs/COCO-Detection/rpn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  META_ARCHITECTURE: "ProposalNetwork"
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  RPN:
    POST_NMS_TOPK_TEST: 2000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.py
================================================
from ..common.train import train
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.mask_rcnn_c4 import model

model.backbone.freeze_at = 2
train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x.yaml
================================================
_BASE_: "../Base-RCNN-DilatedC5.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.mask_rcnn_fpn import model
from ..common.train import train

model.backbone.bottom_up.freeze_at = 2
train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x_giou.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  RPN:
    BBOX_REG_LOSS_TYPE: "giou"
    BBOX_REG_LOSS_WEIGHT: 2.0
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: "giou"
    BBOX_REG_LOSS_WEIGHT: 10.0


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  MASK_ON: True
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  PIXEL_STD: [57.375, 57.120, 58.395]
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.mask_rcnn_fpn import model
from ..common.train import train

from detectron2.config import LazyCall as L
from detectron2.modeling.backbone import RegNet
from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock


# Replace default ResNet with RegNetX-4GF from the DDS paper. Config source:
# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnetx/RegNetX-4.0GF_dds_8gpu.yaml#L4-L9  # noqa
model.backbone.bottom_up = L(RegNet)(
    stem_class=SimpleStem,
    stem_width=32,
    block_class=ResBottleneckBlock,
    depth=23,
    w_a=38.65,
    w_0=96,
    w_m=2.43,
    group_width=40,
    freeze_at=2,
    norm="FrozenBN",
    out_features=["s1", "s2", "s3", "s4"],
)
model.pixel_std = [57.375, 57.120, 58.395]

optimizer.weight_decay = 5e-5
train.init_checkpoint = (
    "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906383/RegNetX-4.0GF_dds_8gpu.pyth"
)
# RegNets benefit from enabling cudnn benchmark mode
train.cudnn_benchmark = True
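
`LazyCall` (imported as `L` above) records a class together with its constructor arguments without instantiating it, so a config stays a declarative, editable object until `instantiate` is called on it. The toy reimplementation below illustrates the idea only; the real classes live in `detectron2.config` and are more featureful:

```python
class LazyCall:
    """Record a callable plus kwargs; build the object only on demand."""
    def __init__(self, target):
        self.target = target

    def __call__(self, **kwargs):
        return {"_target_": self.target, **kwargs}


def instantiate(node):
    """Build the object described by a LazyCall-style node."""
    kwargs = {k: v for k, v in node.items() if k != "_target_"}
    return node["_target_"](**kwargs)


class Backbone:
    """Stand-in for a real backbone such as RegNet."""
    def __init__(self, depth, freeze_at):
        self.depth, self.freeze_at = depth, freeze_at


node = LazyCall(Backbone)(depth=23, freeze_at=2)
node["freeze_at"] = 4            # still editable before instantiation
model = instantiate(node)        # only now is Backbone constructed
```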


================================================
FILE: detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_regnety_4gf_dds_fpn_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco import dataloader
from ..common.models.mask_rcnn_fpn import model
from ..common.train import train

from detectron2.config import LazyCall as L
from detectron2.modeling.backbone import RegNet
from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock


# Replace default ResNet with RegNetY-4GF from the DDS paper. Config source:
# https://github.com/facebookresearch/pycls/blob/2c152a6e5d913e898cca4f0a758f41e6b976714d/configs/dds_baselines/regnety/RegNetY-4.0GF_dds_8gpu.yaml#L4-L10  # noqa
model.backbone.bottom_up = L(RegNet)(
    stem_class=SimpleStem,
    stem_width=32,
    block_class=ResBottleneckBlock,
    depth=22,
    w_a=31.41,
    w_0=96,
    w_m=2.24,
    group_width=64,
    se_ratio=0.25,
    freeze_at=2,
    norm="FrozenBN",
    out_features=["s1", "s2", "s3", "s4"],
)
model.pixel_std = [57.375, 57.120, 58.395]

optimizer.weight_decay = 5e-5
train.init_checkpoint = (
    "https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906838/RegNetY-4.0GF_dds_8gpu.pyth"
)
# RegNets benefit from enabling cudnn benchmark mode
train.cudnn_benchmark = True


================================================
FILE: detectron2/configs/COCO-Keypoints/Base-Keypoint-RCNN-FPN.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  KEYPOINT_ON: True
  ROI_HEADS:
    NUM_CLASSES: 1
  ROI_BOX_HEAD:
    SMOOTH_L1_BETA: 0.5  # Keypoint AP degrades (though box AP improves) when using plain L1 loss
  RPN:
    # Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2.
    # 1000 proposals per-image is found to hurt box AP.
    # Therefore we increase it to 1500 per-image.
    POST_NMS_TOPK_TRAIN: 1500
DATASETS:
  TRAIN: ("keypoints_coco_2017_train",)
  TEST: ("keypoints_coco_2017_val",)


================================================
FILE: detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml
================================================
_BASE_: "Base-Keypoint-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco_keypoint import dataloader
from ..common.models.keypoint_rcnn_fpn import model
from ..common.train import train

model.backbone.bottom_up.freeze_at = 2
train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "Base-Keypoint-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml
================================================
_BASE_: "Base-Keypoint-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x.yaml
================================================
_BASE_: "Base-Keypoint-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  PIXEL_STD: [57.375, 57.120, 58.395]
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-PanopticSegmentation/Base-Panoptic-FPN.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  META_ARCHITECTURE: "PanopticFPN"
  MASK_ON: True
  SEM_SEG_HEAD:
    LOSS_WEIGHT: 0.5
DATASETS:
  TRAIN: ("coco_2017_train_panoptic_separated",)
  TEST: ("coco_2017_val_panoptic_separated",)
DATALOADER:
  FILTER_EMPTY_ANNOTATIONS: False


================================================
FILE: detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml
================================================
_BASE_: "Base-Panoptic-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  RESNETS:
    DEPTH: 101
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py
================================================
from ..common.optim import SGD as optimizer
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.data.coco_panoptic_separated import dataloader
from ..common.models.panoptic_fpn import model
from ..common.train import train

model.backbone.bottom_up.freeze_at = 2
train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"


================================================
FILE: detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.yaml
================================================
_BASE_: "Base-Panoptic-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50


================================================
FILE: detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml
================================================
_BASE_: "Base-Panoptic-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/Cityscapes/mask_rcnn_R_50_FPN.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  # WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  # For better, more stable performance initialize from COCO
  WEIGHTS: "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
  MASK_ON: True
  ROI_HEADS:
    NUM_CLASSES: 8
# This is similar to the setting used in Mask R-CNN paper, Appendix A
# But there are some differences, e.g., we did not initialize the output
# layer using the corresponding classes from COCO
INPUT:
  MIN_SIZE_TRAIN: (800, 832, 864, 896, 928, 960, 992, 1024)
  MIN_SIZE_TRAIN_SAMPLING: "choice"
  MIN_SIZE_TEST: 1024
  MAX_SIZE_TRAIN: 2048
  MAX_SIZE_TEST: 2048
DATASETS:
  TRAIN: ("cityscapes_fine_instance_seg_train",)
  TEST: ("cityscapes_fine_instance_seg_val",)
SOLVER:
  BASE_LR: 0.01
  STEPS: (18000,)
  MAX_ITER: 24000
  IMS_PER_BATCH: 8
TEST:
  EVAL_PERIOD: 8000


================================================
FILE: detectron2/configs/Detectron1-Comparisons/README.md
================================================

Detectron2's model zoo uses experimental settings and a few implementation details that differ from Detectron.

The differences in implementation details are shared in
[Compatibility with Other Libraries](../../docs/notes/compatibility.md).

The differences in model zoo's experimental settings include:
* Use scale augmentation during training. This improves AP with lower training cost.
* Use L1 loss instead of smooth L1 loss for simplicity. This sometimes improves box AP but may
  affect other AP.
* Use `POOLER_SAMPLING_RATIO=0` instead of 2. This does not significantly affect AP.
* Use `ROIAlignV2`. This does not significantly affect AP.

In this directory, we provide a few configs that __do not__ have the above changes.
They mimic Detectron's behavior as closely as possible,
and provide a fair comparison of accuracy and speed against Detectron.

<!--
./gen_html_table.py --config 'Detectron1-Comparisons/*.yaml' --name "Faster R-CNN" "Keypoint R-CNN" "Mask R-CNN" --fields lr_sched train_speed inference_speed mem box_AP mask_AP keypoint_AP --base-dir ../../../configs/Detectron1-Comparisons
-->


<table><tbody>
<!-- START TABLE -->
<!-- TABLE HEADER -->
<th valign="bottom">Name</th>
<th valign="bottom">lr<br/>sched</th>
<th valign="bottom">train<br/>time<br/>(s/iter)</th>
<th valign="bottom">inference<br/>time<br/>(s/im)</th>
<th valign="bottom">train<br/>mem<br/>(GB)</th>
<th valign="bottom">box<br/>AP</th>
<th valign="bottom">mask<br/>AP</th>
<th valign="bottom">kp.<br/>AP</th>
<th valign="bottom">model id</th>
<th valign="bottom">download</th>
<!-- TABLE BODY -->
<!-- ROW: faster_rcnn_R_50_FPN_noaug_1x -->
 <tr><td align="left"><a href="faster_rcnn_R_50_FPN_noaug_1x.yaml">Faster R-CNN</a></td>
<td align="center">1x</td>
<td align="center">0.219</td>
<td align="center">0.038</td>
<td align="center">3.1</td>
<td align="center">36.9</td>
<td align="center"></td>
<td align="center"></td>
<td align="center">137781054</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x/137781054/model_final_7ab50c.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x/137781054/metrics.json">metrics</a></td>
</tr>
<!-- ROW: keypoint_rcnn_R_50_FPN_1x -->
 <tr><td align="left"><a href="keypoint_rcnn_R_50_FPN_1x.yaml">Keypoint R-CNN</a></td>
<td align="center">1x</td>
<td align="center">0.313</td>
<td align="center">0.071</td>
<td align="center">5.0</td>
<td align="center">53.1</td>
<td align="center"></td>
<td align="center">64.2</td>
<td align="center">137781195</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x/137781195/model_final_cce136.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x/137781195/metrics.json">metrics</a></td>
</tr>
<!-- ROW: mask_rcnn_R_50_FPN_noaug_1x -->
 <tr><td align="left"><a href="mask_rcnn_R_50_FPN_noaug_1x.yaml">Mask R-CNN</a></td>
<td align="center">1x</td>
<td align="center">0.273</td>
<td align="center">0.043</td>
<td align="center">3.4</td>
<td align="center">37.8</td>
<td align="center">34.9</td>
<td align="center"></td>
<td align="center">137781281</td>
<td align="center"><a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x/137781281/model_final_62ca52.pkl">model</a>&nbsp;|&nbsp;<a href="https://dl.fbaipublicfiles.com/detectron2/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x/137781281/metrics.json">metrics</a></td>
</tr>
</tbody></table>

## Comparisons

* Faster R-CNN: Detectron's AP is 36.7, similar to ours.
* Keypoint R-CNN: Detectron's AP is box 53.6, keypoint 64.2. Fixing a Detectron
  [bug](https://github.com/facebookresearch/Detectron/issues/459) leads to a drop in box AP,
  which can be compensated for by some parameter tuning.
* Mask R-CNN: Detectron's AP is box 37.7, mask 33.9. We're 1 AP better in mask AP, due to a more correct implementation.
  See [this article](https://ppwwyyxx.com/blog/2021/Where-are-Pixels/) for details.

For speed comparison, see [benchmarks](https://detectron2.readthedocs.io/notes/benchmarks.html).


================================================
FILE: detectron2/configs/Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  # Detectron1 uses smooth L1 loss with some magic beta values.
  # The defaults are changed to L1 loss in Detectron2.
  RPN:
    SMOOTH_L1_BETA: 0.1111
  ROI_BOX_HEAD:
    SMOOTH_L1_BETA: 1.0
    POOLER_SAMPLING_RATIO: 2
    POOLER_TYPE: "ROIAlign"
INPUT:
  # no scale augmentation
  MIN_SIZE_TRAIN: (800, )


================================================
FILE: detectron2/configs/Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  KEYPOINT_ON: True
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 1
  ROI_KEYPOINT_HEAD:
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 2
    POOLER_TYPE: "ROIAlign"
  # Detectron1 uses smooth L1 loss with some magic beta values.
  # The defaults are changed to L1 loss in Detectron2.
  ROI_BOX_HEAD:
    SMOOTH_L1_BETA: 1.0
    POOLER_SAMPLING_RATIO: 2
    POOLER_TYPE: "ROIAlign"
  RPN:
    SMOOTH_L1_BETA: 0.1111
    # Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2
    # 1000 proposals per-image is found to hurt box AP.
    # Therefore we increase it to 1500 per-image.
    POST_NMS_TOPK_TRAIN: 1500
DATASETS:
  TRAIN: ("keypoints_coco_2017_train",)
  TEST: ("keypoints_coco_2017_val",)


================================================
FILE: detectron2/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  # Detectron1 uses smooth L1 loss with some magic beta values.
  # The defaults are changed to L1 loss in Detectron2.
  RPN:
    SMOOTH_L1_BETA: 0.1111
  ROI_BOX_HEAD:
    SMOOTH_L1_BETA: 1.0
    POOLER_SAMPLING_RATIO: 2
    POOLER_TYPE: "ROIAlign"
  ROI_MASK_HEAD:
    POOLER_SAMPLING_RATIO: 2
    POOLER_TYPE: "ROIAlign"
INPUT:
  # no scale augmentation
  MIN_SIZE_TRAIN: (800, )


================================================
FILE: detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 101
  ROI_HEADS:
    NUM_CLASSES: 1230
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v0.5_train",)
  TEST: ("lvis_v0.5_val",)
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 1230
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v0.5_train",)
  TEST: ("lvis_v0.5_val",)
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  PIXEL_STD: [57.375, 57.120, 58.395]
  MASK_ON: True
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 101
  ROI_HEADS:
    NUM_CLASSES: 1230
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v0.5_train",)
  TEST: ("lvis_v0.5_val",)
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_R_101_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 101
  ROI_HEADS:
    NUM_CLASSES: 1203
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v1_train",)
  TEST: ("lvis_v1_val",)
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
SOLVER:
  STEPS: (120000, 160000)
  MAX_ITER: 180000  # 180000 * 16 / 100000 ~ 28.8 epochs
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 1203
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v1_train",)
  TEST: ("lvis_v1_val",)
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
SOLVER:
  STEPS: (120000, 160000)
  MAX_ITER: 180000  # 180000 * 16 / 100000 ~ 28.8 epochs
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  PIXEL_STD: [57.375, 57.120, 58.395]
  MASK_ON: True
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 101
  ROI_HEADS:
    NUM_CLASSES: 1203
    SCORE_THRESH_TEST: 0.0001
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
DATASETS:
  TRAIN: ("lvis_v1_train",)
  TEST: ("lvis_v1_val",)
SOLVER:
  STEPS: (120000, 160000)
  MAX_ITER: 180000  # 180000 * 16 / 100000 ~ 28.8 epochs
TEST:
  DETECTIONS_PER_IMAGE: 300  # LVIS allows up to 300
DATALOADER:
  SAMPLER_TRAIN: "RepeatFactorTrainingSampler"
  REPEAT_THRESHOLD: 0.001


================================================
FILE: detectron2/configs/Misc/cascade_mask_rcnn_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NAME: CascadeROIHeads
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
  RPN:
    POST_NMS_TOPK_TRAIN: 2000


================================================
FILE: detectron2/configs/Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NAME: CascadeROIHeads
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
  RPN:
    POST_NMS_TOPK_TRAIN: 2000
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  MASK_ON: True
  WEIGHTS: "catalog://ImageNetPretrained/FAIR/X-152-32x8d-IN5k"
  RESNETS:
    STRIDE_IN_1X1: False  # this is a C2 model
    NUM_GROUPS: 32
    WIDTH_PER_GROUP: 8
    DEPTH: 152
    DEFORM_ON_PER_STAGE: [False, True, True, True]
  ROI_HEADS:
    NAME: "CascadeROIHeads"
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "GN"
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NUM_CONV: 8
    NORM: "GN"
  RPN:
    POST_NMS_TOPK_TRAIN: 2000
SOLVER:
  IMS_PER_BATCH: 128
  STEPS: (35000, 45000)
  MAX_ITER: 50000
  BASE_LR: 0.16
INPUT:
  MIN_SIZE_TRAIN: (640, 864)
  MIN_SIZE_TRAIN_SAMPLING: "range"
  MAX_SIZE_TRAIN: 1440
  CROP:
    ENABLED: True
TEST:
  EVAL_PERIOD: 2500


================================================
FILE: detectron2/configs/Misc/mask_rcnn_R_50_FPN_1x_cls_agnostic.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: True


================================================
FILE: detectron2/configs/Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: False


================================================
FILE: detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
    DEFORM_ON_PER_STAGE: [False, True, True, True] # on Res3,Res4,Res5
    DEFORM_MODULATED: False
SOLVER:
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_gn.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "catalog://ImageNetPretrained/FAIR/R-50-GN"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
    NORM: "GN"
    STRIDE_IN_1X1: False
  FPN:
    NORM: "GN"
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "GN"
  ROI_MASK_HEAD:
    NORM: "GN"
SOLVER:
  # 3x schedule
  STEPS: (210000, 250000)
  MAX_ITER: 270000


================================================
FILE: detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_syncbn.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: True
  RESNETS:
    DEPTH: 50
    NORM: "SyncBN"
    STRIDE_IN_1X1: True
  FPN:
    NORM: "SyncBN"
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_CONV: 4
    NUM_FC: 1
    NORM: "SyncBN"
  ROI_MASK_HEAD:
    NORM: "SyncBN"
SOLVER:
  # 3x schedule
  STEPS: (210000, 250000)
  MAX_ITER: 270000
TEST:
  PRECISE_BN:
    ENABLED: True


================================================
FILE: detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py
================================================
# An example config to train an mmdetection model using detectron2.

from ..common.data.coco import dataloader
from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
from ..common.optim import SGD as optimizer
from ..common.train import train
from ..common.data.constants import constants

from detectron2.modeling.mmdet_wrapper import MMDetDetector
from detectron2.config import LazyCall as L

model = L(MMDetDetector)(
    detector=dict(
        type="MaskRCNN",
        pretrained="torchvision://resnet50",
        backbone=dict(
            type="ResNet",
            depth=50,
            num_stages=4,
            out_indices=(0, 1, 2, 3),
            frozen_stages=1,
            norm_cfg=dict(type="BN", requires_grad=True),
            norm_eval=True,
            style="pytorch",
        ),
        neck=dict(type="FPN", in_channels=[256, 512, 1024, 2048], out_channels=256, num_outs=5),
        rpn_head=dict(
            type="RPNHead",
            in_channels=256,
            feat_channels=256,
            anchor_generator=dict(
                type="AnchorGenerator",
                scales=[8],
                ratios=[0.5, 1.0, 2.0],
                strides=[4, 8, 16, 32, 64],
            ),
            bbox_coder=dict(
                type="DeltaXYWHBBoxCoder",
                target_means=[0.0, 0.0, 0.0, 0.0],
                target_stds=[1.0, 1.0, 1.0, 1.0],
            ),
            loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=True, loss_weight=1.0),
            loss_bbox=dict(type="L1Loss", loss_weight=1.0),
        ),
        roi_head=dict(
            type="StandardRoIHead",
            bbox_roi_extractor=dict(
                type="SingleRoIExtractor",
                roi_layer=dict(type="RoIAlign", output_size=7, sampling_ratio=0),
                out_channels=256,
                featmap_strides=[4, 8, 16, 32],
            ),
            bbox_head=dict(
                type="Shared2FCBBoxHead",
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type="DeltaXYWHBBoxCoder",
                    target_means=[0.0, 0.0, 0.0, 0.0],
                    target_stds=[0.1, 0.1, 0.2, 0.2],
                ),
                reg_class_agnostic=False,
                loss_cls=dict(type="CrossEntropyLoss", use_sigmoid=False, loss_weight=1.0),
                loss_bbox=dict(type="L1Loss", loss_weight=1.0),
            ),
            mask_roi_extractor=dict(
                type="SingleRoIExtractor",
                roi_layer=dict(type="RoIAlign", output_size=14, sampling_ratio=0),
                out_channels=256,
                featmap_strides=[4, 8, 16, 32],
            ),
            mask_head=dict(
                type="FCNMaskHead",
                num_convs=4,
                in_channels=256,
                conv_out_channels=256,
                num_classes=80,
                loss_mask=dict(type="CrossEntropyLoss", use_mask=True, loss_weight=1.0),
            ),
        ),
        # model training and testing settings
        train_cfg=dict(
            rpn=dict(
                assigner=dict(
                    type="MaxIoUAssigner",
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.3,
                    min_pos_iou=0.3,
                    match_low_quality=True,
                    ignore_iof_thr=-1,
                ),
                sampler=dict(
                    type="RandomSampler",
                    num=256,
                    pos_fraction=0.5,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=False,
                ),
                allowed_border=-1,
                pos_weight=-1,
                debug=False,
            ),
            rpn_proposal=dict(
                nms_pre=2000,
                max_per_img=1000,
                nms=dict(type="nms", iou_threshold=0.7),
                min_bbox_size=0,
            ),
            rcnn=dict(
                assigner=dict(
                    type="MaxIoUAssigner",
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=True,
                    ignore_iof_thr=-1,
                ),
                sampler=dict(
                    type="RandomSampler",
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True,
                ),
                mask_size=28,
                pos_weight=-1,
                debug=False,
            ),
        ),
        test_cfg=dict(
            rpn=dict(
                nms_pre=1000,
                max_per_img=1000,
                nms=dict(type="nms", iou_threshold=0.7),
                min_bbox_size=0,
            ),
            rcnn=dict(
                score_thr=0.05,
                nms=dict(type="nms", iou_threshold=0.5),
                max_per_img=100,
                mask_thr_binary=0.5,
            ),
        ),
    ),
    pixel_mean=constants.imagenet_rgb256_mean,
    pixel_std=constants.imagenet_rgb256_std,
)

dataloader.train.mapper.image_format = "RGB"  # torchvision pretrained model
train.init_checkpoint = None  # pretrained model is loaded inside backbone


================================================
FILE: detectron2/configs/Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml
================================================
# A large PanopticFPN for demo purposes.
# Use GN on backbone to support semantic seg.
# Use Cascade + Deform Conv to improve localization.
_BASE_: "../COCO-PanopticSegmentation/Base-Panoptic-FPN.yaml"
MODEL:
  WEIGHTS: "catalog://ImageNetPretrained/FAIR/R-101-GN"
  RESNETS:
    DEPTH: 101
    NORM: "GN"
    DEFORM_ON_PER_STAGE: [False, True, True, True]
    STRIDE_IN_1X1: False
  FPN:
    NORM: "GN"
  ROI_HEADS:
    NAME: CascadeROIHeads
  ROI_BOX_HEAD:
    CLS_AGNOSTIC_BBOX_REG: True
  ROI_MASK_HEAD:
    NORM: "GN"
  RPN:
    POST_NMS_TOPK_TRAIN: 2000
SOLVER:
  STEPS: (105000, 125000)
  MAX_ITER: 135000
  IMS_PER_BATCH: 32
  BASE_LR: 0.04


================================================
FILE: detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml
================================================
_BASE_: "mask_rcnn_R_50_FPN_3x_gn.yaml"
MODEL:
  # Train from random initialization.
  WEIGHTS: ""
  # It makes sense to divide by STD when training from scratch,
  # but it seems to make no difference in the results, and C2's models didn't do this,
  # so we keep things consistent with C2.
  # PIXEL_STD: [57.375, 57.12, 58.395]
  MASK_ON: True
  BACKBONE:
    FREEZE_AT: 0
# NOTE: Please refer to Rethinking ImageNet Pre-training https://arxiv.org/abs/1811.08883
# to learn what you need for training from scratch.


================================================
FILE: detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_gn.yaml
================================================
_BASE_: "mask_rcnn_R_50_FPN_3x_gn.yaml"
MODEL:
  PIXEL_STD: [57.375, 57.12, 58.395]
  WEIGHTS: ""
  MASK_ON: True
  RESNETS:
    STRIDE_IN_1X1: False
  BACKBONE:
    FREEZE_AT: 0
SOLVER:
  # 9x schedule
  IMS_PER_BATCH: 64  # 4x the standard
  STEPS: (187500, 197500)  # last 60/4==15k and last 20/4==5k
  MAX_ITER: 202500   # 90k * 9 / 4
  BASE_LR: 0.08
TEST:
  EVAL_PERIOD: 2500
# NOTE: Please refer to Rethinking ImageNet Pre-training https://arxiv.org/abs/1811.08883
# to learn what you need for training from scratch.


================================================
FILE: detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml
================================================
_BASE_: "mask_rcnn_R_50_FPN_3x_syncbn.yaml"
MODEL:
  PIXEL_STD: [57.375, 57.12, 58.395]
  WEIGHTS: ""
  MASK_ON: True
  RESNETS:
    STRIDE_IN_1X1: False
  BACKBONE:
    FREEZE_AT: 0
SOLVER:
  # 9x schedule
  IMS_PER_BATCH: 64  # 4x the standard
  STEPS: (187500, 197500)  # last 60/4==15k and last 20/4==5k
  MAX_ITER: 202500   # 90k * 9 / 4
  BASE_LR: 0.08
TEST:
  EVAL_PERIOD: 2500
# NOTE: Please refer to Rethinking ImageNet Pre-training https://arxiv.org/abs/1811.08883
# to learn what you need for training from scratch.


================================================
FILE: detectron2/configs/Misc/semantic_R_50_FPN_1x.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  META_ARCHITECTURE: "SemanticSegmentor"
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  RESNETS:
    DEPTH: 50
DATASETS:
  TRAIN: ("coco_2017_train_panoptic_stuffonly",)
  TEST: ("coco_2017_val_panoptic_stuffonly",)
INPUT:
  MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)


================================================
FILE: detectron2/configs/Misc/torchvision_imagenet_R_50.py
================================================
"""
An example config file to train an ImageNet classifier with detectron2.
Model and dataloader both come from torchvision.
This shows how to use detectron2 as a general engine for any new models and tasks.

To run, use the following command:

python tools/lazyconfig_train_net.py --config-file configs/Misc/torchvision_imagenet_R_50.py \
    --num-gpus 8 dataloader.train.dataset.root=/path/to/imagenet/

"""


import torch
from torch import nn
from torch.nn import functional as F
from omegaconf import OmegaConf
import torchvision
from torchvision.transforms import transforms as T
from torchvision.models.resnet import ResNet, Bottleneck
from fvcore.common.param_scheduler import MultiStepParamScheduler

from detectron2.solver import WarmupParamScheduler
from detectron2.solver.build import get_default_optimizer_params
from detectron2.config import LazyCall as L
from detectron2.model_zoo import get_config
from detectron2.data.samplers import TrainingSampler, InferenceSampler
from detectron2.evaluation import DatasetEvaluator
from detectron2.utils import comm


"""
Note: Here we put reusable code (models, evaluation, data) together with configs just as a
proof-of-concept, to easily demonstrate what's needed to train an ImageNet classifier in detectron2.
Writing code in configs offers extreme flexibility but is often not good engineering practice.
In practice, you might want to put such code in your project and import it instead.
"""


def build_data_loader(dataset, batch_size, num_workers, training=True):
    return torch.utils.data.DataLoader(
        dataset,
        sampler=(TrainingSampler if training else InferenceSampler)(len(dataset)),
        batch_size=batch_size,
        num_workers=num_workers,
        pin_memory=True,
    )


class ClassificationNet(nn.Module):
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    @property
    def device(self):
        return list(self.model.parameters())[0].device

    def forward(self, inputs):
        image, label = inputs
        pred = self.model(image.to(self.device))
        if self.training:
            label = label.to(self.device)
            return F.cross_entropy(pred, label)
        else:
            return pred


class ClassificationAcc(DatasetEvaluator):
    def reset(self):
        self.corr = self.total = 0

    def process(self, inputs, outputs):
        image, label = inputs
        self.corr += (outputs.argmax(dim=1).cpu() == label.cpu()).sum().item()
        self.total += len(label)

    def evaluate(self):
        all_corr_total = comm.all_gather([self.corr, self.total])
        corr = sum(x[0] for x in all_corr_total)
        total = sum(x[1] for x in all_corr_total)
        return {"accuracy": corr / total}


# --- End of code that could be in a project and be imported


dataloader = OmegaConf.create()
dataloader.train = L(build_data_loader)(
    dataset=L(torchvision.datasets.ImageNet)(
        root="/path/to/imagenet",
        split="train",
        transform=L(T.Compose)(
            transforms=[
                L(T.RandomResizedCrop)(size=224),
                L(T.RandomHorizontalFlip)(),
                T.ToTensor(),
                L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            ]
        ),
    ),
    batch_size=256 // 8,
    num_workers=4,
    training=True,
)

dataloader.test = L(build_data_loader)(
    dataset=L(torchvision.datasets.ImageNet)(
        root="${...train.dataset.root}",
        split="val",
        transform=L(T.Compose)(
            transforms=[
                L(T.Resize)(size=256),
                L(T.CenterCrop)(size=224),
                T.ToTensor(),
                L(T.Normalize)(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
            ]
        ),
    ),
    batch_size=256 // 8,
    num_workers=4,
    training=False,
)

dataloader.evaluator = L(ClassificationAcc)()

model = L(ClassificationNet)(
    model=(ResNet)(block=Bottleneck, layers=[3, 4, 6, 3], zero_init_residual=True)
)


optimizer = L(torch.optim.SGD)(
    params=L(get_default_optimizer_params)(),
    lr=0.1,
    momentum=0.9,
    weight_decay=1e-4,
)

lr_multiplier = L(WarmupParamScheduler)(
    scheduler=L(MultiStepParamScheduler)(
        values=[1.0, 0.1, 0.01, 0.001], milestones=[30, 60, 90, 100]
    ),
    warmup_length=1 / 100,
    warmup_factor=0.1,
)
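The `WarmupParamScheduler` above composes a linear warmup over the first 1/100 of training with a step decay evaluated at normalized progress. A self-contained sketch of that composition (hypothetical `lr_multiplier_at`; a simplified stand-in for the fvcore schedulers, with milestones given directly as fractions of training rather than epochs):

```python
def lr_multiplier_at(progress, values, milestones, warmup_length, warmup_factor):
    """LR multiplier at normalized training progress in [0, 1).

    Step decay: values[i] holds until progress passes milestones[i]
    (fractions of total training); the last value holds afterwards.
    Linear warmup ramps values[0] from warmup_factor * values[0] up to
    values[0] over the first warmup_length fraction of training.
    """
    value = values[-1]
    for v, m in zip(values, milestones):
        if progress < m:
            value = v
            break
    if progress < warmup_length:
        alpha = progress / warmup_length          # 0 -> 1 across the warmup
        return values[0] * (warmup_factor * (1 - alpha) + alpha)
    return value

# Milestones from the config above, rescaled from epochs to fractions of 100 epochs.
vals = [1.0, 0.1, 0.01, 0.001]
ms = [30 / 100, 60 / 100, 90 / 100]
assert lr_multiplier_at(0.5, vals, ms, 1 / 100, 0.1) == 0.1   # between epochs 30 and 60
```

This mirrors the classic ImageNet recipe: full LR for 30 epochs, then 10x drops at epochs 30, 60, and 90.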


train = get_config("common/train.py").train
train.init_checkpoint = None
train.max_iter = 100 * 1281167 // 256


================================================
FILE: detectron2/configs/PascalVOC-Detection/faster_rcnn_R_50_C4.yaml
================================================
_BASE_: "../Base-RCNN-C4.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 20
INPUT:
  MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
  MIN_SIZE_TEST: 800
DATASETS:
  TRAIN: ('voc_2007_trainval', 'voc_2012_trainval')
  TEST: ('voc_2007_test',)
SOLVER:
  STEPS: (12000, 16000)
  MAX_ITER: 18000  # 17.4 epochs
  WARMUP_ITERS: 100


================================================
FILE: detectron2/configs/PascalVOC-Detection/faster_rcnn_R_50_FPN.yaml
================================================
_BASE_: "../Base-RCNN-FPN.yaml"
MODEL:
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 50
  ROI_HEADS:
    NUM_CLASSES: 20
INPUT:
  MIN_SIZE_TRAIN: (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
  MIN_SIZE_TEST: 800
DATASETS:
  TRAIN: ('voc_2007_trainval', 'voc_2012_trainval')
  TEST: ('voc_2007_test',)
SOLVER:
  STEPS: (12000, 16000)
  MAX_ITER: 18000  # 17.4 epochs
  WARMUP_ITERS: 100


================================================
FILE: detectron2/configs/common/README.md
================================================
This directory provides definitions for a few common models, dataloaders, schedulers,
and optimizers that are often used in training.
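Every YAML config in this tree composes with the file named by its `_BASE_` key through a recursive dict merge, with child keys overriding base keys. A minimal stand-alone sketch of that merge semantics (hypothetical `merge_cfg`, not detectron2's actual loader, which also handles version compat and `_BASE_` chains):

```python
def merge_cfg(base, child):
    """Recursively merge `child` over `base`: nested dicts merge key-by-key,
    any other child value replaces the base value outright."""
    out = dict(base)
    for key, val in child.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], val)
        else:
            out[key] = val
    return out

# e.g. an LVIS config inheriting from Base-RCNN-FPN.yaml:
base = {"MODEL": {"MASK_ON": False, "RESNETS": {"DEPTH": 50}}}
child = {"MODEL": {"MASK_ON": True, "ROI_HEADS": {"NUM_CLASSES": 1203}}}
merged = merge_cfg(base, child)
assert merged["MODEL"]["RESNETS"]["DEPTH"] == 50   # inherited from the base
assert merged["MODEL"]["MASK_ON"] is True          # overridden by the child
```

This is why the LVIS and Misc configs above can stay short: they only state the keys that differ from their base file.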
│   │   │   │   ├── COCO-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_C4_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_101_DC5_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_C4_1x.py
│   │   │   │   │   ├── mask_rcnn_R_50_C4_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_C4_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_DC5_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_DC5_3x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_giou.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_fpn_1x.py
│   │   │   │   │   └── mask_rcnn_regnety_4gf_dds_fpn_1x.py
│   │   │   │   ├── COCO-Keypoints/
│   │   │   │   │   ├── Base-Keypoint-RCNN-FPN.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_101_FPN_3x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   └── keypoint_rcnn_X_101_32x8d_FPN_3x.yaml
│   │   │   │   ├── COCO-PanopticSegmentation/
│   │   │   │   │   ├── Base-Panoptic-FPN.yaml
│   │   │   │   │   ├── panoptic_fpn_R_101_3x.yaml
│   │   │   │   │   ├── panoptic_fpn_R_50_1x.py
│   │   │   │   │   ├── panoptic_fpn_R_50_1x.yaml
│   │   │   │   │   └── panoptic_fpn_R_50_3x.yaml
│   │   │   │   ├── Cityscapes/
│   │   │   │   │   └── mask_rcnn_R_50_FPN.yaml
│   │   │   │   ├── Detectron1-Comparisons/
│   │   │   │   │   ├── README.md
│   │   │   │   │   ├── faster_rcnn_R_50_FPN_noaug_1x.yaml
│   │   │   │   │   ├── keypoint_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_R_50_FPN_noaug_1x.yaml
│   │   │   │   ├── LVISv0.5-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   │   │   ├── LVISv1-InstanceSegmentation/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_1x.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   └── mask_rcnn_X_101_32x8d_FPN_1x.yaml
│   │   │   │   ├── Misc/
│   │   │   │   │   ├── cascade_mask_rcnn_R_50_FPN_1x.yaml
│   │   │   │   │   ├── cascade_mask_rcnn_R_50_FPN_3x.yaml
│   │   │   │   │   ├── cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_cls_agnostic.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_3x_syncbn.yaml
│   │   │   │   │   ├── mmdet_mask_rcnn_R_50_FPN_1x.py
│   │   │   │   │   ├── panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_3x_gn.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_gn.yaml
│   │   │   │   │   ├── scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml
│   │   │   │   │   ├── semantic_R_50_FPN_1x.yaml
│   │   │   │   │   └── torchvision_imagenet_R_50.py
│   │   │   │   ├── PascalVOC-Detection/
│   │   │   │   │   ├── faster_rcnn_R_50_C4.yaml
│   │   │   │   │   └── faster_rcnn_R_50_FPN.yaml
│   │   │   │   ├── common/
│   │   │   │   │   ├── README.md
│   │   │   │   │   ├── coco_schedule.py
│   │   │   │   │   ├── data/
│   │   │   │   │   │   ├── coco.py
│   │   │   │   │   │   ├── coco_keypoint.py
│   │   │   │   │   │   ├── coco_panoptic_separated.py
│   │   │   │   │   │   └── constants.py
│   │   │   │   │   ├── optim.py
│   │   │   │   │   └── train.py
│   │   │   │   ├── new_baselines/
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_101_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_R_50_FPN_50ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ.py
│   │   │   │   │   ├── mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py
│   │   │   │   │   └── mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
│   │   │   │   └── quick_schedules/
│   │   │   │       ├── README.md
│   │   │   │       ├── cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── cascade_mask_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── fast_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── fast_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_normalized_training_acc_test.yaml
│   │   │   │       ├── keypoint_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_GCV_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_C4_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_DC5_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_pred_boxes_training_acc_test.yaml
│   │   │   │       ├── mask_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_inference_acc_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_instant_test.yaml
│   │   │   │       ├── panoptic_fpn_R_50_training_acc_test.yaml
│   │   │   │       ├── retinanet_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── retinanet_R_50_FPN_instant_test.yaml
│   │   │   │       ├── rpn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── rpn_R_50_FPN_instant_test.yaml
│   │   │   │       ├── semantic_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── semantic_R_50_FPN_instant_test.yaml
│   │   │   │       └── semantic_R_50_FPN_training_acc_test.yaml
│   │   │   └── model_zoo.py
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── anchor_generator.py
│   │   │   ├── backbone/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── backbone.py
│   │   │   │   ├── build.py
│   │   │   │   ├── fpn.py
│   │   │   │   ├── mvit.py
│   │   │   │   ├── regnet.py
│   │   │   │   ├── resnet.py
│   │   │   │   ├── swin.py
│   │   │   │   ├── utils.py
│   │   │   │   └── vit.py
│   │   │   ├── box_regression.py
│   │   │   ├── matcher.py
│   │   │   ├── meta_arch/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── dense_detector.py
│   │   │   │   ├── fcos.py
│   │   │   │   ├── panoptic_fpn.py
│   │   │   │   ├── rcnn.py
│   │   │   │   ├── retinanet.py
│   │   │   │   └── semantic_seg.py
│   │   │   ├── mmdet_wrapper.py
│   │   │   ├── poolers.py
│   │   │   ├── postprocessing.py
│   │   │   ├── proposal_generator/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build.py
│   │   │   │   ├── proposal_utils.py
│   │   │   │   ├── rpn.py
│   │   │   │   └── rrpn.py
│   │   │   ├── roi_heads/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── box_head.py
│   │   │   │   ├── cascade_rcnn.py
│   │   │   │   ├── fast_rcnn.py
│   │   │   │   ├── keypoint_head.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── roi_heads.py
│   │   │   │   └── rotated_fast_rcnn.py
│   │   │   ├── sampling.py
│   │   │   └── test_time_augmentation.py
│   │   ├── projects/
│   │   │   ├── README.md
│   │   │   └── __init__.py
│   │   ├── solver/
│   │   │   ├── __init__.py
│   │   │   ├── build.py
│   │   │   └── lr_scheduler.py
│   │   ├── structures/
│   │   │   ├── __init__.py
│   │   │   ├── boxes.py
│   │   │   ├── image_list.py
│   │   │   ├── instances.py
│   │   │   ├── keypoints.py
│   │   │   ├── masks.py
│   │   │   └── rotated_boxes.py
│   │   ├── tracking/
│   │   │   ├── __init__.py
│   │   │   ├── base_tracker.py
│   │   │   ├── bbox_iou_tracker.py
│   │   │   ├── hungarian_tracker.py
│   │   │   ├── iou_weighted_hungarian_bbox_iou_tracker.py
│   │   │   ├── utils.py
│   │   │   └── vanilla_hungarian_bbox_iou_tracker.py
│   │   └── utils/
│   │       ├── README.md
│   │       ├── __init__.py
│   │       ├── analysis.py
│   │       ├── collect_env.py
│   │       ├── colormap.py
│   │       ├── comm.py
│   │       ├── develop.py
│   │       ├── env.py
│   │       ├── events.py
│   │       ├── file_io.py
│   │       ├── logger.py
│   │       ├── memory.py
│   │       ├── registry.py
│   │       ├── serialize.py
│   │       ├── testing.py
│   │       ├── video_visualizer.py
│   │       └── visualizer.py
│   ├── dev/
│   │   ├── README.md
│   │   ├── linter.sh
│   │   ├── packaging/
│   │   │   ├── README.md
│   │   │   ├── build_all_wheels.sh
│   │   │   ├── build_wheel.sh
│   │   │   ├── gen_install_table.py
│   │   │   ├── gen_wheel_index.sh
│   │   │   └── pkg_helpers.bash
│   │   ├── parse_results.sh
│   │   ├── run_inference_tests.sh
│   │   └── run_instant_tests.sh
│   ├── docker/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── deploy.Dockerfile
│   │   └── docker-compose.yml
│   ├── docs/
│   │   ├── .gitignore
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── _static/
│   │   │   └── css/
│   │   │       └── custom.css
│   │   ├── conf.py
│   │   ├── index.rst
│   │   ├── modules/
│   │   │   ├── checkpoint.rst
│   │   │   ├── config.rst
│   │   │   ├── data.rst
│   │   │   ├── data_transforms.rst
│   │   │   ├── engine.rst
│   │   │   ├── evaluation.rst
│   │   │   ├── export.rst
│   │   │   ├── fvcore.rst
│   │   │   ├── index.rst
│   │   │   ├── layers.rst
│   │   │   ├── model_zoo.rst
│   │   │   ├── modeling.rst
│   │   │   ├── solver.rst
│   │   │   ├── structures.rst
│   │   │   └── utils.rst
│   │   ├── notes/
│   │   │   ├── benchmarks.md
│   │   │   ├── changelog.md
│   │   │   ├── compatibility.md
│   │   │   ├── contributing.md
│   │   │   └── index.rst
│   │   ├── requirements.txt
│   │   └── tutorials/
│   │       ├── README.md
│   │       ├── augmentation.md
│   │       ├── builtin_datasets.md
│   │       ├── configs.md
│   │       ├── data_loading.md
│   │       ├── datasets.md
│   │       ├── deployment.md
│   │       ├── evaluation.md
│   │       ├── extend.md
│   │       ├── getting_started.md
│   │       ├── index.rst
│   │       ├── install.md
│   │       ├── lazyconfigs.md
│   │       ├── models.md
│   │       ├── training.md
│   │       └── write-models.md
│   ├── projects/
│   │   ├── DeepLab/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   └── Cityscapes-SemanticSegmentation/
│   │   │   │       ├── Base-DeepLabV3-OS16-Semantic.yaml
│   │   │   │       ├── deeplab_v3_R_103_os16_mg124_poly_90k_bs16.yaml
│   │   │   │       └── deeplab_v3_plus_R_103_os16_mg124_poly_90k_bs16.yaml
│   │   │   ├── deeplab/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── build_solver.py
│   │   │   │   ├── config.py
│   │   │   │   ├── loss.py
│   │   │   │   ├── lr_scheduler.py
│   │   │   │   ├── resnet.py
│   │   │   │   └── semantic_seg.py
│   │   │   └── train_net.py
│   │   ├── DensePose/
│   │   │   ├── README.md
│   │   │   ├── apply_net.py
│   │   │   ├── configs/
│   │   │   │   ├── Base-DensePose-RCNN-FPN.yaml
│   │   │   │   ├── HRNet/
│   │   │   │   │   ├── densepose_rcnn_HRFPN_HRNet_w32_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_HRFPN_HRNet_w40_s1x.yaml
│   │   │   │   │   └── densepose_rcnn_HRFPN_HRNet_w48_s1x.yaml
│   │   │   │   ├── cse/
│   │   │   │   │   ├── Base-DensePose-RCNN-FPN-Human.yaml
│   │   │   │   │   ├── Base-DensePose-RCNN-FPN.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_101_FPN_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_soft_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_CA_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_CA_finetune_4k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_i2m_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_I0_finetune_m2m_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_16k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_4k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_animals_finetune_maskonly_24k.yaml
│   │   │   │   │   ├── densepose_rcnn_R_50_FPN_soft_chimps_finetune_4k.yaml
│   │   │   │   │   └── densepose_rcnn_R_50_FPN_soft_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_DL_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_101_FPN_s1x_legacy.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_DL_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC1M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC1_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC2M_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_WC2_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x.yaml
│   │   │   │   ├── densepose_rcnn_R_50_FPN_s1x_legacy.yaml
│   │   │   │   ├── evolution/
│   │   │   │   │   ├── Base-RCNN-FPN-Atop10P_CA.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_coarsesegm.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_finesegm.yaml
│   │   │   │   │   ├── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform.yaml
│   │   │   │   │   └── densepose_R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uv.yaml
│   │   │   │   └── quick_schedules/
│   │   │   │       ├── cse/
│   │   │   │       │   ├── densepose_rcnn_R_50_FPN_DL_instant_test.yaml
│   │   │   │       │   └── densepose_rcnn_R_50_FPN_soft_animals_finetune_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_HRFPN_HRNet_w32_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_DL_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_TTA_inference_acc_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_WC1_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_WC2_instant_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_inference_acc_test.yaml
│   │   │   │       ├── densepose_rcnn_R_50_FPN_instant_test.yaml
│   │   │   │       └── densepose_rcnn_R_50_FPN_training_acc_test.yaml
│   │   │   ├── densepose/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── converters/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── base.py
│   │   │   │   │   ├── builtin.py
│   │   │   │   │   ├── chart_output_hflip.py
│   │   │   │   │   ├── chart_output_to_chart_result.py
│   │   │   │   │   ├── hflip.py
│   │   │   │   │   ├── segm_to_mask.py
│   │   │   │   │   ├── to_chart_result.py
│   │   │   │   │   └── to_mask.py
│   │   │   │   ├── data/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── build.py
│   │   │   │   │   ├── combined_loader.py
│   │   │   │   │   ├── dataset_mapper.py
│   │   │   │   │   ├── datasets/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── builtin.py
│   │   │   │   │   │   ├── chimpnsee.py
│   │   │   │   │   │   ├── coco.py
│   │   │   │   │   │   ├── dataset_type.py
│   │   │   │   │   │   └── lvis.py
│   │   │   │   │   ├── image_list_dataset.py
│   │   │   │   │   ├── inference_based_loader.py
│   │   │   │   │   ├── meshes/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── builtin.py
│   │   │   │   │   │   └── catalog.py
│   │   │   │   │   ├── samplers/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── densepose_base.py
│   │   │   │   │   │   ├── densepose_confidence_based.py
│   │   │   │   │   │   ├── densepose_cse_base.py
│   │   │   │   │   │   ├── densepose_cse_confidence_based.py
│   │   │   │   │   │   ├── densepose_cse_uniform.py
│   │   │   │   │   │   ├── densepose_uniform.py
│   │   │   │   │   │   ├── mask_from_densepose.py
│   │   │   │   │   │   └── prediction_to_gt.py
│   │   │   │   │   ├── transform/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   └── image.py
│   │   │   │   │   ├── utils.py
│   │   │   │   │   └── video/
│   │   │   │   │       ├── __init__.py
│   │   │   │   │       ├── frame_selector.py
│   │   │   │   │       └── video_keyframe_dataset.py
│   │   │   │   ├── engine/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   └── trainer.py
│   │   │   │   ├── evaluation/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── d2_evaluator_adapter.py
│   │   │   │   │   ├── densepose_coco_evaluation.py
│   │   │   │   │   ├── evaluator.py
│   │   │   │   │   ├── mesh_alignment_evaluator.py
│   │   │   │   │   └── tensor_storage.py
│   │   │   │   ├── modeling/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── build.py
│   │   │   │   │   ├── confidence.py
│   │   │   │   │   ├── cse/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── embedder.py
│   │   │   │   │   │   ├── utils.py
│   │   │   │   │   │   ├── vertex_direct_embedder.py
│   │   │   │   │   │   └── vertex_feature_embedder.py
│   │   │   │   │   ├── densepose_checkpoint.py
│   │   │   │   │   ├── filter.py
│   │   │   │   │   ├── hrfpn.py
│   │   │   │   │   ├── hrnet.py
│   │   │   │   │   ├── inference.py
│   │   │   │   │   ├── losses/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── chart.py
│   │   │   │   │   │   ├── chart_with_confidences.py
│   │   │   │   │   │   ├── cse.py
│   │   │   │   │   │   ├── cycle_pix2shape.py
│   │   │   │   │   │   ├── cycle_shape2shape.py
│   │   │   │   │   │   ├── embed.py
│   │   │   │   │   │   ├── embed_utils.py
│   │   │   │   │   │   ├── mask.py
│   │   │   │   │   │   ├── mask_or_segm.py
│   │   │   │   │   │   ├── registry.py
│   │   │   │   │   │   ├── segm.py
│   │   │   │   │   │   ├── soft_embed.py
│   │   │   │   │   │   └── utils.py
│   │   │   │   │   ├── predictors/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── chart.py
│   │   │   │   │   │   ├── chart_confidence.py
│   │   │   │   │   │   ├── chart_with_confidence.py
│   │   │   │   │   │   ├── cse.py
│   │   │   │   │   │   ├── cse_confidence.py
│   │   │   │   │   │   ├── cse_with_confidence.py
│   │   │   │   │   │   └── registry.py
│   │   │   │   │   ├── roi_heads/
│   │   │   │   │   │   ├── __init__.py
│   │   │   │   │   │   ├── deeplab.py
│   │   │   │   │   │   ├── registry.py
│   │   │   │   │   │   ├── roi_head.py
│   │   │   │   │   │   └── v1convx.py
│   │   │   │   │   ├── test_time_augmentation.py
│   │   │   │   │   └── utils.py
│   │   │   │   ├── structures/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── chart.py
│   │   │   │   │   ├── chart_confidence.py
│   │   │   │   │   ├── chart_result.py
│   │   │   │   │   ├── cse.py
│   │   │   │   │   ├── cse_confidence.py
│   │   │   │   │   ├── data_relative.py
│   │   │   │   │   ├── list.py
│   │   │   │   │   ├── mesh.py
│   │   │   │   │   └── transform_data.py
│   │   │   │   ├── utils/
│   │   │   │   │   ├── __init__.py
│   │   │   │   │   ├── dbhelper.py
│   │   │   │   │   ├── logger.py
│   │   │   │   │   └── transform.py
│   │   │   │   └── vis/
│   │   │   │       ├── __init__.py
│   │   │   │       ├── base.py
│   │   │   │       ├── bounding_box.py
│   │   │   │       ├── densepose_data_points.py
│   │   │   │       ├── densepose_outputs_iuv.py
│   │   │   │       ├── densepose_outputs_vertex.py
│   │   │   │       ├── densepose_results.py
│   │   │   │       ├── densepose_results_textures.py
│   │   │   │       └── extractor.py
│   │   │   ├── dev/
│   │   │   │   ├── README.md
│   │   │   │   ├── run_inference_tests.sh
│   │   │   │   └── run_instant_tests.sh
│   │   │   ├── doc/
│   │   │   │   ├── BOOTSTRAPPING_PIPELINE.md
│   │   │   │   ├── DENSEPOSE_CSE.md
│   │   │   │   ├── DENSEPOSE_DATASETS.md
│   │   │   │   ├── DENSEPOSE_IUV.md
│   │   │   │   ├── GETTING_STARTED.md
│   │   │   │   ├── RELEASE_2020_04.md
│   │   │   │   ├── RELEASE_2021_03.md
│   │   │   │   ├── RELEASE_2021_06.md
│   │   │   │   ├── TOOL_APPLY_NET.md
│   │   │   │   └── TOOL_QUERY_DB.md
│   │   │   ├── query_db.py
│   │   │   ├── setup.py
│   │   │   ├── tests/
│   │   │   │   ├── common.py
│   │   │   │   ├── test_chart_based_annotations_accumulator.py
│   │   │   │   ├── test_combine_data_loader.py
│   │   │   │   ├── test_cse_annotations_accumulator.py
│   │   │   │   ├── test_dataset_loaded_annotations.py
│   │   │   │   ├── test_frame_selector.py
│   │   │   │   ├── test_image_list_dataset.py
│   │   │   │   ├── test_image_resize_transform.py
│   │   │   │   ├── test_model_e2e.py
│   │   │   │   ├── test_setup.py
│   │   │   │   ├── test_structures.py
│   │   │   │   ├── test_tensor_storage.py
│   │   │   │   └── test_video_keyframe_dataset.py
│   │   │   └── train_net.py
│   │   ├── MViTv2/
│   │   │   ├── README.md
│   │   │   └── configs/
│   │   │       ├── cascade_mask_rcnn_mvitv2_b_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_b_in21k_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_h_in21k_lsj_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_s_3x.py
│   │   │       ├── cascade_mask_rcnn_mvitv2_t_3x.py
│   │   │       ├── common/
│   │   │       │   ├── coco_loader.py
│   │   │       │   └── coco_loader_lsj.py
│   │   │       └── mask_rcnn_mvitv2_t_3x.py
│   │   ├── Panoptic-DeepLab/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── COCO-PanopticSegmentation/
│   │   │   │   │   └── panoptic_deeplab_R_52_os16_mg124_poly_200k_bs64_crop_640_640_coco_dsconv.yaml
│   │   │   │   └── Cityscapes-PanopticSegmentation/
│   │   │   │       ├── Base-PanopticDeepLab-OS16.yaml
│   │   │   │       ├── panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024.yaml
│   │   │   │       └── panoptic_deeplab_R_52_os16_mg124_poly_90k_bs32_crop_512_1024_dsconv.yaml
│   │   │   ├── panoptic_deeplab/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── dataset_mapper.py
│   │   │   │   ├── panoptic_seg.py
│   │   │   │   ├── post_processing.py
│   │   │   │   └── target_generator.py
│   │   │   └── train_net.py
│   │   ├── PointRend/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── InstanceSegmentation/
│   │   │   │   │   ├── Base-Implicit-PointRend.yaml
│   │   │   │   │   ├── Base-PointRend-RCNN-FPN.yaml
│   │   │   │   │   ├── implicit_pointrend_R_50_FPN_1x_coco.yaml
│   │   │   │   │   ├── implicit_pointrend_R_50_FPN_3x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_101_FPN_3x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_1x_cityscapes.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_1x_coco.yaml
│   │   │   │   │   ├── pointrend_rcnn_R_50_FPN_3x_coco.yaml
│   │   │   │   │   └── pointrend_rcnn_X_101_32x8d_FPN_3x_coco.yaml
│   │   │   │   └── SemanticSegmentation/
│   │   │   │       ├── Base-PointRend-Semantic-FPN.yaml
│   │   │   │       └── pointrend_semantic_R_101_FPN_1x_cityscapes.yaml
│   │   │   ├── point_rend/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── color_augmentation.py
│   │   │   │   ├── config.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── point_features.py
│   │   │   │   ├── point_head.py
│   │   │   │   ├── roi_heads.py
│   │   │   │   └── semantic_seg.py
│   │   │   └── train_net.py
│   │   ├── PointSup/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── implicit_pointrend_R_50_FPN_3x_point_sup_point_aug_coco.yaml
│   │   │   │   ├── mask_rcnn_R_50_FPN_3x_point_sup_coco.yaml
│   │   │   │   └── mask_rcnn_R_50_FPN_3x_point_sup_point_aug_coco.yaml
│   │   │   ├── point_sup/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── config.py
│   │   │   │   ├── dataset_mapper.py
│   │   │   │   ├── detection_utils.py
│   │   │   │   ├── mask_head.py
│   │   │   │   ├── point_utils.py
│   │   │   │   └── register_point_annotations.py
│   │   │   ├── tools/
│   │   │   │   └── prepare_coco_point_annotations_without_masks.py
│   │   │   └── train_net.py
│   │   ├── README.md
│   │   ├── Rethinking-BatchNorm/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── mask_rcnn_BNhead.py
│   │   │   │   ├── mask_rcnn_BNhead_batch_stats.py
│   │   │   │   ├── mask_rcnn_BNhead_shuffle.py
│   │   │   │   ├── mask_rcnn_SyncBNhead.py
│   │   │   │   ├── retinanet_SyncBNhead.py
│   │   │   │   └── retinanet_SyncBNhead_SharedTraining.py
│   │   │   └── retinanet-eval-domain-specific.py
│   │   ├── TensorMask/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-TensorMask.yaml
│   │   │   │   ├── tensormask_R_50_FPN_1x.yaml
│   │   │   │   └── tensormask_R_50_FPN_6x.yaml
│   │   │   ├── setup.py
│   │   │   ├── tensormask/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── arch.py
│   │   │   │   ├── config.py
│   │   │   │   └── layers/
│   │   │   │       ├── __init__.py
│   │   │   │       ├── csrc/
│   │   │   │       │   ├── SwapAlign2Nat/
│   │   │   │       │   │   ├── SwapAlign2Nat.h
│   │   │   │       │   │   └── SwapAlign2Nat_cuda.cu
│   │   │   │       │   └── vision.cpp
│   │   │   │       └── swap_align2nat.py
│   │   │   ├── tests/
│   │   │   │   ├── __init__.py
│   │   │   │   └── test_swap_align2nat.py
│   │   │   └── train_net.py
│   │   ├── TridentNet/
│   │   │   ├── README.md
│   │   │   ├── configs/
│   │   │   │   ├── Base-TridentNet-Fast-C4.yaml
│   │   │   │   ├── tridentnet_fast_R_101_C4_3x.yaml
│   │   │   │   ├── tridentnet_fast_R_50_C4_1x.yaml
│   │   │   │   └── tridentnet_fast_R_50_C4_3x.yaml
│   │   │   ├── train_net.py
│   │   │   └── tridentnet/
│   │   │       ├── __init__.py
│   │   │       ├── config.py
│   │   │       ├── trident_backbone.py
│   │   │       ├── trident_conv.py
│   │   │       ├── trident_rcnn.py
│   │   │       └── trident_rpn.py
│   │   └── ViTDet/
│   │       ├── README.md
│   │       └── configs/
│   │           ├── COCO/
│   │           │   ├── cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_h_in21k_36ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_b_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_h_75ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_l_100ep.py
│   │           │   ├── mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── mask_rcnn_vitdet_h_75ep.py
│   │           │   └── mask_rcnn_vitdet_l_100ep.py
│   │           ├── LVIS/
│   │           │   ├── cascade_mask_rcnn_mvitv2_b_in21k_100ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_h_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_mvitv2_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_b_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_swin_l_in21k_50ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_h_100ep.py
│   │           │   ├── cascade_mask_rcnn_vitdet_l_100ep.py
│   │           │   ├── mask_rcnn_vitdet_b_100ep.py
│   │           │   ├── mask_rcnn_vitdet_h_100ep.py
│   │           │   └── mask_rcnn_vitdet_l_100ep.py
│   │           └── common/
│   │               └── coco_loader_lsj.py
│   ├── setup.cfg
│   ├── setup.py
│   ├── tests/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config/
│   │   │   ├── dir1/
│   │   │   │   ├── dir1_a.py
│   │   │   │   └── dir1_b.py
│   │   │   ├── root_cfg.py
│   │   │   ├── test_instantiate_config.py
│   │   │   ├── test_lazy_config.py
│   │   │   └── test_yacs_config.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── test_coco.py
│   │   │   ├── test_coco_evaluation.py
│   │   │   ├── test_dataset.py
│   │   │   ├── test_detection_utils.py
│   │   │   ├── test_rotation_transform.py
│   │   │   ├── test_sampler.py
│   │   │   └── test_transforms.py
│   │   ├── export/
│   │   │   └── test_c10.py
│   │   ├── layers/
│   │   │   ├── __init__.py
│   │   │   ├── test_blocks.py
│   │   │   ├── test_deformable.py
│   │   │   ├── test_losses.py
│   │   │   ├── test_mask_ops.py
│   │   │   ├── test_nms.py
│   │   │   ├── test_nms_rotated.py
│   │   │   ├── test_roi_align.py
│   │   │   └── test_roi_align_rotated.py
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── test_anchor_generator.py
│   │   │   ├── test_backbone.py
│   │   │   ├── test_box2box_transform.py
│   │   │   ├── test_fast_rcnn.py
│   │   │   ├── test_matcher.py
│   │   │   ├── test_mmdet.py
│   │   │   ├── test_model_e2e.py
│   │   │   ├── test_roi_heads.py
│   │   │   ├── test_roi_pooler.py
│   │   │   └── test_rpn.py
│   │   ├── structures/
│   │   │   ├── __init__.py
│   │   │   ├── test_boxes.py
│   │   │   ├── test_imagelist.py
│   │   │   ├── test_instances.py
│   │   │   ├── test_keypoints.py
│   │   │   ├── test_masks.py
│   │   │   └── test_rotated_boxes.py
│   │   ├── test_checkpoint.py
│   │   ├── test_engine.py
│   │   ├── test_events.py
│   │   ├── test_export_caffe2.py
│   │   ├── test_export_onnx.py
│   │   ├── test_export_torchscript.py
│   │   ├── test_model_analysis.py
│   │   ├── test_model_zoo.py
│   │   ├── test_packaging.py
│   │   ├── test_registry.py
│   │   ├── test_scheduler.py
│   │   ├── test_solver.py
│   │   ├── test_visualizer.py
│   │   └── tracking/
│   │       ├── __init__.py
│   │       ├── test_bbox_iou_tracker.py
│   │       ├── test_hungarian_tracker.py
│   │       ├── test_iou_weighted_hungarian_bbox_iou_tracker.py
│   │       └── test_vanilla_hungarian_bbox_iou_tracker.py
│   └── tools/
│       ├── README.md
│       ├── __init__.py
│       ├── analyze_model.py
│       ├── benchmark.py
│       ├── convert-torchvision-to-d2.py
│       ├── deploy/
│       │   ├── CMakeLists.txt
│       │   ├── README.md
│       │   ├── export_model.py
│       │   └── torchscript_mask_rcnn.cpp
│       ├── lazyconfig_train_net.py
│       ├── lightning_train_net.py
│       ├── plain_train_net.py
│       ├── train_net.py
│       ├── visualize_data.py
│       └── visualize_json_results.py
├── prepare_datasets.md
├── requirements.txt
├── tools/
│   ├── convert-thirdparty-pretrained-model-to-d2.py
│   ├── download_cc.py
│   ├── get_cc_tags.py
│   ├── get_coco_zeroshot.py
│   ├── get_lvis_cat_info.py
│   ├── get_tags_for_VLDet_concepts.py
│   └── remove_lvis_rare.py
├── train_net.py
└── vldet/
    ├── __init__.py
    ├── config.py
    ├── custom_solver.py
    ├── data/
    │   ├── custom_build_augmentation.py
    │   ├── custom_dataset_dataloader.py
    │   ├── custom_dataset_mapper.py
    │   ├── datasets/
    │   │   ├── cc.py
    │   │   ├── coco_zeroshot.py
    │   │   ├── imagenet.py
    │   │   ├── lvis_22k_categories.py
    │   │   ├── lvis_v1.py
    │   │   ├── objects365.py
    │   │   ├── oid.py
    │   │   └── register_oid.py
    │   ├── tar_dataset.py
    │   └── transforms/
    │       ├── custom_augmentation_impl.py
    │       └── custom_transform.py
    ├── evaluation/
    │   ├── custom_coco_eval.py
    │   └── oideval.py
    ├── modeling/
    │   ├── backbone/
    │   │   ├── swintransformer.py
    │   │   └── timm.py
    │   ├── debug.py
    │   ├── meta_arch/
    │   │   └── custom_rcnn.py
    │   ├── roi_heads/
    │   │   ├── res5_roi_heads.py
    │   │   ├── vldet_fast_rcnn.py
    │   │   ├── vldet_roi_heads.py
    │   │   └── zero_shot_classifier.py
    │   ├── text/
    │   │   └── text_encoder.py
    │   └── utils.py
    └── predictor.py
SYMBOL INDEX (3464 symbols across 398 files)

FILE: demo.py
  class ScreenGrab (line 26) | class ScreenGrab:
    method __init__ (line 27) | def __init__(self):
    method read (line 32) | def read(self):
    method isOpened (line 37) | def isOpened(self):
    method release (line 39) | def release(self):
  function setup_cfg (line 46) | def setup_cfg(args):
  function get_parser (line 65) | def get_parser():
  function test_opencv_video_format (line 114) | def test_opencv_video_format(codec, file_ext):

FILE: detectron2/configs/Misc/torchvision_imagenet_R_50.py
  function build_data_loader (line 40) | def build_data_loader(dataset, batch_size, num_workers, training=True):
  class ClassificationNet (line 50) | class ClassificationNet(nn.Module):
    method __init__ (line 51) | def __init__(self, model: nn.Module):
    method device (line 56) | def device(self):
    method forward (line 59) | def forward(self, inputs):
  class ClassificationAcc (line 69) | class ClassificationAcc(DatasetEvaluator):
    method reset (line 70) | def reset(self):
    method process (line 73) | def process(self, inputs, outputs):
    method evaluate (line 78) | def evaluate(self):

FILE: detectron2/configs/common/coco_schedule.py
  function default_X_scheduler (line 7) | def default_X_scheduler(num_X):

FILE: detectron2/datasets/prepare_ade20k_sem_seg.py
  function convert (line 11) | def convert(input, output):

FILE: detectron2/datasets/prepare_cocofied_lvis.py
  function cocofy_lvis (line 96) | def cocofy_lvis(input_filename, output_filename):

FILE: detectron2/datasets/prepare_panoptic_fpn.py
  function _process_panoptic_to_semantic (line 18) | def _process_panoptic_to_semantic(input_panoptic, output_semantic, segme...
  function separate_coco_semantic_from_panoptic (line 29) | def separate_coco_semantic_from_panoptic(panoptic_json, panoptic_root, s...
  function link_val100 (line 98) | def link_val100(dir_full, dir_100):

FILE: detectron2/demo/demo.py
  function setup_cfg (line 23) | def setup_cfg(args):
  function get_parser (line 39) | def get_parser():
  function test_opencv_video_format (line 76) | def test_opencv_video_format(codec, file_ext):

FILE: detectron2/demo/predictor.py
  class VisualizationDemo (line 15) | class VisualizationDemo(object):
    method __init__ (line 16) | def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False):
    method run_on_image (line 37) | def run_on_image(self, image):
    method _frame_from_video (line 68) | def _frame_from_video(self, video):
    method run_on_video (line 76) | def run_on_video(self, video):
  class AsyncPredictor (line 132) | class AsyncPredictor:
    class _StopToken (line 139) | class _StopToken:
    class _PredictWorker (line 142) | class _PredictWorker(mp.Process):
      method __init__ (line 143) | def __init__(self, cfg, task_queue, result_queue):
      method run (line 149) | def run(self):
    method __init__ (line 160) | def __init__(self, cfg, num_gpus: int = 1):
    method put (line 187) | def put(self, image):
    method get (line 191) | def get(self):
    method __len__ (line 207) | def __len__(self):
    method __call__ (line 210) | def __call__(self, image):
    method shutdown (line 214) | def shutdown(self):
    method default_buffer_size (line 219) | def default_buffer_size(self):

FILE: detectron2/detectron2/checkpoint/c2_model_loading.py
  function convert_basic_c2_names (line 10) | def convert_basic_c2_names(original_keys):
  function convert_c2_detectron_names (line 66) | def convert_c2_detectron_names(weights):
  function align_and_update_state_dicts (line 209) | def align_and_update_state_dicts(model_state_dict, ckpt_state_dict, c2_c...
  function _group_keys_by_module (line 337) | def _group_keys_by_module(keys: List[str], original_names: Dict[str, str]):
  function _longest_common_prefix (line 377) | def _longest_common_prefix(names: List[str]) -> str:
  function _longest_common_prefix_str (line 388) | def _longest_common_prefix_str(names: List[str]) -> str:
  function _group_str (line 395) | def _group_str(names: List[str]) -> str:

FILE: detectron2/detectron2/checkpoint/catalog.py
  class ModelCatalog (line 7) | class ModelCatalog(object):
    method get (line 58) | def get(name):
    method _get_c2_imagenet_pretrained (line 66) | def _get_c2_imagenet_pretrained(name):
    method _get_c2_detectron_baseline (line 74) | def _get_c2_detectron_baseline(name):
  class ModelCatalogHandler (line 95) | class ModelCatalogHandler(PathHandler):
    method _get_supported_prefixes (line 102) | def _get_supported_prefixes(self):
    method _get_local_path (line 105) | def _get_local_path(self, path, **kwargs):
    method _open (line 111) | def _open(self, path, mode="r", **kwargs):

FILE: detectron2/detectron2/checkpoint/detection_checkpoint.py
  class DetectionCheckpointer (line 15) | class DetectionCheckpointer(Checkpointer):
    method __init__ (line 22) | def __init__(self, model, save_dir="", *, save_to_disk=None, **checkpo...
    method load (line 32) | def load(self, path, *args, **kwargs):
    method _load_file (line 59) | def _load_file(self, filename):
    method _load_model (line 94) | def _load_model(self, checkpoint):

FILE: detectron2/detectron2/config/compat.py
  function upgrade_config (line 33) | def upgrade_config(cfg: CN, to_version: Optional[int] = None) -> CN:
  function downgrade_config (line 55) | def downgrade_config(cfg: CN, to_version: int) -> CN:
  function guess_version (line 82) | def guess_version(cfg: CN, filename: str) -> int:
  function _rename (line 116) | def _rename(cfg: CN, old: str, new: str) -> None:
  class _RenameConverter (line 146) | class _RenameConverter:
    method upgrade (line 154) | def upgrade(cls, cfg: CN) -> None:
    method downgrade (line 159) | def downgrade(cls, cfg: CN) -> None:
  class ConverterV1 (line 164) | class ConverterV1(_RenameConverter):
  class ConverterV2 (line 168) | class ConverterV2(_RenameConverter):
    method upgrade (line 204) | def upgrade(cls, cfg: CN) -> None:
    method downgrade (line 222) | def downgrade(cls, cfg: CN) -> None:

FILE: detectron2/detectron2/config/config.py
  class CfgNode (line 12) | class CfgNode(_CfgNode):
    method _open_cfg (line 33) | def _open_cfg(cls, filename):
    method merge_from_file (line 37) | def merge_from_file(self, cfg_filename: str, allow_unsafe: bool = True...
    method dump (line 87) | def dump(self, *args, **kwargs):
  function get_cfg (line 99) | def get_cfg() -> CfgNode:
  function set_global_cfg (line 111) | def set_global_cfg(cfg: CfgNode) -> None:
  function configurable (line 130) | def configurable(init_func=None, *, from_config=None):
  function _get_args_from_config (line 218) | def _get_args_from_config(from_config_func, *args, **kwargs):
  function _called_with_cfg (line 251) | def _called_with_cfg(*args, **kwargs):
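
The `configurable` decorator listed above lets a class be constructed either from explicit arguments or from a config object routed through the class's `from_config` classmethod. As a hedged, standalone sketch of that pattern (the `Cfg`, `Head`, and `is_cfg` names here are illustrative, not detectron2's actual implementation):

```python
import functools

def configurable(init_func):
    """Simplified sketch: allow __init__ to be called either with explicit
    keyword arguments or with a single cfg object that is translated into
    keyword arguments by the class's from_config() classmethod."""
    @functools.wraps(init_func)
    def wrapped(self, *args, **kwargs):
        if args and hasattr(args[0], "is_cfg"):  # crude cfg detection for the sketch
            explicit = type(self).from_config(args[0])
            explicit.update(kwargs)  # explicit kwargs override cfg-derived ones
            init_func(self, **explicit)
        else:
            init_func(self, *args, **kwargs)
    return wrapped

class Cfg:
    """Hypothetical flat config object standing in for a CfgNode."""
    is_cfg = True
    def __init__(self, **kv):
        self.__dict__.update(kv)

class Head:
    @configurable
    def __init__(self, num_classes, score_thresh=0.05):
        self.num_classes = num_classes
        self.score_thresh = score_thresh

    @classmethod
    def from_config(cls, cfg):
        # Translate the flat cfg into constructor arguments.
        return {"num_classes": cfg.NUM_CLASSES, "score_thresh": cfg.SCORE_THRESH}

# Both call styles construct an equivalent object:
h1 = Head(num_classes=80, score_thresh=0.5)
h2 = Head(Cfg(NUM_CLASSES=80, SCORE_THRESH=0.5))
```

The benefit of the pattern is that cfg parsing lives in one place (`from_config`) while the constructor keeps an explicit, testable signature.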

FILE: detectron2/detectron2/config/instantiate.py
  function dump_dataclass (line 13) | def dump_dataclass(obj: Any):
  function instantiate (line 37) | def instantiate(cfg):

FILE: detectron2/detectron2/config/lazy.py
  class LazyCall (line 25) | class LazyCall:
    method __init__ (line 42) | def __init__(self, target):
    method __call__ (line 49) | def __call__(self, **kwargs):
  function _visit_dict_config (line 61) | def _visit_dict_config(cfg, func):
  function _validate_py_syntax (line 74) | def _validate_py_syntax(filename):
  function _cast_to_config (line 84) | def _cast_to_config(obj):
  function _random_package_name (line 97) | def _random_package_name(filename):
  function _patch_import (line 103) | def _patch_import():
  class LazyConfig (line 161) | class LazyConfig:
    method load_rel (line 168) | def load_rel(filename: str, keys: Union[None, str, Tuple[str, ...]] = ...
    method load (line 184) | def load(filename: str, keys: Union[None, str, Tuple[str, ...]] = None):
    method save (line 239) | def save(cfg, filename: str):
    method apply_overrides (line 305) | def apply_overrides(cfg, overrides: List[str]):
    method to_py (line 347) | def to_py(cfg, prefix: str = "cfg."):

FILE: detectron2/detectron2/data/benchmark.py
  class _EmptyMapDataset (line 19) | class _EmptyMapDataset(torch.utils.data.Dataset):
    method __init__ (line 24) | def __init__(self, dataset):
    method __len__ (line 27) | def __len__(self):
    method __getitem__ (line 30) | def __getitem__(self, idx):
  function iter_benchmark (line 35) | def iter_benchmark(
  class DataLoaderBenchmark (line 65) | class DataLoaderBenchmark:
    method __init__ (line 71) | def __init__(
    method _benchmark (line 100) | def _benchmark(self, iterator, num_iter, warmup, msg=None):
    method _log_time (line 106) | def _log_time(self, msg, avg, all_times, distributed=False):
    method benchmark_dataset (line 126) | def benchmark_dataset(self, num_iter, warmup=5):
    method benchmark_mapper (line 138) | def benchmark_mapper(self, num_iter, warmup=5):
    method benchmark_workers (line 151) | def benchmark_workers(self, num_iter, warmup=10):
    method benchmark_IPC (line 175) | def benchmark_IPC(self, num_iter, warmup=10):
    method benchmark_distributed (line 195) | def benchmark_distributed(self, num_iter, warmup=10):

FILE: detectron2/detectron2/data/build.py
  function filter_images_with_only_crowd_annotations (line 45) | def filter_images_with_only_crowd_annotations(dataset_dicts):
  function filter_images_with_few_keypoints (line 76) | def filter_images_with_few_keypoints(dataset_dicts, min_keypoints_per_im...
  function load_proposals_into_dataset (line 110) | def load_proposals_into_dataset(dataset_dicts, proposal_file):
  function print_instances_class_histogram (line 164) | def print_instances_class_histogram(dataset_dicts, class_names):
  function get_detection_dataset_dicts (line 216) | def get_detection_dataset_dicts(
  function build_batch_data_loader (line 282) | def build_batch_data_loader(
  function _train_loader_from_config (line 342) | def _train_loader_from_config(cfg, mapper=None, *, dataset=None, sampler...
  function build_detection_train_loader (line 390) | def build_detection_train_loader(
  function _test_loader_from_config (line 453) | def _test_loader_from_config(cfg, dataset_name, mapper=None):
  function build_detection_test_loader (line 483) | def build_detection_test_loader(
  function trivial_batch_collator (line 547) | def trivial_batch_collator(batch):
  function worker_init_reset_seed (line 554) | def worker_init_reset_seed(worker_id):

FILE: detectron2/detectron2/data/catalog.py
  class _DatasetCatalog (line 13) | class _DatasetCatalog(UserDict):
    method register (line 29) | def register(self, name, func):
    method get (line 40) | def get(self, name):
    method list (line 60) | def list(self) -> List[str]:
    method remove (line 69) | def remove(self, name):
    method __str__ (line 75) | def __str__(self):
  class Metadata (line 91) | class Metadata(types.SimpleNamespace):
    method __getattr__ (line 115) | def __getattr__(self, key):
    method __setattr__ (line 136) | def __setattr__(self, key, val):
    method as_dict (line 155) | def as_dict(self):
    method set (line 162) | def set(self, **kwargs):
    method get (line 170) | def get(self, key, default=None):
  class _MetadataCatalog (line 181) | class _MetadataCatalog(UserDict):
    method get (line 194) | def get(self, name):
    method list (line 209) | def list(self):
    method remove (line 218) | def remove(self, name):
    method __str__ (line 224) | def __str__(self):

FILE: detectron2/detectron2/data/common.py
  function _shard_iterator_dataloader_worker (line 16) | def _shard_iterator_dataloader_worker(iterable):
  class _MapIterableDataset (line 26) | class _MapIterableDataset(data.IterableDataset):
    method __init__ (line 36) | def __init__(self, dataset, map_func):
    method __len__ (line 40) | def __len__(self):
    method __iter__ (line 43) | def __iter__(self):
  class MapDataset (line 49) | class MapDataset(data.Dataset):
    method __init__ (line 54) | def __init__(self, dataset, map_func):
    method __new__ (line 72) | def __new__(cls, dataset, map_func):
    method __getnewargs__ (line 79) | def __getnewargs__(self):
    method __len__ (line 82) | def __len__(self):
    method __getitem__ (line 85) | def __getitem__(self, idx):
  class DatasetFromList (line 109) | class DatasetFromList(data.Dataset):
    method __init__ (line 114) | def __init__(self, lst: list, copy: bool = True, serialize: bool = True):
    method __len__ (line 146) | def __len__(self):
    method __getitem__ (line 152) | def __getitem__(self, idx):
  class ToIterableDataset (line 164) | class ToIterableDataset(data.IterableDataset):
    method __init__ (line 170) | def __init__(self, dataset: data.Dataset, sampler: Sampler, shard_samp...
    method __iter__ (line 190) | def __iter__(self):
    method __len__ (line 203) | def __len__(self):
  class AspectRatioGroupedDataset (line 207) | class AspectRatioGroupedDataset(data.IterableDataset):
    method __init__ (line 220) | def __init__(self, dataset, batch_size):
    method __iter__ (line 233) | def __iter__(self):
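
`AspectRatioGroupedDataset` above batches images whose aspect ratios fall on the same side (landscape vs. portrait) so that batched padding is minimized. A minimal sketch of that grouping logic, assuming simple `(height, width)` tuples rather than detectron2's dataset dicts:

```python
def batch_by_aspect_ratio(samples, batch_size):
    """Group (height, width) samples into batches where every image in a
    batch has the same orientation, mirroring the idea behind
    AspectRatioGroupedDataset: keep two running buckets and flush a
    bucket as a batch whenever it fills up."""
    buckets = {0: [], 1: []}
    for h, w in samples:
        gid = 0 if w > h else 1  # 0: landscape, 1: portrait/square
        buckets[gid].append((h, w))
        if len(buckets[gid]) == batch_size:
            yield buckets[gid]
            buckets[gid] = []

batches = list(batch_by_aspect_ratio(
    [(480, 640), (640, 480), (600, 800), (800, 600)], batch_size=2))
# each yielded batch holds images of a single orientation
```

Note that, as in the real class, leftover samples in a partially filled bucket are simply never yielded; with an infinite training stream this is harmless.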

FILE: detectron2/detectron2/data/dataset_mapper.py
  class DatasetMapper (line 20) | class DatasetMapper:
    method __init__ (line 38) | def __init__(
    method from_config (line 86) | def from_config(cls, cfg, is_train: bool = True):
    method _transform_annotations (line 115) | def _transform_annotations(self, dataset_dict, transforms, image_shape):
    method __call__ (line 144) | def __call__(self, dataset_dict):

FILE: detectron2/detectron2/data/datasets/builtin.py
  function register_all_coco (line 101) | def register_all_coco(root):
  function register_all_lvis (line 165) | def register_all_lvis(root):
  function register_all_cityscapes (line 184) | def register_all_cityscapes(root):
  function register_all_pascal_voc (line 215) | def register_all_pascal_voc(root):
  function register_all_ade20k (line 231) | def register_all_ade20k(root):

FILE: detectron2/detectron2/data/datasets/builtin_meta.py
  function _get_coco_instances_meta (line 235) | def _get_coco_instances_meta():
  function _get_coco_panoptic_separated_meta (line 250) | def _get_coco_panoptic_separated_meta():
  function _get_builtin_metadata (line 283) | def _get_builtin_metadata(dataset_name):

FILE: detectron2/detectron2/data/datasets/cityscapes.py
  function _get_cityscapes_files (line 27) | def _get_cityscapes_files(image_dir, gt_dir):
  function load_cityscapes_instances (line 53) | def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_poly...
  function load_cityscapes_semantic (line 95) | def load_cityscapes_semantic(image_dir, gt_dir):
  function _cityscapes_files_to_dict (line 128) | def _cityscapes_files_to_dict(files, from_json, to_polygons):

FILE: detectron2/detectron2/data/datasets/cityscapes_panoptic.py
  function get_cityscapes_panoptic_files (line 18) | def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info):
  function load_cityscapes_panoptic (line 51) | def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta):
  function register_all_cityscapes_panoptic (line 127) | def register_all_cityscapes_panoptic(root):

FILE: detectron2/detectron2/data/datasets/coco.py
  function load_coco_json (line 30) | def load_coco_json(json_file, image_root, dataset_name=None, extra_annot...
  function load_sem_seg (line 230) | def load_sem_seg(gt_root, image_root, gt_ext="png", image_ext="jpg"):
  function convert_to_coco_dict (line 306) | def convert_to_coco_dict(dataset_name):
  function convert_to_coco_json (line 445) | def convert_to_coco_json(dataset_name, output_file, allow_cached=True):
  function register_coco_instances (line 479) | def register_coco_instances(name, metadata, json_file, image_root):

FILE: detectron2/detectron2/data/datasets/coco_panoptic.py
  function load_coco_panoptic_json (line 14) | def load_coco_panoptic_json(json_file, image_dir, gt_dir, meta):
  function register_coco_panoptic (line 66) | def register_coco_panoptic(
  function register_coco_panoptic_separated (line 102) | def register_coco_panoptic_separated(
  function merge_to_panoptic (line 168) | def merge_to_panoptic(detection_dicts, sem_seg_dicts):

FILE: detectron2/detectron2/data/datasets/lvis.py
  function register_lvis_instances (line 25) | def register_lvis_instances(name, metadata, json_file, image_root):
  function load_lvis_json (line 41) | def load_lvis_json(json_file, image_root, dataset_name=None, extra_annot...
  function get_lvis_instances_meta (line 168) | def get_lvis_instances_meta(dataset_name):
  function _get_lvis_instances_meta_v0_5 (line 187) | def _get_lvis_instances_meta_v0_5():
  function _get_lvis_instances_meta_v1 (line 200) | def _get_lvis_instances_meta_v1():

FILE: detectron2/detectron2/data/datasets/pascal_voc.py
  function load_voc_instances (line 25) | def load_voc_instances(dirname: str, split: str, class_names: Union[List...
  function register_pascal_voc (line 78) | def register_pascal_voc(name, dirname, split, year, class_names=CLASS_NA...

FILE: detectron2/detectron2/data/detection_utils.py
  class SizeMismatchError (line 46) | class SizeMismatchError(ValueError):
  function convert_PIL_to_numpy (line 60) | def convert_PIL_to_numpy(image, format):
  function convert_image_to_rgb (line 93) | def convert_image_to_rgb(image, format):
  function _apply_exif_orientation (line 119) | def _apply_exif_orientation(image):
  function read_image (line 166) | def read_image(file_name, format=None):
  function check_image_size (line 188) | def check_image_size(dataset_dict, image):
  function transform_proposals (line 214) | def transform_proposals(dataset_dict, image_shape, transforms, *, propos...
  function transform_instance_annotations (line 257) | def transform_instance_annotations(
  function transform_keypoint_annotations (line 321) | def transform_keypoint_annotations(keypoints, transforms, image_size, ke...
  function annotations_to_instances (line 369) | def annotations_to_instances(annos, image_size, mask_format="polygon"):
  function annotations_to_instances_rotated (line 444) | def annotations_to_instances_rotated(annos, image_size):
  function filter_empty_instances (line 473) | def filter_empty_instances(
  function create_keypoint_hflip_indices (line 509) | def create_keypoint_hflip_indices(dataset_names: Union[str, List[str]]) ...
  function get_fed_loss_cls_weights (line 534) | def get_fed_loss_cls_weights(dataset_names: Union[str, List[str]], freq_...
  function gen_crop_transform_with_instance (line 557) | def gen_crop_transform_with_instance(crop_size, image_size, instance):
  function check_metadata_consistency (line 587) | def check_metadata_consistency(key, dataset_names):
  function build_augmentation (line 616) | def build_augmentation(cfg, is_train):

FILE: detectron2/detectron2/data/samplers/distributed_sampler.py
  class TrainingSampler (line 15) | class TrainingSampler(Sampler):
    method __init__ (line 36) | def __init__(self, size: int, shuffle: bool = True, seed: Optional[int...
    method __iter__ (line 58) | def __iter__(self):
    method _infinite_indices (line 62) | def _infinite_indices(self):
  class RandomSubsetTrainingSampler (line 72) | class RandomSubsetTrainingSampler(TrainingSampler):
    method __init__ (line 79) | def __init__(
    method _infinite_indices (line 117) | def _infinite_indices(self):
  class RepeatFactorTrainingSampler (line 129) | class RepeatFactorTrainingSampler(Sampler):
    method __init__ (line 135) | def __init__(self, repeat_factors, *, shuffle=True, seed=None):
    method repeat_factors_from_category_frequency (line 158) | def repeat_factors_from_category_frequency(dataset_dicts, repeat_thresh):
    method _get_epoch_indices (line 204) | def _get_epoch_indices(self, generator):
    method __iter__ (line 227) | def __iter__(self):
    method _infinite_indices (line 231) | def _infinite_indices(self):
  class InferenceSampler (line 245) | class InferenceSampler(Sampler):
    method __init__ (line 253) | def __init__(self, size: int):
    method _get_local_indices (line 265) | def _get_local_indices(total_size, world_size, rank):
    method __iter__ (line 274) | def __iter__(self):
    method __len__ (line 277) | def __len__(self):
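
`RepeatFactorTrainingSampler.repeat_factors_from_category_frequency` above implements the LVIS-style oversampling rule: a category-level factor r(c) = max(1, sqrt(t / f(c))), where f(c) is the fraction of training images containing category c, with each image taking the max over its categories. A hedged sketch of that arithmetic on toy data (list-of-category-lists input is an assumption for illustration):

```python
import math

def repeat_factors(image_categories, repeat_thresh):
    """Compute per-image repeat factors: category factor
    r(c) = max(1, sqrt(t / f(c))), image factor = max over its categories."""
    num_images = len(image_categories)
    freq = {}
    for cats in image_categories:
        for c in set(cats):
            freq[c] = freq.get(c, 0) + 1
    cat_factor = {
        c: max(1.0, math.sqrt(repeat_thresh / (n / num_images)))
        for c, n in freq.items()
    }
    return [max(cat_factor[c] for c in set(cats)) for cats in image_categories]

# Rare category "b" (present in 1/4 of images) is oversampled at t=0.5,
# while the common category "a" keeps factor 1.0:
rf = repeat_factors([["a"], ["a"], ["a"], ["a", "b"]], repeat_thresh=0.5)
```

Images containing only frequent categories keep factor 1.0 (they are never undersampled); only images holding a rare category are repeated more often.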

FILE: detectron2/detectron2/data/samplers/grouped_batch_sampler.py
  class GroupedBatchSampler (line 6) | class GroupedBatchSampler(BatchSampler):
    method __init__ (line 14) | def __init__(self, sampler, group_ids, batch_size):
    method __iter__ (line 37) | def __iter__(self):
    method __len__ (line 46) | def __len__(self):

FILE: detectron2/detectron2/data/transforms/augmentation.py
  function _check_img_dtype (line 27) | def _check_img_dtype(img):
  function _get_aug_input_args (line 39) | def _get_aug_input_args(aug, aug_input) -> List[Any]:
  class Augmentation (line 80) | class Augmentation:
    method _init (line 109) | def _init(self, params=None):
    method get_transform (line 115) | def get_transform(self, *args) -> Transform:
    method __call__ (line 151) | def __call__(self, aug_input) -> Transform:
    method _rand_range (line 176) | def _rand_range(self, low=1.0, high=None, size=None):
    method __repr__ (line 186) | def __repr__(self):
  class _TransformToAug (line 219) | class _TransformToAug(Augmentation):
    method __init__ (line 220) | def __init__(self, tfm: Transform):
    method get_transform (line 223) | def get_transform(self, *args):
    method __repr__ (line 226) | def __repr__(self):
  function _transform_to_aug (line 232) | def _transform_to_aug(tfm_or_aug):
  class AugmentationList (line 244) | class AugmentationList(Augmentation):
    method __init__ (line 256) | def __init__(self, augs):
    method __call__ (line 264) | def __call__(self, aug_input) -> Transform:
    method __repr__ (line 271) | def __repr__(self):
  class AugInput (line 278) | class AugInput:
    method __init__ (line 310) | def __init__(
    method transform (line 331) | def transform(self, tfm: Transform) -> None:
    method apply_augmentations (line 344) | def apply_augmentations(
  function apply_augmentations (line 353) | def apply_augmentations(augmentations: List[Union[Transform, Augmentatio...

FILE: detectron2/detectron2/data/transforms/augmentation_impl.py
  class RandomApply (line 43) | class RandomApply(Augmentation):
    method __init__ (line 48) | def __init__(self, tfm_or_aug, prob=0.5):
    method get_transform (line 62) | def get_transform(self, *args):
    method __call__ (line 69) | def __call__(self, aug_input):
  class RandomFlip (line 77) | class RandomFlip(Augmentation):
    method __init__ (line 82) | def __init__(self, prob=0.5, *, horizontal=True, vertical=False):
    method get_transform (line 97) | def get_transform(self, image):
  class Resize (line 109) | class Resize(Augmentation):
    method __init__ (line 112) | def __init__(self, shape, interp=Image.BILINEAR):
    method get_transform (line 123) | def get_transform(self, image):
  class ResizeShortestEdge (line 129) | class ResizeShortestEdge(Augmentation):
    method __init__ (line 138) | def __init__(
    method get_transform (line 163) | def get_transform(self, image):
    method get_output_shape (line 176) | def get_output_shape(
  class ResizeScale (line 198) | class ResizeScale(Augmentation):
    method __init__ (line 207) | def __init__(
    method _get_resize (line 226) | def _get_resize(self, image: np.ndarray, scale: float) -> Transform:
    method get_transform (line 243) | def get_transform(self, image: np.ndarray) -> Transform:
  class RandomRotation (line 248) | class RandomRotation(Augmentation):
    method __init__ (line 254) | def __init__(self, angle, expand=True, center=None, sample_style="rang...
    method get_transform (line 278) | def get_transform(self, image):
  class FixedSizeCrop (line 302) | class FixedSizeCrop(Augmentation):
    method __init__ (line 310) | def __init__(self, crop_size: Tuple[int], pad: bool = True, pad_value:...
    method _get_crop (line 320) | def _get_crop(self, image: np.ndarray) -> Transform:
    method _get_pad (line 334) | def _get_pad(self, image: np.ndarray) -> Transform:
    method get_transform (line 347) | def get_transform(self, image: np.ndarray) -> TransformList:
  class RandomCrop (line 354) | class RandomCrop(Augmentation):
    method __init__ (line 359) | def __init__(self, crop_type: str, crop_size):
    method get_transform (line 381) | def get_transform(self, image):
    method get_crop_size (line 389) | def get_crop_size(self, image_size):
  class RandomCrop_CategoryAreaConstraint (line 416) | class RandomCrop_CategoryAreaConstraint(Augmentation):
    method __init__ (line 424) | def __init__(
    method get_transform (line 443) | def get_transform(self, image, sem_seg):
  class RandomExtent (line 462) | class RandomExtent(Augmentation):
    method __init__ (line 471) | def __init__(self, scale_range, shift_range):
    method get_transform (line 484) | def get_transform(self, image):
  class RandomContrast (line 507) | class RandomContrast(Augmentation):
    method __init__ (line 519) | def __init__(self, intensity_min, intensity_max):
    method get_transform (line 528) | def get_transform(self, image):
  class RandomBrightness (line 533) | class RandomBrightness(Augmentation):
    method __init__ (line 545) | def __init__(self, intensity_min, intensity_max):
    method get_transform (line 554) | def get_transform(self, image):
  class RandomSaturation (line 559) | class RandomSaturation(Augmentation):
    method __init__ (line 572) | def __init__(self, intensity_min, intensity_max):
    method get_transform (line 581) | def get_transform(self, image):
  class RandomLighting (line 588) | class RandomLighting(Augmentation):
    method __init__ (line 597) | def __init__(self, scale):
    method get_transform (line 609) | def get_transform(self, image):
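
`ResizeShortestEdge.get_output_shape` above scales an image so its shorter side matches a target length, then shrinks further if the longer side would exceed `max_size`. A small sketch of that arithmetic, assuming the same rounding convention (`int(x + 0.5)`) as the listed method:

```python
def resize_shortest_edge_shape(h, w, short_edge, max_size):
    """Scale so the shorter side equals short_edge, then rescale if the
    longer side would exceed max_size; returns the rounded (new_h, new_w)."""
    scale = short_edge / min(h, w)
    if max(h, w) * scale > max_size:
        scale = max_size / max(h, w)
    new_h, new_w = h * scale, w * scale
    return int(new_h + 0.5), int(new_w + 0.5)

# A 480x640 image resized to shortest edge 800, capped at 1333:
shape = resize_shortest_edge_shape(480, 640, short_edge=800, max_size=1333)
```

With the common detection settings (short edge 800, max size 1333), a 480x640 input lands at 800x1067; a very elongated input instead hits the `max_size` cap and its short side ends up below 800.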

FILE: detectron2/detectron2/data/transforms/transform.py
  class ExtentTransform (line 36) | class ExtentTransform(Transform):
    method __init__ (line 46) | def __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):
    method apply_image (line 57) | def apply_image(self, img, interp=None):
    method apply_coords (line 75) | def apply_coords(self, coords):
    method apply_segmentation (line 89) | def apply_segmentation(self, segmentation):
  class ResizeTransform (line 94) | class ResizeTransform(Transform):
    method __init__ (line 99) | def __init__(self, h, w, new_h, new_w, interp=None):
    method apply_image (line 112) | def apply_image(self, img, interp=None):
    method apply_coords (line 149) | def apply_coords(self, coords):
    method apply_segmentation (line 154) | def apply_segmentation(self, segmentation):
    method inverse (line 158) | def inverse(self):
  class RotationTransform (line 162) | class RotationTransform(Transform):
    method __init__ (line 168) | def __init__(self, h, w, angle, expand=True, center=None, interp=None):
    method apply_image (line 200) | def apply_image(self, img, interp=None):
    method apply_coords (line 210) | def apply_coords(self, coords):
    method apply_segmentation (line 219) | def apply_segmentation(self, segmentation):
    method create_rotation_matrix (line 223) | def create_rotation_matrix(self, offset=0):
    method inverse (line 235) | def inverse(self):
  class ColorTransform (line 250) | class ColorTransform(Transform):
    method __init__ (line 258) | def __init__(self, op):
    method apply_image (line 269) | def apply_image(self, img):
    method apply_coords (line 272) | def apply_coords(self, coords):
    method inverse (line 275) | def inverse(self):
    method apply_segmentation (line 278) | def apply_segmentation(self, segmentation):
  class PILColorTransform (line 282) | class PILColorTransform(ColorTransform):
    method __init__ (line 289) | def __init__(self, op):
    method apply_image (line 302) | def apply_image(self, img):
  function HFlip_rotated_box (line 307) | def HFlip_rotated_box(transform, rotated_boxes):
  function Resize_rotated_box (line 323) | def Resize_rotated_box(transform, rotated_boxes):

FILE: detectron2/detectron2/engine/defaults.py
  function create_ddp_model (line 60) | def create_ddp_model(model, *, fp16_compression=False, **kwargs):
  function default_argument_parser (line 82) | def default_argument_parser(epilog=None):
  function _try_get_key (line 146) | def _try_get_key(cfg, *keys, default=None):
  function _highlight (line 160) | def _highlight(code, filename):
  function default_setup (line 174) | def default_setup(cfg, args):
  function default_writers (line 230) | def default_writers(output_dir: str, max_iter: Optional[int] = None):
  class DefaultPredictor (line 252) | class DefaultPredictor:
    method __init__ (line 280) | def __init__(self, cfg):
    method __call__ (line 297) | def __call__(self, original_image):
  class DefaultTrainer (line 321) | class DefaultTrainer(TrainerBase):
    method __init__ (line 364) | def __init__(self, cfg):
    method resume_or_load (line 398) | def resume_or_load(self, resume=True):
    method build_hooks (line 418) | def build_hooks(self):
    method build_writers (line 466) | def build_writers(self):
    method train (line 477) | def train(self):
    method run_step (line 492) | def run_step(self):
    method state_dict (line 496) | def state_dict(self):
    method load_state_dict (line 501) | def load_state_dict(self, state_dict):
    method build_model (line 506) | def build_model(cls, cfg):
    method build_optimizer (line 520) | def build_optimizer(cls, cfg, model):
    method build_lr_scheduler (line 531) | def build_lr_scheduler(cls, cfg, optimizer):
    method build_train_loader (line 539) | def build_train_loader(cls, cfg):
    method build_test_loader (line 550) | def build_test_loader(cls, cfg, dataset_name):
    method build_evaluator (line 561) | def build_evaluator(cls, cfg, dataset_name):
    method test (line 577) | def test(cls, cfg, model, evaluators=None):
    method auto_scale_workers (line 633) | def auto_scale_workers(cfg, num_workers: int):
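
`DefaultTrainer.auto_scale_workers` applies the linear scaling rule when the number of workers changes. A sketch of the arithmetic with a hypothetical helper (the real method operates on a full cfg, scaling `IMS_PER_BATCH`, `BASE_LR`, `MAX_ITER`, and warmup together):

```python
def scale_config(ims_per_batch, base_lr, max_iter, ref_workers, new_workers):
    # Linear scaling rule: the total number of images seen stays constant,
    # so batch size and LR grow with worker count while iterations shrink.
    scale = new_workers / ref_workers
    return (
        int(round(ims_per_batch * scale)),
        base_lr * scale,
        int(round(max_iter / scale)),
    )

# e.g. scaling a 2-worker recipe up to 16 workers
batch, lr, iters = scale_config(16, 0.1, 5000, ref_workers=2, new_workers=16)
```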

FILE: detectron2/detectron2/engine/hooks.py
  class CallbackHook (line 49) | class CallbackHook(HookBase):
    method __init__ (line 54) | def __init__(self, *, before_train=None, after_train=None, before_step...
    method before_train (line 63) | def before_train(self):
    method after_train (line 67) | def after_train(self):
    method before_step (line 75) | def before_step(self):
    method after_step (line 79) | def after_step(self):
  class IterationTimer (line 84) | class IterationTimer(HookBase):
    method __init__ (line 96) | def __init__(self, warmup_iter=3):
    method before_train (line 107) | def before_train(self):
    method after_train (line 112) | def after_train(self):
    method before_step (line 138) | def before_step(self):
    method after_step (line 142) | def after_step(self):
  class PeriodicWriter (line 156) | class PeriodicWriter(HookBase):
    method __init__ (line 164) | def __init__(self, writers, period=20):
    method after_step (line 175) | def after_step(self):
    method after_train (line 182) | def after_train(self):
  class PeriodicCheckpointer (line 190) | class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase):
    method before_train (line 201) | def before_train(self):
    method after_step (line 204) | def after_step(self):
  class BestCheckpointer (line 209) | class BestCheckpointer(HookBase):
    method __init__ (line 217) | def __init__(
    method _update_best (line 250) | def _update_best(self, val, iteration):
    method _best_checking (line 257) | def _best_checking(self):
    method after_step (line 290) | def after_step(self):
    method after_train (line 300) | def after_train(self):
  class LRScheduler (line 306) | class LRScheduler(HookBase):
    method __init__ (line 312) | def __init__(self, optimizer=None, scheduler=None):
    method before_train (line 325) | def before_train(self):
    method get_best_param_group_id (line 337) | def get_best_param_group_id(optimizer):
    method after_step (line 355) | def after_step(self):
    method scheduler (line 361) | def scheduler(self):
    method state_dict (line 364) | def state_dict(self):
    method load_state_dict (line 369) | def load_state_dict(self, state_dict):
  class TorchProfiler (line 376) | class TorchProfiler(HookBase):
    method __init__ (line 394) | def __init__(self, enable_predicate, output_dir, *, activities=None, s...
    method before_step (line 409) | def before_step(self):
    method after_step (line 434) | def after_step(self):
  class AutogradProfiler (line 456) | class AutogradProfiler(TorchProfiler):
    method __init__ (line 479) | def __init__(self, enable_predicate, output_dir, *, use_cuda=True):
    method before_step (line 493) | def before_step(self):
  class EvalHook (line 501) | class EvalHook(HookBase):
    method __init__ (line 508) | def __init__(self, eval_period, eval_function, eval_after_train=True):
    method _do_eval (line 527) | def _do_eval(self):
    method after_step (line 550) | def after_step(self):
    method after_train (line 557) | def after_train(self):
  class PreciseBN (line 566) | class PreciseBN(HookBase):
    method __init__ (line 576) | def __init__(self, period, model, data_loader, num_iter):
    method after_step (line 605) | def after_step(self):
    method update_stats (line 611) | def update_stats(self):
  class TorchMemoryStats (line 638) | class TorchMemoryStats(HookBase):
    method __init__ (line 643) | def __init__(self, period=20, max_runs=10):
    method after_step (line 655) | def after_step(self):
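
Periodic hooks such as `EvalHook`, `PeriodicWriter`, and `TorchMemoryStats` all gate on similar iteration arithmetic. A sketch of the firing condition, under the assumed semantics "fire every `period` steps and on the final step":

```python
def fires_at(step, max_iter, period):
    # `step` is 0-based; the hook runs after the step completes.
    next_step = step + 1
    is_final = next_step == max_iter
    return is_final or (period > 0 and next_step % period == 0)

schedule = [s for s in range(10) if fires_at(s, max_iter=10, period=4)]
```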

FILE: detectron2/detectron2/engine/launch.py
  function _find_free_port (line 15) | def _find_free_port():
  function launch (line 27) | def launch(
  function _distributed_worker (line 85) | def _distributed_worker(
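
`launch()` picks a rendezvous port via `_find_free_port`. The standard trick, sketched below, is to bind port 0 so the OS assigns an unused port, then read it back (note the small race window before the port is actually reused by the caller):

```python
import socket

def find_free_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0: let the OS choose an unused port
        return s.getsockname()[1]

port = find_free_port()
```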

FILE: detectron2/detectron2/engine/train_loop.py
  class HookBase (line 19) | class HookBase:
    method before_train (line 56) | def before_train(self):
    method after_train (line 62) | def after_train(self):
    method before_step (line 68) | def before_step(self):
    method after_step (line 74) | def after_step(self):
    method state_dict (line 80) | def state_dict(self):
  class TrainerBase (line 88) | class TrainerBase:
    method __init__ (line 107) | def __init__(self) -> None:
    method register_hooks (line 115) | def register_hooks(self, hooks: List[Optional[HookBase]]) -> None:
    method train (line 133) | def train(self, start_iter: int, max_iter: int):
    method before_train (line 161) | def before_train(self):
    method after_train (line 165) | def after_train(self):
    method before_step (line 170) | def before_step(self):
    method after_step (line 178) | def after_step(self):
    method run_step (line 182) | def run_step(self):
    method state_dict (line 185) | def state_dict(self):
    method load_state_dict (line 200) | def load_state_dict(self, state_dict):
  class SimpleTrainer (line 216) | class SimpleTrainer(TrainerBase):
    method __init__ (line 235) | def __init__(self, model, data_loader, optimizer):
    method run_step (line 259) | def run_step(self):
    method _data_loader_iter (line 298) | def _data_loader_iter(self):
    method reset_data_loader (line 304) | def reset_data_loader(self, data_loader_builder):
    method _write_metrics (line 314) | def _write_metrics(
    method write_metrics (line 323) | def write_metrics(
    method state_dict (line 365) | def state_dict(self):
    method load_state_dict (line 370) | def load_state_dict(self, state_dict):
  class AMPTrainer (line 375) | class AMPTrainer(SimpleTrainer):
    method __init__ (line 381) | def __init__(self, model, data_loader, optimizer, grad_scaler=None):
    method run_step (line 400) | def run_step(self):
    method state_dict (line 428) | def state_dict(self):
    method load_state_dict (line 433) | def load_state_dict(self, state_dict):
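
The `HookBase`/`TrainerBase` contract above is a template-method loop: every hook sees `before_train`, then `before_step`/`after_step` once per iteration, then `after_train`. A stripped-down sketch of that lifecycle (plain Python, no detectron2 imports; the real loop also maintains storage and exception handling):

```python
class Hook:
    trainer = None  # back-reference set by the trainer, as in HookBase

    def before_train(self): pass
    def after_train(self): pass
    def before_step(self): pass
    def after_step(self): pass

class Trainer:
    def __init__(self, hooks):
        self.hooks = list(hooks)
        for h in self.hooks:
            h.trainer = self  # hooks can read trainer state (e.g. iter)

    def run_step(self):  # subclasses do the forward/backward here
        pass

    def train(self, start_iter, max_iter):
        self.iter = start_iter
        for h in self.hooks: h.before_train()
        while self.iter < max_iter:
            for h in self.hooks: h.before_step()
            self.run_step()
            for h in self.hooks: h.after_step()
            self.iter += 1
        for h in self.hooks: h.after_train()

class Recorder(Hook):
    def __init__(self): self.events = []
    def before_train(self): self.events.append("before_train")
    def after_train(self): self.events.append("after_train")
    def before_step(self): self.events.append(f"step{self.trainer.iter}")

rec = Recorder()
Trainer([rec]).train(0, 2)
```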

FILE: detectron2/detectron2/evaluation/cityscapes_evaluation.py
  class CityscapesEvaluator (line 18) | class CityscapesEvaluator(DatasetEvaluator):
    method __init__ (line 23) | def __init__(self, dataset_name):
    method reset (line 34) | def reset(self):
  class CityscapesInstanceEvaluator (line 50) | class CityscapesInstanceEvaluator(CityscapesEvaluator):
    method process (line 60) | def process(self, inputs, outputs):
    method evaluate (line 91) | def evaluate(self):
  class CityscapesSemSegEvaluator (line 132) | class CityscapesSemSegEvaluator(CityscapesEvaluator):
    method process (line 142) | def process(self, inputs, outputs):
    method evaluate (line 158) | def evaluate(self):

FILE: detectron2/detectron2/evaluation/coco_evaluation.py
  class COCOEvaluator (line 34) | class COCOEvaluator(DatasetEvaluator):
    method __init__ (line 47) | def __init__(
    method reset (line 154) | def reset(self):
    method process (line 157) | def process(self, inputs, outputs):
    method evaluate (line 177) | def evaluate(self, img_ids=None):
    method _tasks_from_predictions (line 210) | def _tasks_from_predictions(self, predictions):
    method _eval_predictions (line 222) | def _eval_predictions(self, predictions, img_ids=None):
    method _eval_box_proposals (line 284) | def _eval_box_proposals(self, predictions):
    method _derive_coco_results (line 323) | def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
  function instances_to_coco_json (line 392) | def instances_to_coco_json(instances, img_id):
  function _evaluate_box_proposals (line 456) | def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=No...
  function _evaluate_predictions_on_coco (line 567) | def _evaluate_predictions_on_coco(
  class COCOevalMaxDets (line 634) | class COCOevalMaxDets(COCOeval):
    method summarize (line 640) | def summarize(self):
    method __str__ (line 721) | def __str__(self):

FILE: detectron2/detectron2/evaluation/evaluator.py
  class DatasetEvaluator (line 15) | class DatasetEvaluator:
    method reset (line 26) | def reset(self):
    method process (line 33) | def process(self, inputs, outputs):
    method evaluate (line 50) | def evaluate(self):
  class DatasetEvaluators (line 66) | class DatasetEvaluators(DatasetEvaluator):
    method __init__ (line 74) | def __init__(self, evaluators):
    method reset (line 82) | def reset(self):
    method process (line 86) | def process(self, inputs, outputs):
    method evaluate (line 90) | def evaluate(self):
  function inference_on_dataset (line 103) | def inference_on_dataset(
  function inference_context (line 213) | def inference_context(model):

FILE: detectron2/detectron2/evaluation/fast_eval_api.py
  class COCOeval_opt (line 13) | class COCOeval_opt(COCOeval):
    method evaluate (line 19) | def evaluate(self):
    method accumulate (line 98) | def accumulate(self):

FILE: detectron2/detectron2/evaluation/lvis_evaluation.py
  class LVISEvaluator (line 22) | class LVISEvaluator(DatasetEvaluator):
    method __init__ (line 28) | def __init__(
    method reset (line 77) | def reset(self):
    method process (line 80) | def process(self, inputs, outputs):
    method evaluate (line 99) | def evaluate(self):
    method _tasks_from_predictions (line 128) | def _tasks_from_predictions(self, predictions):
    method _eval_predictions (line 134) | def _eval_predictions(self, predictions):
    method _eval_box_proposals (line 180) | def _eval_box_proposals(self, predictions):
  function _evaluate_box_proposals (line 222) | def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=No...
  function _evaluate_predictions_on_lvis (line 331) | def _evaluate_predictions_on_lvis(

FILE: detectron2/detectron2/evaluation/panoptic_evaluation.py
  class COCOPanopticEvaluator (line 24) | class COCOPanopticEvaluator(DatasetEvaluator):
    method __init__ (line 32) | def __init__(self, dataset_name: str, output_dir: Optional[str] = None):
    method reset (line 50) | def reset(self):
    method _convert_category_id (line 53) | def _convert_category_id(self, segment_info):
    method process (line 68) | def process(self, inputs, outputs):
    method evaluate (line 114) | def evaluate(self):
  function _print_panoptic_results (line 168) | def _print_panoptic_results(pq_res):

FILE: detectron2/detectron2/evaluation/pascal_voc_evaluation.py
  class PascalVOCDetectionEvaluator (line 20) | class PascalVOCDetectionEvaluator(DatasetEvaluator):
    method __init__ (line 31) | def __init__(self, dataset_name):
    method reset (line 51) | def reset(self):
    method process (line 54) | def process(self, inputs, outputs):
    method evaluate (line 70) | def evaluate(self):
  function parse_rec (line 132) | def parse_rec(filename):
  function voc_ap (line 155) | def voc_ap(rec, prec, use_07_metric=False):
  function voc_eval (line 187) | def voc_eval(detpath, annopath, imagesetfile, classname, ovthresh=0.5, u...
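
`voc_ap` turns a precision/recall curve into average precision; with the default (non-07) metric this is the area under the monotonically non-increasing precision envelope. A pure-Python sketch of that branch:

```python
def voc_ap(rec, prec):
    # Pad the curve, force precision to be non-increasing (the envelope),
    # then integrate over the recall steps where recall actually changes.
    mrec = [0.0] + list(rec) + [1.0]
    mpre = [0.0] + list(prec) + [0.0]
    for i in range(len(mpre) - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    return sum(
        (mrec[i + 1] - mrec[i]) * mpre[i + 1]
        for i in range(len(mrec) - 1)
        if mrec[i + 1] != mrec[i]
    )

ap = voc_ap([0.5, 1.0], [1.0, 0.5])
```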

FILE: detectron2/detectron2/evaluation/rotated_coco_evaluation.py
  class RotatedCOCOeval (line 15) | class RotatedCOCOeval(COCOeval):
    method is_rotated (line 17) | def is_rotated(box_list):
    method boxlist_to_tensor (line 34) | def boxlist_to_tensor(boxlist, output_box_dim):
    method compute_iou_dt_gt (line 57) | def compute_iou_dt_gt(self, dt, gt, is_crowd):
    method computeIoU (line 68) | def computeIoU(self, imgId, catId):
  class RotatedCOCOEvaluator (line 97) | class RotatedCOCOEvaluator(COCOEvaluator):
    method process (line 104) | def process(self, inputs, outputs):
    method instances_to_json (line 124) | def instances_to_json(self, instances, img_id):
    method _eval_predictions (line 148) | def _eval_predictions(self, predictions, img_ids=None):  # img_ids: un...
    method _evaluate_predictions_on_coco (line 192) | def _evaluate_predictions_on_coco(self, coco_gt, coco_results):

FILE: detectron2/detectron2/evaluation/sem_seg_evaluation.py
  function load_image_into_numpy_array (line 27) | def load_image_into_numpy_array(
  class SemSegEvaluator (line 37) | class SemSegEvaluator(DatasetEvaluator):
    method __init__ (line 42) | def __init__(
    method reset (line 113) | def reset(self):
    method process (line 120) | def process(self, inputs, outputs):
    method evaluate (line 154) | def evaluate(self):
    method encode_json_sem_seg (line 232) | def encode_json_sem_seg(self, sem_seg, input_file_name):
    method _mask_to_boundary (line 254) | def _mask_to_boundary(self, mask: np.ndarray, dilation_ratio=0.02):
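
`SemSegEvaluator` accumulates a per-class confusion matrix in `process` and reduces it to IoU in `evaluate`. A pure-Python sketch of that reduction (not the exact detectron2 code, which also handles ignore labels and boundary IoU):

```python
def mean_iou(conf):
    """Mean IoU from a confusion matrix conf[gt][pred] of pixel counts."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                        # gt c, predicted elsewhere
        fp = sum(conf[r][c] for r in range(n)) - tp   # predicted c wrongly
        denom = tp + fp + fn
        if denom:  # skip classes absent from both gt and predictions
            ious.append(tp / denom)
    return sum(ious) / len(ious)

score = mean_iou([[3, 1], [0, 4]])
```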

FILE: detectron2/detectron2/evaluation/testing.py
  function print_csv_format (line 9) | def print_csv_format(results):
  function verify_results (line 31) | def verify_results(cfg, results):
  function flatten_results_dict (line 68) | def flatten_results_dict(results):

FILE: detectron2/detectron2/export/__init__.py
  function add_export_config (line 22) | def add_export_config(cfg):

FILE: detectron2/detectron2/export/api.py
  class Caffe2Tracer (line 22) | class Caffe2Tracer:
    method __init__ (line 45) | def __init__(self, cfg: CfgNode, model: nn.Module, inputs):
    method export_caffe2 (line 65) | def export_caffe2(self):
    method export_onnx (line 81) | def export_onnx(self):
    method export_torchscript (line 96) | def export_torchscript(self):
  class Caffe2Model (line 110) | class Caffe2Model(nn.Module):
    method __init__ (line 126) | def __init__(self, predict_net, init_net):
    method predict_net (line 136) | def predict_net(self):
    method init_net (line 143) | def init_net(self):
    method save_protobuf (line 149) | def save_protobuf(self, output_dir):
    method save_graph (line 175) | def save_graph(self, output_file, inputs=None):
    method load_protobuf (line 198) | def load_protobuf(dir):
    method __call__ (line 218) | def __call__(self, inputs):

FILE: detectron2/detectron2/export/c10.py
  class Caffe2Boxes (line 23) | class Caffe2Boxes(Boxes):
    method __init__ (line 30) | def __init__(self, tensor):
  class InstancesList (line 39) | class InstancesList(object):
    method __init__ (line 49) | def __init__(self, im_info, indices, extra_fields=None):
    method get_fields (line 59) | def get_fields(self):
    method has (line 73) | def has(self, name):
    method set (line 76) | def set(self, name, value):
    method __setattr__ (line 92) | def __setattr__(self, name, val):
    method __getattr__ (line 98) | def __getattr__(self, name):
    method __len__ (line 103) | def __len__(self):
    method flatten (line 106) | def flatten(self):
    method to_d2_instances_list (line 116) | def to_d2_instances_list(instances_list):
  class Caffe2Compatible (line 156) | class Caffe2Compatible(object):
    method _get_tensor_mode (line 161) | def _get_tensor_mode(self):
    method _set_tensor_mode (line 164) | def _set_tensor_mode(self, v):
  class Caffe2RPN (line 173) | class Caffe2RPN(Caffe2Compatible, rpn.RPN):
    method from_config (line 175) | def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
    method _generate_proposals (line 182) | def _generate_proposals(
    method forward (line 265) | def forward(self, images, features, gt_instances=None):
    method c2_postprocess (line 277) | def c2_postprocess(im_info, rpn_rois, rpn_roi_probs, tensor_mode):
  class Caffe2ROIPooler (line 293) | class Caffe2ROIPooler(Caffe2Compatible, poolers.ROIPooler):
    method c2_preprocess (line 295) | def c2_preprocess(box_lists):
    method forward (line 305) | def forward(self, x, box_lists):
  class Caffe2FastRCNNOutputsInference (line 385) | class Caffe2FastRCNNOutputsInference:
    method __init__ (line 386) | def __init__(self, tensor_mode):
    method __call__ (line 389) | def __call__(self, box_predictor, predictions, proposals):
  class Caffe2MaskRCNNInference (line 519) | class Caffe2MaskRCNNInference:
    method __call__ (line 520) | def __call__(self, pred_mask_logits, pred_instances):
  class Caffe2KeypointRCNNInference (line 531) | class Caffe2KeypointRCNNInference:
    method __init__ (line 532) | def __init__(self, use_heatmap_max_keypoint):
    method __call__ (line 535) | def __call__(self, pred_keypoint_logits, pred_instances):

FILE: detectron2/detectron2/export/caffe2_export.py
  function export_onnx_model (line 34) | def export_onnx_model(model, inputs):
  function _op_stats (line 70) | def _op_stats(net_def):
  function _assign_device_option (line 79) | def _assign_device_option(
  function export_caffe2_detection_model (line 125) | def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs:...
  function run_and_save_graph (line 171) | def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_...

FILE: detectron2/detectron2/export/caffe2_inference.py
  class ProtobufModel (line 17) | class ProtobufModel(torch.nn.Module):
    method __init__ (line 26) | def __init__(self, predict_net, init_net):
    method _infer_output_devices (line 48) | def _infer_output_devices(self, inputs):
    method forward (line 71) | def forward(self, inputs):
  class ProtobufDetectionModel (line 125) | class ProtobufDetectionModel(torch.nn.Module):
    method __init__ (line 131) | def __init__(self, predict_net, init_net, *, convert_outputs=None):
    method _convert_inputs (line 151) | def _convert_inputs(self, batched_inputs):
    method forward (line 157) | def forward(self, batched_inputs):

FILE: detectron2/detectron2/export/caffe2_modeling.py
  function assemble_rcnn_outputs_by_name (line 27) | def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mas...
  function _cast_to_f32 (line 95) | def _cast_to_f32(f64):
  function set_caffe2_compatible_tensor_mode (line 99) | def set_caffe2_compatible_tensor_mode(model, enable=True):
  function convert_batched_inputs_to_c2_format (line 107) | def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibilit...
  class Caffe2MetaArch (line 135) | class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module):
    method __init__ (line 142) | def __init__(self, cfg, torch_model):
    method get_caffe2_inputs (line 154) | def get_caffe2_inputs(self, batched_inputs):
    method encode_additional_info (line 178) | def encode_additional_info(self, predict_net, init_net):
    method forward (line 184) | def forward(self, inputs):
    method _caffe2_preprocess_image (line 199) | def _caffe2_preprocess_image(self, inputs):
    method get_outputs_converter (line 217) | def get_outputs_converter(predict_net, init_net):
  class Caffe2GeneralizedRCNN (line 245) | class Caffe2GeneralizedRCNN(Caffe2MetaArch):
    method __init__ (line 246) | def __init__(self, cfg, torch_model):
    method encode_additional_info (line 259) | def encode_additional_info(self, predict_net, init_net):
    method forward (line 268) | def forward(self, inputs):
    method get_outputs_converter (line 279) | def get_outputs_converter(predict_net, init_net):
  class Caffe2RetinaNet (line 289) | class Caffe2RetinaNet(Caffe2MetaArch):
    method __init__ (line 290) | def __init__(self, cfg, torch_model):
    method forward (line 295) | def forward(self, inputs):
    method encode_additional_info (line 316) | def encode_additional_info(self, predict_net, init_net):
    method _encode_anchor_generator_cfg (line 349) | def _encode_anchor_generator_cfg(self, predict_net):
    method get_outputs_converter (line 359) | def get_outputs_converter(predict_net, init_net):

FILE: detectron2/detectron2/export/caffe2_patch.py
  class GenericMixin (line 22) | class GenericMixin(object):
  class Caffe2CompatibleConverter (line 26) | class Caffe2CompatibleConverter(object):
    method __init__ (line 32) | def __init__(self, replaceCls):
    method create_from (line 35) | def create_from(self, module):
  function patch (line 57) | def patch(model, target, updater, *args, **kwargs):
  function patch_generalized_rcnn (line 70) | def patch_generalized_rcnn(model):
  function mock_fastrcnn_outputs_inference (line 79) | def mock_fastrcnn_outputs_inference(
  function mock_mask_rcnn_inference (line 94) | def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
  function mock_keypoint_rcnn_inference (line 104) | def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatma...
  class ROIHeadsPatcher (line 114) | class ROIHeadsPatcher:
    method __init__ (line 115) | def __init__(self, heads, use_heatmap_max_keypoint):
    method mock_roi_heads (line 120) | def mock_roi_heads(self, tensor_mode=True):

FILE: detectron2/detectron2/export/flatten.py
  class Schema (line 15) | class Schema:
    method flatten (line 37) | def flatten(cls, obj):
    method __call__ (line 40) | def __call__(self, values):
    method _concat (line 44) | def _concat(values):
    method _split (line 54) | def _split(values, sizes):
  class ListSchema (line 68) | class ListSchema(Schema):
    method __call__ (line 72) | def __call__(self, values):
    method flatten (line 82) | def flatten(cls, obj):
  class TupleSchema (line 89) | class TupleSchema(ListSchema):
    method __call__ (line 90) | def __call__(self, values):
  class IdentitySchema (line 95) | class IdentitySchema(Schema):
    method __call__ (line 96) | def __call__(self, values):
    method flatten (line 100) | def flatten(cls, obj):
  class DictSchema (line 105) | class DictSchema(ListSchema):
    method __call__ (line 108) | def __call__(self, values):
    method flatten (line 113) | def flatten(cls, obj):
  class InstancesSchema (line 124) | class InstancesSchema(DictSchema):
    method __call__ (line 125) | def __call__(self, values):
    method flatten (line 131) | def flatten(cls, obj):
  class TensorWrapSchema (line 140) | class TensorWrapSchema(Schema):
    method __call__ (line 148) | def __call__(self, values):
    method flatten (line 152) | def flatten(cls, obj):
  function flatten_to_tuple (line 158) | def flatten_to_tuple(obj):
  class TracingAdapter (line 186) | class TracingAdapter(nn.Module):
    method __init__ (line 225) | def __init__(
    method forward (line 279) | def forward(self, *args: torch.Tensor):
    method _create_wrapper (line 319) | def _create_wrapper(self, traced_model):
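
`TracingAdapter` builds on `flatten_to_tuple`: nested inputs are flattened into a tuple of leaves plus a schema that can rebuild the original structure after tracing. A pure-Python sketch of the idea, using closures where the real code uses dataclass `Schema` types (so they survive serialization) and also understands `Instances`/`Boxes`:

```python
def flatten_to_tuple(obj):
    """Flatten nested lists/tuples/dicts into (flat_tuple, rebuild_fn)."""
    if isinstance(obj, (list, tuple)):
        parts = [flatten_to_tuple(x) for x in obj]
        sizes = [len(flat) for flat, _ in parts]
        rebuilds = [rb for _, rb in parts]
        cls = type(obj)

        def rebuild(values):
            out, i = [], 0
            for size, rb in zip(sizes, rebuilds):
                out.append(rb(values[i:i + size]))
                i += size
            return cls(out)

        return tuple(v for flat, _ in parts for v in flat), rebuild
    if isinstance(obj, dict):
        keys = sorted(obj)  # fixed order so flat positions are stable
        flat, rebuild_list = flatten_to_tuple([obj[k] for k in keys])

        def rebuild(values):
            return dict(zip(keys, rebuild_list(values)))

        return flat, rebuild
    return (obj,), (lambda values: values[0])  # leaf

flat, rebuild = flatten_to_tuple({"a": [1, 2], "b": 3})
```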

FILE: detectron2/detectron2/export/shared.py
  function to_device (line 25) | def to_device(t, device_str):
  function BilinearInterpolation (line 48) | def BilinearInterpolation(tensor_in, up_scale):
  function onnx_compatibale_interpolate (line 82) | def onnx_compatibale_interpolate(
  function mock_torch_nn_functional_interpolate (line 116) | def mock_torch_nn_functional_interpolate():
  class ScopedWS (line 129) | class ScopedWS(object):
    method __init__ (line 130) | def __init__(self, ws_name, is_reset, is_cleanup=False):
    method __enter__ (line 136) | def __enter__(self):
    method __exit__ (line 145) | def __exit__(self, *args):
  function fetch_any_blob (line 152) | def fetch_any_blob(name):
  function get_pb_arg (line 167) | def get_pb_arg(pb, arg_name):
  function get_pb_arg_valf (line 174) | def get_pb_arg_valf(pb, arg_name, default_val):
  function get_pb_arg_floats (line 179) | def get_pb_arg_floats(pb, arg_name, default_val):
  function get_pb_arg_ints (line 184) | def get_pb_arg_ints(pb, arg_name, default_val):
  function get_pb_arg_vali (line 189) | def get_pb_arg_vali(pb, arg_name, default_val):
  function get_pb_arg_vals (line 194) | def get_pb_arg_vals(pb, arg_name, default_val):
  function get_pb_arg_valstrings (line 199) | def get_pb_arg_valstrings(pb, arg_name, default_val):
  function check_set_pb_arg (line 204) | def check_set_pb_arg(pb, arg_name, arg_attr, arg_value, allow_override=F...
  function _create_const_fill_op_from_numpy (line 222) | def _create_const_fill_op_from_numpy(name, tensor, device_option=None):
  function _create_const_fill_op_from_c2_int8_tensor (line 243) | def _create_const_fill_op_from_c2_int8_tensor(name, int8_tensor):
  function create_const_fill_op (line 265) | def create_const_fill_op(
  function construct_init_net_from_params (line 290) | def construct_init_net_from_params(
  function get_producer_map (line 314) | def get_producer_map(ssa):
  function get_consumer_map (line 327) | def get_consumer_map(ssa):
  function get_params_from_init_net (line 340) | def get_params_from_init_net(
  function _updater_raise (line 369) | def _updater_raise(op, input_types, output_types):
  function _generic_status_identifier (line 376) | def _generic_status_identifier(
  function infer_device_type (line 448) | def infer_device_type(
  function _modify_blob_names (line 491) | def _modify_blob_names(ops, blob_rename_f):
  function _rename_blob (line 507) | def _rename_blob(name, blob_sizes, blob_ranges):
  function save_graph (line 523) | def save_graph(net, file_name, graph_name="net", op_only=True, blob_size...
  function save_graph_base (line 528) | def save_graph_base(net, file_name, graph_name="net", op_only=True, blob...
  function group_norm_replace_aten_with_caffe2 (line 563) | def group_norm_replace_aten_with_caffe2(predict_net: caffe2_pb2.NetDef):
  function alias (line 592) | def alias(x, name, is_backward=False):
  function fuse_alias_placeholder (line 599) | def fuse_alias_placeholder(predict_net, init_net):
  class IllegalGraphTransformError (line 627) | class IllegalGraphTransformError(ValueError):
  function _rename_versioned_blob_in_proto (line 631) | def _rename_versioned_blob_in_proto(
  function rename_op_input (line 662) | def rename_op_input(
  function rename_op_output (line 729) | def rename_op_output(predict_net: caffe2_pb2.NetDef, op_id: int, output_...
  function get_sub_graph_external_input_output (line 750) | def get_sub_graph_external_input_output(
  class DiGraph (line 782) | class DiGraph:
    method __init__ (line 785) | def __init__(self):
    method add_edge (line 789) | def add_edge(self, u, v):
    method get_all_paths (line 795) | def get_all_paths(self, s, d):
    method from_ssa (line 816) | def from_ssa(ssa):
  function _get_dependency_chain (line 825) | def _get_dependency_chain(ssa, versioned_target, versioned_source):
  function identify_reshape_sub_graph (line 855) | def identify_reshape_sub_graph(predict_net: caffe2_pb2.NetDef) -> List[L...
  function remove_reshape_for_fc (line 882) | def remove_reshape_for_fc(predict_net, params):
  function fuse_copy_between_cpu_and_gpu (line 952) | def fuse_copy_between_cpu_and_gpu(predict_net: caffe2_pb2.NetDef):
  function remove_dead_end_ops (line 1010) | def remove_dead_end_ops(net_def: caffe2_pb2.NetDef):

FILE: detectron2/detectron2/export/torchscript.py
  function scripting_with_instances (line 13) | def scripting_with_instances(model, fields):
  function dump_torchscript_IR (line 63) | def dump_torchscript_IR(model, dir):

FILE: detectron2/detectron2/export/torchscript_patch.py
  function _clear_jit_cache (line 20) | def _clear_jit_cache():
  function _add_instances_conversion_methods (line 28) | def _add_instances_conversion_methods(newInstances):
  function patch_instances (line 51) | def patch_instances(fields):
  function _gen_instance_class (line 90) | def _gen_instance_class(fields):
  function _gen_instance_module (line 290) | def _gen_instance_module(fields):
  function _import (line 309) | def _import(path):
  function patch_builtin_len (line 316) | def patch_builtin_len(modules=()):
  function patch_nonscriptable_classes (line 342) | def patch_nonscriptable_classes():
  function freeze_training_mode (line 392) | def freeze_training_mode(model):

FILE: detectron2/detectron2/layers/aspp.py
  class ASPP (line 14) | class ASPP(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 129) | def forward(self, x):

FILE: detectron2/detectron2/layers/batch_norm.py
  class FrozenBatchNorm2d (line 13) | class FrozenBatchNorm2d(nn.Module):
    method __init__ (line 35) | def __init__(self, num_features, eps=1e-5):
    method forward (line 44) | def forward(self, x):
    method _load_from_state_dict (line 67) | def _load_from_state_dict(
    method __repr__ (line 84) | def __repr__(self):
    method convert_frozen_batchnorm (line 88) | def convert_frozen_batchnorm(cls, module):
  function get_norm (line 121) | def get_norm(norm, out_channels):
  class NaiveSyncBatchNorm (line 152) | class NaiveSyncBatchNorm(BatchNorm2d):
    method __init__ (line 180) | def __init__(self, *args, stats_mode="", **kwargs):
    method forward (line 185) | def forward(self, input):
  class CycleBatchNormList (line 233) | class CycleBatchNormList(nn.ModuleList):
    method __init__ (line 249) | def __init__(self, length: int, bn_class=nn.BatchNorm2d, **kwargs):
    method forward (line 265) | def forward(self, x):
    method extra_repr (line 276) | def extra_repr(self):
  class LayerNorm (line 280) | class LayerNorm(nn.Module):
    method __init__ (line 288) | def __init__(self, normalized_shape, eps=1e-6):
    method forward (line 295) | def forward(self, x):
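
`FrozenBatchNorm2d` keeps running statistics and affine parameters as fixed buffers, so at inference the whole layer folds into a per-channel affine map `y = x * scale + shift`. A single-channel scalar sketch of that folding:

```python
def frozen_bn(x, mean, var, weight, bias, eps=1e-5):
    # With fixed statistics, (x - mean) / sqrt(var + eps) * weight + bias
    # collapses into one multiply-add per channel.
    scale = weight / (var + eps) ** 0.5
    shift = bias - mean * scale
    return [v * scale + shift for v in x]

out = frozen_bn([1.0, 2.0], mean=1.0, var=4.0, weight=2.0, bias=0.5, eps=0.0)
```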

FILE: detectron2/detectron2/layers/blocks.py
  class CNNBlockBase (line 16) | class CNNBlockBase(nn.Module):
    method __init__ (line 29) | def __init__(self, in_channels, out_channels, stride):
    method freeze (line 43) | def freeze(self):
  class DepthwiseSeparableConv2d (line 58) | class DepthwiseSeparableConv2d(nn.Module):
    method __init__ (line 66) | def __init__(
    method forward (line 110) | def forward(self, x):

FILE: detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated.h
  type detectron2 (line 5) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.cpp
  type detectron2 (line 12) | namespace detectron2 {
    type PreCalc (line 16) | struct PreCalc {
    function pre_calc_for_bilinear_interpolate (line 28) | void pre_calc_for_bilinear_interpolate(
    function bilinear_interpolate_gradient (line 132) | void bilinear_interpolate_gradient(
    function add (line 195) | inline void add(T* address, const T& val) {
    function ROIAlignRotatedForward (line 202) | void ROIAlignRotatedForward(
    function ROIAlignRotatedBackward (line 313) | void ROIAlignRotatedBackward(
    function ROIAlignRotated_forward_cpu (line 418) | at::Tensor ROIAlignRotated_forward_cpu(
    function ROIAlignRotated_backward_cpu (line 466) | at::Tensor ROIAlignRotated_backward_cpu(

FILE: detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated.h
  type detectron2 (line 5) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp
  type detectron2 (line 5) | namespace detectron2 {
    function box_iou_rotated_cpu_kernel (line 8) | void box_iou_rotated_cpu_kernel(
    function box_iou_rotated_cpu (line 23) | at::Tensor box_iou_rotated_cpu(

FILE: detectron2/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_utils.h
  type detectron2 (line 17) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/cocoeval/cocoeval.cpp
  type detectron2 (line 10) | namespace detectron2 {
    type COCOeval (line 12) | namespace COCOeval {
      function SortInstancesByDetectionScore (line 18) | void SortInstancesByDetectionScore(
      function SortInstancesByIgnore (line 34) | void SortInstancesByIgnore(
      function MatchDetectionsToGroundTruth (line 61) | void MatchDetectionsToGroundTruth(
      function EvaluateImages (line 142) | std::vector<ImageEvaluation> EvaluateImages(
      function list_to_vec (line 203) | std::vector<T> list_to_vec(const py::list& l) {
      function BuildSortedDetectionList (line 223) | int BuildSortedDetectionList(
      function ComputePrecisionRecallCurve (line 284) | void ComputePrecisionRecallCurve(
      function Accumulate (line 372) | py::dict Accumulate(

FILE: detectron2/detectron2/layers/csrc/cocoeval/cocoeval.h
  namespace detectron2 (line 12) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/deformable/deform_conv.h
  namespace detectron2 (line 5) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h
  namespace detectron2 (line 5) | namespace detectron2 {

FILE: detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.cpp
  namespace detectron2 (line 5) | namespace detectron2 {
    function nms_rotated_cpu_kernel (line 8) | at::Tensor nms_rotated_cpu_kernel(
    function nms_rotated_cpu (line 62) | at::Tensor nms_rotated_cpu(

FILE: detectron2/detectron2/layers/csrc/vision.cpp
  namespace detectron2 (line 10) | namespace detectron2 {
    function get_cuda_version (line 16) | std::string get_cuda_version() {
    function has_cuda (line 41) | bool has_cuda() {
    function get_compiler_version (line 51) | std::string get_compiler_version() {
    function PYBIND11_MODULE (line 77) | PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    function TORCH_LIBRARY (line 111) | TORCH_LIBRARY(detectron2, m) {

FILE: detectron2/detectron2/layers/deform_conv.py
  class _DeformConv (line 16) | class _DeformConv(Function):
    method forward (line 18) | def forward(
    method backward (line 85) | def backward(ctx, grad_output):
    method _output_size (line 146) | def _output_size(input, weight, padding, dilation, stride):
    method _cal_im2col_step (line 165) | def _cal_im2col_step(input_size, default_size):
  class _ModulatedDeformConv (line 187) | class _ModulatedDeformConv(Function):
    method forward (line 189) | def forward(
    method backward (line 246) | def backward(ctx, grad_output):
    method _infer_shape (line 298) | def _infer_shape(ctx, input, weight):
  class DeformConv (line 316) | class DeformConv(nn.Module):
    method __init__ (line 317) | def __init__(
    method forward (line 369) | def forward(self, x, offset):
    method extra_repr (line 400) | def extra_repr(self):
  class ModulatedDeformConv (line 413) | class ModulatedDeformConv(nn.Module):
    method __init__ (line 414) | def __init__(
    method forward (line 463) | def forward(self, x, offset, mask):
    method extra_repr (line 492) | def extra_repr(self):

FILE: detectron2/detectron2/layers/losses.py
  function diou_loss (line 5) | def diou_loss(
  function ciou_loss (line 66) | def ciou_loss(

FILE: detectron2/detectron2/layers/mask_ops.py
  function _do_paste_mask (line 17) | def _do_paste_mask(masks, boxes, img_h: int, img_w: int, skip_empty: boo...
  function paste_masks_in_image (line 74) | def paste_masks_in_image(
  function paste_mask_in_image_old (line 155) | def paste_mask_in_image_old(mask, box, img_h, img_w, threshold):
  function pad_masks (line 219) | def pad_masks(masks, padding):
  function scale_boxes (line 237) | def scale_boxes(boxes, scale):
  function _paste_masks_tensor_shape (line 264) | def _paste_masks_tensor_shape(

FILE: detectron2/detectron2/layers/nms.py
  function batched_nms (line 9) | def batched_nms(
  function nms_rotated (line 25) | def nms_rotated(boxes, scores, iou_threshold):
  function batched_nms_rotated (line 91) | def batched_nms_rotated(boxes, scores, idxs, iou_threshold):
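
  Note: `batched_nms` performs class-aware NMS via the standard offset trick: each box is shifted by a per-class offset larger than any coordinate, so boxes of different classes can never overlap and one class-agnostic NMS pass suffices. A minimal pure-Python sketch of the idea (not the detectron2 implementation, which dispatches to torchvision on tensors):

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter <= 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold):
    # Greedy NMS: visit boxes by descending score, keep a box only if it
    # does not overlap any already-kept box above the threshold.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

def batched_nms(boxes, scores, idxs, iou_threshold):
    # Offset trick: shift each box by (class index * offset) with an offset
    # larger than any coordinate, so cross-class boxes never overlap.
    max_coord = max((c for b in boxes for c in b), default=0.0) + 1.0
    shifted = [[c + idx * max_coord for c in b] for b, idx in zip(boxes, idxs)]
    return nms(shifted, scores, iou_threshold)
```

  For example, two heavily overlapping boxes survive together when they carry different class indices, but not when they share one.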

FILE: detectron2/detectron2/layers/roi_align.py
  class ROIAlign (line 7) | class ROIAlign(nn.Module):
    method __init__ (line 8) | def __init__(self, output_size, spatial_scale, sampling_ratio, aligned...
    method forward (line 49) | def forward(self, input, rois):
    method __repr__ (line 67) | def __repr__(self):

FILE: detectron2/detectron2/layers/roi_align_rotated.py
  class _ROIAlignRotated (line 9) | class _ROIAlignRotated(Function):
    method forward (line 11) | def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
    method backward (line 24) | def backward(ctx, grad_output):
  class ROIAlignRotated (line 48) | class ROIAlignRotated(nn.Module):
    method __init__ (line 49) | def __init__(self, output_size, spatial_scale, sampling_ratio):
    method forward (line 69) | def forward(self, input, rois):
    method __repr__ (line 85) | def __repr__(self):

FILE: detectron2/detectron2/layers/rotated_boxes.py
  function pairwise_iou_rotated (line 6) | def pairwise_iou_rotated(boxes1, boxes2):

FILE: detectron2/detectron2/layers/shape_spec.py
  class ShapeSpec (line 8) | class ShapeSpec:

FILE: detectron2/detectron2/layers/wrappers.py
  function shapes_to_tensor (line 17) | def shapes_to_tensor(x: List[int], device: Optional[torch.device] = None...
  function cat (line 39) | def cat(tensors: List[torch.Tensor], dim: int = 0):
  function empty_input_loss_func_wrapper (line 49) | def empty_input_loss_func_wrapper(loss_func):
  class _NewEmptyTensorOp (line 64) | class _NewEmptyTensorOp(torch.autograd.Function):
    method forward (line 66) | def forward(ctx, x, new_shape):
    method backward (line 71) | def backward(ctx, grad):
  class Conv2d (line 76) | class Conv2d(torch.nn.Conv2d):
    method __init__ (line 81) | def __init__(self, *args, **kwargs):
    method forward (line 98) | def forward(self, x):
  function nonzero_tuple (line 129) | def nonzero_tuple(x):
  function move_device_like (line 143) | def move_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:

FILE: detectron2/detectron2/model_zoo/configs/Misc/torchvision_imagenet_R_50.py
  function build_data_loader (line 40) | def build_data_loader(dataset, batch_size, num_workers, training=True):
  class ClassificationNet (line 50) | class ClassificationNet(nn.Module):
    method __init__ (line 51) | def __init__(self, model: nn.Module):
    method device (line 56) | def device(self):
    method forward (line 59) | def forward(self, inputs):
  class ClassificationAcc (line 69) | class ClassificationAcc(DatasetEvaluator):
    method reset (line 70) | def reset(self):
    method process (line 73) | def process(self, inputs, outputs):
    method evaluate (line 78) | def evaluate(self):

FILE: detectron2/detectron2/model_zoo/configs/common/coco_schedule.py
  function default_X_scheduler (line 7) | def default_X_scheduler(num_X):

FILE: detectron2/detectron2/model_zoo/model_zoo.py
  class _ModelZooUrls (line 12) | class _ModelZooUrls(object):
    method query (line 99) | def query(config_path: str) -> Optional[str]:
  function get_checkpoint_url (line 111) | def get_checkpoint_url(config_path):
  function get_config_file (line 128) | def get_config_file(config_path):
  function get_config (line 147) | def get_config(config_path, trained: bool = False):
  function get (line 180) | def get(config_path, trained: bool = False, device: Optional[str] = None):

FILE: detectron2/detectron2/modeling/anchor_generator.py
  class BufferList (line 21) | class BufferList(nn.Module):
    method __init__ (line 26) | def __init__(self, buffers):
    method __len__ (line 32) | def __len__(self):
    method __iter__ (line 35) | def __iter__(self):
  function _create_grid_offsets (line 39) | def _create_grid_offsets(
  function _broadcast_params (line 58) | def _broadcast_params(params, num_features, name):
  class DefaultAnchorGenerator (line 86) | class DefaultAnchorGenerator(nn.Module):
    method __init__ (line 98) | def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5):
    method from_config (line 128) | def from_config(cls, cfg, input_shape: List[ShapeSpec]):
    method _calculate_anchors (line 136) | def _calculate_anchors(self, sizes, aspect_ratios):
    method num_cell_anchors (line 144) | def num_cell_anchors(self):
    method num_anchors (line 152) | def num_anchors(self):
    method _grid_anchors (line 165) | def _grid_anchors(self, grid_sizes: List[List[int]]):
    method generate_cell_anchors (line 181) | def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_...
    method forward (line 218) | def forward(self, features: List[torch.Tensor]):
  class RotatedAnchorGenerator (line 235) | class RotatedAnchorGenerator(nn.Module):
    method __init__ (line 247) | def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5):
    method from_config (line 280) | def from_config(cls, cfg, input_shape: List[ShapeSpec]):
    method _calculate_anchors (line 289) | def _calculate_anchors(self, sizes, aspect_ratios, angles):
    method num_cell_anchors (line 297) | def num_cell_anchors(self):
    method num_anchors (line 304) | def num_anchors(self):
    method _grid_anchors (line 318) | def _grid_anchors(self, grid_sizes):
    method generate_cell_anchors (line 329) | def generate_cell_anchors(
    method forward (line 365) | def forward(self, features):
  function build_anchor_generator (line 381) | def build_anchor_generator(cfg, input_shape):
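
  Note: `DefaultAnchorGenerator.generate_cell_anchors` builds one anchor per (size, aspect ratio) pair, centered at the origin, with area `size**2` and height/width ratio equal to the aspect ratio. A dependency-free sketch of that construction:

```python
from math import sqrt

def generate_cell_anchors(sizes=(32, 64), aspect_ratios=(0.5, 1.0, 2.0)):
    # For each (size, aspect_ratio) pair, emit an XYXY anchor centered at
    # (0, 0) whose area is size**2 and whose h/w ratio is aspect_ratio.
    anchors = []
    for size in sizes:
        area = size ** 2
        for ar in aspect_ratios:
            w = sqrt(area / ar)
            h = ar * w
            anchors.append((-w / 2.0, -h / 2.0, w / 2.0, h / 2.0))
    return anchors
```

  `_grid_anchors` then shifts these cell anchors to every feature-map location using the per-level stride.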

FILE: detectron2/detectron2/modeling/backbone/backbone.py
  class Backbone (line 11) | class Backbone(nn.Module, metaclass=ABCMeta):
    method __init__ (line 16) | def __init__(self):
    method forward (line 23) | def forward(self):
    method size_divisibility (line 33) | def size_divisibility(self) -> int:
    method padding_constraints (line 44) | def padding_constraints(self) -> Dict[str, int]:
    method output_shape (line 63) | def output_shape(self):

FILE: detectron2/detectron2/modeling/backbone/build.py
  function build_backbone (line 20) | def build_backbone(cfg, input_shape=None):

FILE: detectron2/detectron2/modeling/backbone/fpn.py
  class FPN (line 17) | class FPN(Backbone):
    method __init__ (line 25) | def __init__(
    method size_divisibility (line 119) | def size_divisibility(self):
    method padding_constraints (line 123) | def padding_constraints(self):
    method forward (line 126) | def forward(self, x):
    method output_shape (line 169) | def output_shape(self):
  function _assert_strides_are_log2_contiguous (line 178) | def _assert_strides_are_log2_contiguous(strides):
  class LastLevelMaxPool (line 188) | class LastLevelMaxPool(nn.Module):
    method __init__ (line 194) | def __init__(self):
    method forward (line 199) | def forward(self, x):
  class LastLevelP6P7 (line 203) | class LastLevelP6P7(nn.Module):
    method __init__ (line 209) | def __init__(self, in_channels, out_channels, in_feature="res5"):
    method forward (line 218) | def forward(self, c5):
  function build_resnet_fpn_backbone (line 225) | def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
  function build_retinanet_resnet_fpn_backbone (line 248) | def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec):
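
  Note: `_assert_strides_are_log2_contiguous` guards the FPN assumption that each input feature map has exactly half the resolution of the previous one (strides like [4, 8, 16, 32]). A trivial standalone version:

```python
def assert_strides_are_log2_contiguous(strides):
    # FPN's top-down pathway upsamples by exactly 2x at each step, so every
    # stride must be exactly twice the previous one.
    for i, s in enumerate(strides[1:], 1):
        assert s == 2 * strides[i - 1], (
            f"Strides {strides[i - 1]} {s} are not log2 contiguous"
        )
```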

FILE: detectron2/detectron2/modeling/backbone/mvit.py
  function attention_pool (line 24) | def attention_pool(x, pool, norm=None):
  class MultiScaleAttention (line 36) | class MultiScaleAttention(nn.Module):
    method __init__ (line 39) | def __init__(
    method forward (line 131) | def forward(self, x):
  class MultiScaleBlock (line 180) | class MultiScaleBlock(nn.Module):
    method __init__ (line 183) | def __init__(
    method forward (line 257) | def forward(self, x):
  class MViT (line 272) | class MViT(Backbone):
    method __init__ (line 277) | def __init__(
    method _init_weights (line 420) | def _init_weights(self, m):
    method forward (line 429) | def forward(self, x):

FILE: detectron2/detectron2/modeling/backbone/regnet.py
  function conv2d (line 28) | def conv2d(w_in, w_out, k, *, stride=1, groups=1, bias=False):
  function gap2d (line 35) | def gap2d():
  function pool2d (line 40) | def pool2d(k, *, stride=1):
  function init_weights (line 46) | def init_weights(m):
  class ResStem (line 60) | class ResStem(CNNBlockBase):
    method __init__ (line 63) | def __init__(self, w_in, w_out, norm, activation_class):
    method forward (line 70) | def forward(self, x):
  class SimpleStem (line 76) | class SimpleStem(CNNBlockBase):
    method __init__ (line 79) | def __init__(self, w_in, w_out, norm, activation_class):
    method forward (line 85) | def forward(self, x):
  class SE (line 91) | class SE(nn.Module):
    method __init__ (line 94) | def __init__(self, w_in, w_se, activation_class):
    method forward (line 104) | def forward(self, x):
  class VanillaBlock (line 108) | class VanillaBlock(CNNBlockBase):
    method __init__ (line 111) | def __init__(self, w_in, w_out, stride, norm, activation_class, _params):
    method forward (line 120) | def forward(self, x):
  class BasicTransform (line 126) | class BasicTransform(nn.Module):
    method __init__ (line 129) | def __init__(self, w_in, w_out, stride, norm, activation_class, _params):
    method forward (line 138) | def forward(self, x):
  class ResBasicBlock (line 144) | class ResBasicBlock(CNNBlockBase):
    method __init__ (line 147) | def __init__(self, w_in, w_out, stride, norm, activation_class, params):
    method forward (line 156) | def forward(self, x):
  class BottleneckTransform (line 161) | class BottleneckTransform(nn.Module):
    method __init__ (line 164) | def __init__(self, w_in, w_out, stride, norm, activation_class, params):
    method forward (line 180) | def forward(self, x):
  class ResBottleneckBlock (line 186) | class ResBottleneckBlock(CNNBlockBase):
    method __init__ (line 189) | def __init__(self, w_in, w_out, stride, norm, activation_class, params):
    method forward (line 198) | def forward(self, x):
  class AnyStage (line 203) | class AnyStage(nn.Module):
    method __init__ (line 206) | def __init__(self, w_in, w_out, stride, d, block_class, norm, activati...
    method forward (line 213) | def forward(self, x):
  class AnyNet (line 219) | class AnyNet(Backbone):
    method __init__ (line 222) | def __init__(
    method forward (line 305) | def forward(self, x):
    method output_shape (line 324) | def output_shape(self):
    method freeze (line 332) | def freeze(self, freeze_at=0):
  function adjust_block_compatibility (line 356) | def adjust_block_compatibility(ws, bs, gs):
  function generate_regnet_parameters (line 369) | def generate_regnet_parameters(w_a, w_0, w_m, d, q=8):
  class RegNet (line 387) | class RegNet(AnyNet):
    method __init__ (line 390) | def __init__(
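
  Note: `generate_regnet_parameters` derives per-block widths from the RegNet design-space rule: linearly growing continuous widths are snapped onto a quantized log schedule. A hedged sketch of that rule — rounding details may differ from the actual implementation:

```python
from math import log

def regnet_widths(w_a, w_0, w_m, d, q=8):
    # Continuous widths u_j = w_0 + w_a * j are snapped to w_0 * w_m**s for
    # an integer s, then rounded to a multiple of the quantization step q.
    widths = []
    for j in range(d):
        u = w_0 + w_a * j
        s = round(log(u / w_0) / log(w_m))
        w = w_0 * (w_m ** s)
        widths.append(int(round(w / q) * q))
    return widths
```

  Consecutive equal widths are then grouped into stages of the `AnyNet` skeleton.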

FILE: detectron2/detectron2/modeling/backbone/resnet.py
  class BasicBlock (line 32) | class BasicBlock(CNNBlockBase):
    method __init__ (line 38) | def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"):
    method forward (line 85) | def forward(self, x):
  class BottleneckBlock (line 100) | class BottleneckBlock(CNNBlockBase):
    method __init__ (line 107) | def __init__(
    method forward (line 194) | def forward(self, x):
  class DeformBottleneckBlock (line 213) | class DeformBottleneckBlock(CNNBlockBase):
    method __init__ (line 219) | def __init__(
    method forward (line 303) | def forward(self, x):
  class BasicStem (line 330) | class BasicStem(CNNBlockBase):
    method __init__ (line 336) | def __init__(self, in_channels=3, out_channels=64, norm="BN"):
    method forward (line 355) | def forward(self, x):
  class ResNet (line 362) | class ResNet(Backbone):
    method __init__ (line 367) | def __init__(self, stem, stages, num_classes=None, out_features=None, ...
    method forward (line 435) | def forward(self, x):
    method output_shape (line 460) | def output_shape(self):
    method freeze (line 468) | def freeze(self, freeze_at=0):
    method make_stage (line 493) | def make_stage(block_class, num_blocks, *, in_channels, out_channels, ...
    method make_default_stages (line 548) | def make_default_stages(depth, block_class=None, **kwargs):
  function make_stage (line 606) | def make_stage(*args, **kwargs):
  function build_resnet_backbone (line 614) | def build_resnet_backbone(cfg, input_shape):

FILE: detectron2/detectron2/modeling/backbone/swin.py
  class Mlp (line 26) | class Mlp(nn.Module):
    method __init__ (line 29) | def __init__(
    method forward (line 40) | def forward(self, x):
  function window_partition (line 49) | def window_partition(x, window_size):
  function window_reverse (line 63) | def window_reverse(windows, window_size, H, W):
  class WindowAttention (line 79) | class WindowAttention(nn.Module):
    method __init__ (line 93) | def __init__(
    method forward (line 137) | def forward(self, x, mask=None):
  class SwinTransformerBlock (line 180) | class SwinTransformerBlock(nn.Module):
    method __init__ (line 197) | def __init__(
    method forward (line 241) | def forward(self, x, mask_matrix):
  class PatchMerging (line 304) | class PatchMerging(nn.Module):
    method __init__ (line 311) | def __init__(self, dim, norm_layer=nn.LayerNorm):
    method forward (line 317) | def forward(self, x, H, W):
  class BasicLayer (line 346) | class BasicLayer(nn.Module):
    method __init__ (line 365) | def __init__(
    method forward (line 413) | def forward(self, x, H, W):
  class PatchEmbed (line 463) | class PatchEmbed(nn.Module):
    method __init__ (line 472) | def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=...
    method forward (line 486) | def forward(self, x):
  class SwinTransformer (line 505) | class SwinTransformer(Backbone):
    method __init__ (line 533) | def __init__(
    method _freeze_stages (line 633) | def _freeze_stages(self):
    method _init_weights (line 650) | def _init_weights(self, m):
    method size_divisibility (line 660) | def size_divisibility(self):
    method forward (line 663) | def forward(self, x):

FILE: detectron2/detectron2/modeling/backbone/utils.py
  function window_partition (line 16) | def window_partition(x, window_size):
  function window_unpartition (line 40) | def window_unpartition(windows, window_size, pad_hw, hw):
  function get_rel_pos (line 63) | def get_rel_pos(q_size, k_size, rel_pos):
  function add_decomposed_rel_pos (line 96) | def add_decomposed_rel_pos(attn, q, rel_pos_h, rel_pos_w, q_size, k_size):
  function get_abs_pos (line 128) | def get_abs_pos(abs_pos, has_cls_token, hw):
  class PatchEmbed (line 160) | class PatchEmbed(nn.Module):
    method __init__ (line 165) | def __init__(
    method forward (line 182) | def forward(self, x):

FILE: detectron2/detectron2/modeling/backbone/vit.py
  class Attention (line 28) | class Attention(nn.Module):
    method __init__ (line 31) | def __init__(
    method forward (line 68) | def forward(self, x):
  class ResBottleneckBlock (line 87) | class ResBottleneckBlock(CNNBlockBase):
    method __init__ (line 93) | def __init__(
    method forward (line 139) | def forward(self, x):
  class Block (line 148) | class Block(nn.Module):
    method __init__ (line 151) | def __init__(
    method forward (line 211) | def forward(self, x):
  class ViT (line 233) | class ViT(Backbone):
    method __init__ (line 240) | def __init__(
    method _init_weights (line 338) | def _init_weights(self, m):
    method forward (line 347) | def forward(self, x):
  class SimpleFeaturePyramid (line 361) | class SimpleFeaturePyramid(Backbone):
    method __init__ (line 367) | def __init__(
    method padding_constraints (line 469) | def padding_constraints(self):
    method forward (line 475) | def forward(self, x):
  function get_vit_lr_decay_rate (line 504) | def get_vit_lr_decay_rate(name, lr_decay_rate=1.0, num_layers=12):
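
  Note: `get_vit_lr_decay_rate` implements layer-wise learning-rate decay for ViT fine-tuning: the multiplier shrinks geometrically with distance from the top of the network. A sketch of the scaling rule under a hypothetical helper name (the real function additionally parses parameter names to recover each parameter's layer id):

```python
def vit_lr_decay_scale(layer_id, num_layers=12, lr_decay_rate=0.7):
    # Layers above the backbone (layer_id == num_layers + 1) keep the full
    # learning rate; each step down the network multiplies it by the decay
    # rate once more, so early blocks train with much smaller rates.
    return lr_decay_rate ** (num_layers + 1 - layer_id)
```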

FILE: detectron2/detectron2/modeling/box_regression.py
  class Box2BoxTransform (line 21) | class Box2BoxTransform(object):
    method __init__ (line 28) | def __init__(
    method get_deltas (line 43) | def get_deltas(self, src_boxes, target_boxes):
    method apply_deltas (line 78) | def apply_deltas(self, deltas, boxes):
  class Box2BoxTransformRotated (line 120) | class Box2BoxTransformRotated(object):
    method __init__ (line 129) | def __init__(
    method get_deltas (line 145) | def get_deltas(self, src_boxes, target_boxes):
    method apply_deltas (line 183) | def apply_deltas(self, deltas, boxes):
  class Box2BoxTransformLinear (line 230) | class Box2BoxTransformLinear(object):
    method __init__ (line 236) | def __init__(self, normalize_by_size=True):
    method get_deltas (line 243) | def get_deltas(self, src_boxes, target_boxes):
    method apply_deltas (line 275) | def apply_deltas(self, deltas, boxes):
  function _dense_box_regression_loss (line 310) | def _dense_box_regression_loss(
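
  Note: `Box2BoxTransform` encodes a target box relative to a source box as `(dx, dy, dw, dh)` — the standard Faster R-CNN parameterization — and `apply_deltas` inverts it. A scalar sketch with the per-coordinate weights omitted:

```python
from math import exp, log

def get_deltas(src, target):
    # Encode target (XYXY) relative to src: center offsets normalized by the
    # src size, plus log scale factors for width and height.
    sw, sh = src[2] - src[0], src[3] - src[1]
    scx, scy = src[0] + 0.5 * sw, src[1] + 0.5 * sh
    tw, th = target[2] - target[0], target[3] - target[1]
    tcx, tcy = target[0] + 0.5 * tw, target[1] + 0.5 * th
    return ((tcx - scx) / sw, (tcy - scy) / sh, log(tw / sw), log(th / sh))

def apply_deltas(deltas, box):
    # Inverse transform: recover the target box from deltas and the src box.
    dx, dy, dw, dh = deltas
    w, h = box[2] - box[0], box[3] - box[1]
    cx, cy = box[0] + 0.5 * w + dx * w, box[1] + 0.5 * h + dy * h
    nw, nh = w * exp(dw), h * exp(dh)
    return (cx - 0.5 * nw, cy - 0.5 * nh, cx + 0.5 * nw, cy + 0.5 * nh)
```

  Encoding and then decoding against the same source box is an identity up to floating-point error, which is the property the regression head relies on.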

FILE: detectron2/detectron2/modeling/matcher.py
  class Matcher (line 9) | class Matcher(object):
    method __init__ (line 25) | def __init__(
    method __call__ (line 62) | def __call__(self, match_quality_matrix):
    method set_low_quality_matches_ (line 106) | def set_low_quality_matches_(self, match_labels, match_quality_matrix):
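
  Note: `Matcher` assigns each prediction to its highest-quality ground truth and buckets the result by thresholds into negative (0), ignored (-1), or positive (1). A minimal sketch on plain lists; it omits the `set_low_quality_matches_` low-quality-match rescue step:

```python
def match(quality, thresholds=(0.3, 0.7), labels=(0, -1, 1)):
    # quality[g][p] is the match quality (e.g. IoU) between ground truth g
    # and prediction p. Each prediction gets (best gt index, label), where
    # the label buckets the best quality into (-inf, lo), [lo, hi), [hi, inf).
    lo, hi = thresholds
    out = []
    for p in range(len(quality[0])):
        col = [quality[g][p] for g in range(len(quality))]
        best = max(range(len(col)), key=lambda g: col[g])
        q = col[best]
        label = labels[0] if q < lo else (labels[1] if q < hi else labels[2])
        out.append((best, label))
    return out
```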

FILE: detectron2/detectron2/modeling/meta_arch/build.py
  function build_model (line 16) | def build_model(cfg):

FILE: detectron2/detectron2/modeling/meta_arch/dense_detector.py
  function permute_to_N_HWA_K (line 15) | def permute_to_N_HWA_K(tensor, K: int):
  class DenseDetector (line 27) | class DenseDetector(nn.Module):
    method __init__ (line 33) | def __init__(
    method device (line 69) | def device(self):
    method _move_to_current_device (line 72) | def _move_to_current_device(self, x):
    method forward (line 75) | def forward(self, batched_inputs: List[Dict[str, Tensor]]):
    method forward_training (line 120) | def forward_training(self, images, features, predictions, gt_instances):
    method preprocess_image (line 123) | def preprocess_image(self, batched_inputs: List[Dict[str, Tensor]]):
    method _transpose_dense_predictions (line 136) | def _transpose_dense_predictions(
    method _ema_update (line 160) | def _ema_update(self, name: str, value: float, initial_value: float, m...
    method _decode_per_level_predictions (line 186) | def _decode_per_level_predictions(
    method _decode_multi_level_predictions (line 230) | def _decode_multi_level_predictions(
    method visualize_training (line 256) | def visualize_training(self, batched_inputs, results):
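
  Note: `DenseDetector._ema_update` maintains an exponential moving average of a per-iteration statistic (e.g. a loss normalizer) in a named buffer. The update rule, sketched as a plain function under the usual EMA convention (an assumption about the exact form used):

```python
def ema_update(old, value, momentum=0.9):
    # Exponential moving average: keep most of the running estimate and mix
    # in a small fraction of the new observation each iteration.
    return old * momentum + value * (1.0 - momentum)
```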

FILE: detectron2/detectron2/modeling/meta_arch/fcos.py
  class FCOS (line 25) | class FCOS(DenseDetector):
    method __init__ (line 30) | def __init__(
    method forward_training (line 86) | def forward_training(self, images, features, predictions, gt_instances):
    method _match_anchors (line 98) | def _match_anchors(self, gt_boxes: Boxes, anchors: List[Boxes]):
    method label_anchors (line 154) | def label_anchors(self, anchors: List[Boxes], gt_instances: List[Insta...
    method losses (line 193) | def losses(
    method compute_ctrness_targets (line 240) | def compute_ctrness_targets(self, anchors: List[Boxes], gt_boxes: List...
    method forward_inference (line 253) | def forward_inference(
    method inference_single_image (line 279) | def inference_single_image(
  class FCOSHead (line 303) | class FCOSHead(RetinaNetHead):
    method __init__ (line 309) | def __init__(self, *, input_shape: List[ShapeSpec], conv_dims: List[in...
    method forward (line 318) | def forward(self, features):

FILE: detectron2/detectron2/modeling/meta_arch/panoptic_fpn.py
  class PanopticFPN (line 21) | class PanopticFPN(GeneralizedRCNN):
    method __init__ (line 27) | def __init__(
    method from_config (line 57) | def from_config(cls, cfg):
    method forward (line 90) | def forward(self, batched_inputs):
    method inference (line 140) | def inference(self, batched_inputs: List[Dict[str, torch.Tensor]], do_...
  function combine_semantic_and_instance_outputs (line 184) | def combine_semantic_and_instance_outputs(

FILE: detectron2/detectron2/modeling/meta_arch/rcnn.py
  class GeneralizedRCNN (line 25) | class GeneralizedRCNN(nn.Module):
    method __init__ (line 34) | def __init__(
    method from_config (line 72) | def from_config(cls, cfg):
    method device (line 85) | def device(self):
    method _move_to_current_device (line 88) | def _move_to_current_device(self, x):
    method visualize_training (line 91) | def visualize_training(self, batched_inputs, proposals):
    method forward (line 126) | def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
    method inference (line 178) | def inference(
    method preprocess_image (line 223) | def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor...
    method _postprocess (line 237) | def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tenso...
  class ProposalNetwork (line 254) | class ProposalNetwork(nn.Module):
    method __init__ (line 260) | def __init__(
    method from_config (line 282) | def from_config(cls, cfg):
    method device (line 292) | def device(self):
    method _move_to_current_device (line 295) | def _move_to_current_device(self, x):
    method forward (line 298) | def forward(self, batched_inputs):

FILE: detectron2/detectron2/modeling/meta_arch/retinanet.py
  class RetinaNet (line 29) | class RetinaNet(DenseDetector):
    method __init__ (line 35) | def __init__(
    method from_config (line 116) | def from_config(cls, cfg):
    method forward_training (line 151) | def forward_training(self, images, features, predictions, gt_instances):
    method losses (line 160) | def losses(self, anchors, pred_logits, gt_labels, pred_anchor_deltas, ...
    method label_anchors (line 213) | def label_anchors(self, anchors, gt_instances):
    method forward_inference (line 257) | def forward_inference(
    method inference_single_image (line 275) | def inference_single_image(
  class RetinaNetHead (line 311) | class RetinaNetHead(nn.Module):
    method __init__ (line 318) | def __init__(
    method from_config (line 401) | def from_config(cls, cfg, input_shape: List[ShapeSpec]):
    method forward (line 417) | def forward(self, features: List[Tensor]):
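
  Note: `RetinaNet.losses` uses sigmoid focal loss for classification. A scalar sketch of the formula from the RetinaNet paper (the real implementation works on logits over whole tensors):

```python
from math import log

def sigmoid_focal_loss(p, target, alpha=0.25, gamma=2.0):
    # Focal loss on a single predicted probability p for a binary target:
    # the (1 - p_t)**gamma factor down-weights well-classified examples,
    # and alpha balances positives against negatives.
    p_t = p if target == 1 else 1.0 - p
    alpha_t = alpha if target == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * log(p_t)
```

  With `gamma=0` and `alpha=0.5` this reduces to half the binary cross-entropy.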

FILE: detectron2/detectron2/modeling/meta_arch/semantic_seg.py
  class SemanticSegmentor (line 34) | class SemanticSegmentor(nn.Module):
    method __init__ (line 40) | def __init__(
    method from_config (line 62) | def from_config(cls, cfg):
    method device (line 73) | def device(self):
    method forward (line 76) | def forward(self, batched_inputs):
  function build_sem_seg_head (line 134) | def build_sem_seg_head(cfg, input_shape):
  class SemSegFPNHead (line 143) | class SemSegFPNHead(nn.Module):
    method __init__ (line 153) | def __init__(
    method from_config (line 218) | def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
    method forward (line 231) | def forward(self, features, targets=None):
    method layers (line 246) | def layers(self, features):
    method losses (line 255) | def losses(self, predictions, targets):

FILE: detectron2/detectron2/modeling/mmdet_wrapper.py
  function _to_container (line 21) | def _to_container(cfg):
  class MMDetBackbone (line 33) | class MMDetBackbone(Backbone):
    method __init__ (line 43) | def __init__(
    method forward (line 100) | def forward(self, x) -> Dict[str, Tensor]:
    method output_shape (line 114) | def output_shape(self) -> Dict[str, ShapeSpec]:
  class MMDetDetector (line 118) | class MMDetDetector(nn.Module):
    method __init__ (line 125) | def __init__(
    method forward (line 156) | def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
    method device (line 223) | def device(self):
  function _convert_mmdet_result (line 229) | def _convert_mmdet_result(result, shape: Tuple[int, int]) -> Instances:
  function _parse_losses (line 257) | def _parse_losses(losses: Dict[str, Tensor]) -> Dict[str, Tensor]:

FILE: detectron2/detectron2/modeling/poolers.py
  function assign_boxes_to_levels (line 22) | def assign_boxes_to_levels(
  function _convert_boxes_to_pooler_format (line 63) | def _convert_boxes_to_pooler_format(boxes: torch.Tensor, sizes: torch.Te...
  function convert_boxes_to_pooler_format (line 71) | def convert_boxes_to_pooler_format(box_lists: List[Boxes]):
  function _create_zeros (line 101) | def _create_zeros(
  class ROIPooler (line 113) | class ROIPooler(nn.Module):
    method __init__ (line 119) | def __init__(
    method forward (line 205) | def forward(self, x: List[torch.Tensor], box_lists: List[Boxes]):

FILE: detectron2/detectron2/modeling/postprocessing.py
  function detector_postprocess (line 9) | def detector_postprocess(
  function sem_seg_postprocess (line 77) | def sem_seg_postprocess(result, img_size, output_height, output_width):

FILE: detectron2/detectron2/modeling/proposal_generator/build.py
  function build_proposal_generator (line 15) | def build_proposal_generator(cfg, input_shape):

FILE: detectron2/detectron2/modeling/proposal_generator/proposal_utils.py
  function _is_tracing (line 13) | def _is_tracing():
  function find_top_rpn_proposals (line 22) | def find_top_rpn_proposals(
  function add_ground_truth_to_proposals (line 138) | def add_ground_truth_to_proposals(
  function add_ground_truth_to_proposals_single_image (line 167) | def add_ground_truth_to_proposals_single_image(

FILE: detectron2/detectron2/modeling/proposal_generator/rpn.py
  function build_rpn_head (line 58) | def build_rpn_head(cfg, input_shape):
  class StandardRPNHead (line 67) | class StandardRPNHead(nn.Module):
    method __init__ (line 76) | def __init__(
    method _get_rpn_conv (line 126) | def _get_rpn_conv(self, in_channels, out_channels):
    method from_config (line 137) | def from_config(cls, cfg, input_shape):
    method forward (line 158) | def forward(self, features: List[torch.Tensor]):
  class RPN (line 181) | class RPN(nn.Module):
    method __init__ (line 187) | def __init__(
    method from_config (line 259) | def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
    method _subsample_labels (line 287) | def _subsample_labels(self, label):
    method label_and_sample_anchors (line 307) | def label_and_sample_anchors(
    method losses (line 366) | def losses(
    method forward (line 431) | def forward(
    method predict_proposals (line 482) | def predict_proposals(
    method _decode_proposals (line 514) | def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: ...

FILE: detectron2/detectron2/modeling/proposal_generator/rrpn.py
  function find_top_rrpn_proposals (line 20) | def find_top_rrpn_proposals(
  class RRPN (line 131) | class RRPN(RPN):
    method __init__ (line 137) | def __init__(self, *args, **kwargs):
    method from_config (line 145) | def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
    method label_and_sample_anchors (line 151) | def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_ins...
    method predict_proposals (line 198) | def predict_proposals(self, anchors, pred_objectness_logits, pred_anch...

FILE: detectron2/detectron2/modeling/roi_heads/box_head.py
  class FastRCNNConvFCHead (line 26) | class FastRCNNConvFCHead(nn.Sequential):
    method __init__ (line 33) | def __init__(
    method from_config (line 82) | def from_config(cls, cfg, input_shape):
    method forward (line 94) | def forward(self, x):
    method output_shape (line 101) | def output_shape(self):
  function build_box_head (line 113) | def build_box_head(cfg, input_shape):

FILE: detectron2/detectron2/modeling/roi_heads/cascade_rcnn.py
  class _ScaleGradient (line 20) | class _ScaleGradient(Function):
    method forward (line 22) | def forward(ctx, input, scale):
    method backward (line 27) | def backward(ctx, grad_output):
  class CascadeROIHeads (line 32) | class CascadeROIHeads(StandardROIHeads):
    method __init__ (line 38) | def __init__(
    method from_config (line 81) | def from_config(cls, cfg, input_shape):
    method _init_box_head (line 87) | def _init_box_head(cls, cfg, input_shape):
    method forward (line 137) | def forward(self, images, features, proposals, targets=None):
    method _forward_box (line 153) | def _forward_box(self, features, proposals, targets=None):
    method _match_and_label_boxes (line 209) | def _match_and_label_boxes(self, proposals, stage, targets):
    method _run_stage (line 258) | def _run_stage(self, features, proposals, stage):
    method _create_proposals_from_boxes (line 278) | def _create_proposals_from_boxes(self, boxes, image_sizes):

FILE: detectron2/detectron2/modeling/roi_heads/fast_rcnn.py
  function fast_rcnn_inference (line 46) | def fast_rcnn_inference(
  function _log_classification_stats (line 88) | def _log_classification_stats(pred_logits, gt_classes, prefix="fast_rcnn"):
  function fast_rcnn_inference_single_image (line 118) | def fast_rcnn_inference_single_image(
  class FastRCNNOutputLayers (line 174) | class FastRCNNOutputLayers(nn.Module):
    method __init__ (line 183) | def __init__(
    method from_config (line 268) | def from_config(cls, cfg, input_shape):
    method forward (line 288) | def forward(self, x):
    method losses (line 307) | def losses(self, predictions, proposals):
    method get_fed_loss_classes (line 356) | def get_fed_loss_classes(self, gt_classes, num_fed_loss_classes, num_c...
    method sigmoid_cross_entropy_loss (line 386) | def sigmoid_cross_entropy_loss(self, pred_class_logits, gt_classes):
    method box_reg_loss (line 424) | def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_class...
    method inference (line 465) | def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], pr...
    method predict_boxes_for_gt_classes (line 488) | def predict_boxes_for_gt_classes(self, predictions, proposals):
    method predict_boxes (line 523) | def predict_boxes(
    method predict_probs (line 549) | def predict_probs(

FILE: detectron2/detectron2/modeling/roi_heads/keypoint_head.py
  function build_keypoint_head (line 32) | def build_keypoint_head(cfg, input_shape):
  function keypoint_rcnn_loss (line 40) | def keypoint_rcnn_loss(pred_keypoint_logits, instances, normalizer):
  function keypoint_rcnn_inference (line 99) | def keypoint_rcnn_inference(pred_keypoint_logits: torch.Tensor, pred_ins...
  class BaseKeypointRCNNHead (line 135) | class BaseKeypointRCNNHead(nn.Module):
    method __init__ (line 142) | def __init__(self, *, num_keypoints, loss_weight=1.0, loss_normalizer=...
    method from_config (line 161) | def from_config(cls, cfg, input_shape):
    method forward (line 179) | def forward(self, x, instances: List[Instances]):
    method layers (line 207) | def layers(self, x):
  class KRCNNConvDeconvUpsampleHead (line 218) | class KRCNNConvDeconvUpsampleHead(BaseKeypointRCNNHead, nn.Sequential):
    method __init__ (line 226) | def __init__(self, input_shape, *, num_keypoints, conv_dims, **kwargs):
    method from_config (line 262) | def from_config(cls, cfg, input_shape):
    method layers (line 268) | def layers(self, x):

FILE: detectron2/detectron2/modeling/roi_heads/mask_head.py
  function mask_rcnn_loss (line 33) | def mask_rcnn_loss(pred_mask_logits: torch.Tensor, instances: List[Insta...
  function mask_rcnn_inference (line 115) | def mask_rcnn_inference(pred_mask_logits: torch.Tensor, pred_instances: ...
  class BaseMaskRCNNHead (line 161) | class BaseMaskRCNNHead(nn.Module):
    method __init__ (line 167) | def __init__(self, *, loss_weight: float = 1.0, vis_period: int = 0):
    method from_config (line 180) | def from_config(cls, cfg, input_shape):
    method forward (line 183) | def forward(self, x, instances: List[Instances]):
    method layers (line 204) | def layers(self, x):
  class MaskRCNNConvUpsampleHead (line 215) | class MaskRCNNConvUpsampleHead(BaseMaskRCNNHead, nn.Sequential):
    method __init__ (line 222) | def __init__(self, input_shape: ShapeSpec, *, num_classes, conv_dims, ...
    method from_config (line 272) | def from_config(cls, cfg, input_shape):
    method layers (line 287) | def layers(self, x):
  function build_mask_head (line 293) | def build_mask_head(cfg, input_shape):

FILE: detectron2/detectron2/modeling/roi_heads/roi_heads.py
  function build_roi_heads (line 38) | def build_roi_heads(cfg, input_shape):
  function select_foreground_proposals (line 46) | def select_foreground_proposals(
  function select_proposals_with_visible_keypoints (line 78) | def select_proposals_with_visible_keypoints(proposals: List[Instances]) ...
  class ROIHeads (line 123) | class ROIHeads(torch.nn.Module):
    method __init__ (line 139) | def __init__(
    method from_config (line 167) | def from_config(cls, cfg):
    method _sample_proposals (line 181) | def _sample_proposals(
    method label_and_sample_proposals (line 220) | def label_and_sample_proposals(
    method forward (line 304) | def forward(
  class Res5ROIHeads (line 342) | class Res5ROIHeads(ROIHeads):
    method __init__ (line 351) | def __init__(
    method from_config (line 387) | def from_config(cls, cfg, input_shape):
    method _build_res5_block (line 429) | def _build_res5_block(cls, cfg):
    method _shared_roi_transform (line 455) | def _shared_roi_transform(self, features: List[torch.Tensor], boxes: L...
    method forward (line 459) | def forward(
    method forward_with_given_boxes (line 502) | def forward_with_given_boxes(
  class StandardROIHeads (line 530) | class StandardROIHeads(ROIHeads):
    method __init__ (line 543) | def __init__(
    method from_config (line 601) | def from_config(cls, cfg, input_shape):
    method _init_box_head (line 618) | def _init_box_head(cls, cfg, input_shape):
    method _init_mask_head (line 655) | def _init_mask_head(cls, cfg, input_shape):
    method _init_keypoint_head (line 689) | def _init_keypoint_head(cls, cfg, input_shape):
    method forward (line 722) | def forward(
    method forward_with_given_boxes (line 753) | def forward_with_given_boxes(
    method _forward_box (line 780) | def _forward_box(self, features: Dict[str, torch.Tensor], proposals: L...
    method _forward_mask (line 818) | def _forward_mask(self, features: Dict[str, torch.Tensor], instances: ...
    method _forward_keypoint (line 848) | def _forward_keypoint(self, features: Dict[str, torch.Tensor], instanc...

FILE: detectron2/detectron2/modeling/roi_heads/rotated_fast_rcnn.py
  function fast_rcnn_inference_rotated (line 46) | def fast_rcnn_inference_rotated(
  function fast_rcnn_inference_single_image_rotated (line 84) | def fast_rcnn_inference_single_image_rotated(
  class RotatedFastRCNNOutputLayers (line 135) | class RotatedFastRCNNOutputLayers(FastRCNNOutputLayers):
    method from_config (line 141) | def from_config(cls, cfg, input_shape):
    method inference (line 148) | def inference(self, predictions, proposals):
  class RROIHeads (line 169) | class RROIHeads(StandardROIHeads):
    method __init__ (line 176) | def __init__(self, **kwargs):
    method _init_box_head (line 187) | def _init_box_head(cls, cfg, input_shape):
    method label_and_sample_proposals (line 218) | def label_and_sample_proposals(self, proposals, targets):

FILE: detectron2/detectron2/modeling/sampling.py
  function subsample_labels (line 9) | def subsample_labels(
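
The `subsample_labels` helper indexed above draws a fixed-size, class-balanced sample of proposal indices for training. As a rough NumPy sketch of that idea (simplified to binary labels where `1` is positive, `0` is negative, and `-1` is ignored — the real detectron2 function takes a `bg_label` argument and works on torch tensors):

```python
import numpy as np

def subsample_labels(labels, num_samples, positive_fraction, rng):
    """Pick at most `num_samples` indices, aiming for `positive_fraction`
    positives (label 1); negatives (label 0) fill the remainder.
    Entries labeled -1 are ignored entirely."""
    positive = np.flatnonzero(labels == 1)
    negative = np.flatnonzero(labels == 0)
    # Cap the positive count by both the target fraction and availability.
    num_pos = min(len(positive), int(num_samples * positive_fraction))
    num_neg = min(len(negative), num_samples - num_pos)
    # Random subsets without replacement.
    pos_idx = rng.permutation(positive)[:num_pos]
    neg_idx = rng.permutation(negative)[:num_neg]
    return pos_idx, neg_idx
```

This mirrors the two-return-value convention the library uses (positive indices and negative indices), but is only an illustration of the sampling semantics, not the library's implementation.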

FILE: detectron2/detectron2/modeling/test_time_augmentation.py
  class DatasetMapperTTA (line 29) | class DatasetMapperTTA:
    method __init__ (line 39) | def __init__(self, min_sizes: List[int], max_size: int, flip: bool):
    method from_config (line 51) | def from_config(cls, cfg):
    method __call__ (line 58) | def __call__(self, dataset_dict):
  class GeneralizedRCNNWithTTA (line 101) | class GeneralizedRCNNWithTTA(nn.Module):
    method __init__ (line 107) | def __init__(self, cfg, model, tta_mapper=None, batch_size=3):
    method _turn_off_roi_heads (line 137) | def _turn_off_roi_heads(self, attrs):
    method _batch_inference (line 162) | def _batch_inference(self, batched_inputs, detected_instances=None):
    method __call__ (line 188) | def __call__(self, batched_inputs):
    method _inference_one_image (line 206) | def _inference_one_image(self, input):
    method _get_augmented_inputs (line 239) | def _get_augmented_inputs(self, input):
    method _get_augmented_boxes (line 244) | def _get_augmented_boxes(self, augmented_inputs, tfms):
    method _merge_detections (line 262) | def _merge_detections(self, all_boxes, all_scores, all_classes, shape_...
    method _rescale_detected_boxes (line 282) | def _rescale_detected_boxes(self, augmented_inputs, merged_instances, ...
    method _reduce_pred_masks (line 298) | def _reduce_pred_masks(self, outputs, tfms):

FILE: detectron2/detectron2/projects/__init__.py
  class _D2ProjectsFinder (line 16) | class _D2ProjectsFinder(importlib.abc.MetaPathFinder):
    method find_spec (line 17) | def find_spec(self, name, path, target=None):

FILE: detectron2/detectron2/solver/build.py
  class GradientClipType (line 19) | class GradientClipType(Enum):
  function _create_gradient_clipper (line 24) | def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper:
  function _generate_optimizer_class_with_gradient_clipping (line 44) | def _generate_optimizer_class_with_gradient_clipping(
  function maybe_add_gradient_clipping (line 78) | def maybe_add_gradient_clipping(
  function build_optimizer (line 114) | def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim...
  function get_default_optimizer_params (line 134) | def get_default_optimizer_params(
  function _expand_param_groups (line 230) | def _expand_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str,...
  function reduce_param_groups (line 242) | def reduce_param_groups(params: List[Dict[str, Any]]) -> List[Dict[str, ...
  function build_lr_scheduler (line 262) | def build_lr_scheduler(

FILE: detectron2/detectron2/solver/lr_scheduler.py
  class WarmupParamScheduler (line 17) | class WarmupParamScheduler(CompositeParamScheduler):
    method __init__ (line 22) | def __init__(
  class LRMultiplier (line 52) | class LRMultiplier(torch.optim.lr_scheduler._LRScheduler):
    method __init__ (line 86) | def __init__(
    method state_dict (line 110) | def state_dict(self):
    method get_lr (line 114) | def get_lr(self) -> List[float]:
  class WarmupMultiStepLR (line 132) | class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
    method __init__ (line 133) | def __init__(
    method get_lr (line 157) | def get_lr(self) -> List[float]:
    method _compute_values (line 166) | def _compute_values(self) -> List[float]:
  class WarmupCosineLR (line 171) | class WarmupCosineLR(torch.optim.lr_scheduler._LRScheduler):
    method __init__ (line 172) | def __init__(
    method get_lr (line 190) | def get_lr(self) -> List[float]:
    method _compute_values (line 207) | def _compute_values(self) -> List[float]:
  function _get_warmup_factor_at_iter (line 212) | def _get_warmup_factor_at_iter(

FILE: detectron2/detectron2/structures/boxes.py
  class BoxMode (line 13) | class BoxMode(IntEnum):
    method convert (line 44) | def convert(box: _RawBoxType, from_mode: "BoxMode", to_mode: "BoxMode"...
  class Boxes (line 130) | class Boxes:
    method __init__ (line 142) | def __init__(self, tensor: torch.Tensor):
    method clone (line 159) | def clone(self) -> "Boxes":
    method to (line 168) | def to(self, device: torch.device):
    method area (line 172) | def area(self) -> torch.Tensor:
    method clip (line 183) | def clip(self, box_size: Tuple[int, int]) -> None:
    method nonempty (line 199) | def nonempty(self, threshold: float = 0.0) -> torch.Tensor:
    method __getitem__ (line 215) | def __getitem__(self, item) -> "Boxes":
    method __len__ (line 239) | def __len__(self) -> int:
    method __repr__ (line 242) | def __repr__(self) -> str:
    method inside_box (line 245) | def inside_box(self, box_size: Tuple[int, int], boundary_threshold: in...
    method get_centers (line 264) | def get_centers(self) -> torch.Tensor:
    method scale (line 271) | def scale(self, scale_x: float, scale_y: float) -> None:
    method cat (line 279) | def cat(cls, boxes_list: List["Boxes"]) -> "Boxes":
    method device (line 299) | def device(self) -> device:
    method __iter__ (line 305) | def __iter__(self):
  function pairwise_intersection (line 312) | def pairwise_intersection(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor:
  function pairwise_iou (line 336) | def pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor:
  function pairwise_ioa (line 361) | def pairwise_ioa(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor:
  function pairwise_point_box_distance (line 381) | def pairwise_point_box_distance(points: torch.Tensor, boxes: Boxes):
  function matched_pairwise_iou (line 400) | def matched_pairwise_iou(boxes1: Boxes, boxes2: Boxes) -> torch.Tensor:
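
The `pairwise_iou` function indexed above returns the IoU between every box in one set and every box in another. A minimal NumPy sketch of the same broadcasting pattern (the actual detectron2 implementation operates on torch tensors via `Boxes`, this is only an illustration of the semantics for XYXY-format boxes):

```python
import numpy as np

def pairwise_iou(boxes1, boxes2):
    """IoU between all pairs: boxes1 is (N, 4), boxes2 is (M, 4), XYXY format.
    Returns an (N, M) matrix."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])  # (N,)
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])  # (M,)
    # Pairwise intersection rectangle via broadcasting (N, 1, 2) vs (1, M, 2).
    lt = np.maximum(boxes1[:, None, :2], boxes2[None, :, :2])
    rb = np.minimum(boxes1[:, None, 2:], boxes2[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)          # zero width/height if disjoint
    inter = wh[..., 0] * wh[..., 1]         # (N, M)
    union = area1[:, None] + area2[None, :] - inter
    return np.where(inter > 0, inter / union, 0.0)
```

`pairwise_ioa` and `matched_pairwise_iou` in the same file follow the same structure, normalizing by the second set's area or pairing boxes elementwise instead of all-vs-all.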

FILE: detectron2/detectron2/structures/image_list.py
  class ImageList (line 11) | class ImageList(object):
    method __init__ (line 23) | def __init__(self, tensor: torch.Tensor, image_sizes: List[Tuple[int, ...
    method __len__ (line 33) | def __len__(self) -> int:
    method __getitem__ (line 36) | def __getitem__(self, idx) -> torch.Tensor:
    method to (line 50) | def to(self, *args: Any, **kwargs: Any) -> "ImageList":
    method device (line 55) | def device(self) -> device:
    method from_tensors (line 59) | def from_tensors(

FILE: detectron2/detectron2/structures/instances.py
  class Instances (line 8) | class Instances:
    method __init__ (line 39) | def __init__(self, image_size: Tuple[int, int], **kwargs: Any):
    method image_size (line 51) | def image_size(self) -> Tuple[int, int]:
    method __setattr__ (line 58) | def __setattr__(self, name: str, val: Any) -> None:
    method __getattr__ (line 64) | def __getattr__(self, name: str) -> Any:
    method set (line 69) | def set(self, name: str, value: Any) -> None:
    method has (line 83) | def has(self, name: str) -> bool:
    method remove (line 90) | def remove(self, name: str) -> None:
    method get (line 96) | def get(self, name: str) -> Any:
    method get_fields (line 102) | def get_fields(self) -> Dict[str, Any]:
    method to (line 112) | def to(self, *args: Any, **kwargs: Any) -> "Instances":
    method __getitem__ (line 124) | def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "I...
    method __len__ (line 144) | def __len__(self) -> int:
    method __iter__ (line 150) | def __iter__(self):
    method cat (line 154) | def cat(instance_lists: List["Instances"]) -> "Instances":
    method __str__ (line 186) | def __str__(self) -> str:
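
The `Instances` class indexed above is detectron2's per-image container: every field (boxes, scores, classes, masks) must have the same length, and indexing selects the same subset from all fields at once. A toy stand-in sketching that contract (names and details here are illustrative, not the library's API):

```python
class MiniInstances:
    """Toy stand-in for detectron2's Instances: equal-length per-instance fields."""

    def __init__(self, image_size, **fields):
        self.image_size = image_size
        self._fields = {}
        for name, value in fields.items():
            self.set(name, value)

    def set(self, name, value):
        # Enforce the core invariant: all fields share one length.
        if self._fields and len(value) != len(self):
            raise ValueError("all fields must have the same length")
        self._fields[name] = value

    def get(self, name):
        return self._fields[name]

    def has(self, name):
        return name in self._fields

    def __len__(self):
        return len(next(iter(self._fields.values())))

    def __getitem__(self, item):
        # Slicing applies the same selection to every field.
        return MiniInstances(self.image_size,
                             **{k: v[item] for k, v in self._fields.items()})
```

The real class additionally supports `to(device)`, boolean-tensor indexing, and a static `cat`, but the equal-length invariant and synchronized indexing shown here are the essence of its design.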

FILE: detectron2/detectron2/structures/keypoints.py
  class Keypoints (line 8) | class Keypoints:
    method __init__ (line 21) | def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[Lis...
    method __len__ (line 33) | def __len__(self) -> int:
    method to (line 36) | def to(self, *args: Any, **kwargs: Any) -> "Keypoints":
    method device (line 40) | def device(self) -> torch.device:
    method to_heatmap (line 43) | def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch....
    method __getitem__ (line 60) | def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "K...
    method __repr__ (line 78) | def __repr__(self) -> str:
    method cat (line 84) | def cat(keypoints_list: List["Keypoints"]) -> "Keypoints":
  function _keypoints_to_heatmap (line 105) | def _keypoints_to_heatmap(
  function heatmaps_to_keypoints (line 165) | def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> tor...

FILE: detectron2/detectron2/structures/masks.py
  function polygon_area (line 16) | def polygon_area(x, y):
  function polygons_to_bitmask (line 22) | def polygons_to_bitmask(polygons: List[np.ndarray], height: int, width: ...
  function rasterize_polygons_within_box (line 39) | def rasterize_polygons_within_box(
  class BitMasks (line 88) | class BitMasks:
    method __init__ (line 97) | def __init__(self, tensor: Union[torch.Tensor, np.ndarray]):
    method to (line 111) | def to(self, *args: Any, **kwargs: Any) -> "BitMasks":
    method device (line 115) | def device(self) -> torch.device:
    method __getitem__ (line 119) | def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "B...
    method __iter__ (line 143) | def __iter__(self) -> torch.Tensor:
    method __repr__ (line 147) | def __repr__(self) -> str:
    method __len__ (line 152) | def __len__(self) -> int:
    method nonempty (line 155) | def nonempty(self) -> torch.Tensor:
    method from_polygon_masks (line 166) | def from_polygon_masks(
    method from_roi_masks (line 183) | def from_roi_masks(roi_masks: "ROIMasks", height: int, width: int) -> ...
    method crop_and_resize (line 191) | def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torc...
    method get_bounding_boxes (line 224) | def get_bounding_boxes(self) -> Boxes:
    method cat (line 243) | def cat(bitmasks_list: List["BitMasks"]) -> "BitMasks":
  class PolygonMasks (line 261) | class PolygonMasks:
    method __init__ (line 269) | def __init__(self, polygons: List[List[Union[torch.Tensor, np.ndarray]...
    method to (line 313) | def to(self, *args: Any, **kwargs: Any) -> "PolygonMasks":
    method device (line 317) | def device(self) -> torch.device:
    method get_bounding_boxes (line 320) | def get_bounding_boxes(self) -> Boxes:
    method nonempty (line 337) | def nonempty(self) -> torch.Tensor:
    method __getitem__ (line 348) | def __getitem__(self, item: Union[int, slice, List[int], torch.BoolTen...
    method __iter__ (line 378) | def __iter__(self) -> Iterator[List[np.ndarray]]:
    method __repr__ (line 386) | def __repr__(self) -> str:
    method __len__ (line 391) | def __len__(self) -> int:
    method crop_and_resize (line 394) | def crop_and_resize(self, boxes: torch.Tensor, mask_size: int) -> torc...
    method area (line 426) | def area(self):
    method cat (line 446) | def cat(polymasks_list: List["PolygonMasks"]) -> "PolygonMasks":
  class ROIMasks (line 466) | class ROIMasks:
    method __init__ (line 473) | def __init__(self, tensor: torch.Tensor):
    method to (line 482) | def to(self, device: torch.device) -> "ROIMasks":
    method device (line 486) | def device(self) -> device:
    method __len__ (line 489) | def __len__(self):
    method __getitem__ (line 492) | def __getitem__(self, item) -> "ROIMasks":
    method __repr__ (line 514) | def __repr__(self) -> str:
    method to_bitmasks (line 520) | def to_bitmasks(self, boxes: torch.Tensor, height, width, threshold=0.5):

FILE: detectron2/detectron2/structures/rotated_boxes.py
  class RotatedBoxes (line 11) | class RotatedBoxes(Boxes):
    method __init__ (line 20) | def __init__(self, tensor: torch.Tensor):
    method clone (line 223) | def clone(self) -> "RotatedBoxes":
    method to (line 232) | def to(self, device: torch.device):
    method area (line 236) | def area(self) -> torch.Tensor:
    method normalize_angles (line 247) | def normalize_angles(self) -> None:
    method clip (line 253) | def clip(self, box_size: Tuple[int, int], clip_angle_threshold: float ...
    method nonempty (line 303) | def nonempty(self, threshold: float = 0.0) -> torch.Tensor:
    method __getitem__ (line 318) | def __getitem__(self, item) -> "RotatedBoxes":
    method __len__ (line 341) | def __len__(self) -> int:
    method __repr__ (line 344) | def __repr__(self) -> str:
    method inside_box (line 347) | def inside_box(self, box_size: Tuple[int, int], boundary_threshold: in...
    method get_centers (line 384) | def get_centers(self) -> torch.Tensor:
    method scale (line 391) | def scale(self, scale_x: float, scale_y: float) -> None:
    method cat (line 457) | def cat(cls, boxes_list: List["RotatedBoxes"]) -> "RotatedBoxes":
    method device (line 477) | def device(self) -> torch.device:
    method __iter__ (line 481) | def __iter__(self):
  function pairwise_iou (line 488) | def pairwise_iou(boxes1: RotatedBoxes, boxes2: RotatedBoxes) -> None:

FILE: detectron2/detectron2/tracking/base_tracker.py
  class BaseTracker (line 15) | class BaseTracker(object):
    method __init__ (line 21) | def __init__(self, **kwargs):
    method from_config (line 29) | def from_config(cls, cfg: CfgNode_):
    method update (line 32) | def update(self, predictions: Instances) -> Instances:
  function build_tracker_head (line 53) | def build_tracker_head(cfg: CfgNode_) -> BaseTracker:

FILE: detectron2/detectron2/tracking/bbox_iou_tracker.py
  class BBoxIOUTracker (line 17) | class BBoxIOUTracker(BaseTracker):
    method __init__ (line 23) | def __init__(
    method from_config (line 60) | def from_config(cls, cfg: CfgNode_):
    method update (line 89) | def update(self, instances: Instances) -> Instances:
    method _create_prediction_pairs (line 124) | def _create_prediction_pairs(self, instances: Instances, iou_all: np.n...
    method _initialize_extra_fields (line 151) | def _initialize_extra_fields(self, instances: Instances) -> Instances:
    method _reset_fields (line 174) | def _reset_fields(self):
    method _assign_new_id (line 182) | def _assign_new_id(self, instances: Instances) -> Instances:
    method _merge_untracked_instances (line 199) | def _merge_untracked_instances(self, instances: Instances) -> Instances:

FILE: detectron2/detectron2/tracking/hungarian_tracker.py
  class BaseHungarianTracker (line 16) | class BaseHungarianTracker(BaseTracker):
    method __init__ (line 22) | def __init__(
    method from_config (line 54) | def from_config(cls, cfg: CfgNode_) -> Dict:
    method build_cost_matrix (line 57) | def build_cost_matrix(self, instances: Instances, prev_instances: Inst...
    method update (line 60) | def update(self, instances: Instances) -> Instances:
    method _initialize_extra_fields (line 74) | def _initialize_extra_fields(self, instances: Instances) -> Instances:
    method _process_matched_idx (line 97) | def _process_matched_idx(
    method _process_unmatched_idx (line 109) | def _process_unmatched_idx(self, instances: Instances, matched_idx: np...
    method _process_unmatched_prev_idx (line 118) | def _process_unmatched_prev_idx(

FILE: detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py
  class IOUWeightedHungarianBBoxIOUTracker (line 15) | class IOUWeightedHungarianBBoxIOUTracker(VanillaHungarianBBoxIOUTracker):
    method __init__ (line 22) | def __init__(
    method from_config (line 60) | def from_config(cls, cfg: CfgNode_):
    method assign_cost_matrix_values (line 89) | def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pair...

FILE: detectron2/detectron2/tracking/utils.py
  function create_prediction_pairs (line 8) | def create_prediction_pairs(

FILE: detectron2/detectron2/tracking/vanilla_hungarian_bbox_iou_tracker.py
  class VanillaHungarianBBoxIOUTracker (line 18) | class VanillaHungarianBBoxIOUTracker(BaseHungarianTracker):
    method __init__ (line 24) | def __init__(
    method from_config (line 62) | def from_config(cls, cfg: CfgNode_):
    method build_cost_matrix (line 91) | def build_cost_matrix(self, instances: Instances, prev_instances: Inst...
    method assign_cost_matrix_values (line 116) | def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pair...

FILE: detectron2/detectron2/utils/analysis.py
  class FlopCountAnalysis (line 55) | class FlopCountAnalysis(fvcore.nn.FlopCountAnalysis):
    method __init__ (line 60) | def __init__(self, model, inputs):
  function flop_count_operators (line 71) | def flop_count_operators(model: nn.Module, inputs: list) -> typing.Defau...
  function activation_count_operators (line 103) | def activation_count_operators(
  function _wrapper_count_operators (line 128) | def _wrapper_count_operators(
  function find_unused_parameters (line 158) | def find_unused_parameters(model: nn.Module, inputs: Any) -> List[str]:

FILE: detectron2/detectron2/utils/collect_env.py
  function collect_torch_env (line 17) | def collect_torch_env():
  function get_env_module (line 29) | def get_env_module():
  function detect_compute_compatibility (line 34) | def detect_compute_compatibility(CUDA_HOME, so_file):
  function collect_env_info (line 55) | def collect_env_info():
  function test_nccl_ops (line 203) | def test_nccl_ops():
  function _test_nccl_worker (line 214) | def _test_nccl_worker(rank, num_gpu, dist_url):

FILE: detectron2/detectron2/utils/colormap.py
  function colormap (line 96) | def colormap(rgb=False, maximum=255):
  function random_color (line 112) | def random_color(rgb=False, maximum=255):
  function random_colors (line 128) | def random_colors(N, rgb=False, maximum=255):

FILE: detectron2/detectron2/utils/comm.py
  function get_world_size (line 19) | def get_world_size() -> int:
  function get_rank (line 27) | def get_rank() -> int:
  function get_local_rank (line 35) | def get_local_rank() -> int:
  function get_local_size (line 50) | def get_local_size() -> int:
  function is_main_process (line 63) | def is_main_process() -> bool:
  function synchronize (line 67) | def synchronize():
  function _get_global_gloo_group (line 88) | def _get_global_gloo_group():
  function all_gather (line 99) | def all_gather(data, group=None):
  function gather (line 124) | def gather(data, dst=0, group=None):
  function shared_random_seed (line 156) | def shared_random_seed():
  function reduce_dict (line 170) | def reduce_dict(input_dict, average=True):

FILE: detectron2/detectron2/utils/develop.py
  function create_dummy_class (line 8) | def create_dummy_class(klass, dependency, message=""):
  function create_dummy_func (line 37) | def create_dummy_func(func, dependency, message=""):

FILE: detectron2/detectron2/utils/env.py
  function seed_all_rng (line 27) | def seed_all_rng(seed=None):
  function _import_file (line 49) | def _import_file(module_name, file_path, make_importable=False):
  function _configure_libraries (line 58) | def _configure_libraries():
  function setup_environment (line 97) | def setup_environment():
  function setup_custom_environment (line 119) | def setup_custom_environment(custom_module):
  function fixup_module_metadata (line 135) | def fixup_module_metadata(module_name, namespace, keys=None):

FILE: detectron2/detectron2/utils/events.py
  function get_event_storage (line 26) | def get_event_storage():
  class EventWriter (line 38) | class EventWriter:
    method write (line 43) | def write(self):
    method close (line 46) | def close(self):
  class JSONWriter (line 50) | class JSONWriter(EventWriter):
    method __init__ (line 94) | def __init__(self, json_file, window_size=20):
    method write (line 105) | def write(self):
    method close (line 127) | def close(self):
  class TensorboardXWriter (line 131) | class TensorboardXWriter(EventWriter):
    method __init__ (line 136) | def __init__(self, log_dir: str, window_size: int = 20, **kwargs):
    method write (line 150) | def write(self):
    method close (line 176) | def close(self):
  class CommonMetricPrinter (line 181) | class CommonMetricPrinter(EventWriter):
    method __init__ (line 191) | def __init__(self, max_iter: Optional[int] = None, window_size: int = ...
    method _get_eta (line 203) | def _get_eta(self, storage) -> Optional[str]:
    method write (line 223) | def write(self):
  class EventStorage (line 274) | class EventStorage:
    method __init__ (line 281) | def __init__(self, start_iter=0):
    method put_image (line 294) | def put_image(self, img_name, img_tensor):
    method put_scalar (line 309) | def put_scalar(self, name, value, smoothing_hint=True):
    method put_scalars (line 336) | def put_scalars(self, *, smoothing_hint=True, **kwargs):
    method put_histogram (line 347) | def put_histogram(self, hist_name, hist_tensor, bins=1000):
    method history (line 377) | def history(self, name):
    method histories (line 387) | def histories(self):
    method latest (line 394) | def latest(self):
    method latest_with_smoothing_hint (line 402) | def latest_with_smoothing_hint(self, window_size=20):
    method smoothing_hints (line 419) | def smoothing_hints(self):
    method step (line 427) | def step(self):
    method iter (line 437) | def iter(self):
    method iter (line 446) | def iter(self, val):
    method iteration (line 450) | def iteration(self):
    method __enter__ (line 454) | def __enter__(self):
    method __exit__ (line 458) | def __exit__(self, exc_type, exc_val, exc_tb):
    method name_scope (line 463) | def name_scope(self, name):
    method clear_images (line 474) | def clear_images(self):
    method clear_histograms (line 481) | def clear_histograms(self):

FILE: detectron2/detectron2/utils/file_io.py
  class Detectron2Handler (line 16) | class Detectron2Handler(PathHandler):
    method _get_supported_prefixes (line 24) | def _get_supported_prefixes(self):
    method _get_local_path (line 27) | def _get_local_path(self, path, **kwargs):
    method _open (line 31) | def _open(self, path, mode="r", **kwargs):

FILE: detectron2/detectron2/utils/logger.py
  class _ColorfulFormatter (line 18) | class _ColorfulFormatter(logging.Formatter):
    method __init__ (line 19) | def __init__(self, *args, **kwargs):
    method formatMessage (line 26) | def formatMessage(self, record):
  function setup_logger (line 39) | def setup_logger(
  function _cached_log_stream (line 105) | def _cached_log_stream(filename):
  function _find_caller (line 119) | def _find_caller():
  function log_first_n (line 140) | def log_first_n(lvl, msg, n=1, *, name=None, key="caller"):
  function log_every_n (line 175) | def log_every_n(lvl, msg, n=1, *, name=None):
  function log_every_n_seconds (line 191) | def log_every_n_seconds(lvl, msg, n=1, *, name=None):
  function create_small_table (line 209) | def create_small_table(small_dict):
  function _log_api_usage (line 232) | def _log_api_usage(identifier: str):

FILE: detectron2/detectron2/utils/memory.py
  function _ignore_torch_cuda_oom (line 12) | def _ignore_torch_cuda_oom():
  function retry_if_cuda_oom (line 26) | def retry_if_cuda_oom(func):

FILE: detectron2/detectron2/utils/registry.py
  function _convert_target_to_string (line 15) | def _convert_target_to_string(t: Any) -> str:
  function locate (line 40) | def locate(name: str) -> Any:

FILE: detectron2/detectron2/utils/serialize.py
  class PicklableWrapper (line 5) | class PicklableWrapper(object):
    method __init__ (line 15) | def __init__(self, obj):
    method __reduce__ (line 21) | def __reduce__(self):
    method __call__ (line 25) | def __call__(self, *args, **kwargs):
    method __getattr__ (line 28) | def __getattr__(self, attr):

FILE: detectron2/detectron2/utils/testing.py
  function get_model_no_weights (line 29) | def get_model_no_weights(config_path):
  function random_boxes (line 42) | def random_boxes(num_boxes, max_coord=100, device="cpu"):
  function get_sample_coco_image (line 56) | def get_sample_coco_image(tensor=True):
  function convert_scripted_instances (line 80) | def convert_scripted_instances(instances):
  function assert_instances_allclose (line 95) | def assert_instances_allclose(input, other, *, rtol=1e-5, msg="", size_a...
  function reload_script_model (line 142) | def reload_script_model(module):
  function reload_lazy_config (line 153) | def reload_lazy_config(cfg):
  function min_torch_version (line 165) | def min_torch_version(min_version: str) -> bool:
  function register_custom_op_onnx_export (line 179) | def register_custom_op_onnx_export(
  function unregister_custom_op_onnx_export (line 193) | def unregister_custom_op_onnx_export(opname: str, opset_version: int, mi...
  function skipIfUnsupportedMinOpsetVersion (line 263) | def skipIfUnsupportedMinOpsetVersion(min_opset_version, current_opset_ve...
  function skipIfUnsupportedMinTorchVersion (line 286) | def skipIfUnsupportedMinTorchVersion(min_version):
  function _pytorch1111_symbolic_opset9_to (line 295) | def _pytorch1111_symbolic_opset9_to(g, self, *args):
  function _pytorch1111_symbolic_opset9_repeat_interleave (line 369) | def _pytorch1111_symbolic_opset9_repeat_interleave(g, self, repeats, dim...

FILE: detectron2/detectron2/utils/video_visualizer.py
  class _DetectedInstance (line 17) | class _DetectedInstance:
    method __init__ (line 33) | def __init__(self, label, bbox, mask_rle, color, ttl):
  class VideoVisualizer (line 41) | class VideoVisualizer:
    method __init__ (line 42) | def __init__(self, metadata, instance_mode=ColorMode.IMAGE):
    method draw_instance_predictions (line 59) | def draw_instance_predictions(self, frame, predictions):
    method draw_sem_seg (line 143) | def draw_sem_seg(self, frame, sem_seg, area_threshold=None):
    method draw_panoptic_seg_predictions (line 155) | def draw_panoptic_seg_predictions(
    method _assign_colors (line 211) | def _assign_colors(self, instances):
    method _assign_colors_by_id (line 268) | def _assign_colors_by_id(self, instances: Instances) -> List:

FILE: detectron2/detectron2/utils/visualizer.py
  class ColorMode (line 37) | class ColorMode(Enum):
  class GenericMask (line 59) | class GenericMask:
    method __init__ (line 67) | def __init__(self, mask_or_polygons, height, width):
    method mask (line 99) | def mask(self):
    method polygons (line 105) | def polygons(self):
    method has_holes (line 111) | def has_holes(self):
    method mask_to_polygons (line 119) | def mask_to_polygons(self, mask):
    method polygons_to_mask (line 138) | def polygons_to_mask(self, polygons):
    method area (line 143) | def area(self):
    method bbox (line 146) | def bbox(self):
  class _PanopticPrediction (line 155) | class _PanopticPrediction:
    method __init__ (line 160) | def __init__(self, panoptic_seg, segments_info, metadata=None):
    method non_empty_mask (line 196) | def non_empty_mask(self):
    method semantic_masks (line 212) | def semantic_masks(self):
    method instance_masks (line 220) | def instance_masks(self):
  function _create_text_labels (line 230) | def _create_text_labels(classes, scores, class_names, is_crowd=None):
  class VisImage (line 257) | class VisImage:
    method __init__ (line 258) | def __init__(self, img, scale=1.0):
    method _setup_figure (line 269) | def _setup_figure(self, img):
    method reset_image (line 294) | def reset_image(self, img):
    method save (line 302) | def save(self, filepath):
    method get_image (line 310) | def get_image(self):
  class Visualizer (line 331) | class Visualizer:
    method __init__ (line 357) | def __init__(self, img_rgb, metadata=None, scale=1.0, instance_mode=Co...
    method draw_instance_predictions (line 383) | def draw_instance_predictions(self, predictions):
    method draw_sem_seg (line 436) | def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8):
    method draw_panoptic_seg (line 472) | def draw_panoptic_seg(self, panoptic_seg, segments_info, area_threshol...
    method draw_dataset_dict (line 538) | def draw_dataset_dict(self, dic):
    method overlay_instances (line 607) | def overlay_instances(
    method overlay_rotated_instances (line 749) | def overlay_rotated_instances(self, boxes=None, labels=None, assigned_...
    method draw_and_connect_keypoints (line 787) | def draw_and_connect_keypoints(self, keypoints):
    method draw_text (line 850) | def draw_text(
    method draw_box (line 897) | def draw_box(self, box_coord, alpha=0.5, edge_color="g", line_style="-"):
    method draw_rotated_box_with_label (line 931) | def draw_rotated_box_with_label(
    method draw_circle (line 986) | def draw_circle(self, circle_coord, color, radius=3):
    method draw_line (line 1004) | def draw_line(self, x_data, y_data, color, linestyle="-", linewidth=No...
    method draw_binary_mask (line 1035) | def draw_binary_mask(
    method draw_soft_mask (line 1086) | def draw_soft_mask(self, soft_mask, color=None, *, text=None, alpha=0.5):
    method draw_polygon (line 1114) | def draw_polygon(self, segment, color, edge_color=None, alpha=0.5):
    method _jitter (line 1150) | def _jitter(self, color):
    method _create_grayscale_image (line 1169) | def _create_grayscale_image(self, mask=None):
    method _change_color_brightness (line 1180) | def _change_color_brightness(self, color, brightness_factor):
    method _convert_boxes (line 1205) | def _convert_boxes(self, boxes):
    method _convert_masks (line 1214) | def _convert_masks(self, masks_or_polygons):
    method _draw_text_in_mask (line 1237) | def _draw_text_in_mask(self, binary_mask, text, color):
    method _convert_keypoints (line 1255) | def _convert_keypoints(self, keypoints):
    method get_output (line 1261) | def get_output(self):

FILE: detectron2/dev/packaging/gen_install_table.py
  function gen_header (line 23) | def gen_header(torch_versions):

FILE: detectron2/docs/conf.py
  class GithubURLDomain (line 30) | class GithubURLDomain(Domain):
    method resolve_any_xref (line 39) | def resolve_any_xref(self, env, fromdocname, builder, target, node, co...
  function autodoc_skip_member (line 275) | def autodoc_skip_member(app, what, name, obj, skip, options):
  function paper_ref_role (line 349) | def paper_ref_role(
  function setup (line 380) | def setup(app):

FILE: detectron2/projects/DeepLab/deeplab/build_solver.py
  function build_lr_scheduler (line 10) | def build_lr_scheduler(

FILE: detectron2/projects/DeepLab/deeplab/config.py
  function add_deeplab_config (line 5) | def add_deeplab_config(cfg):

FILE: detectron2/projects/DeepLab/deeplab/loss.py
  class DeepLabCE (line 6) | class DeepLabCE(nn.Module):
    method __init__ (line 20) | def __init__(self, ignore_label=-1, top_k_percent_pixels=1.0, weight=N...
    method forward (line 28) | def forward(self, logits, labels, weights=None):

FILE: detectron2/projects/DeepLab/deeplab/lr_scheduler.py
  class WarmupPolyLR (line 17) | class WarmupPolyLR(torch.optim.lr_scheduler._LRScheduler):
    method __init__ (line 25) | def __init__(
    method get_lr (line 44) | def get_lr(self) -> List[float]:
    method _compute_values (line 60) | def _compute_values(self) -> List[float]:
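`WarmupPolyLR` implements the polynomial learning-rate decay standard for DeepLab-style training, prefixed by a warmup ramp. A stdlib sketch of the schedule (linear warmup shown; the class also supports a constant warmup method, and the default values below are illustrative):

```python
def poly_lr(base_lr, it, max_iter, power=0.9,
            warmup_iters=1000, warmup_factor=0.001):
    """Poly decay with linear warmup (sketch of WarmupPolyLR's math)."""
    if it < warmup_iters:
        # Ramp the multiplier linearly from warmup_factor up to 1.
        alpha = it / warmup_iters
        factor = warmup_factor * (1 - alpha) + alpha
    else:
        factor = 1.0
    # Polynomial decay toward zero at max_iter.
    return base_lr * factor * (1 - it / max_iter) ** power


for it in (0, 1000, 45000, 90000):
    print(it, poly_lr(0.01, it, 90000))
```

At iteration 0 the rate is `base_lr * warmup_factor`; after warmup it follows `(1 - it/max_iter)^0.9` and reaches exactly zero at `max_iter`.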

FILE: detectron2/projects/DeepLab/deeplab/resnet.py
  class DeepLabStem (line 15) | class DeepLabStem(CNNBlockBase):
    method __init__ (line 20) | def __init__(self, in_channels=3, out_channels=128, norm="BN"):
    method forward (line 59) | def forward(self, x):
  function build_resnet_deeplab_backbone (line 71) | def build_resnet_deeplab_backbone(cfg, input_shape):

FILE: detectron2/projects/DeepLab/deeplab/semantic_seg.py
  class DeepLabV3PlusHead (line 16) | class DeepLabV3PlusHead(nn.Module):
    method __init__ (line 22) | def __init__(
    method from_config (line 191) | def from_config(cls, cfg, input_shape):
    method forward (line 219) | def forward(self, features, targets=None):
    method layers (line 237) | def layers(self, features):
    method losses (line 254) | def losses(self, predictions, targets):
  class DeepLabV3Head (line 264) | class DeepLabV3Head(nn.Module):
    method __init__ (line 269) | def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]):
    method forward (line 325) | def forward(self, features, targets=None):
    method losses (line 342) | def losses(self, predictions, targets):

FILE: detectron2/projects/DeepLab/train_net.py
  function build_sem_seg_train_aug (line 21) | def build_sem_seg_train_aug(cfg):
  class Trainer (line 40) | class Trainer(DefaultTrainer):
    method build_evaluator (line 49) | def build_evaluator(cls, cfg, dataset_name, output_folder=None):
    method build_train_loader (line 79) | def build_train_loader(cls, cfg):
    method build_lr_scheduler (line 87) | def build_lr_scheduler(cls, cfg, optimizer):
  function setup (line 95) | def setup(args):
  function main (line 108) | def main(args):

FILE: detectron2/projects/DensePose/apply_net.py
  class Action (line 55) | class Action(object):
    method add_arguments (line 57) | def add_arguments(cls: type, parser: argparse.ArgumentParser):
  function register_action (line 66) | def register_action(cls: type):
  class InferenceAction (line 75) | class InferenceAction(Action):
    method add_arguments (line 77) | def add_arguments(cls: type, parser: argparse.ArgumentParser):
    method execute (line 90) | def execute(cls: type, args: argparse.Namespace):
    method setup_config (line 110) | def setup_config(
    method _get_input_file_list (line 124) | def _get_input_file_list(cls: type, input_spec: str):
  class DumpAction (line 139) | class DumpAction(InferenceAction):
    method add_parser (line 147) | def add_parser(cls: type, subparsers: argparse._SubParsersAction):
    method add_arguments (line 153) | def add_arguments(cls: type, parser: argparse.ArgumentParser):
    method execute_on_outputs (line 163) | def execute_on_outputs(
    method create_context (line 182) | def create_context(cls: type, args: argparse.Namespace, cfg: CfgNode):
    method postexecute (line 187) | def postexecute(cls: type, context: Dict[str, Any]):
  class ShowAction (line 198) | class ShowAction(InferenceAction):
    method add_parser (line 216) | def add_parser(cls: type, subparsers: argparse._SubParsersAction):
    method add_arguments (line 222) | def add_arguments(cls: type, parser: argparse.ArgumentParser):
    method setup_config (line 260) | def setup_config(
    method execute_on_outputs (line 272) | def execute_on_outputs(
    method postexecute (line 296) | def postexecute(cls: type, context: Dict[str, Any]):
    method _get_out_fname (line 300) | def _get_out_fname(cls: type, entry_idx: int, fname_base: str):
    method create_context (line 305) | def create_context(cls: type, args: argparse.Namespace, cfg: CfgNode) ...
  function create_argument_parser (line 331) | def create_argument_parser() -> argparse.ArgumentParser:
  function main (line 343) | def main():

FILE: detectron2/projects/DensePose/densepose/config.py
  function add_dataset_category_config (line 8) | def add_dataset_category_config(cfg: CN) -> None:
  function add_evaluation_config (line 21) | def add_evaluation_config(cfg: CN) -> None:
  function add_bootstrap_config (line 50) | def add_bootstrap_config(cfg: CN) -> None:
  function get_bootstrap_dataset_config (line 59) | def get_bootstrap_dataset_config() -> CN:
  function load_bootstrap_config (line 88) | def load_bootstrap_config(cfg: CN) -> None:
  function add_densepose_head_cse_config (line 105) | def add_densepose_head_cse_config(cfg: CN) -> None:
  function add_densepose_head_config (line 158) | def add_densepose_head_config(cfg: CN) -> None:
  function add_hrnet_config (line 237) | def add_hrnet_config(cfg: CN) -> None:
  function add_densepose_config (line 272) | def add_densepose_config(cfg: CN) -> None:

FILE: detectron2/projects/DensePose/densepose/converters/base.py
  class BaseConverter (line 7) | class BaseConverter:
    method register (line 16) | def register(cls, from_type: Type, converter: Any = None):
    method _do_register (line 38) | def _do_register(cls, from_type: Type, converter: Any):
    method _lookup_converter (line 42) | def _lookup_converter(cls, from_type: Type) -> Any:
    method convert (line 64) | def convert(cls, instance: Any, *args, **kwargs):
  function make_int_box (line 90) | def make_int_box(box: torch.Tensor) -> IntTupleBox:
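`BaseConverter` is a type-dispatch registry: converters are registered per predictor-output type, and lookup walks the type's MRO so subclasses reuse a parent's converter. A self-contained sketch of the pattern (class and function names here are illustrative; the real code also keeps a separate registry per converter subclass):

```python
class Converter:
    """Sketch of the BaseConverter registration/dispatch pattern."""
    registry = {}

    @classmethod
    def register(cls, from_type, converter=None):
        # Usable directly: register(T, fn) -- or as a decorator: @register(T)
        if converter is not None:
            cls.registry[from_type] = converter
            return
        def deco(fn):
            cls.registry[from_type] = fn
            return fn
        return deco

    @classmethod
    def _lookup_converter(cls, from_type):
        # Walk the MRO so a subclass falls back to its parent's converter.
        for base in from_type.__mro__:
            if base in cls.registry:
                return cls.registry[base]
        return None

    @classmethod
    def convert(cls, instance, *args, **kwargs):
        conv = cls._lookup_converter(type(instance))
        if conv is None:
            raise KeyError(f"no converter registered for {type(instance)!r}")
        return conv(instance, *args, **kwargs)


class Box:
    pass

class RotatedBox(Box):  # inherits Box's converter via MRO lookup
    pass

@Converter.register(Box)
def box_to_str(b):
    return "box"

print(Converter.convert(RotatedBox()))  # box
```

The MRO fallback is the useful part: registering once for a base output type covers every derived output type automatically.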

FILE: detectron2/projects/DensePose/densepose/converters/chart_output_hflip.py
  function densepose_chart_predictor_output_hflip (line 8) | def densepose_chart_predictor_output_hflip(
  function _flip_iuv_semantics_tensor (line 41) | def _flip_iuv_semantics_tensor(
  function _flip_segm_semantics_tensor (line 64) | def _flip_segm_semantics_tensor(

FILE: detectron2/projects/DensePose/densepose/converters/chart_output_to_chart_result.py
  function resample_uv_tensors_to_bbox (line 18) | def resample_uv_tensors_to_bbox(
  function resample_uv_to_bbox (line 48) | def resample_uv_to_bbox(
  function densepose_chart_predictor_output_to_result (line 73) | def densepose_chart_predictor_output_to_result(
  function resample_confidences_to_bbox (line 101) | def resample_confidences_to_bbox(
  function densepose_chart_predictor_output_to_result_with_confidences (line 162) | def densepose_chart_predictor_output_to_result_with_confidences(

FILE: detectron2/projects/DensePose/densepose/converters/hflip.py
  class HFlipConverter (line 8) | class HFlipConverter(BaseConverter):
    method convert (line 18) | def convert(cls, predictor_outputs: Any, transform_data: Any, *args, *...

FILE: detectron2/projects/DensePose/densepose/converters/segm_to_mask.py
  function resample_coarse_segm_tensor_to_bbox (line 13) | def resample_coarse_segm_tensor_to_bbox(coarse_segm: torch.Tensor, box_x...
  function resample_fine_and_coarse_segm_tensors_to_bbox (line 32) | def resample_fine_and_coarse_segm_tensors_to_bbox(
  function resample_fine_and_coarse_segm_to_bbox (line 65) | def resample_fine_and_coarse_segm_to_bbox(predictor_output: Any, box_xyw...
  function predictor_output_with_coarse_segm_to_mask (line 85) | def predictor_output_with_coarse_segm_to_mask(
  function predictor_output_with_fine_and_coarse_segm_to_mask (line 118) | def predictor_output_with_fine_and_coarse_segm_to_mask(

FILE: detectron2/projects/DensePose/densepose/converters/to_chart_result.py
  class ToChartResultConverter (line 11) | class ToChartResultConverter(BaseConverter):
    method convert (line 21) | def convert(cls, predictor_outputs: Any, boxes: Boxes, *args, **kwargs...
  class ToChartResultConverterWithConfidences (line 38) | class ToChartResultConverterWithConfidences(BaseConverter):
    method convert (line 48) | def convert(

FILE: detectron2/projects/DensePose/densepose/converters/to_mask.py
  class ToMaskConverter (line 12) | class ToMaskConverter(BaseConverter):
    method convert (line 23) | def convert(

FILE: detectron2/projects/DensePose/densepose/data/build.py
  function _compute_num_images_per_worker (line 59) | def _compute_num_images_per_worker(cfg: CfgNode) -> int:
  function _map_category_id_to_contiguous_id (line 76) | def _map_category_id_to_contiguous_id(dataset_name: str, dataset_dicts: ...
  class _DatasetCategory (line 84) | class _DatasetCategory:
  function _add_category_id_to_contiguous_id_maps_to_metadata (line 112) | def _add_category_id_to_contiguous_id_maps_to_metadata(
  function _maybe_create_general_keep_instance_predicate (line 150) | def _maybe_create_general_keep_instance_predicate(cfg: CfgNode) -> Optio...
  function _maybe_create_keypoints_keep_instance_predicate (line 168) | def _maybe_create_keypoints_keep_instance_predicate(cfg: CfgNode) -> Opt...
  function _maybe_create_mask_keep_instance_predicate (line 185) | def _maybe_create_mask_keep_instance_predicate(cfg: CfgNode) -> Optional...
  function _maybe_create_densepose_keep_instance_predicate (line 195) | def _maybe_create_densepose_keep_instance_predicate(cfg: CfgNode) -> Opt...
  function _maybe_create_specific_keep_instance_predicate (line 214) | def _maybe_create_specific_keep_instance_predicate(cfg: CfgNode) -> Opti...
  function _get_train_keep_instance_predicate (line 231) | def _get_train_keep_instance_predicate(cfg: CfgNode):
  function _get_test_keep_instance_predicate (line 247) | def _get_test_keep_instance_predicate(cfg: CfgNode):
  function _maybe_filter_and_map_categories (line 252) | def _maybe_filter_and_map_categories(
  function _add_category_whitelists_to_metadata (line 271) | def _add_category_whitelists_to_metadata(cfg: CfgNode) -> None:
  function _add_category_maps_to_metadata (line 283) | def _add_category_maps_to_metadata(cfg: CfgNode) -> None:
  function _add_category_info_to_bootstrapping_metadata (line 294) | def _add_category_info_to_bootstrapping_metadata(dataset_name: str, data...
  function _maybe_add_class_to_mesh_name_map_to_metadata (line 307) | def _maybe_add_class_to_mesh_name_map_to_metadata(dataset_names: List[st...
  function _merge_categories (line 314) | def _merge_categories(dataset_names: Collection[str]) -> _MergedCategori...
  function _warn_if_merged_different_categories (line 353) | def _warn_if_merged_different_categories(merged_categories: _MergedCateg...
  function combine_detection_dataset_dicts (line 370) | def combine_detection_dataset_dicts(
  function build_detection_train_loader (line 426) | def build_detection_train_loader(cfg: CfgNode, mapper=None):
  function build_detection_test_loader (line 462) | def build_detection_test_loader(cfg, dataset_name, mapper=None):
  function build_frame_selector (line 501) | def build_frame_selector(cfg: CfgNode):
  function build_transform (line 515) | def build_transform(cfg: CfgNode, data_type: str):
  function build_combined_loader (line 522) | def build_combined_loader(cfg: CfgNode, loaders: Collection[Loader], rat...
  function build_bootstrap_dataset (line 527) | def build_bootstrap_dataset(dataset_name: str, cfg: CfgNode) -> Sequence...
  function build_data_sampler (line 551) | def build_data_sampler(cfg: CfgNode, sampler_cfg: CfgNode, embedder: Opt...
  function build_data_filter (line 642) | def build_data_filter(cfg: CfgNode):
  function build_inference_based_loader (line 649) | def build_inference_based_loader(
  function has_inference_based_loaders (line 681) | def has_inference_based_loaders(cfg: CfgNode) -> bool:
  function build_inference_based_loaders (line 689) | def build_inference_based_loaders(
  function build_video_list_dataset (line 704) | def build_video_list_dataset(meta: Metadata, cfg: CfgNode):
  class _BootstrapDatasetFactoryCatalog (line 718) | class _BootstrapDatasetFactoryCatalog(UserDict):
    method register (line 724) | def register(self, dataset_type: DatasetType, factory: Callable[[Metad...

FILE: detectron2/projects/DensePose/densepose/data/combined_loader.py
  function _pooled_next (line 10) | def _pooled_next(iterator: Iterator[Any], pool: Deque[Any]):
  class CombinedDataLoader (line 16) | class CombinedDataLoader:
    method __init__ (line 23) | def __init__(self, loaders: Collection[Loader], batch_size: int, ratio...
    method __iter__ (line 28) | def __iter__(self) -> Iterator[List[Any]]:
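`CombinedDataLoader` mixes several loaders into joint batches according to fixed ratios, which is how DensePose combines annotated data with bootstrapped (inference-sampled) data. A simplified generator showing the idea (names and the sampling details are illustrative; the real class pools pre-fetched items per loader):

```python
import random


def combined_loader(loaders, batch_size, ratios, seed=0):
    """Yield batches whose slots are drawn from `loaders` with
    probabilities proportional to `ratios` (sketch, not the real API)."""
    rng = random.Random(seed)
    iterators = [iter(ld) for ld in loaders]
    while True:
        # Pick a source loader for every slot in this batch.
        choices = rng.choices(range(len(iterators)),
                              weights=ratios, k=batch_size)
        batch = []
        for i in choices:
            try:
                batch.append(next(iterators[i]))
            except StopIteration:
                return  # stop once any source is exhausted
        yield batch


gen = combined_loader([["gt"] * 50, ["boot"] * 50],
                      batch_size=4, ratios=[3, 1])
print(next(gen))
```

With ratios `[3, 1]`, roughly three quarters of each batch comes from the first loader, without ever forming a batch from one source alone by construction of the per-slot sampling.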

FILE: detectron2/projects/DensePose/densepose/data/dataset_mapper.py
  function build_augmentation (line 19) | def build_augmentation(cfg, is_train):
  class DatasetMapper (line 31) | class DatasetMapper:
    method __init__ (line 36) | def __init__(self, cfg, is_train=True):
    method __call__ (line 74) | def __call__(self, dataset_dict):
    method _transform_densepose (line 126) | def _transform_densepose(self, annotation, transforms):
    method _add_densepose_masks_as_segmentation (line 145) | def _add_densepose_masks_as_segmentation(

FILE: detectron2/projects/DensePose/densepose/data/datasets/chimpnsee.py
  function register_dataset (line 13) | def register_dataset(datasets_root: Optional[str] = None) -> None:

FILE: detectron2/projects/DensePose/densepose/data/datasets/coco.py
  class CocoDatasetInfo (line 27) | class CocoDatasetInfo:
  function get_metadata (line 131) | def get_metadata(base_path: Optional[str]) -> Dict[str, Any]:
  function _load_coco_annotations (line 154) | def _load_coco_annotations(json_file: str):
  function _add_categories_metadata (line 176) | def _add_categories_metadata(dataset_name: str, categories: List[Dict[st...
  function _verify_annotations_have_unique_ids (line 183) | def _verify_annotations_have_unique_ids(json_file: str, anns: List[List[...
  function _maybe_add_bbox (line 195) | def _maybe_add_bbox(obj: Dict[str, Any], ann_dict: Dict[str, Any]):
  function _maybe_add_segm (line 202) | def _maybe_add_segm(obj: Dict[str, Any], ann_dict: Dict[str, Any]):
  function _maybe_add_keypoints (line 214) | def _maybe_add_keypoints(obj: Dict[str, Any], ann_dict: Dict[str, Any]):
  function _maybe_add_densepose (line 228) | def _maybe_add_densepose(obj: Dict[str, Any], ann_dict: Dict[str, Any]):
  function _combine_images_with_annotations (line 234) | def _combine_images_with_annotations(
  function get_contiguous_id_to_category_id_map (line 273) | def get_contiguous_id_to_category_id_map(metadata):
  function maybe_filter_categories_cocoapi (line 283) | def maybe_filter_categories_cocoapi(dataset_name, coco_api):
  function maybe_filter_and_map_categories_cocoapi (line 312) | def maybe_filter_and_map_categories_cocoapi(dataset_name, coco_api):
  function create_video_frame_mapping (line 337) | def create_video_frame_mapping(dataset_name, dataset_dicts):
  function load_coco_json (line 347) | def load_coco_json(annotations_json_file: str, image_root: str, dataset_...
  function register_dataset (line 391) | def register_dataset(dataset_data: CocoDatasetInfo, datasets_root: Optio...
  function register_datasets (line 419) | def register_datasets(

FILE: detectron2/projects/DensePose/densepose/data/datasets/dataset_type.py
  class DatasetType (line 6) | class DatasetType(Enum):

FILE: detectron2/projects/DensePose/densepose/data/datasets/lvis.py
  function _load_lvis_annotations (line 49) | def _load_lvis_annotations(json_file: str):
  function _add_categories_metadata (line 71) | def _add_categories_metadata(dataset_name: str) -> None:
  function _verify_annotations_have_unique_ids (line 80) | def _verify_annotations_have_unique_ids(json_file: str, anns: List[List[...
  function _maybe_add_bbox (line 87) | def _maybe_add_bbox(obj: Dict[str, Any], ann_dict: Dict[str, Any]) -> None:
  function _maybe_add_segm (line 94) | def _maybe_add_segm(obj: Dict[str, Any], ann_dict: Dict[str, Any]) -> None:
  function _maybe_add_keypoints (line 106) | def _maybe_add_keypoints(obj: Dict[str, Any], ann_dict: Dict[str, Any]) ...
  function _maybe_add_densepose (line 120) | def _maybe_add_densepose(obj: Dict[str, Any], ann_dict: Dict[str, Any]) ...
  function _combine_images_with_annotations (line 126) | def _combine_images_with_annotations(
  function load_lvis_json (line 168) | def load_lvis_json(annotations_json_file: str, image_root: str, dataset_...
  function register_dataset (line 215) | def register_dataset(dataset_data: CocoDatasetInfo, datasets_root: Optio...
  function register_datasets (line 244) | def register_datasets(

FILE: detectron2/projects/DensePose/densepose/data/image_list_dataset.py
  class ImageListDataset (line 15) | class ImageListDataset(Dataset):
    method __init__ (line 22) | def __init__(
    method __getitem__ (line 44) | def __getitem__(self, idx: int) -> Dict[str, Any]:
    method __len__ (line 71) | def __len__(self):

FILE: detectron2/projects/DensePose/densepose/data/inference_based_loader.py
  function _grouper (line 12) | def _grouper(iterable: Iterable[Any], n: int, fillvalue=None) -> Iterato...
  class ScoreBasedFilter (line 33) | class ScoreBasedFilter:
    method __init__ (line 39) | def __init__(self, min_score: float = 0.8):
    method __call__ (line 42) | def __call__(self, model_output: ModelOutput) -> ModelOutput:
  class InferenceBasedLoader (line 52) | class InferenceBasedLoader:
    method __init__ (line 60) | def __init__(
    method __iter__ (line 105) | def __iter__(self) -> Iterator[List[SampledData]]:
    method _produce_data (line 121) | def _produce_data(
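`_grouper` is the classic itertools chunking recipe, and `ScoreBasedFilter` simply drops low-confidence detections before they are recycled as pseudo ground truth. Both fit in a few lines; here detections are stood in for by plain dicts rather than detectron2 `Instances`:

```python
from itertools import zip_longest


def grouper(iterable, n, fillvalue=None):
    """Chunk into fixed-length groups: grouper("ABCDEFG", 3, "x")
    -> ABC DEF Gxx (the itertools recipe _grouper wraps)."""
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)


def score_filter(detections, min_score=0.8):
    """Sketch of ScoreBasedFilter: keep detections at/above a threshold."""
    return [d for d in detections if d["score"] >= min_score]


print(list(grouper("ABCDEFG", 3, "x")))
print(score_filter([{"score": 0.9}, {"score": 0.5}]))
```

The `[iter(iterable)] * n` trick shares one iterator across all `n` zip arguments, which is what makes the chunking work.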

FILE: detectron2/projects/DensePose/densepose/data/meshes/catalog.py
  class MeshInfo (line 12) | class MeshInfo:
  class _MeshCatalog (line 20) | class _MeshCatalog(UserDict):
    method __init__ (line 21) | def __init__(self, *args, **kwargs):
    method __setitem__ (line 27) | def __setitem__(self, key, value):
    method get_mesh_id (line 42) | def get_mesh_id(self, shape_name: str) -> int:
    method get_mesh_name (line 45) | def get_mesh_name(self, mesh_id: int) -> str:
  function register_mesh (line 52) | def register_mesh(mesh_info: MeshInfo, base_path: Optional[str]) -> None:
  function register_meshes (line 69) | def register_meshes(mesh_infos: Iterable[MeshInfo], base_path: Optional[...

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_base.py
  class DensePoseBaseSampler (line 14) | class DensePoseBaseSampler:
    method __init__ (line 21) | def __init__(self, count_per_class: int = 8):
    method __call__ (line 31) | def __call__(self, instances: Instances) -> DensePoseList:
    method _sample (line 49) | def _sample(self, instance: Instances, bbox_xywh: IntTupleBox) -> Dict...
    method _produce_index_sample (line 94) | def _produce_index_sample(self, values: torch.Tensor, count: int):
    method _produce_labels_and_results (line 111) | def _produce_labels_and_results(self, instance: Instances) -> Tuple[to...
    method _resample_mask (line 127) | def _resample_mask(self, output: Any) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_confidence_based.py
  class DensePoseConfidenceBasedSampler (line 12) | class DensePoseConfidenceBasedSampler(DensePoseBaseSampler):
    method __init__ (line 18) | def __init__(
    method _produce_index_sample (line 57) | def _produce_index_sample(self, values: torch.Tensor, count: int):
    method _produce_labels_and_results (line 89) | def _produce_labels_and_results(self, instance) -> Tuple[torch.Tensor,...

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_cse_base.py
  class DensePoseCSEBaseSampler (line 18) | class DensePoseCSEBaseSampler(DensePoseBaseSampler):
    method __init__ (line 25) | def __init__(
    method _sample (line 46) | def _sample(self, instance: Instances, bbox_xywh: IntTupleBox) -> Dict...
    method _produce_mask_and_results (line 91) | def _produce_mask_and_results(
    method _resample_mask (line 118) | def _resample_mask(self, output: Any) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_cse_confidence_based.py
  class DensePoseCSEConfidenceBasedSampler (line 16) | class DensePoseCSEConfidenceBasedSampler(DensePoseCSEBaseSampler):
    method __init__ (line 22) | def __init__(
    method _produce_index_sample (line 64) | def _produce_index_sample(self, values: torch.Tensor, count: int):
    method _produce_mask_and_results (line 94) | def _produce_mask_and_results(

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_cse_uniform.py
  class DensePoseCSEUniformSampler (line 7) | class DensePoseCSEUniformSampler(DensePoseCSEBaseSampler, DensePoseUnifo...

FILE: detectron2/projects/DensePose/densepose/data/samplers/densepose_uniform.py
  class DensePoseUniformSampler (line 9) | class DensePoseUniformSampler(DensePoseBaseSampler):
    method __init__ (line 16) | def __init__(self, count_per_class: int = 8):
    method _produce_index_sample (line 26) | def _produce_index_sample(self, values: torch.Tensor, count: int):

FILE: detectron2/projects/DensePose/densepose/data/samplers/mask_from_densepose.py
  class MaskFromDensePoseSampler (line 8) | class MaskFromDensePoseSampler:
    method __call__ (line 15) | def __call__(self, instances: Instances) -> BitMasks:

FILE: detectron2/projects/DensePose/densepose/data/samplers/prediction_to_gt.py
  class _Sampler (line 13) | class _Sampler:
  class PredictionToGroundTruthSampler (line 27) | class PredictionToGroundTruthSampler:
    method __init__ (line 33) | def __init__(self, dataset_name: str = ""):
    method __call__ (line 41) | def __call__(self, model_output: List[ModelOutput]) -> List[SampledData]:
    method register_sampler (line 67) | def register_sampler(
    method remove_sampler (line 85) | def remove_sampler(

FILE: detectron2/projects/DensePose/densepose/data/transform/image.py
  class ImageResizeTransform (line 6) | class ImageResizeTransform:
    method __init__ (line 13) | def __init__(self, min_size: int = 800, max_size: int = 1333):
    method __call__ (line 17) | def __call__(self, images: torch.Tensor) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/data/utils.py
  function is_relative_local_path (line 9) | def is_relative_local_path(path: str) -> bool:
  function maybe_prepend_base_path (line 14) | def maybe_prepend_base_path(base_path: Optional[str], path: str):
  function get_class_to_mesh_name_mapping (line 27) | def get_class_to_mesh_name_mapping(cfg: CfgNode) -> Dict[int, str]:
  function get_category_to_class_mapping (line 34) | def get_category_to_class_mapping(dataset_cfg: CfgNode) -> Dict[str, int]:

FILE: detectron2/projects/DensePose/densepose/data/video/frame_selector.py
  class FrameSelectionStrategy (line 13) | class FrameSelectionStrategy(Enum):
  class RandomKFramesSelector (line 30) | class RandomKFramesSelector(Callable):  # pyre-ignore[39]
    method __init__ (line 35) | def __init__(self, k: int):
    method __call__ (line 38) | def __call__(self, frame_tss: FrameTsList) -> FrameTsList:
  class FirstKFramesSelector (line 50) | class FirstKFramesSelector(Callable):  # pyre-ignore[39]
    method __init__ (line 55) | def __init__(self, k: int):
    method __call__ (line 58) | def __call__(self, frame_tss: FrameTsList) -> FrameTsList:
  class LastKFramesSelector (line 70) | class LastKFramesSelector(Callable):  # pyre-ignore[39]
    method __init__ (line 75) | def __init__(self, k: int):
    method __call__ (line 78) | def __call__(self, frame_tss: FrameTsList) -> FrameTsList:
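The three frame selectors above reduce a list of keyframe timestamps to at most `k` entries: the first `k`, the last `k`, or a random sample of `k`. Their logic as plain functions (a sketch; the real classes are callables configured with `k`):

```python
import random


def first_k(frame_tss, k):
    # FirstKFramesSelector: first k timestamps (list assumed sorted).
    return frame_tss[:k]


def last_k(frame_tss, k):
    # LastKFramesSelector: last k timestamps.
    return frame_tss[-k:]


def random_k(frame_tss, k, rng=random):
    # RandomKFramesSelector: up to k timestamps without replacement.
    return rng.sample(frame_tss, min(k, len(frame_tss)))


ts = [0, 30, 60, 90, 120]
print(first_k(ts, 2), last_k(ts, 2), sorted(random_k(ts, 2)))
```

Slicing makes `first_k`/`last_k` safe when fewer than `k` frames exist; `random_k` needs the explicit `min` because `random.sample` raises if asked for more items than the population holds.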

FILE: detectron2/projects/DensePose/densepose/data/video/video_keyframe_dataset.py
  function list_keyframes (line 21) | def list_keyframes(video_fpath: str, video_stream_idx: int = 0) -> Frame...
  function read_keyframes (line 96) | def read_keyframes(
  function video_list_from_file (line 160) | def video_list_from_file(video_list_fpath: str, base_path: Optional[str]...
  function read_keyframe_helper_data (line 175) | def read_keyframe_helper_data(fpath: str):
  class VideoKeyframeDataset (line 215) | class VideoKeyframeDataset(Dataset):
    method __init__ (line 222) | def __init__(
    method __getitem__ (line 263) | def __getitem__(self, idx: int) -> Dict[str, Any]:
    method __len__ (line 299) | def __len__(self):

FILE: detectron2/projects/DensePose/densepose/engine/trainer.py
  class SampleCountingLoader (line 37) | class SampleCountingLoader:
    method __init__ (line 38) | def __init__(self, loader):
    method __iter__ (line 41) | def __iter__(self):
  class SampleCountMetricPrinter (line 61) | class SampleCountMetricPrinter(EventWriter):
    method __init__ (line 62) | def __init__(self):
    method write (line 65) | def write(self):
  class Trainer (line 74) | class Trainer(DefaultTrainer):
    method extract_embedder_from_model (line 76) | def extract_embedder_from_model(cls, model: nn.Module) -> Optional[Emb...
    method test (line 88) | def test(
    method build_evaluator (line 150) | def build_evaluator(
    method build_optimizer (line 192) | def build_optimizer(cls, cfg: CfgNode, model: nn.Module):
    method build_test_loader (line 219) | def build_test_loader(cls, cfg: CfgNode, dataset_name):
    method build_train_loader (line 223) | def build_train_loader(cls, cfg: CfgNode):
    method build_writers (line 237) | def build_writers(self):
    method test_with_TTA (line 243) | def test_with_TTA(cls, cfg: CfgNode, model):

FILE: detectron2/projects/DensePose/densepose/evaluation/d2_evaluator_adapter.py
  function _maybe_add_iscrowd_annotations (line 12) | def _maybe_add_iscrowd_annotations(cocoapi) -> None:
  class Detectron2COCOEvaluatorAdapter (line 18) | class Detectron2COCOEvaluatorAdapter(COCOEvaluator):
    method __init__ (line 19) | def __init__(
    method _maybe_substitute_metadata (line 33) | def _maybe_substitute_metadata(self):

FILE: detectron2/projects/DensePose/densepose/evaluation/densepose_coco_evaluation.py
  class DensePoseEvalMode (line 40) | class DensePoseEvalMode(str, Enum):
  class DensePoseDataMode (line 49) | class DensePoseDataMode(str, Enum):
  class DensePoseCocoEval (line 62) | class DensePoseCocoEval(object):
    method __init__ (line 112) | def __init__(
    method _loadGEval (line 148) | def _loadGEval(self):
    method _prepare (line 182) | def _prepare(self):
    method evaluate (line 300) | def evaluate(self):
    method getDensePoseMask (line 351) | def getDensePoseMask(self, polys):
    method _generate_rlemask_on_image (line 360) | def _generate_rlemask_on_image(self, mask, imgId, data):
    method computeDPIoU (line 377) | def computeDPIoU(self, imgId, catId):
    method computeIoU (line 436) | def computeIoU(self, imgId, catId):
    method computeOks (line 465) | def computeOks(self, imgId, catId):
    method _extract_mask (line 536) | def _extract_mask(self, dt: Dict[str, Any]) -> np.ndarray:
    method _extract_iuv (line 581) | def _extract_iuv(
    method computeOgps_single_pair (line 617) | def computeOgps_single_pair(self, dt, gt, py, px, pt_mask):
    method extract_iuv_from_quantized (line 652) | def extract_iuv_from_quantized(self, dt, gt, py, px, pt_mask):
    method extract_iuv_from_raw (line 660) | def extract_iuv_from_raw(self, dt, gt, py, px, pt_mask):
    method computeOgps_single_pair_iuv (line 674) | def computeOgps_single_pair_iuv(self, dt, gt, ipoints, upoints, vpoints):
    method computeOgps_single_pair_cse (line 687) | def computeOgps_single_pair_cse(
    method computeOgps (line 719) | def computeOgps(self, imgId, catId):
    method evaluateImg (line 779) | def evaluateImg(self, imgId, catId, aRng, maxDet):
    method accumulate (line 924) | def accumulate(self, p=None):
    method summarize (line 1029) | def summarize(self):
    method __str__ (line 1160) | def __str__(self):
    method findAllClosestVertsUV (line 1164) | def findAllClosestVertsUV(self, U_points, V_points, Index_points):
    method findClosestVertsCse (line 1182) | def findClosestVertsCse(self, embedding, py, px, mask, mesh_name):
    method findAllClosestVertsGT (line 1191) | def findAllClosestVertsGT(self, gt):
    method getDistancesCse (line 1212) | def getDistancesCse(self, cVertsGT, cVerts, mesh_name):
    method getDistancesUV (line 1219) | def getDistancesUV(self, cVertsGT, cVerts):
  class Params (line 1250) | class Params:
    method setDetParams (line 1255) | def setDetParams(self):
    method setKpParams (line 1271) | def setKpParams(self):
    method setUvParams (line 1282) | def setUvParams(self):
    method __init__ (line 1292) | def __init__(self, iouType="segm"):
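
  The `Params` class above (`setDetParams` / `setKpParams` / `setUvParams`) configures COCO-style evaluation ranges. A minimal sketch of the standard COCO detection thresholds such a `setDetParams`-like method typically sets; the attribute names follow the pycocotools convention and are assumptions here, not copied from this file:

  ```python
  import numpy as np

  # Sketch of COCO-style detection evaluation parameters. Only the standard
  # COCO threshold convention is shown; exact values in DensePoseCocoEval's
  # Params may differ.
  class DetParamsSketch:
      def __init__(self):
          # IoU thresholds 0.50:0.05:0.95 (10 values); recall thresholds 0:0.01:1
          self.iouThrs = np.linspace(0.5, 0.95, int(round((0.95 - 0.5) / 0.05)) + 1)
          self.recThrs = np.linspace(0.0, 1.00, int(round((1.00 - 0.0) / 0.01)) + 1)
          self.maxDets = [1, 10, 100]
          # Area ranges: all / small / medium / large, in pixels^2
          self.areaRng = [[0, 1e10], [0, 32 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e10]]
          self.areaRngLbl = ["all", "small", "medium", "large"]
  ```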

FILE: detectron2/projects/DensePose/densepose/evaluation/evaluator.py
  class DensePoseCOCOEvaluator (line 45) | class DensePoseCOCOEvaluator(DatasetEvaluator):
    method __init__ (line 46) | def __init__(
    method reset (line 84) | def reset(self):
    method process (line 87) | def process(self, inputs, outputs):
    method evaluate (line 120) | def evaluate(self, img_ids=None):
    method _eval_predictions (line 134) | def _eval_predictions(self, predictions, multi_storage=None, img_ids=N...
    method _evaluate_mesh_alignment (line 165) | def _evaluate_mesh_alignment(self):
    method _print_mesh_alignment_results (line 180) | def _print_mesh_alignment_results(self, results: Dict[str, float], mes...
  function prediction_to_dict (line 198) | def prediction_to_dict(instances, img_id, embedder, class_to_mesh_name, ...
  function densepose_chart_predictions_to_dict (line 235) | def densepose_chart_predictions_to_dict(instances):
  function densepose_chart_predictions_to_storage_dict (line 261) | def densepose_chart_predictions_to_storage_dict(instances):
  function densepose_cse_predictions_to_dict (line 275) | def densepose_cse_predictions_to_dict(instances, embedder, class_to_mesh...
  function _evaluate_predictions_on_coco (line 288) | def _evaluate_predictions_on_coco(
  function _get_densepose_metrics (line 322) | def _get_densepose_metrics(min_threshold: float = 0.5):
  function _derive_results_from_coco_eval (line 334) | def _derive_results_from_coco_eval(
  function build_densepose_evaluator_storage (line 386) | def build_densepose_evaluator_storage(cfg: CfgNode, output_folder: str):

FILE: detectron2/projects/DensePose/densepose/evaluation/mesh_alignment_evaluator.py
  class MeshAlignmentEvaluator (line 14) | class MeshAlignmentEvaluator:
    method __init__ (line 19) | def __init__(self, embedder: nn.Module, mesh_names: Optional[List[str]]):
    method evaluate (line 29) | def evaluate(self):

FILE: detectron2/projects/DensePose/densepose/evaluation/tensor_storage.py
  class SizeData (line 17) | class SizeData:
  function _calculate_record_field_size_b (line 22) | def _calculate_record_field_size_b(data_schema: Dict[str, SizeData], fie...
  function _calculate_record_size_b (line 29) | def _calculate_record_size_b(data_schema: Dict[str, SizeData]) -> int:
  function _calculate_record_field_sizes_b (line 37) | def _calculate_record_field_sizes_b(data_schema: Dict[str, SizeData]) ->...
  class SingleProcessTensorStorage (line 44) | class SingleProcessTensorStorage:
    method __init__ (line 49) | def __init__(self, data_schema: Dict[str, SizeData], storage_impl: Bin...
    method get (line 76) | def get(self, record_id: int) -> Dict[str, torch.Tensor]:
    method put (line 106) | def put(self, data: Dict[str, torch.Tensor]) -> int:
  class SingleProcessFileTensorStorage (line 138) | class SingleProcessFileTensorStorage(SingleProcessTensorStorage):
    method __init__ (line 143) | def __init__(self, data_schema: Dict[str, SizeData], fpath: str, mode:...
  class SingleProcessRamTensorStorage (line 156) | class SingleProcessRamTensorStorage(SingleProcessTensorStorage):
    method __init__ (line 161) | def __init__(self, data_schema: Dict[str, SizeData], buf: io.BytesIO):
  class MultiProcessTensorStorage (line 165) | class MultiProcessTensorStorage:
    method __init__ (line 174) | def __init__(self, rank_to_storage: Dict[int, SingleProcessTensorStora...
    method get (line 177) | def get(self, rank: int, record_id: int) -> Dict[str, torch.Tensor]:
    method put (line 181) | def put(self, rank: int, data: Dict[str, torch.Tensor]) -> int:
  class MultiProcessFileTensorStorage (line 186) | class MultiProcessFileTensorStorage(MultiProcessTensorStorage):
    method __init__ (line 187) | def __init__(self, data_schema: Dict[str, SizeData], rank_to_fpath: Di...
  class MultiProcessRamTensorStorage (line 195) | class MultiProcessRamTensorStorage(MultiProcessTensorStorage):
    method __init__ (line 196) | def __init__(self, data_schema: Dict[str, SizeData], rank_to_buffer: D...
  function _ram_storage_gather (line 204) | def _ram_storage_gather(
  function _file_storage_gather (line 218) | def _file_storage_gather(
  function storage_gather (line 231) | def storage_gather(
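
  `SingleProcessTensorStorage` stores fixed-size records described by a `SizeData` schema, with `_calculate_record_field_size_b` / `_calculate_record_size_b` computing byte offsets. A hedged sketch of that layout idea (dtype-size table and class shape are assumptions for illustration): each field occupies `dtype_size * prod(shape)` bytes, so `record_id * record_size` seeks directly to a record.

  ```python
  import math
  from dataclasses import dataclass
  from typing import Dict, Tuple

  # Assumed byte widths for a few common dtypes, for illustration only.
  _DTYPE_SIZE_B = {"float32": 4, "float64": 8, "int64": 8, "uint8": 1}

  @dataclass
  class SizeDataSketch:
      dtype: str
      shape: Tuple[int, ...]

  def record_field_size_b(schema: Dict[str, SizeDataSketch], field: str) -> int:
      # One field = element size times element count.
      size = schema[field]
      return _DTYPE_SIZE_B[size.dtype] * math.prod(size.shape)

  def record_size_b(schema: Dict[str, SizeDataSketch]) -> int:
      # A record is the concatenation of its fields; fixed size makes
      # random access by record_id a simple multiplication.
      return sum(record_field_size_b(schema, f) for f in schema)
  ```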

FILE: detectron2/projects/DensePose/densepose/modeling/build.py
  function build_densepose_predictor (line 12) | def build_densepose_predictor(cfg: CfgNode, input_channels: int):
  function build_densepose_data_filter (line 28) | def build_densepose_data_filter(cfg: CfgNode):
  function build_densepose_head (line 44) | def build_densepose_head(cfg: CfgNode, input_channels: int):
  function build_densepose_losses (line 60) | def build_densepose_losses(cfg: CfgNode):
  function build_densepose_embedder (line 75) | def build_densepose_embedder(cfg: CfgNode) -> Optional[nn.Module]:

FILE: detectron2/projects/DensePose/densepose/modeling/confidence.py
  class DensePoseUVConfidenceType (line 9) | class DensePoseUVConfidenceType(Enum):
  class DensePoseUVConfidenceConfig (line 28) | class DensePoseUVConfidenceConfig:
  class DensePoseSegmConfidenceConfig (line 40) | class DensePoseSegmConfidenceConfig:
  class DensePoseConfidenceModelConfig (line 51) | class DensePoseConfidenceModelConfig:
    method from_cfg (line 62) | def from_cfg(cfg: CfgNode) -> "DensePoseConfidenceModelConfig":

FILE: detectron2/projects/DensePose/densepose/modeling/cse/embedder.py
  class EmbedderType (line 18) | class EmbedderType(Enum):
  function create_embedder (line 29) | def create_embedder(embedder_spec: CfgNode, embedder_dim: int) -> nn.Mod...
  class Embedder (line 66) | class Embedder(nn.Module):
    method __init__ (line 74) | def __init__(self, cfg: CfgNode):
    method load_from_model_checkpoint (line 93) | def load_from_model_checkpoint(self, fpath: str, prefix: Optional[str]...
    method forward (line 114) | def forward(self, mesh_name: str) -> torch.Tensor:
    method has_embeddings (line 127) | def has_embeddings(self, mesh_name: str) -> bool:

FILE: detectron2/projects/DensePose/densepose/modeling/cse/utils.py
  function squared_euclidean_distance_matrix (line 7) | def squared_euclidean_distance_matrix(pts1: torch.Tensor, pts2: torch.Te...
  function normalize_embeddings (line 25) | def normalize_embeddings(embeddings: torch.Tensor, epsilon: float = 1e-6...
  function get_closest_vertices_mask_from_ES (line 38) | def get_closest_vertices_mask_from_ES(
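
  The two utilities above are standard embedding math. A NumPy sketch of what functions with these names compute (the listed implementations operate on `torch.Tensor`; the `epsilon` default matches the listed signature):

  ```python
  import numpy as np

  def squared_euclidean_distance_matrix_sketch(pts1, pts2):
      # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
      # pts1: (N, D), pts2: (M, D) -> (N, M)
      sq1 = np.sum(pts1 ** 2, axis=1)[:, None]   # (N, 1)
      sq2 = np.sum(pts2 ** 2, axis=1)[None, :]   # (1, M)
      return sq1 + sq2 - 2.0 * pts1 @ pts2.T

  def normalize_embeddings_sketch(embeddings, epsilon=1e-6):
      # Row-normalize to unit length, guarding tiny norms with epsilon.
      norms = np.maximum(np.linalg.norm(embeddings, axis=1, keepdims=True), epsilon)
      return embeddings / norms
  ```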

FILE: detectron2/projects/DensePose/densepose/modeling/cse/vertex_direct_embedder.py
  class VertexDirectEmbedder (line 12) | class VertexDirectEmbedder(nn.Module):
    method __init__ (line 20) | def __init__(self, num_vertices: int, embed_dim: int):
    method reset_parameters (line 33) | def reset_parameters(self):
    method forward (line 39) | def forward(self) -> torch.Tensor:
    method load (line 51) | def load(self, fpath: str):

FILE: detectron2/projects/DensePose/densepose/modeling/cse/vertex_feature_embedder.py
  class VertexFeatureEmbedder (line 12) | class VertexFeatureEmbedder(nn.Module):
    method __init__ (line 24) | def __init__(
    method reset_parameters (line 46) | def reset_parameters(self):
    method forward (line 50) | def forward(self) -> torch.Tensor:
    method load (line 62) | def load(self, fpath: str):

FILE: detectron2/projects/DensePose/densepose/modeling/densepose_checkpoint.py
  function _rename_HRNet_weights (line 7) | def _rename_HRNet_weights(weights):
  class DensePoseCheckpointer (line 22) | class DensePoseCheckpointer(DetectionCheckpointer):
    method __init__ (line 27) | def __init__(self, model, save_dir="", *, save_to_disk=None, **checkpo...
    method _load_file (line 30) | def _load_file(self, filename: str) -> object:

FILE: detectron2/projects/DensePose/densepose/modeling/filter.py
  class DensePoseDataFilter (line 11) | class DensePoseDataFilter(object):
    method __init__ (line 12) | def __init__(self, cfg: CfgNode):
    method __call__ (line 17) | def __call__(self, features: List[torch.Tensor], proposals_with_target...

FILE: detectron2/projects/DensePose/densepose/modeling/hrfpn.py
  class HRFPN (line 33) | class HRFPN(Backbone):
    method __init__ (line 48) | def __init__(
    method init_weights (line 129) | def init_weights(self):
    method forward (line 135) | def forward(self, inputs):
  function build_hrfpn_backbone (line 165) | def build_hrfpn_backbone(cfg, input_shape: ShapeSpec) -> HRFPN:

FILE: detectron2/projects/DensePose/densepose/modeling/hrnet.py
  function conv3x3 (line 24) | def conv3x3(in_planes, out_planes, stride=1):
  class BasicBlock (line 29) | class BasicBlock(nn.Module):
    method __init__ (line 32) | def __init__(self, inplanes, planes, stride=1, downsample=None):
    method forward (line 42) | def forward(self, x):
  class Bottleneck (line 61) | class Bottleneck(nn.Module):
    method __init__ (line 64) | def __init__(self, inplanes, planes, stride=1, downsample=None):
    method forward (line 76) | def forward(self, x):
  class HighResolutionModule (line 99) | class HighResolutionModule(nn.Module):
    method __init__ (line 112) | def __init__(
    method _check_branches (line 133) | def _check_branches(self, num_branches, blocks, num_blocks, num_inchan...
    method _make_one_branch (line 153) | def _make_one_branch(self, branch_index, block, num_blocks, num_channe...
    method _make_branches (line 180) | def _make_branches(self, num_branches, block, num_blocks, num_channels):
    method _make_fuse_layers (line 188) | def _make_fuse_layers(self):
    method get_num_inchannels (line 247) | def get_num_inchannels(self):
    method forward (line 250) | def forward(self, x):
  class PoseHigherResolutionNet (line 275) | class PoseHigherResolutionNet(Backbone):
    method __init__ (line 282) | def __init__(self, cfg, **kwargs):
    method _get_deconv_cfg (line 328) | def _get_deconv_cfg(self, deconv_kernel):
    method _make_transition_layer (line 341) | def _make_transition_layer(self, num_channels_pre_layer, num_channels_...
    method _make_layer (line 383) | def _make_layer(self, block, planes, blocks, stride=1):
    method _make_stage (line 405) | def _make_stage(self, layer_config, num_inchannels, multi_scale_output...
    method forward (line 434) | def forward(self, x):
  function build_pose_hrnet_backbone (line 472) | def build_pose_hrnet_backbone(cfg, input_shape: ShapeSpec):

FILE: detectron2/projects/DensePose/densepose/modeling/inference.py
  function densepose_inference (line 9) | def densepose_inference(densepose_predictor_output: Any, detections: Lis...

FILE: detectron2/projects/DensePose/densepose/modeling/losses/chart.py
  class DensePoseChartLoss (line 21) | class DensePoseChartLoss:
    method __init__ (line 47) | def __init__(self, cfg: CfgNode):
    method __call__ (line 64) | def __call__(
    method produce_fake_densepose_losses (line 139) | def produce_fake_densepose_losses(self, densepose_predictor_outputs: A...
    method produce_fake_densepose_losses_uv (line 164) | def produce_fake_densepose_losses_uv(self, densepose_predictor_outputs...
    method produce_fake_densepose_losses_segm (line 187) | def produce_fake_densepose_losses_segm(self, densepose_predictor_outpu...
    method produce_densepose_losses_uv (line 211) | def produce_densepose_losses_uv(
    method produce_densepose_losses_segm (line 243) | def produce_densepose_losses_segm(

FILE: detectron2/projects/DensePose/densepose/modeling/losses/chart_with_confidences.py
  class DensePoseChartWithConfidenceLoss (line 18) | class DensePoseChartWithConfidenceLoss(DensePoseChartLoss):
    method __init__ (line 21) | def __init__(self, cfg: CfgNode):
    method produce_fake_densepose_losses_uv (line 33) | def produce_fake_densepose_losses_uv(self, densepose_predictor_outputs...
    method produce_densepose_losses_uv (line 71) | def produce_densepose_losses_uv(
  class IIDIsotropicGaussianUVLoss (line 119) | class IIDIsotropicGaussianUVLoss(nn.Module):
    method __init__ (line 132) | def __init__(self, sigma_lower_bound: float):
    method forward (line 137) | def forward(
  class IndepAnisotropicGaussianUVLoss (line 157) | class IndepAnisotropicGaussianUVLoss(nn.Module):
    method __init__ (line 173) | def __init__(self, sigma_lower_bound: float):
    method forward (line 178) | def forward(

FILE: detectron2/projects/DensePose/densepose/modeling/losses/cse.py
  class DensePoseCseLoss (line 20) | class DensePoseCseLoss:
    method __init__ (line 28) | def __init__(self, cfg: CfgNode):
    method create_embed_loss (line 49) | def create_embed_loss(cls, cfg: CfgNode):
    method __call__ (line 54) | def __call__(
    method produce_fake_losses (line 95) | def produce_fake_losses(

FILE: detectron2/projects/DensePose/densepose/modeling/losses/cycle_pix2shape.py
  function _create_pixel_dist_matrix (line 18) | def _create_pixel_dist_matrix(grid_size: int) -> torch.Tensor:
  function _sample_fg_pixels_randperm (line 30) | def _sample_fg_pixels_randperm(fg_mask: torch.Tensor, sample_size: int) ...
  function _sample_fg_pixels_multinomial (line 40) | def _sample_fg_pixels_multinomial(fg_mask: torch.Tensor, sample_size: in...
  class PixToShapeCycleLoss (line 48) | class PixToShapeCycleLoss(nn.Module):
    method __init__ (line 53) | def __init__(self, cfg: CfgNode):
    method forward (line 73) | def forward(
    method fake_value (line 149) | def fake_value(self, densepose_predictor_outputs: Any, embedder: nn.Mo...

FILE: detectron2/projects/DensePose/densepose/modeling/losses/cycle_shape2shape.py
  class ShapeToShapeCycleLoss (line 16) | class ShapeToShapeCycleLoss(nn.Module):
    method __init__ (line 23) | def __init__(self, cfg: CfgNode):
    method _sample_random_pair (line 37) | def _sample_random_pair(self) -> Tuple[str, str]:
    method forward (line 51) | def forward(self, embedder: nn.Module):
    method fake_value (line 60) | def fake_value(self, embedder: nn.Module):
    method _get_embeddings_and_geodists_for_mesh (line 66) | def _get_embeddings_and_geodists_for_mesh(
    method _forward_one_pair (line 93) | def _forward_one_pair(

FILE: detectron2/projects/DensePose/densepose/modeling/losses/embed.py
  class EmbeddingLoss (line 18) | class EmbeddingLoss:
    method __init__ (line 28) | def __init__(self, cfg: CfgNode):
    method __call__ (line 34) | def __call__(
    method fake_values (line 116) | def fake_values(self, densepose_predictor_outputs: Any, embedder: nn.M...
    method fake_value (line 126) | def fake_value(self, densepose_predictor_outputs: Any, embedder: nn.Mo...

FILE: detectron2/projects/DensePose/densepose/modeling/losses/embed_utils.py
  class PackedCseAnnotations (line 13) | class PackedCseAnnotations:
  class CseAnnotationsAccumulator (line 26) | class CseAnnotationsAccumulator(AnnotationsAccumulator):
    method __init__ (line 32) | def __init__(self):
    method accumulate (line 46) | def accumulate(self, instances_one_image: Instances):
    method _do_accumulate (line 80) | def _do_accumulate(self, box_xywh_gt: torch.Tensor, box_xywh_est: torc...
    method pack (line 108) | def pack(self) -> Optional[PackedCseAnnotations]:

FILE: detectron2/projects/DensePose/densepose/modeling/losses/mask.py
  class DataForMaskLoss (line 12) | class DataForMaskLoss:
  function extract_data_for_mask_loss_from_matches (line 23) | def extract_data_for_mask_loss_from_matches(
  class MaskLoss (line 71) | class MaskLoss:
    method __call__ (line 79) | def __call__(
    method fake_value (line 111) | def fake_value(self, densepose_predictor_outputs: Any) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/modeling/losses/mask_or_segm.py
  class MaskOrSegmentationLoss (line 13) | class MaskOrSegmentationLoss:
    method __init__ (line 21) | def __init__(self, cfg: CfgNode):
    method __call__ (line 33) | def __call__(
    method fake_value (line 58) | def fake_value(self, densepose_predictor_outputs: Any) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/modeling/losses/segm.py
  class SegmentationLoss (line 13) | class SegmentationLoss:
    method __init__ (line 21) | def __init__(self, cfg: CfgNode):
    method __call__ (line 31) | def __call__(
    method fake_value (line 69) | def fake_value(self, densepose_predictor_outputs: Any) -> torch.Tensor:

FILE: detectron2/projects/DensePose/densepose/modeling/losses/soft_embed.py
  class SoftEmbeddingLoss (line 19) | class SoftEmbeddingLoss:
    method __init__ (line 31) | def __init__(self, cfg: CfgNode):
    method __call__ (line 38) | def __call__(
    method fake_values (line 130) | def fake_values(self, densepose_predictor_outputs: Any, embedder: nn.M...
    method fake_value (line 140) | def fake_value(self, densepose_predictor_outputs: Any, embedder: nn.Mo...

FILE: detectron2/projects/DensePose/densepose/modeling/losses/utils.py
  function _linear_interpolation_utilities (line 16) | def _linear_interpolation_utilities(v_norm, v0_src, size_src, v0_dst, si...
  class BilinearInterpolationHelper (line 62) | class BilinearInterpolationHelper:
    method __init__ (line 86) | def __init__(
    method from_matches (line 104) | def from_matches(
    method extract_at_points (line 158) | def extract_at_points(
  function resample_data (line 195) | def resample_data(
  class AnnotationsAccumulator (line 238) | class AnnotationsAccumulator(ABC):
    method accumulate (line 245) | def accumulate(self, instances_one_image: Instances):
    method pack (line 255) | def pack(self) -> Any:
  class PackedChartBasedAnnotations (line 263) | class PackedChartBasedAnnotations:
  class ChartBasedAnnotationsAccumulator (line 303) | class ChartBasedAnnotationsAccumulator(AnnotationsAccumulator):
    method __init__ (line 309) | def __init__(self):
    method accumulate (line 324) | def accumulate(self, instances_one_image: Instances):
    method _do_accumulate (line 358) | def _do_accumulate(
    method pack (line 385) | def pack(self) -> Optional[PackedChartBasedAnnotations]:
  function extract_packed_annotations_from_matches (line 416) | def extract_packed_annotations_from_matches(
  function sample_random_indices (line 424) | def sample_random_indices(
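
  `_linear_interpolation_utilities` feeds `BilinearInterpolationHelper.from_matches`. A hedged sketch of the 1-D machinery implied by the listed signature: map a coordinate given relative to a source box into a destination box, rescale to a grid of `size_z` cells, and return the flanking indices plus the interpolation weight. The argument names follow the listed signature; the normalization convention (`v_norm` in [0, 1]) is an assumption.

  ```python
  import math

  def linear_interpolation_utilities_sketch(v_norm, v0_src, size_src,
                                            v0_dst, size_dst, size_z):
      v_abs = v0_src + v_norm * size_src              # absolute coordinate
      v_grid = (v_abs - v0_dst) * size_z / size_dst   # destination grid coordinate
      v_lo = min(max(int(math.floor(v_grid)), 0), size_z - 1)
      v_hi = min(v_lo + 1, size_z - 1)
      v_w = v_grid - math.floor(v_grid)               # weight toward v_hi
      return v_lo, v_hi, v_w
  ```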

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/chart.py
  class DensePoseChartPredictor (line 15) | class DensePoseChartPredictor(nn.Module):
    method __init__ (line 34) | def __init__(self, cfg: CfgNode, input_channels: int):
    method interp2d (line 66) | def interp2d(self, tensor_nchw: torch.Tensor):
    method forward (line 80) | def forward(self, head_outputs: torch.Tensor):

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/chart_confidence.py
  class DensePoseChartConfidencePredictorMixin (line 15) | class DensePoseChartConfidencePredictorMixin:
    method __init__ (line 32) | def __init__(self, cfg: CfgNode, input_channels: int):
    method _initialize_confidence_estimation_layers (line 47) | def _initialize_confidence_estimation_layers(self, cfg: CfgNode, dim_i...
    method forward (line 88) | def forward(self, head_outputs: torch.Tensor):
    method _create_output_instance (line 149) | def _create_output_instance(self, base_predictor_outputs: Any):

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py
  class DensePoseChartWithConfidencePredictor (line 8) | class DensePoseChartWithConfidencePredictor(

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/cse.py
  class DensePoseEmbeddingPredictor (line 15) | class DensePoseEmbeddingPredictor(nn.Module):
    method __init__ (line 21) | def __init__(self, cfg: CfgNode, input_channels: int):
    method interp2d (line 45) | def interp2d(self, tensor_nchw: torch.Tensor):
    method forward (line 59) | def forward(self, head_outputs):

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/cse_confidence.py
  class DensePoseEmbeddingConfidencePredictorMixin (line 15) | class DensePoseEmbeddingConfidencePredictorMixin:
    method __init__ (line 31) | def __init__(self, cfg: CfgNode, input_channels: int):
    method _initialize_confidence_estimation_layers (line 46) | def _initialize_confidence_estimation_layers(self, cfg: CfgNode, dim_i...
    method forward (line 60) | def forward(self, head_outputs: torch.Tensor):
    method _create_output_instance (line 95) | def _create_output_instance(self, base_predictor_outputs: Any):

FILE: detectron2/projects/DensePose/densepose/modeling/predictors/cse_with_confidence.py
  class DensePoseEmbeddingWithConfidencePredictor (line 8) | class DensePoseEmbeddingWithConfidencePredictor(

FILE: detectron2/projects/DensePose/densepose/modeling/roi_heads/deeplab.py
  class DensePoseDeepLabHead (line 15) | class DensePoseDeepLabHead(nn.Module):
    method __init__ (line 22) | def __init__(self, cfg: CfgNode, input_channels: int):
    method forward (line 60) | def forward(self, features):
    method _get_layer_name (line 73) | def _get_layer_name(self, i: int):
  class ASPPConv (line 81) | class ASPPConv(nn.Sequential):
    method __init__ (line 82) | def __init__(self, in_channels, out_channels, dilation):
  class ASPPPooling (line 93) | class ASPPPooling(nn.Sequential):
    method __init__ (line 94) | def __init__(self, in_channels, out_channels):
    method forward (line 102) | def forward(self, x):
  class ASPP (line 108) | class ASPP(nn.Module):
    method __init__ (line 109) | def __init__(self, in_channels, atrous_rates, out_channels):
    method forward (line 135) | def forward(self, x):
  class _NonLocalBlockND (line 146) | class _NonLocalBlockND(nn.Module):
    method __init__ (line 147) | def __init__(
    method forward (line 229) | def forward(self, x):
  class NONLocalBlock2D (line 255) | class NONLocalBlock2D(_NonLocalBlockND):
    method __init__ (line 256) | def __init__(self, in_channels, inter_channels=None, sub_sample=True, ...

FILE: detectron2/projects/DensePose/densepose/modeling/roi_heads/roi_head.py
  class Decoder (line 26) | class Decoder(nn.Module):
    method __init__ (line 33) | def __init__(self, cfg, input_shape: Dict[str, ShapeSpec], in_features):
    method forward (line 74) | def forward(self, features: List[torch.Tensor]):
  class DensePoseROIHeads (line 85) | class DensePoseROIHeads(StandardROIHeads):
    method __init__ (line 90) | def __init__(self, cfg, input_shape):
    method _init_densepose_head (line 94) | def _init_densepose_head(self, cfg, input_shape):
    method _forward_densepose (line 127) | def _forward_densepose(self, features: Dict[str, torch.Tensor], instan...
    method forward (line 184) | def forward(
    method forward_with_given_boxes (line 198) | def forward_with_given_boxes(

FILE: detectron2/projects/DensePose/densepose/modeling/roi_heads/v1convx.py
  class DensePoseV1ConvXHead (line 15) | class DensePoseV1ConvXHead(nn.Module):
    method __init__ (line 20) | def __init__(self, cfg: CfgNode, input_channels: int):
    method forward (line 44) | def forward(self, features: torch.Tensor):
    method _get_layer_name (line 62) | def _get_layer_name(self, i: int):

FILE: detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py
  class DensePoseDatasetMapperTTA (line 15) | class DensePoseDatasetMapperTTA(DatasetMapperTTA):
    method __init__ (line 16) | def __init__(self, cfg):
    method __call__ (line 20) | def __call__(self, dataset_dict):
  class DensePoseGeneralizedRCNNWithTTA (line 38) | class DensePoseGeneralizedRCNNWithTTA(GeneralizedRCNNWithTTA):
    method __init__ (line 39) | def __init__(self, cfg, model, transform_data, tta_mapper=None, batch_...
    method _inference_one_image (line 55) | def _inference_one_image(self, input):
    method _get_augmented_boxes (line 93) | def _get_augmented_boxes(self, augmented_inputs, tfms):
    method _reduce_pred_densepose (line 114) | def _reduce_pred_densepose(self, outputs, tfms):
    method _incremental_avg_dp (line 136) | def _incremental_avg_dp(self, avg, new_el, idx):
  function _inverse_rotation (line 145) | def _inverse_rotation(densepose_attrs, boxes, transform):
  function rotate_box_inverse (line 185) | def rotate_box_inverse(rot_tfm, rotated_box):

FILE: detectron2/projects/DensePose/densepose/modeling/utils.py
  function initialize_module_params (line 6) | def initialize_module_params(module: nn.Module) -> None:

FILE: detectron2/projects/DensePose/densepose/structures/chart.py
  class DensePoseChartPredictorOutput (line 9) | class DensePoseChartPredictorOutput:
    method __len__ (line 32) | def __len__(self):
    method __getitem__ (line 38) | def __getitem__(
    method to (line 62) | def to(self, device: torch.device):

FILE: detectron2/projects/DensePose/densepose/structures/chart_confidence.py
  function decorate_predictor_output_class_with_confidences (line 10) | def decorate_predictor_output_class_with_confidences(BasePredictorOutput...

FILE: detectron2/projects/DensePose/densepose/structures/chart_result.py
  class DensePoseChartResult (line 9) | class DensePoseChartResult:
    method to (line 25) | def to(self, device: torch.device):
  class DensePoseChartResultWithConfidences (line 35) | class DensePoseChartResultWithConfidences:
    method to (line 55) | def to(self, device: torch.device):
  class DensePoseChartResultQuantized (line 78) | class DensePoseChartResultQuantized:
    method to (line 95) | def to(self, device: torch.device):
  class DensePoseChartResultCompressed (line 104) | class DensePoseChartResultCompressed:
  function quantize_densepose_chart_result (line 120) | def quantize_densepose_chart_result(result: DensePoseChartResult) -> Den...
  function compress_quantized_densepose_chart_result (line 136) | def compress_quantized_densepose_chart_result(
  function decompress_compressed_densepose_chart_result (line 162) | def decompress_compressed_densepose_chart_result(
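
  The quantized/compressed result types above trade precision for storage. A hedged sketch of the core idea behind `quantize_densepose_chart_result`: U/V coordinates in [0, 1] are mapped to `uint8` so results are cheap to store, and dequantization divides back out (the 255-level choice is an assumption here; the listed functions also work on torch tensors and handle labels).

  ```python
  import numpy as np

  def quantize_uv_sketch(u, v, levels=255):
      # [0, 1] floats -> uint8 codes 0..levels.
      u_q = np.clip(np.round(u * levels), 0, levels).astype(np.uint8)
      v_q = np.clip(np.round(v * levels), 0, levels).astype(np.uint8)
      return u_q, v_q

  def dequantize_uv_sketch(u_q, v_q, levels=255):
      # Inverse map; accurate to within half a quantization step.
      return u_q.astype(np.float64) / levels, v_q.astype(np.float64) / levels
  ```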

FILE: detectron2/projects/DensePose/densepose/structures/cse.py
  class DensePoseEmbeddingPredictorOutput (line 9) | class DensePoseEmbeddingPredictorOutput:
    method __len__ (line 21) | def __len__(self):
    method __getitem__ (line 27) | def __getitem__(
    method to (line 46) | def to(self, device: torch.device):

FILE: detectron2/projects/DensePose/densepose/structures/cse_confidence.py
  function decorate_cse_predictor_output_class_with_confidences (line 10) | def decorate_cse_predictor_output_class_with_confidences(BasePredictorOu...

FILE: detectron2/projects/DensePose/densepose/structures/data_relative.py
  class DensePoseDataRelative (line 11) | class DensePoseDataRelative(object):
    method __init__ (line 50) | def __init__(self, annotation, cleanup=False):
    method to (line 75) | def to(self, device):
    method extract_segmentation_mask (line 90) | def extract_segmentation_mask(annotation):
    method validate_annotation (line 114) | def validate_annotation(annotation):
    method cleanup_annotation (line 158) | def cleanup_annotation(annotation):
    method apply_transform (line 172) | def apply_transform(self, transforms, densepose_transform_data):
    method _transform_pts (line 177) | def _transform_pts(self, transforms, dp_transform_data):
    method _flip_iuv_semantics (line 195) | def _flip_iuv_semantics(self, dp_transform_data: DensePoseTransformDat...
    method _flip_vertices (line 213) | def _flip_vertices(self):
    method _transform_segm (line 220) | def _transform_segm(self, transforms, dp_transform_data):
    method _flip_segm_semantics (line 233) | def _flip_segm_semantics(self, dp_transform_data):
    method _transform_segm_rotation (line 240) | def _transform_segm_rotation(self, rotation):

FILE: detectron2/projects/DensePose/densepose/structures/list.py
  class DensePoseList (line 7) | class DensePoseList(object):
    method __init__ (line 11) | def __init__(self, densepose_datas, boxes_xyxy_abs, image_size_hw, dev...
    method to (line 31) | def to(self, device):
    method __iter__ (line 36) | def __iter__(self):
    method __len__ (line 39) | def __len__(self):
    method __repr__ (line 42) | def __repr__(self):
    method __getitem__ (line 49) | def __getitem__(self, item):

FILE: detectron2/projects/DensePose/densepose/structures/mesh.py
  function _maybe_copy_to_device (line 13) | def _maybe_copy_to_device(
  class Mesh (line 21) | class Mesh:
    method __init__ (line 22) | def __init__(
    method to (line 79) | def to(self, device: torch.device):
    method vertices (line 94) | def vertices(self):
    method faces (line 100) | def faces(self):
    method geodists (line 106) | def geodists(self):
    method symmetry (line 112) | def symmetry(self):
    method texcoords (line 118) | def texcoords(self):
    method get_geodists (line 123) | def get_geodists(self):
    method _compute_geodists (line 128) | def _compute_geodists(self):
  function load_mesh_data (line 134) | def load_mesh_data(
  function load_mesh_auxiliary_data (line 146) | def load_mesh_auxiliary_data(
  function load_mesh_symmetry (line 156) | def load_mesh_symmetry(
  function create_mesh (line 171) | def create_mesh(mesh_name: str, device: Optional[torch.device] = None) -...

FILE: detectron2/projects/DensePose/densepose/structures/transform_data.py
  function normalized_coords_transform (line 6) | def normalized_coords_transform(x0, y0, w, h):
  class DensePoseTransformData (line 19) | class DensePoseTransformData(object):
    method __init__ (line 27) | def __init__(self, uv_symmetries: Dict[str, torch.Tensor], device: tor...
    method to (line 33) | def to(self, device: torch.device, copy: bool = False) -> "DensePoseTr...
    method load (line 52) | def load(io: Union[str, BinaryIO]):
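
  A hedged sketch of what a `normalized_coords_transform`-style helper builds: a closure mapping absolute (x, y) inside the box (x0, y0, w, h) to [-1, 1] x [-1, 1], the convention used by grid-sampling operations. The exact convention in this file is an assumption.

  ```python
  def normalized_coords_transform_sketch(x0, y0, w, h):
      # Box corners map to (-1, -1) and (1, 1); the box center maps to (0, 0).
      def f(xy):
          x, y = xy
          return ((x - x0) * 2.0 / w - 1.0, (y - y0) * 2.0 / h - 1.0)
      return f
  ```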

FILE: detectron2/projects/DensePose/densepose/utils/dbhelper.py
  class EntrySelector (line 5) | class EntrySelector(object):
    method from_string (line 11) | def from_string(spec: str) -> "EntrySelector":
  class AllEntrySelector (line 17) | class AllEntrySelector(EntrySelector):
    method __call__ (line 24) | def __call__(self, entry):
  class FieldEntrySelector (line 28) | class FieldEntrySelector(EntrySelector):
    class _FieldEntryValuePredicate (line 52) | class _FieldEntryValuePredicate(object):
      method __init__ (line 57) | def __init__(self, name: str, typespec: Optional[str], value: str):
      method __call__ (line 64) | def __call__(self, entry):
    class _FieldEntryRangePredicate (line 67) | class _FieldEntryRangePredicate(object):
      method __init__ (line 72) | def __init__(self, name: str, typespec: Optional[str], vmin: str, vm...
      method __call__ (line 80) | def __call__(self, entry):
    method __init__ (line 85) | def __init__(self, spec: str):
    method __call__ (line 88) | def __call__(self, entry: Dict[str, Any]):
    method _parse_specifier_into_predicates (line 94) | def _parse_specifier_into_predicates(self, spec: str):
    method _parse_field_name_type (line 119) | def _parse_field_name_type(self, field_name_with_type: str) -> Tuple[s...
    method _is_range_spec (line 133) | def _is_range_spec(self, field_value_or_range):
    method _get_range_spec (line 137) | def _get_range_spec(self, field_value_or_range):
    method _parse_error (line 146) | def _parse_error(self, msg):

FILE: detectron2/projects/DensePose/densepose/utils/logger.py
  function verbosity_to_level (line 5) | def verbosity_to_level(verbosity) -> int:

FILE: detectron2/projects/DensePose/densepose/utils/transform.py
  function load_for_dataset (line 8) | def load_for_dataset(dataset_name):
  function load_from_cfg (line 14) | def load_from_cfg(cfg):

FILE: detectron2/projects/DensePose/densepose/vis/base.py
  class MatrixVisualizer (line 11) | class MatrixVisualizer(object):
    method __init__ (line 16) | def __init__(
    method visualize (line 32) | def visualize(self, image_bgr, mask, matrix, bbox_xywh):
    method _resize (line 59) | def _resize(self, mask, matrix, w, h):
    method _check_image (line 66) | def _check_image(self, image_rgb):
    method _check_mask_matrix (line 71) | def _check_mask_matrix(self, mask, matrix):
  class RectangleVisualizer (line 77) | class RectangleVisualizer(object):
    method __init__ (line 81) | def __init__(self, color=_COLOR_GREEN, thickness=1):
    method visualize (line 85) | def visualize(self, image_bgr, bbox_xywh, color=None, thickness=None):
  class PointsVisualizer (line 93) | class PointsVisualizer(object):
    method __init__ (line 97) | def __init__(self, color_bgr=_COLOR_GREEN, r=5):
    method visualize (line 101) | def visualize(self, image_bgr, pts_xy, colors_bgr=None, rs=None):
  class TextVisualizer (line 110) | class TextVisualizer(object):
    method __init__ (line 115) | def __init__(
    method visualize (line 139) | def visualize(self, image_bgr, txt, topleft_xy):
    method get_text_size_wh (line 167) | def get_text_size_wh(self, txt):
  class CompoundVisualizer (line 174) | class CompoundVisualizer(object):
    method __init__ (line 175) | def __init__(self, visualizers):
    method visualize (line 178) | def visualize(self, image_bgr, data):
    method __str__ (line 189) | def __str__(self):

FILE: detectron2/projects/DensePose/densepose/vis/bounding_box.py
  class BoundingBoxVisualizer (line 5) | class BoundingBoxVisualizer(object):
    method __init__ (line 6) | def __init__(self):
    method visualize (line 9) | def visualize(self, image_bgr, boxes_xywh):
  class ScoredBoundingBoxVisualizer (line 15) | class ScoredBoundingBoxVisualizer(object):
    method __init__ (line 16) | def __init__(self, bbox_visualizer_params=None, score_visualizer_param...
    method visualize (line 24) | def visualize(self, image_bgr, scored_bboxes):

FILE: detectron2/projects/DensePose/densepose/vis/densepose_data_points.py
  class DensePoseDataCoarseSegmentationVisualizer (line 11) | class DensePoseDataCoarseSegmentationVisualizer(object):
    method __init__ (line 16) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...
    method visualize (line 24) | def visualize(
  class DensePoseDataPointsVisualizer (line 39) | class DensePoseDataPointsVisualizer(object):
    method __init__ (line 40) | def __init__(self, densepose_data_to_value_fn=None, cmap=cv2.COLORMAP_...
    method visualize (line 45) | def visualize(
  function _densepose_data_u_for_cmap (line 69) | def _densepose_data_u_for_cmap(densepose_data):
  function _densepose_data_v_for_cmap (line 74) | def _densepose_data_v_for_cmap(densepose_data):
  function _densepose_data_i_for_cmap (line 79) | def _densepose_data_i_for_cmap(densepose_data):
  class DensePoseDataPointsUVisualizer (line 88) | class DensePoseDataPointsUVisualizer(DensePoseDataPointsVisualizer):
    method __init__ (line 89) | def __init__(self, **kwargs):
  class DensePoseDataPointsVVisualizer (line 95) | class DensePoseDataPointsVVisualizer(DensePoseDataPointsVisualizer):
    method __init__ (line 96) | def __init__(self, **kwargs):
  class DensePoseDataPointsIVisualizer (line 102) | class DensePoseDataPointsIVisualizer(DensePoseDataPointsVisualizer):
    method __init__ (line 103) | def __init__(self, **kwargs):

FILE: detectron2/projects/DensePose/densepose/vis/densepose_outputs_iuv.py
  class DensePoseOutputsVisualizer (line 12) | class DensePoseOutputsVisualizer(object):
    method __init__ (line 13) | def __init__(
    method visualize (line 27) | def visualize(
  class DensePoseOutputsUVisualizer (line 89) | class DensePoseOutputsUVisualizer(DensePoseOutputsVisualizer):
    method __init__ (line 90) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...
  class DensePoseOutputsVVisualizer (line 94) | class DensePoseOutputsVVisualizer(DensePoseOutputsVisualizer):
    method __init__ (line 95) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...
  class DensePoseOutputsFineSegmentationVisualizer (line 99) | class DensePoseOutputsFineSegmentationVisualizer(DensePoseOutputsVisuali...
    method __init__ (line 100) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...

FILE: detectron2/projects/DensePose/densepose/vis/densepose_outputs_vertex.py
  function get_xyz_vertex_embedding (line 22) | def get_xyz_vertex_embedding(mesh_name: str, device: torch.device):
  class DensePoseOutputsVertexVisualizer (line 40) | class DensePoseOutputsVertexVisualizer(object):
    method __init__ (line 41) | def __init__(
    method visualize (line 65) | def visualize(
    method extract_and_check_outputs_and_boxes (line 97) | def extract_and_check_outputs_and_boxes(self, outputs_boxes_xywh_class...
  function get_texture_atlases (line 131) | def get_texture_atlases(json_str: Optional[str]) -> Optional[Dict[str, O...
  class DensePoseOutputsTextureVisualizer (line 142) | class DensePoseOutputsTextureVisualizer(DensePoseOutputsVertexVisualizer):
    method __init__ (line 143) | def __init__(
    method visualize (line 173) | def visualize(
    method generate_image_with_texture (line 217) | def generate_image_with_texture(self, bbox_image_bgr, uv_array, mask, ...

FILE: detectron2/projects/DensePose/densepose/vis/densepose_results.py
  class DensePoseResultsVisualizer (line 14) | class DensePoseResultsVisualizer(object):
    method visualize (line 15) | def visualize(
    method create_visualization_context (line 34) | def create_visualization_context(self, image_bgr: Image):
    method visualize_iuv_arr (line 37) | def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh) -...
    method context_to_image_bgr (line 40) | def context_to_image_bgr(self, context):
    method get_image_bgr_from_context (line 43) | def get_image_bgr_from_context(self, context):
  class DensePoseMaskedColormapResultsVisualizer (line 47) | class DensePoseMaskedColormapResultsVisualizer(DensePoseResultsVisualizer):
    method __init__ (line 48) | def __init__(
    method context_to_image_bgr (line 64) | def context_to_image_bgr(self, context):
    method visualize_iuv_arr (line 67) | def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh) -...
  function _extract_i_from_iuvarr (line 76) | def _extract_i_from_iuvarr(iuv_arr):
  function _extract_u_from_iuvarr (line 80) | def _extract_u_from_iuvarr(iuv_arr):
  function _extract_v_from_iuvarr (line 84) | def _extract_v_from_iuvarr(iuv_arr):
  class DensePoseResultsMplContourVisualizer (line 88) | class DensePoseResultsMplContourVisualizer(DensePoseResultsVisualizer):
    method __init__ (line 89) | def __init__(self, levels=10, **kwargs):
    method create_visualization_context (line 93) | def create_visualization_context(self, image_bgr: Image):
    method context_to_image_bgr (line 112) | def context_to_image_bgr(self, context):
    method visualize_iuv_arr (line 122) | def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: B...
  class DensePoseResultsCustomContourVisualizer (line 137) | class DensePoseResultsCustomContourVisualizer(DensePoseResultsVisualizer):
    method __init__ (line 142) | def __init__(self, levels=10, **kwargs):
    method visualize_iuv_arr (line 159) | def visualize_iuv_arr(self, context, iuv_arr: np.ndarray, bbox_xywh: B...
    method _contours (line 167) | def _contours(self, image_bgr, arr, segm, bbox_xywh):
    method _draw_line (line 213) | def _draw_line(
    method _bin_code_2_lines (line 239) | def _bin_code_2_lines(self, arr, v, bin_code, multi_idx, Nw, Nh, offset):
  class DensePoseResultsFineSegmentationVisualizer (line 319) | class DensePoseResultsFineSegmentationVisualizer(DensePoseMaskedColormap...
    method __init__ (line 320) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...
  class DensePoseResultsUVisualizer (line 332) | class DensePoseResultsUVisualizer(DensePoseMaskedColormapResultsVisualiz...
    method __init__ (line 333) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...
  class DensePoseResultsVVisualizer (line 345) | class DensePoseResultsVVisualizer(DensePoseMaskedColormapResultsVisualiz...
    method __init__ (line 346) | def __init__(self, inplace=True, cmap=cv2.COLORMAP_PARULA, alpha=0.7, ...

FILE: detectron2/projects/DensePose/densepose/vis/densepose_results_textures.py
  function get_texture_atlas (line 13) | def get_texture_atlas(path: Optional[str]) -> Optional[np.ndarray]:
  class DensePoseResultsVisualizerWithTexture (line 27) | class DensePoseResultsVisualizerWithTexture(DensePoseResultsVisualizer):
    method __init__ (line 34) | def __init__(self, texture_atlas, **kwargs):
    method visualize (line 39) | def visualize(
    method get_texture (line 59) | def get_texture(self):
    method generate_image_with_texture (line 76) | def generate_image_with_texture(self, texture_image, alpha, bbox_image...

FILE: detectron2/projects/DensePose/densepose/vis/extractor.py
  function extract_scores_from_instances (line 24) | def extract_scores_from_instances(instances: Instances, select=None):
  function extract_boxes_xywh_from_instances (line 30) | def extract_boxes_xywh_from_instances(instances: Instances, select=None):
  function create_extractor (line 39) | def create_extractor(visualizer: object):
  class BoundingBoxExtractor (line 60) | class BoundingBoxExtractor(object):

Condensed preview — 967 files, each entry showing the file path, character count, and a truncated content snippet (6,142K chars total).
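Each entry in the array below is a small JSON object with `"path"`, `"chars"`, and `"preview"` fields. As a minimal sketch of how this condensed format could be loaded and queried (the two sample entries are excerpted from the listing below; any tooling around them is illustrative, not part of this dump):

```python
import json

# Two sample entries copied from the preview array below (previews shortened).
raw = """
[
  {"path": "README.md", "chars": 4616, "preview": "# VLDet: Learning Object-Language Alignments..."},
  {"path": "demo.py", "chars": 8215, "preview": "# Copyright (c) Facebook, Inc. and its affiliates."}
]
"""

entries = json.loads(raw)

# Index entries by path for quick lookup.
by_path = {e["path"]: e for e in entries}

print(by_path["demo.py"]["chars"])       # → 8215 (size of demo.py in characters)
print(sum(e["chars"] for e in entries))  # → 12831 (total characters across entries)
```

Note that `"chars"` counts characters of the full file, while `"preview"` is truncated, so the two fields will generally disagree in length.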
[
  {
    "path": ".gitignore",
    "chars": 625,
    "preview": "/CenterNet2\nCenterNet2\nmodels\nconfigs-experimental\nexperiments\n# output dir\nindex.html\ndata/*\nslurm/\nslurm\nslurm-output\n"
  },
  {
    "path": "LICENSE",
    "chars": 21107,
    "preview": "VLDet (c) by Chuang Lin\n\nVLDet is licensed under a\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 Internation"
  },
  {
    "path": "README.md",
    "chars": 4616,
    "preview": "# VLDet: Learning Object-Language Alignments for Open-Vocabulary Object Detection\n\n<p align=\"center\"> <img src='docs/rea"
  },
  {
    "path": "configs/Base-C2_L_R5021k_640b64_4x.yaml",
    "chars": 1905,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"CustomRCNN\"\n  MASK_ON: True\n  PROPOSAL_GENERATOR:\n    NAME: \"CenterNet\"\n  WEIGHTS: \"models/"
  },
  {
    "path": "configs/Base_OVCOCO_C4_1x.yaml",
    "chars": 1009,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"CustomRCNN\"\n  RPN:\n    PRE_NMS_TOPK_TEST: 6000\n    POST_NMS_TOPK_TEST: 1000\n  ROI_HEADS:\n  "
  },
  {
    "path": "configs/BoxSup-C2_Lbase_CLIP_R5021k_640b64.yaml",
    "chars": 415,
    "preview": "_BASE_: \"Base-C2_L_R5021k_640b64_4x.yaml\"\nMODEL:\n  WITH_CAPTION: False\n  ROI_BOX_HEAD:\n    USE_ZEROSHOT_CLS: True\n    ZE"
  },
  {
    "path": "configs/BoxSup-C2_Lbase_CLIP_SwinB_896b32.yaml",
    "chars": 673,
    "preview": "_BASE_: \"Base-C2_L_R5021k_640b64_4x.yaml\"\nMODEL:\n  ROI_BOX_HEAD:\n    USE_ZEROSHOT_CLS: True\n    ZEROSHOT_WEIGHT_PATH: 'd"
  },
  {
    "path": "configs/BoxSup_OVCOCO_CLIP_R50_1x.yaml",
    "chars": 33,
    "preview": "_BASE_: \"Base_OVCOCO_C4_1x.yaml\"\n"
  },
  {
    "path": "configs/VLDet_LbaseCCcap_CLIP_R5021k_640b64_2x_ft4x_caption.yaml",
    "chars": 1229,
    "preview": "_BASE_: \"Base-C2_L_R5021k_640b64_4x.yaml\"\nMODEL:\n  WITH_CAPTION: True\n  SYNC_CAPTION_BATCH: False\n  ROI_BOX_HEAD:\n    AD"
  },
  {
    "path": "configs/VLDet_LbaseI_CLIP_SwinB_896b32_2x_ft4x_caption.yaml",
    "chars": 1266,
    "preview": "_BASE_: \"Base-C2_L_R5021k_640b64_4x.yaml\"\nMODEL:\n  WITH_CAPTION: True\n  ROI_BOX_HEAD:\n    ADD_IMAGE_BOX: True \n    USE_Z"
  },
  {
    "path": "configs/VLDet_OVCOCO_CLIP_R50_1x_caption.yaml",
    "chars": 1455,
    "preview": "_BASE_: \"Base_OVCOCO_C4_1x.yaml\"\nMODEL:\n  SHARE_PROJ_V_DIM: 2048\n  WEIGHTS: \"models/coco_base.pth\"\n  WITH_CAPTION: True\n"
  },
  {
    "path": "demo.py",
    "chars": 8215,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport argparse\nimport glob\nimport multiprocessing as mp\nimport numpy"
  },
  {
    "path": "detectron2/.circleci/config.yml",
    "chars": 8772,
    "preview": "version: 2.1\n\n# -------------------------------------------------------------------------------------\n# Environments to "
  },
  {
    "path": "detectron2/.circleci/import-tests.sh",
    "chars": 579,
    "preview": "#!/bin/bash -e\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n# Test that import works without building detectron2."
  },
  {
    "path": "detectron2/.clang-format",
    "chars": 2544,
    "preview": "AccessModifierOffset: -1\nAlignAfterOpenBracket: AlwaysBreak\nAlignConsecutiveAssignments: false\nAlignConsecutiveDeclarati"
  },
  {
    "path": "detectron2/.flake8",
    "chars": 487,
    "preview": "# This is an example .flake8 config, used when developing *Black* itself.\n# Keep in sync with setup.cfg which is used fo"
  },
  {
    "path": "detectron2/GETTING_STARTED.md",
    "chars": 3224,
    "preview": "## Getting Started with Detectron2\n\nThis document provides a brief intro of the usage of builtin command-line tools in d"
  },
  {
    "path": "detectron2/INSTALL.md",
    "chars": 12334,
    "preview": "## Installation\n\n### Requirements\n- Linux or macOS with Python ≥ 3.7\n- PyTorch ≥ 1.8 and [torchvision](https://github.co"
  },
  {
    "path": "detectron2/LICENSE",
    "chars": 10256,
    "preview": "Apache License\nVersion 2.0, January 2004\nhttp://www.apache.org/licenses/\n\nTERMS AND CONDITIONS FOR USE, REPRODUCTION, AN"
  },
  {
    "path": "detectron2/MODEL_ZOO.md",
    "chars": 57938,
    "preview": "# Detectron2 Model Zoo and Baselines\n\n## Introduction\n\nThis file documents a large collection of baselines trained\nwith "
  },
  {
    "path": "detectron2/README.md",
    "chars": 3081,
    "preview": "<img src=\".github/Detectron2-Logo-Horz.svg\" width=\"300\" >\n\n<a href=\"https://opensource.facebook.com/support-ukraine\">\n  "
  },
  {
    "path": "detectron2/configs/Base-RCNN-C4.yaml",
    "chars": 368,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  RPN:\n    PRE_NMS_TOPK_TEST: 6000\n    POST_NMS_TOPK_TEST: 1000\n  ROI_HEAD"
  },
  {
    "path": "detectron2/configs/Base-RCNN-DilatedC5.yaml",
    "chars": 665,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  RESNETS:\n    OUT_FEATURES: [\"res5\"]\n    RES5_DILATION: 2\n  RPN:\n    IN_F"
  },
  {
    "path": "detectron2/configs/Base-RCNN-FPN.yaml",
    "chars": 1318,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"GeneralizedRCNN\"\n  BACKBONE:\n    NAME: \"build_resnet_fpn_backbone\"\n  RESNETS:\n    OUT_FEATU"
  },
  {
    "path": "detectron2/configs/Base-RetinaNet.yaml",
    "chars": 720,
    "preview": "MODEL:\n  META_ARCHITECTURE: \"RetinaNet\"\n  BACKBONE:\n    NAME: \"build_retinanet_resnet_fpn_backbone\"\n  RESNETS:\n    OUT_F"
  },
  {
    "path": "detectron2/configs/COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml",
    "chars": 628,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  LOA"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_101_C4_3x.yaml",
    "chars": 194,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: False\n  RES"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_101_DC5_3x.yaml",
    "chars": 201,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: Fals"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml",
    "chars": 195,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: False\n  RE"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_C4_1x.yaml",
    "chars": 139,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_C4_3x.yaml",
    "chars": 192,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_DC5_1x.yaml",
    "chars": 146,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_DC5_3x.yaml",
    "chars": 199,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml",
    "chars": 140,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RES"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml",
    "chars": 193,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RES"
  },
  {
    "path": "detectron2/configs/COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml",
    "chars": 328,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  MASK_ON: False\n  WEIGHTS: \"detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"
  },
  {
    "path": "detectron2/configs/COCO-Detection/fcos_R_50_FPN_1x.py",
    "chars": 410,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-Detection/retinanet_R_101_FPN_3x.yaml",
    "chars": 179,
    "preview": "_BASE_: \"../Base-RetinaNet.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  RESNETS:\n    DEPTH"
  },
  {
    "path": "detectron2/configs/COCO-Detection/retinanet_R_50_FPN_1x.py",
    "chars": 415,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml",
    "chars": 124,
    "preview": "_BASE_: \"../Base-RetinaNet.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    DEPTH:"
  },
  {
    "path": "detectron2/configs/COCO-Detection/retinanet_R_50_FPN_3x.yaml",
    "chars": 177,
    "preview": "_BASE_: \"../Base-RetinaNet.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    DEPTH:"
  },
  {
    "path": "detectron2/configs/COCO-Detection/rpn_R_50_C4_1x.yaml",
    "chars": 243,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"ProposalNetwork\"\n  WEIGHTS: \"detectron2://ImageNetPretrained"
  },
  {
    "path": "detectron2/configs/COCO-Detection/rpn_R_50_FPN_1x.yaml",
    "chars": 215,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"ProposalNetwork\"\n  WEIGHTS: \"detectron2://ImageNetPretraine"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x.yaml",
    "chars": 193,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x.yaml",
    "chars": 200,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: True"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml",
    "chars": 194,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: True\n  RES"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.py",
    "chars": 337,
    "preview": "from ..common.train import train\nfrom ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multi"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x.yaml",
    "chars": 138,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESNE"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml",
    "chars": 191,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESNE"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x.yaml",
    "chars": 145,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x.yaml",
    "chars": 198,
    "preview": "_BASE_: \"../Base-RCNN-DilatedC5.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py",
    "chars": 348,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml",
    "chars": 139,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x_giou.yaml",
    "chars": 285,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml",
    "chars": 192,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml",
    "chars": 327,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  MASK_ON: True\n  WEIGHTS: \"detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl\""
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py",
    "chars": 1206,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_regnety_4gf_dds_fpn_1x.py",
    "chars": 1226,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/Base-Keypoint-RCNN-FPN.yaml",
    "chars": 527,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  KEYPOINT_ON: True\n  ROI_HEADS:\n    NUM_CLASSES: 1\n  ROI_BOX_HEAD:\n    SMOOTH_L1"
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml",
    "chars": 184,
    "preview": "_BASE_: \"Base-Keypoint-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  RESNETS:\n    "
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.py",
    "chars": 361,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.yaml",
    "chars": 129,
    "preview": "_BASE_: \"Base-Keypoint-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    D"
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml",
    "chars": 182,
    "preview": "_BASE_: \"Base-Keypoint-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    D"
  },
  {
    "path": "detectron2/configs/COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x.yaml",
    "chars": 317,
    "preview": "_BASE_: \"Base-Keypoint-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl\"\n  PIXEL_S"
  },
  {
    "path": "detectron2/configs/COCO-PanopticSegmentation/Base-Panoptic-FPN.yaml",
    "chars": 278,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"PanopticFPN\"\n  MASK_ON: True\n  SEM_SEG_HEAD:\n    LOSS_WEIGH"
  },
  {
    "path": "detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml",
    "chars": 179,
    "preview": "_BASE_: \"Base-Panoptic-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  RESNETS:\n    DEPTH"
  },
  {
    "path": "detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.py",
    "chars": 366,
    "preview": "from ..common.optim import SGD as optimizer\nfrom ..common.coco_schedule import lr_multiplier_1x as lr_multiplier\nfrom .."
  },
  {
    "path": "detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.yaml",
    "chars": 124,
    "preview": "_BASE_: \"Base-Panoptic-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    DEPTH:"
  },
  {
    "path": "detectron2/configs/COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml",
    "chars": 177,
    "preview": "_BASE_: \"Base-Panoptic-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  RESNETS:\n    DEPTH:"
  },
  {
    "path": "detectron2/configs/Cityscapes/mask_rcnn_R_50_FPN.yaml",
    "chars": 889,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  # WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  # For better, more"
  },
  {
    "path": "detectron2/configs/Detectron1-Comparisons/README.md",
    "chars": 4266,
    "preview": "\nDetectron2 model zoo's experimental settings and a few implementation details are different from Detectron.\n\nThe differ"
  },
  {
    "path": "detectron2/configs/Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x.yaml",
    "chars": 449,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RES"
  },
  {
    "path": "detectron2/configs/Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x.yaml",
    "chars": 843,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  KEYPOINT_ON: True\n  "
  },
  {
    "path": "detectron2/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml",
    "chars": 522,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x.yaml",
    "chars": 473,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: True\n  RES"
  },
  {
    "path": "detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml",
    "chars": 471,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x.yaml",
    "chars": 606,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl\"\n  PIXEL_STD: [5"
  },
  {
    "path": "detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_R_101_FPN_1x.yaml",
    "chars": 560,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-101.pkl\"\n  MASK_ON: True\n  RES"
  },
  {
    "path": "detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml",
    "chars": 558,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/LVISv1-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x.yaml",
    "chars": 693,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl\"\n  PIXEL_STD: [5"
  },
  {
    "path": "detectron2/configs/Misc/cascade_mask_rcnn_R_50_FPN_1x.yaml",
    "chars": 263,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml",
    "chars": 316,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv.yaml",
    "chars": 768,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  MASK_ON: True\n  WEIGHTS: \"catalog://ImageNetPretrained/FAIR/X-152-32x8d-IN5k\"\n "
  },
  {
    "path": "detectron2/configs/Misc/mask_rcnn_R_50_FPN_1x_cls_agnostic.yaml",
    "chars": 232,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5.yaml",
    "chars": 238,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5.yaml",
    "chars": 291,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_gn.yaml",
    "chars": 390,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"catalog://ImageNetPretrained/FAIR/R-50-GN\"\n  MASK_ON: True\n  RESNETS:"
  },
  {
    "path": "detectron2/configs/Misc/mask_rcnn_R_50_FPN_3x_syncbn.yaml",
    "chars": 447,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/mmdet_mask_rcnn_R_50_FPN_1x.py",
    "chars": 5358,
    "preview": "# An example config to train a mmdetection model using detectron2.\n\nfrom ..common.data.coco import dataloader\nfrom ..com"
  },
  {
    "path": "detectron2/configs/Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x.yaml",
    "chars": 649,
    "preview": "# A large PanopticFPN for demo purposes.\n# Use GN on backbone to support semantic seg.\n# Use Cascade + Deform Conv to im"
  },
  {
    "path": "detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml",
    "chars": 516,
    "preview": "_BASE_: \"mask_rcnn_R_50_FPN_3x_gn.yaml\"\nMODEL:\n  # Train from random initialization.\n  WEIGHTS: \"\"\n  # It makes sense to"
  },
  {
    "path": "detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_gn.yaml",
    "chars": 523,
    "preview": "_BASE_: \"mask_rcnn_R_50_FPN_3x_gn.yaml\"\nMODEL:\n  PIXEL_STD: [57.375, 57.12, 58.395]\n  WEIGHTS: \"\"\n  MASK_ON: True\n  RESN"
  },
  {
    "path": "detectron2/configs/Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn.yaml",
    "chars": 527,
    "preview": "_BASE_: \"mask_rcnn_R_50_FPN_3x_syncbn.yaml\"\nMODEL:\n  PIXEL_STD: [57.375, 57.12, 58.395]\n  WEIGHTS: \"\"\n  MASK_ON: True\n  "
  },
  {
    "path": "detectron2/configs/Misc/semantic_R_50_FPN_1x.yaml",
    "chars": 325,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"SemanticSegmentor\"\n  WEIGHTS: \"detectron2://ImageNetPretrai"
  },
  {
    "path": "detectron2/configs/Misc/torchvision_imagenet_R_50.py",
    "chars": 4497,
    "preview": "\"\"\"\nAn example config file to train a ImageNet classifier with detectron2.\nModel and dataloader both come from torchvisi"
  },
  {
    "path": "detectron2/configs/PascalVOC-Detection/faster_rcnn_R_50_C4.yaml",
    "chars": 448,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RESN"
  },
  {
    "path": "detectron2/configs/PascalVOC-Detection/faster_rcnn_R_50_FPN.yaml",
    "chars": 449,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: False\n  RES"
  },
  {
    "path": "detectron2/configs/common/README.md",
    "chars": 371,
    "preview": "This directory provides definitions for a few common models, dataloaders, scheduler,\nand optimizers that are often used "
  },
  {
    "path": "detectron2/configs/common/coco_schedule.py",
    "chars": 1657,
    "preview": "from fvcore.common.param_scheduler import MultiStepParamScheduler\n\nfrom detectron2.config import LazyCall as L\nfrom dete"
  },
  {
    "path": "detectron2/configs/common/data/coco.py",
    "chars": 1377,
    "preview": "from omegaconf import OmegaConf\n\nimport detectron2.data.transforms as T\nfrom detectron2.config import LazyCall as L\nfrom"
  },
  {
    "path": "detectron2/configs/common/data/coco_keypoint.py",
    "chars": 444,
    "preview": "from detectron2.data.detection_utils import create_keypoint_hflip_indices\n\nfrom .coco import dataloader\n\ndataloader.trai"
  },
  {
    "path": "detectron2/configs/common/data/coco_panoptic_separated.py",
    "chars": 659,
    "preview": "from detectron2.config import LazyCall as L\nfrom detectron2.evaluation import (\n    COCOEvaluator,\n    COCOPanopticEvalu"
  },
  {
    "path": "detectron2/configs/common/data/constants.py",
    "chars": 437,
    "preview": "constants = dict(\n    imagenet_rgb256_mean=[123.675, 116.28, 103.53],\n    imagenet_rgb256_std=[58.395, 57.12, 57.375],\n "
  },
  {
    "path": "detectron2/configs/common/optim.py",
    "chars": 707,
    "preview": "import torch\n\nfrom detectron2.config import LazyCall as L\nfrom detectron2.solver.build import get_default_optimizer_para"
  },
  {
    "path": "detectron2/configs/common/train.py",
    "chars": 634,
    "preview": "# Common training-related configs that are designed for \"tools/lazyconfig_train_net.py\"\n# You can use your own instead, "
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ.py",
    "chars": 163,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\nm"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ.py",
    "chars": 323,
    "preview": "from .mask_rcnn_R_101_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\n"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py",
    "chars": 323,
    "preview": "from .mask_rcnn_R_101_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\n"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ.py",
    "chars": 2515,
    "preview": "import detectron2.data.transforms as T\nfrom detectron2.config.lazy import LazyCall as L\nfrom detectron2.layers.batch_nor"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py",
    "chars": 322,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\nt"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ.py",
    "chars": 322,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\nt"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_R_50_FPN_50ep_LSJ.py",
    "chars": 323,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\n\nt"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py",
    "chars": 829,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\nfr"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ.py",
    "chars": 333,
    "preview": "from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    "
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py",
    "chars": 333,
    "preview": "from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    "
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ.py",
    "chars": 848,
    "preview": "from .mask_rcnn_R_50_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    train,\n)\nfr"
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ.py",
    "chars": 333,
    "preview": "from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    "
  },
  {
    "path": "detectron2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py",
    "chars": 333,
    "preview": "from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import (\n    dataloader,\n    lr_multiplier,\n    model,\n    optimizer,\n    "
  },
  {
    "path": "detectron2/configs/quick_schedules/README.md",
    "chars": 572,
    "preview": "These are quick configs for performance or accuracy regression tracking purposes.\n\n* `*instance_test.yaml`: can train on"
  },
  {
    "path": "detectron2/configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_inference_acc_test.yaml",
    "chars": 281,
    "preview": "_BASE_: \"../Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://Misc/cascade_mask_rcnn_R_50_FPN_3x/"
  },
  {
    "path": "detectron2/configs/quick_schedules/cascade_mask_rcnn_R_50_FPN_instant_test.yaml",
    "chars": 229,
    "preview": "_BASE_: \"../Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml\"\nDATASETS:\n  TRAIN: (\"coco_2017_val_100\",)\n  TEST: (\"coco_2017_val_1"
  },
  {
    "path": "detectron2/configs/quick_schedules/fast_rcnn_R_50_FPN_inference_acc_test.yaml",
    "chars": 255,
    "preview": "_BASE_: \"../COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-Detection/fast_rcnn_R_50_FPN"
  },
  {
    "path": "detectron2/configs/quick_schedules/fast_rcnn_R_50_FPN_instant_test.yaml",
    "chars": 542,
    "preview": "_BASE_: \"../COCO-Detection/fast_rcnn_R_50_FPN_1x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\""
  },
  {
    "path": "detectron2/configs/quick_schedules/keypoint_rcnn_R_50_FPN_inference_acc_test.yaml",
    "chars": 307,
    "preview": "_BASE_: \"../COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-Keypoints/keypoint_rcnn_"
  },
  {
    "path": "detectron2/configs/quick_schedules/keypoint_rcnn_R_50_FPN_instant_test.yaml",
    "chars": 346,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  KEYPOINT_ON: True\n  "
  },
  {
    "path": "detectron2/configs/quick_schedules/keypoint_rcnn_R_50_FPN_normalized_training_acc_test.yaml",
    "chars": 842,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  KEYPOINT_ON: True\n  "
  },
  {
    "path": "detectron2/configs/quick_schedules/keypoint_rcnn_R_50_FPN_training_acc_test.yaml",
    "chars": 772,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  KEYPOINT_ON: True\n  "
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_C4_GCV_instant_test.yaml",
    "chars": 368,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\nDATASET"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_C4_inference_acc_test.yaml",
    "chars": 304,
    "preview": "_BASE_: \"../COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-InstanceSegmentati"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_C4_instant_test.yaml",
    "chars": 289,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\nDATASET"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_C4_training_acc_test.yaml",
    "chars": 532,
    "preview": "_BASE_: \"../Base-RCNN-C4.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  ROI_HEADS:\n    BATCH_"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_DC5_inference_acc_test.yaml",
    "chars": 306,
    "preview": "_BASE_: \"../COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-InstanceSegmentat"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_FPN_inference_acc_test.yaml",
    "chars": 445,
    "preview": "_BASE_: \"../COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-InstanceSegmentat"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_FPN_instant_test.yaml",
    "chars": 290,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  MASK_ON: True\nDATASE"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_FPN_pred_boxes_training_acc_test.yaml",
    "chars": 188,
    "preview": "_BASE_: \"./mask_rcnn_R_50_FPN_training_acc_test.yaml\"\nMODEL:\n  ROI_BOX_HEAD:\n    TRAIN_ON_PRED_BOXES: True\nTEST:\n  EXPEC"
  },
  {
    "path": "detectron2/configs/quick_schedules/mask_rcnn_R_50_FPN_training_acc_test.yaml",
    "chars": 495,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\n  ROI_HEADS:\n    BATCH"
  },
  {
    "path": "detectron2/configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml",
    "chars": 394,
    "preview": "_BASE_: \"../COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-PanopticSegmentati"
  },
  {
    "path": "detectron2/configs/quick_schedules/panoptic_fpn_R_50_instant_test.yaml",
    "chars": 425,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"PanopticFPN\"\n  WEIGHTS: \"detectron2://ImageNetPretrained/MS"
  },
  {
    "path": "detectron2/configs/quick_schedules/panoptic_fpn_R_50_training_acc_test.yaml",
    "chars": 566,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"PanopticFPN\"\n  WEIGHTS: \"detectron2://ImageNetPretrained/MS"
  },
  {
    "path": "detectron2/configs/quick_schedules/retinanet_R_50_FPN_inference_acc_test.yaml",
    "chars": 255,
    "preview": "_BASE_: \"../COCO-Detection/retinanet_R_50_FPN_3x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-Detection/retinanet_R_50_FPN"
  },
  {
    "path": "detectron2/configs/quick_schedules/retinanet_R_50_FPN_instant_test.yaml",
    "chars": 297,
    "preview": "_BASE_: \"../COCO-Detection/retinanet_R_50_FPN_1x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\""
  },
  {
    "path": "detectron2/configs/quick_schedules/rpn_R_50_FPN_inference_acc_test.yaml",
    "chars": 257,
    "preview": "_BASE_: \"../COCO-Detection/rpn_R_50_FPN_1x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://COCO-Detection/rpn_R_50_FPN_1x/13725849"
  },
  {
    "path": "detectron2/configs/quick_schedules/rpn_R_50_FPN_instant_test.yaml",
    "chars": 291,
    "preview": "_BASE_: \"../COCO-Detection/rpn_R_50_FPN_1x.yaml\"\nMODEL:\n  WEIGHTS: \"detectron2://ImageNetPretrained/MSRA/R-50.pkl\"\nDATAS"
  },
  {
    "path": "detectron2/configs/quick_schedules/semantic_R_50_FPN_inference_acc_test.yaml",
    "chars": 366,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"SemanticSegmentor\"\n  WEIGHTS: \"detectron2://semantic_R_50_F"
  },
  {
    "path": "detectron2/configs/quick_schedules/semantic_R_50_FPN_instant_test.yaml",
    "chars": 434,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"SemanticSegmentor\"\n  WEIGHTS: \"detectron2://ImageNetPretrai"
  },
  {
    "path": "detectron2/configs/quick_schedules/semantic_R_50_FPN_training_acc_test.yaml",
    "chars": 520,
    "preview": "_BASE_: \"../Base-RCNN-FPN.yaml\"\nMODEL:\n  META_ARCHITECTURE: \"SemanticSegmentor\"\n  WEIGHTS: \"detectron2://ImageNetPretrai"
  },
  {
    "path": "detectron2/datasets/README.md",
    "chars": 4652,
    "preview": "# Use Builtin Datasets\n\nA dataset can be used by accessing [DatasetCatalog](https://detectron2.readthedocs.io/modules/da"
  },
  {
    "path": "detectron2/datasets/prepare_ade20k_sem_seg.py",
    "chars": 895,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\nimport numpy as np\nimp"
  },
  {
    "path": "detectron2/datasets/prepare_cocofied_lvis.py",
    "chars": 7617,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport copy\nimport js"
  },
  {
    "path": "detectron2/datasets/prepare_for_tests.sh",
    "chars": 823,
    "preview": "#!/bin/bash -e\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n# Download the mini dataset (coco val2017_100, with o"
  },
  {
    "path": "detectron2/datasets/prepare_panoptic_fpn.py",
    "chars": 4351,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport functools\nimpo"
  },
  {
    "path": "detectron2/demo/README.md",
    "chars": 327,
    "preview": "\n## Detectron2 Demo\n\nWe provide a command line tool to run a simple demo of builtin configs.\nThe usage is explained in ["
  },
  {
    "path": "detectron2/demo/demo.py",
    "chars": 7078,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport argparse\nimport glob\nimport multiprocessing as mp\nimport numpy"
  },
  {
    "path": "detectron2/demo/predictor.py",
    "chars": 7844,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport atexit\nimport bisect\nimport multiprocessing as mp\nfrom collect"
  },
  {
    "path": "detectron2/detectron2/__init__.py",
    "chars": 258,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n\nfrom .utils.env import setup_environment\n\nsetup_environment()\n\n\n# Th"
  },
  {
    "path": "detectron2/detectron2/checkpoint/__init__.py",
    "chars": 347,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n# File:\n\n\nfrom . import catalog as _UNUSED  #"
  },
  {
    "path": "detectron2/detectron2/checkpoint/c2_model_loading.py",
    "chars": 17770,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport copy\nimport logging\nimport re\nfrom typing import Dict, List\nim"
  },
  {
    "path": "detectron2/detectron2/checkpoint/catalog.py",
    "chars": 5685,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport logging\n\nfrom detectron2.utils.file_io import PathHandler, Pat"
  },
  {
    "path": "detectron2/detectron2/checkpoint/detection_checkpoint.py",
    "chars": 5258,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport logging\nimport os\nimport pickle\nimport torch\nfrom fvcore.commo"
  },
  {
    "path": "detectron2/detectron2/config/__init__.py",
    "chars": 599,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .compat import downgrade_config, upgrade_config\nfrom .config imp"
  },
  {
    "path": "detectron2/detectron2/config/compat.py",
    "chars": 7890,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n\"\"\"\nBackward compatibility of configs.\n\nInstructions to bump version:"
  },
  {
    "path": "detectron2/detectron2/config/config.py",
    "chars": 9211,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport functools\nimport inspect\nimport loggi"
  },
  {
    "path": "detectron2/detectron2/config/defaults.py",
    "chars": 29512,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .config import CfgNode as CN\n\n# NOTE: given the new config syste"
  },
  {
    "path": "detectron2/detectron2/config/instantiate.py",
    "chars": 3015,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport collections.abc as abc\nimport dataclasses\nimport logging\nfrom"
  },
  {
    "path": "detectron2/detectron2/config/lazy.py",
    "chars": 15378,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport ast\nimport builtins\nimport collections.abc as abc\nimport impo"
  },
  {
    "path": "detectron2/detectron2/data/__init__.py",
    "chars": 644,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom . import transforms  # isort:skip\n\nfrom .build import (\n    buil"
  },
  {
    "path": "detectron2/detectron2/data/benchmark.py",
    "chars": 7378,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport logging\nimport numpy as np\nfrom itertools import count\nfrom ty"
  },
  {
    "path": "detectron2/detectron2/data/build.py",
    "chars": 21231,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport itertools\nimport logging\nimport numpy as np\nimport operator\nim"
  },
  {
    "path": "detectron2/detectron2/data/catalog.py",
    "chars": 7224,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport copy\nimport logging\nimport types\nfrom collections import UserD"
  },
  {
    "path": "detectron2/detectron2/data/common.py",
    "chars": 9162,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport copy\nimport itertools\nimport logging\nimport numpy as np\nimport"
  },
  {
    "path": "detectron2/detectron2/data/dataset_mapper.py",
    "chars": 8169,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport copy\nimport logging\nimport numpy as np\nfrom typing import List"
  },
  {
    "path": "detectron2/detectron2/data/datasets/README.md",
    "chars": 347,
    "preview": "\n\n### Common Datasets\n\nThe dataset implemented here do not need to load the data into the final format.\nIt should provid"
  },
  {
    "path": "detectron2/detectron2/data/datasets/__init__.py",
    "chars": 523,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .coco import load_coco_json, load_sem_seg, register_coco_instanc"
  },
  {
    "path": "detectron2/detectron2/data/datasets/builtin.py",
    "chars": 10174,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n\n\"\"\"\nThis file registers pre-defined dataset"
  },
  {
    "path": "detectron2/detectron2/data/datasets/builtin_meta.py",
    "chars": 21841,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n\"\"\"\nNote:\nFor your custom dataset, there is "
  },
  {
    "path": "detectron2/detectron2/data/datasets/cityscapes.py",
    "chars": 13167,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport functools\nimport json\nimport logging\nimport multiprocessing as"
  },
  {
    "path": "detectron2/detectron2/data/datasets/cityscapes_panoptic.py",
    "chars": 7821,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport json\nimport logging\nimport os\n\nfrom detectron2.data import Dat"
  },
  {
    "path": "detectron2/detectron2/data/datasets/coco.py",
    "chars": 23465,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport contextlib\nimport datetime\nimport io\nimport json\nimport loggin"
  },
  {
    "path": "detectron2/detectron2/data/datasets/coco_panoptic.py",
    "chars": 8977,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport copy\nimport json\nimport os\n\nfrom detectron2.data import Datase"
  },
  {
    "path": "detectron2/detectron2/data/datasets/lvis.py",
    "chars": 9623,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport logging\nimport os\nfrom fvcore.common.timer import Timer\n\nfrom "
  },
  {
    "path": "detectron2/detectron2/data/datasets/lvis_v0_5_categories.py",
    "chars": 223757,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n# Autogen with\n# with open(\"lvis_v0.5_val.json\", \"r\") as f:\n#     a ="
  },
  {
    "path": "detectron2/detectron2/data/datasets/lvis_v1_categories.py",
    "chars": 219177,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n# Autogen with\n# with open(\"lvis_v1_val.json\", \"r\") as f:\n#     a = j"
  },
  {
    "path": "detectron2/detectron2/data/datasets/lvis_v1_category_image_count.py",
    "chars": 39414,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n# Autogen with\n# with open(\"lvis_v1_train.json\", \"r\") as f:\n#     a ="
  },
  {
    "path": "detectron2/detectron2/data/datasets/pascal_voc.py",
    "chars": 3128,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport numpy as np\nimport os\nimport xml.etre"
  },
  {
    "path": "detectron2/detectron2/data/datasets/register_coco.py",
    "chars": 169,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .coco import register_coco_instances  # noqa\nfrom .coco_panoptic"
  },
  {
    "path": "detectron2/detectron2/data/detection_utils.py",
    "chars": 22841,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n\"\"\"\nCommon data processing utilities that ar"
  },
  {
    "path": "detectron2/detectron2/data/samplers/__init__.py",
    "chars": 412,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .distributed_sampler import (\n    InferenceSampler,\n    RandomSu"
  },
  {
    "path": "detectron2/detectron2/data/samplers/distributed_sampler.py",
    "chars": 11789,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport itertools\nimport logging\nimport math\nfrom collections import d"
  },
  {
    "path": "detectron2/detectron2/data/samplers/grouped_batch_sampler.py",
    "chars": 1944,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport numpy as np\nfrom torch.utils.data.sampler import BatchSampler,"
  },
  {
    "path": "detectron2/detectron2/data/transforms/__init__.py",
    "chars": 466,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom fvcore.transforms.transform import Transform, TransformList  # o"
  },
  {
    "path": "detectron2/detectron2/data/transforms/augmentation.py",
    "chars": 14112,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport inspect\nimport numpy as np\nimport ppr"
  },
  {
    "path": "detectron2/detectron2/data/transforms/augmentation_impl.py",
    "chars": 23069,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\"\"\"\nImplement many useful :class:`Augmentatio"
  },
  {
    "path": "detectron2/detectron2/data/transforms/transform.py",
    "chars": 12629,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n\"\"\"\nSee \"Data Augmentation\" tutorial for an "
  },
  {
    "path": "detectron2/detectron2/engine/__init__.py",
    "chars": 340,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\n\nfrom .launch import *\nfrom .train_loop import *\n\n__all__ = [k for k "
  },
  {
    "path": "detectron2/detectron2/engine/defaults.py",
    "chars": 26862,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\n\"\"\"\nThis file contains components with some "
  },
  {
    "path": "detectron2/detectron2/engine/hooks.py",
    "chars": 25497,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport datetime\nimport itertools\nimport logg"
  },
  {
    "path": "detectron2/detectron2/engine/launch.py",
    "chars": 4089,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport logging\nfrom datetime import timedelta\nimport torch\nimport tor"
  },
  {
    "path": "detectron2/detectron2/engine/train_loop.py",
    "chars": 14818,
    "preview": "# -*- coding: utf-8 -*-\n# Copyright (c) Facebook, Inc. and its affiliates.\n\nimport logging\nimport numpy as np\nimport tim"
  },
  {
    "path": "detectron2/detectron2/evaluation/__init__.py",
    "chars": 671,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nfrom .cityscapes_evaluation import CityscapesInstanceEvaluator, Citys"
  },
  {
    "path": "detectron2/detectron2/evaluation/cityscapes_evaluation.py",
    "chars": 8369,
    "preview": "# Copyright (c) Facebook, Inc. and its affiliates.\nimport glob\nimport logging\nimport numpy as np\nimport os\nimport tempfi"
  }
]

// ... and 767 more files (download for full content)

About this extraction

This page contains the full source code of the clin1223/VLDet GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 967 files (5.6 MB), approximately 1.5M tokens, and a symbol index with 3464 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
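The per-file records in the index above are plain JSON objects with three keys — "path", "chars" (file size in characters), and "preview" (a truncated head of the file) — so they can be consumed programmatically. A minimal sketch (the two sample entries are copied from the index; the variable names are illustrative):

```python
import json

# Each record in the extracted index carries the same three keys:
# "path", "chars" (file size in characters), and "preview" (the
# truncated first line(s) of the file).
index_json = """
[
  {"path": "detectron2/configs/common/optim.py",
   "chars": 707,
   "preview": "import torch"},
  {"path": "detectron2/configs/common/train.py",
   "chars": 634,
   "preview": "# Common training-related configs"}
]
"""

entries = json.loads(index_json)

# Total extracted size of the sampled files, in characters.
total_chars = sum(e["chars"] for e in entries)

# Paths are ordinary strings, so the index can be filtered directly,
# e.g. to pick out Python config files.
py_configs = [e["path"] for e in entries if e["path"].endswith(".py")]

print(total_chars)    # 1341
print(py_configs[0])  # detectron2/configs/common/optim.py
```

The same pattern scales to the full 967-file index once the complete .txt download is parsed.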

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
