[
  {
    "path": ".gitignore",
    "content": "attent*\nlog*\n*.pth\ncheckpoint\n1st\nallmetaclass\nbase*\noutput*\nsplit*\nresult*\n*.sh\n"
  },
  {
    "path": "README.md",
    "content": "## Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning.\r\n\r\nCode for reproducing the results in the following paper, and the code is built on top of [jwyang/faster-rcnn.pytorch](https://github.com/jwyang/faster-rcnn.pytorch)\r\n\r\n**<a href=\"https://yanxp.github.io/metarcnn.html\">Meta R-CNN : Towards General Solver for Instance-level Low-shot Learning</a>**\r\n\r\n<a href=\"https://yanxp.github.io/\">Xiaopeng Yan*</a>,\r\n<a href=\"http://cziliang.com\">Ziliang Chen*</a>,\r\nAnni Xu, Xiaoxi Wang, \r\n<a href=\"https://lemondan.github.io/\">Xiaodan Liang</a>,\r\n<a href=\"http://www.linliang.net/\">Liang Lin</a>\r\n\r\nSun Yat-Sen University, Presented at  *IEEE International Conference on Computer Vision [(ICCV2019)](http://iccv2019.thecvf.com/)*\t\r\n\r\n<p align=center><img width=\"80%\" src=\"demo/metarcnn.png\"/></p>\r\n\r\n### License\r\n\r\nFor Academic Research Use Only!\r\n\r\n### Requirements\r\n\r\n+ python packages\r\n\r\n  + PyTorch = 0.3.1\r\n    \r\n    *This project can not support pytorch 0.4, higher version will not recur results.*\r\n\r\n  + Torchvision >= 0.2.0\r\n\r\n  + cython\r\n\r\n  + pyyaml\r\n\r\n  + easydict\r\n\r\n  + opencv-python\r\n\r\n  + matplotlib\r\n\r\n  + numpy\r\n\r\n  + scipy\r\n\r\n  + tensorboardX\r\n\r\n    You can install above package using ```pip```:\r\n\r\n    ```sh\r\n    pip install Cython easydict matplotlib opencv-python pyyaml scipy\r\n    ```\r\n\r\n+ CUDA 8.0\r\n\r\n+ gcc >= 4.9\r\n\r\n### Misc\r\n\r\nTested on Ubuntu 14.04 with a Titan X GPU (12G) and Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz.\r\n\r\n### Getting Started\r\n\r\nClone the repo:\r\n\r\n```\r\nhttps://github.com/yanxp/MetaR-CNN.git\r\n```\r\n\r\n### Compilation\r\n\r\nCompile the CUDA dependencies:\r\n\r\n```sh\r\ncd {repo_root}/lib\r\nsh make.sh\r\n```\r\nIt will compile all the modules you need, including NMS, ROI_Pooing, ROI_Crop and ROI_Align. 
\r\n\r\n### Data Preparation\r\n\r\nCreate a data folder under the repo,\r\n\r\n```sh\r\ncd {repo_root}\r\nmkdir data\r\n```\r\n**PASCAL_VOC 07+12**: Please follow the instructions in [py-faster-rcnn](https://github.com/rbgirshick/py-faster-rcnn#beyond-the-demo-installation-for-training-and-testing-models) to prepare VOC datasets. Actually, you can refer to any others. After downloading the data, create softlinks in the folder data/.\r\n\r\nplease download the three base classes [splits](https://pan.baidu.com/s/11IxGujTTegLEXFsaiohV_Q)[[GoogleDrive](https://drive.google.com/drive/folders/14gtxnxWokk3eO6Oe5SrEG6_R9Dt6efT8?usp=sharing)] and put them into VOC2007 and VOC2012 ImageSets/Main dirs.\r\n\r\n### Training\r\nWe used [ResNet101](https://www.dropbox.com/s/iev3tkbz5wyyuz9/resnet101_caffe.pth?dl=0) pretrained model on ImageNet in our experiments. Download it and put it into the data/pretrained_model/.\r\n\r\nfor example, if you want to train the first split of base and novel class with meta learning, just run:\r\n\r\n#### the first phase\r\n```sh\r\n$>CUDA_VISIBLE_DEVICES=0 python train_metarcnn.py --dataset pascal_voc_0712 --epochs 21 --bs 4 --nw 8 --log_dir checkpoint --save_dir models/meta/first --meta_type 1 --meta_train True --meta_loss True \r\n```\r\n#### the second phase\r\n```sh\r\n$>CUDA_VISIBLE_DEVICES=0 python train_metarcnn.py --dataset pascal_voc_0712 --epochs 30 --bs 4 --nw 8 --log_dir checkpoint --save_dir models/meta/first --r True --checksession 1 --checkepoch 20 --checkpoint 3081 --phase 2 --shots 10 --meta_train True --meta_loss True --meta_type 1\r\n```\r\n### Testing\r\n\r\nif you want to evaluate the performance of meta trained model, simply run:\r\n```sh\r\n$>CUDA_VISIBLE_DEVICES=0 python test_metarcnn.py --dataset pascal_voc_0712 --net metarcnn --load_dir models/meta/first  --checksession 10 --checkepoch 30 --checkpoint 111 --shots 10  --meta_type 1 --meta_test True --meta_loss True --phase 2\r\n```\r\n\r\nwe provide the part models with 
meta training and without meta training in the following:\r\n[Meta Models](https://pan.baidu.com/s/1N3PW9WTi82lbdURNAz7EFA)[[GoogleDrive](https://drive.google.com/file/d/19gapxklxKCwYIyGszOMhQKNDqYOLeubn/view?usp=sharing)] and [WoMeta Models](https://pan.baidu.com/s/1GkjUJmaOaEWzh3z2fs7ieA)[[GoogleDrive](https://drive.google.com/file/d/1G6xYH9M_bAAqUec1ARufv0ELi_pd7ERj/view?usp=sharing)]\r\n\r\n### Citation\r\n\r\n```\r\n@inproceedings{yan2019meta,\r\n  title={Meta r-cnn: Towards general solver for instance-level low-shot learning},\r\n  author={Yan, Xiaopeng and Chen, Ziliang and Xu, Anni and Wang, Xiaoxi and Liang, Xiaodan and Lin, Liang},\r\n  booktitle={Proceedings of the IEEE International Conference on Computer Vision},\r\n  pages={9577--9586},\r\n  year={2019}\r\n}\r\n```\r\n\r\n### Contact\r\n\r\nIf you have any questions about this repo, please feel free to contact [yanxp3@mail3.sysu.edu.cn](mailto:yanxp3@mail3.sysu.edu.cn).\r\n"
  },
  {
    "path": "_init_paths.py",
    "content": "import os.path as osp\nimport sys\nimport os\n\nif os.listdir('data/cache/'):\n    os.system('rm data/cache/*')\n\ndef add_path(path):\n    if path not in sys.path:\n        sys.path.insert(0, path)\n\nthis_dir = osp.dirname(__file__)\n\n# Add lib to PYTHONPATH\nlib_path = osp.join(this_dir, 'lib')\nadd_path(lib_path)\n\ncoco_path = osp.join(this_dir, 'data', 'coco', 'PythonAPI')\nadd_path(coco_path)\n\nvg_path = osp.join(this_dir, 'data', 'vgapi')\nadd_path(vg_path)\n"
  },
  {
    "path": "cfgs/res101_ms.yml",
    "content": "EXP_DIR: res101\nTRAIN:\n  HAS_RPN: True\n  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True\n  RPN_POSITIVE_OVERLAP: 0.7\n  RPN_BATCHSIZE: 256\n  PROPOSAL_METHOD: gt\n  BG_THRESH_LO: 0.0\n  DISPLAY: 20\n  BATCH_SIZE: 128\n  WEIGHT_DECAY: 0.0001\n  MAX_SIZE: 1000\n  SCALES: [600]\n  DOUBLE_BIAS: False\n  RCNN_BBOX_WEIGHT: 1\nTEST:\n  SCALES: [600]\n  HAS_RPN: True\nPOOLING_SIZE: 7\nPOOLING_MODE: align\nCROP_RESIZE_WITH_MAX_POOL: False\n"
  },
  {
    "path": "cfgs/res50.yml",
    "content": "EXP_DIR: res50\nTRAIN:\n  HAS_RPN: True\n  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True\n  RPN_POSITIVE_OVERLAP: 0.7\n  RPN_BATCHSIZE: 256\n  PROPOSAL_METHOD: gt\n  BG_THRESH_LO: 0.0\n  DISPLAY: 20\n  BATCH_SIZE: 128\n  WEIGHT_DECAY: 0.0001\n  MAX_SIZE: 1000\n  SCALES: [600]\n  DOUBLE_BIAS: False\n  RCNN_BBOX_WEIGHT: 1\nTEST:\n  SCALES: [600]\n  HAS_RPN: True\nPOOLING_SIZE: 7\nPOOLING_MODE: align\nCROP_RESIZE_WITH_MAX_POOL: False\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/get_voc_opts.m",
    "content": "function VOCopts = get_voc_opts(path)\n\ntmp = pwd;\ncd(path);\ntry\n  addpath('VOCcode');\n  VOCinit;\ncatch\n  rmpath('VOCcode');\n  cd(tmp);\n  error(sprintf('VOCcode directory not found under %s', path));\nend\nrmpath('VOCcode');\ncd(tmp);\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/voc_eval.m",
    "content": "function res = voc_eval(path, comp_id, test_set, output_dir)\n\nVOCopts = get_voc_opts(path);\nVOCopts.testset = test_set;\n\nfor i = 1:length(VOCopts.classes)\n  cls = VOCopts.classes{i};\n  res(i) = voc_eval_cls(cls, VOCopts, comp_id, output_dir);\nend\n\nfprintf('\\n~~~~~~~~~~~~~~~~~~~~\\n');\nfprintf('Results:\\n');\naps = [res(:).ap]';\nfprintf('%.1f\\n', aps * 100);\nfprintf('%.1f\\n', mean(aps) * 100);\nfprintf('~~~~~~~~~~~~~~~~~~~~\\n');\n\nfunction res = voc_eval_cls(cls, VOCopts, comp_id, output_dir)\n\ntest_set = VOCopts.testset;\nyear = VOCopts.dataset(4:end);\n\naddpath(fullfile(VOCopts.datadir, 'VOCcode'));\n\nres_fn = sprintf(VOCopts.detrespath, comp_id, cls);\n\nrecall = [];\nprec = [];\nap = 0;\nap_auc = 0;\n\ndo_eval = (str2num(year) <= ) | ~strcmp(test_set, 'test');\nif do_eval\n  % Bug in VOCevaldet requires that tic has been called first\n  tic;\n  [recall, prec, ap] = VOCevaldet(VOCopts, comp_id, cls, true);\n  ap_auc = xVOCap(recall, prec);\n\n  % force plot limits\n  ylim([0 1]);\n  xlim([0 1]);\n\n  print(gcf, '-djpeg', '-r0', ...\n        [output_dir '/' cls '_pr.jpg']);\nend\nfprintf('!!! %s : %.4f %.4f\\n', cls, ap, ap_auc);\n\nres.recall = recall;\nres.prec = prec;\nres.ap = ap;\nres.ap_auc = ap_auc;\n\nsave([output_dir '/' cls '_pr.mat'], ...\n     'res', 'recall', 'prec', 'ap', 'ap_auc');\n\nrmpath(fullfile(VOCopts.datadir, 'VOCcode'));\n"
  },
  {
    "path": "lib/datasets/VOCdevkit-matlab-wrapper/xVOCap.m",
    "content": "function ap = xVOCap(rec,prec)\r\n% From the PASCAL VOC 2011 devkit\r\n\r\nmrec=[0 ; rec ; 1];\r\nmpre=[0 ; prec ; 0];\r\nfor i=numel(mpre)-1:-1:1\r\n    mpre(i)=max(mpre(i),mpre(i+1));\r\nend\r\ni=find(mrec(2:end)~=mrec(1:end-1))+1;\r\nap=sum((mrec(i)-mrec(i-1)).*mpre(i));\r\n"
  },
  {
    "path": "lib/datasets/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------"
  },
  {
    "path": "lib/datasets/coco.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Xinlei Chen\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom datasets.imdb import imdb\nimport datasets.ds_utils as ds_utils\nfrom model.utils.config import cfg\nimport os.path as osp\nimport sys\nimport os\nimport numpy as np\nimport scipy.sparse\nimport scipy.io as sio\nimport pickle\nimport json\nimport uuid\n# COCO API\nfrom pycocotools.coco import COCO\nfrom pycocotools.cocoeval import COCOeval\nfrom pycocotools import mask as COCOmask\n\nclass coco(imdb):\n  def __init__(self, image_set, year):\n    imdb.__init__(self, 'coco_' + year + '_' + image_set)\n    # COCO specific config options\n    self.config = {'use_salt': True,\n                   'cleanup': True}\n    # name, paths\n    self._year = year\n    self._image_set = image_set\n    self._data_path = osp.join(cfg.DATA_DIR, 'coco'+self._year)\n    # load COCO API, classes, class <-> id mappings\n    self._COCO = COCO(self._get_ann_file())\n    cats = self._COCO.loadCats(self._COCO.getCatIds())\n    self._classes = tuple(['__background__'] + [c['name'] for c in cats])\n    self._class_to_ind = dict(list(zip(self.classes, list(range(self.num_classes)))))\n    self._class_to_coco_cat_id = dict(list(zip([c['name'] for c in cats],\n                                               self._COCO.getCatIds())))\n    self._image_index = self._load_image_set_index()\n    # Default to roidb handler\n    self.set_proposal_method('gt')\n    self.competition_mode(False)\n\n    # Some image sets are \"views\" (i.e. 
subsets) into others.\n    # For example, minival2014 is a random 5000 image subset of val2014.\n    # This mapping tells us where the view's images and proposals come from.\n    self._view_map = {\n      'minival2014': 'val2014',  # 5k val2014 subset\n      'valminusminival2014': 'val2014',  # val2014 \\setminus minival2014\n      'test-dev2015': 'test2015',\n      'valminuscapval2014': 'val2014',\n      'capval2014': 'val2014',\n      'captest2014': 'val2014'\n    }\n    coco_name = image_set + year  # e.g., \"val2014\"\n    self._data_name = (self._view_map[coco_name]\n                       if coco_name in self._view_map\n                       else coco_name)\n    # Dataset splits that have ground-truth annotations (test splits\n    # do not have gt annotations)\n    self._gt_splits = ('train', 'val', 'minival')\n\n  def _get_ann_file(self):\n    prefix = 'instances' if self._image_set.find('test') == -1 \\\n      else 'image_info'\n    return osp.join(self._data_path, 'annotations',\n                    prefix + '_' + self._image_set + self._year + '.json')\n\n  def _load_image_set_index(self):\n    \"\"\"\n    Load image ids.\n    \"\"\"\n    image_ids = self._COCO.getImgIds()\n    return image_ids\n\n  def _get_widths(self):\n    anns = self._COCO.loadImgs(self._image_index)\n    widths = [ann['width'] for ann in anns]\n    return widths\n\n  def image_path_at(self, i):\n    \"\"\"\n    Return the absolute path to image i in the image sequence.\n    \"\"\"\n    return self.image_path_from_index(self._image_index[i])\n\n  def image_id_at(self, i):\n    \"\"\"\n    Return the id of image i in the image sequence.\n    \"\"\"\n    return self._image_index[i]\n\n  def image_path_from_index(self, index):\n    \"\"\"\n    Construct an image path from the image's \"index\" identifier.\n    \"\"\"\n    # Example image path for index=119993:\n    #   images/train2014/COCO_train2014_000000119993.jpg\n    if self._year == '2017':\n      file_name = 
str(index).zfill(12) + '.jpg'\n    elif self._year == '2014':\n      file_name = ('COCO_' + self._data_name + '_' +\n                   str(index).zfill(12) + '.jpg')\n    image_path = osp.join(self._data_path, 'images',\n                          self._data_name, file_name)\n    assert osp.exists(image_path), \\\n      'Path does not exist: {}'.format(image_path)\n    return image_path\n\n  def gt_roidb(self):\n    \"\"\"\n    Return the database of ground-truth regions of interest.\n    This function loads/saves from/to a cache file to speed up future calls.\n    \"\"\"\n    cache_file = osp.join(self.cache_path, self.name + '_gt_roidb.pkl')\n    if osp.exists(cache_file):\n      with open(cache_file, 'rb') as fid:\n        roidb = pickle.load(fid)\n      print('{} gt roidb loaded from {}'.format(self.name, cache_file))\n      return roidb\n\n    gt_roidb = [self._load_coco_annotation(index)\n                for index in self._image_index]\n\n    with open(cache_file, 'wb') as fid:\n      pickle.dump(gt_roidb, fid, pickle.HIGHEST_PROTOCOL)\n    print('wrote gt roidb to {}'.format(cache_file))\n    return gt_roidb\n\n  def _load_coco_annotation(self, index):\n    \"\"\"\n    Loads COCO bounding-box instance annotations. Crowd instances are\n    handled by marking their overlaps (with all categories) to -1. 
This\n    overlap value means that crowd \"instances\" are excluded from training.\n    \"\"\"\n    im_ann = self._COCO.loadImgs(index)[0]\n    width = im_ann['width']\n    height = im_ann['height']\n\n    annIds = self._COCO.getAnnIds(imgIds=index, iscrowd=None)\n    objs = self._COCO.loadAnns(annIds)\n    # Sanitize bboxes -- some are invalid\n    valid_objs = []\n    for obj in objs:\n      x1 = np.max((0, obj['bbox'][0]))\n      y1 = np.max((0, obj['bbox'][1]))\n      x2 = np.min((width - 1, x1 + np.max((0, obj['bbox'][2] - 1))))\n      y2 = np.min((height - 1, y1 + np.max((0, obj['bbox'][3] - 1))))\n      if obj['area'] > 0 and x2 >= x1 and y2 >= y1:\n        obj['clean_bbox'] = [x1, y1, x2, y2]\n        valid_objs.append(obj)\n    objs = valid_objs\n    num_objs = len(objs)\n\n    boxes = np.zeros((num_objs, 4), dtype=np.uint16)\n    gt_classes = np.zeros((num_objs), dtype=np.int32)\n    overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)\n    seg_areas = np.zeros((num_objs), dtype=np.float32)\n\n    # Lookup table to map from COCO category ids to our internal class\n    # indices\n    coco_cat_id_to_class_ind = dict([(self._class_to_coco_cat_id[cls],\n                                      self._class_to_ind[cls])\n                                     for cls in self._classes[1:]])\n\n    for ix, obj in enumerate(objs):\n      cls = coco_cat_id_to_class_ind[obj['category_id']]\n      boxes[ix, :] = obj['clean_bbox']\n      gt_classes[ix] = cls\n      seg_areas[ix] = obj['area']\n      if obj['iscrowd']:\n        # Set overlap to -1 for all classes for crowd objects\n        # so they will be excluded during training\n        overlaps[ix, :] = -1.0\n      else:\n        overlaps[ix, cls] = 1.0\n\n    ds_utils.validate_boxes(boxes, width=width, height=height)\n    overlaps = scipy.sparse.csr_matrix(overlaps)\n    return {'width': width,\n            'height': height,\n            'boxes': boxes,\n            'gt_classes': gt_classes,\n           
 'gt_overlaps': overlaps,\n            'flipped': False,\n            'seg_areas': seg_areas}\n\n  def _get_widths(self):\n    return [r['width'] for r in self.roidb]\n\n  def append_flipped_images(self):\n    num_images = self.num_images\n    widths = self._get_widths()\n    for i in range(num_images):\n      boxes = self.roidb[i]['boxes'].copy()\n      oldx1 = boxes[:, 0].copy()\n      oldx2 = boxes[:, 2].copy()\n      boxes[:, 0] = widths[i] - oldx2 - 1\n      boxes[:, 2] = widths[i] - oldx1 - 1\n      assert (boxes[:, 2] >= boxes[:, 0]).all()\n      entry = {'width': widths[i],\n               'height': self.roidb[i]['height'],\n               'boxes': boxes,\n               'gt_classes': self.roidb[i]['gt_classes'],\n               'gt_overlaps': self.roidb[i]['gt_overlaps'],\n               'flipped': True,\n               'seg_areas': self.roidb[i]['seg_areas']}\n\n      self.roidb.append(entry)\n    self._image_index = self._image_index * 2\n\n  def _get_box_file(self, index):\n    # first 14 chars / first 22 chars / all chars + .mat\n    # COCO_val2014_0/COCO_val2014_000000447/COCO_val2014_000000447991.mat\n    file_name = ('COCO_' + self._data_name +\n                 '_' + str(index).zfill(12) + '.mat')\n    return osp.join(file_name[:14], file_name[:22], file_name)\n\n  def _print_detection_eval_metrics(self, coco_eval):\n    IoU_lo_thresh = 0.5\n    IoU_hi_thresh = 0.95\n\n    def _get_thr_ind(coco_eval, thr):\n      ind = np.where((coco_eval.params.iouThrs > thr - 1e-5) &\n                     (coco_eval.params.iouThrs < thr + 1e-5))[0][0]\n      iou_thr = coco_eval.params.iouThrs[ind]\n      assert np.isclose(iou_thr, thr)\n      return ind\n\n    ind_lo = _get_thr_ind(coco_eval, IoU_lo_thresh)\n    ind_hi = _get_thr_ind(coco_eval, IoU_hi_thresh)\n    # precision has dims (iou, recall, cls, area range, max dets)\n    # area range index 0: all area ranges\n    # max dets index 2: 100 per image\n    precision = \\\n      
coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, :, 0, 2]\n    ap_default = np.mean(precision[precision > -1])\n    print(('~~~~ Mean and per-category AP @ IoU=[{:.2f},{:.2f}] '\n           '~~~~').format(IoU_lo_thresh, IoU_hi_thresh))\n    print('{:.1f}'.format(100 * ap_default))\n    for cls_ind, cls in enumerate(self.classes):\n      if cls == '__background__':\n        continue\n      # minus 1 because of __background__\n      precision = coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, cls_ind - 1, 0, 2]\n      ap = np.mean(precision[precision > -1])\n      print('{}: {:.1f}'.format(cls, 100 * ap))\n\n    print('~~~~ Summary metrics ~~~~')\n    coco_eval.summarize()\n\n  def _do_detection_eval(self, res_file, output_dir):\n    ann_type = 'bbox'\n    coco_dt = self._COCO.loadRes(res_file)\n    coco_eval = COCOeval(self._COCO, coco_dt)\n    coco_eval.params.useSegm = (ann_type == 'segm')\n    coco_eval.evaluate()\n    coco_eval.accumulate()\n    self._print_detection_eval_metrics(coco_eval)\n    eval_file = osp.join(output_dir, 'detection_results.pkl')\n    with open(eval_file, 'wb') as fid:\n      pickle.dump(coco_eval, fid, pickle.HIGHEST_PROTOCOL)\n    print('Wrote COCO eval results to: {}'.format(eval_file))\n\n  def _coco_results_one_category(self, boxes, cat_id):\n    results = []\n    for im_ind, index in enumerate(self.image_index):\n      # Skip images with no detections for this category; comparing an\n      # ndarray against [] does not test emptiness.\n      if len(boxes[im_ind]) == 0:\n        continue\n      dets = boxes[im_ind].astype(np.float)\n      scores = dets[:, -1]\n      xs = dets[:, 0]\n      ys = dets[:, 1]\n      ws = dets[:, 2] - xs + 1\n      hs = dets[:, 3] - ys + 1\n      results.extend(\n        [{'image_id': index,\n          'category_id': cat_id,\n          'bbox': [xs[k], ys[k], ws[k], hs[k]],\n          'score': scores[k]} for k in range(dets.shape[0])])\n    return results\n\n  def _write_coco_results_file(self, all_boxes, res_file):\n    # [{\"image_id\": 42,\n    #   \"category_id\": 18,\n    #   \"bbox\": [258.15,41.29,348.26,243.78],\n    #   
\"score\": 0.236}, ...]\n    results = []\n    for cls_ind, cls in enumerate(self.classes):\n      if cls == '__background__':\n        continue\n      print('Collecting {} results ({:d}/{:d})'.format(cls, cls_ind,\n                                                       self.num_classes - 1))\n      coco_cat_id = self._class_to_coco_cat_id[cls]\n      results.extend(self._coco_results_one_category(all_boxes[cls_ind],\n                                                     coco_cat_id))\n    print('Writing results json to {}'.format(res_file))\n    with open(res_file, 'w') as fid:\n      json.dump(results, fid)\n\n  def evaluate_detections(self, all_boxes, output_dir):\n    res_file = osp.join(output_dir, ('detections_' +\n                                     self._image_set +\n                                     self._year +\n                                     '_results'))\n    if self.config['use_salt']:\n      res_file += '_{}'.format(str(uuid.uuid4()))\n    res_file += '.json'\n    self._write_coco_results_file(all_boxes, res_file)\n    # Only do evaluation on non-test sets\n    if self._image_set.find('test') == -1:\n      self._do_detection_eval(res_file, output_dir)\n    # Optionally cleanup results json file\n    if self.config['cleanup']:\n      os.remove(res_file)\n\n  def competition_mode(self, on):\n    if on:\n      self.config['use_salt'] = False\n      self.config['cleanup'] = False\n    else:\n      self.config['use_salt'] = True\n      self.config['cleanup'] = True"
  },
  {
    "path": "lib/datasets/ds_utils.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\n\ndef unique_boxes(boxes, scale=1.0):\n  \"\"\"Return indices of unique boxes.\"\"\"\n  v = np.array([1, 1e3, 1e6, 1e9])\n  hashes = np.round(boxes * scale).dot(v)\n  _, index = np.unique(hashes, return_index=True)\n  return np.sort(index)\n\n\ndef xywh_to_xyxy(boxes):\n  \"\"\"Convert [x y w h] box format to [x1 y1 x2 y2] format.\"\"\"\n  return np.hstack((boxes[:, 0:2], boxes[:, 0:2] + boxes[:, 2:4] - 1))\n\n\ndef xyxy_to_xywh(boxes):\n  \"\"\"Convert [x1 y1 x2 y2] box format to [x y w h] format.\"\"\"\n  return np.hstack((boxes[:, 0:2], boxes[:, 2:4] - boxes[:, 0:2] + 1))\n\n\ndef validate_boxes(boxes, width=0, height=0):\n  \"\"\"Check that a set of boxes are valid.\"\"\"\n  x1 = boxes[:, 0]\n  y1 = boxes[:, 1]\n  x2 = boxes[:, 2]\n  y2 = boxes[:, 3]\n  assert (x1 >= 0).all()\n  assert (y1 >= 0).all()\n  assert (x2 >= x1).all()\n  assert (y2 >= y1).all()\n  assert (x2 < width).all()\n  assert (y2 < height).all()\n\n\ndef filter_small_boxes(boxes, min_size):\n  w = boxes[:, 2] - boxes[:, 0]\n  h = boxes[:, 3] - boxes[:, 1]\n  keep = np.where((w >= min_size) & (h > min_size))[0]\n  return keep\n"
  },
  {
    "path": "lib/datasets/factory.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Factory method for easily getting imdbs by name.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n__sets = {}\n\nfrom datasets.coco import coco\nfrom datasets.pascal_voc import pascal_voc\n\n# # Set up voc_<year>_<split>\nfor year in ['2007', '2012']:\n  for split in ['train', 'val', 'trainval', 'test','shots', 'train_first_split', 'train_second_split', 'train_third_split']:\n    name = 'voc_{}_{}'.format(year, split)\n    __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))\n\n\nfor year in ['2014']:\n  for split in ['train', 'val', 'minival', 'valminusminival', 'trainval']:\n    name = 'coco_{}_{}'.format(year, split)\n    __sets[name] = (lambda split=split, year=year: coco(split, year))\n\nfor year in ['2017']:\n  for split in ['train', 'val']:\n    name = 'coco_{}_{}'.format(year, split)\n    __sets[name] = (lambda split=split, year=year: coco(split, year))\n\ndef get_imdb(name):\n  \"\"\"Get an imdb (image database) by name.\"\"\"\n  if name not in __sets:\n    raise KeyError('Unknown dataset: {}'.format(name))\n  return __sets[name]()\n\n\ndef list_imdbs():\n  \"\"\"List all registered imdbs.\"\"\"\n  return list(__sets.keys())\n"
  },
  {
    "path": "lib/datasets/imdb.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Xinlei Chen\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport PIL\nfrom model.utils.cython_bbox import bbox_overlaps\nimport numpy as np\nimport scipy.sparse\nfrom model.utils.config import cfg\nimport pdb\n\nROOT_DIR = osp.join(osp.dirname(__file__), '..', '..')\n\nclass imdb(object):\n  \"\"\"Image database.\"\"\"\n\n  def __init__(self, name, classes=None):\n    self._name = name\n    self._num_classes = 0\n    if not classes:\n      self._classes = []\n    else:\n      self._classes = classes\n    self._image_index = []\n    self._obj_proposer = 'gt'\n    self._roidb = None\n    self._roidb_handler = self.default_roidb\n    # Use this dict for storing dataset specific config options\n    self.config = {}\n\n  @property\n  def name(self):\n    return self._name\n\n  @property\n  def num_classes(self):\n    return len(self._classes)\n\n  def set_classes(self,classes):\n    self._classes = classes\n\n  @property\n  def classes(self):\n    return self._classes\n\n  @property\n  def image_index(self):\n    return self._image_index\n\n  @property\n  def roidb_handler(self):\n    return self._roidb_handler\n\n  @roidb_handler.setter\n  def roidb_handler(self, val):\n    self._roidb_handler = val\n\n  def set_proposal_method(self, method):\n    method = eval('self.' 
+ method + '_roidb')\n    self.roidb_handler = method\n\n  @property\n  def roidb(self):\n    # A roidb is a list of dictionaries, each with the following keys:\n    #   boxes\n    #   gt_overlaps\n    #   gt_classes\n    #   flipped\n    if self._roidb is not None:\n      return self._roidb\n    self._roidb = self.roidb_handler()\n    return self._roidb\n\n  def set_roidb(self,roidb):\n    self._roidb = roidb\n\n\n\n  @property\n  def cache_path(self):\n    cache_path = osp.abspath(osp.join(cfg.DATA_DIR, 'cache'))\n    if not os.path.exists(cache_path):\n      os.makedirs(cache_path)\n    return cache_path\n\n  @property\n  def num_images(self):\n    return len(self.image_index)\n\n  def image_path_at(self, i):\n    raise NotImplementedError\n\n  def image_id_at(self, i):\n    raise NotImplementedError\n\n  def default_roidb(self):\n    raise NotImplementedError\n\n  def evaluate_detections(self, all_boxes, output_dir=None):\n    \"\"\"\n    all_boxes is a list of length number-of-classes.\n    Each list element is a list of length number-of-images.\n    Each of those list elements is either an empty list []\n    or a numpy array of detection.\n\n    all_boxes[class][image] = [] or np.array of shape #dets x 5\n    \"\"\"\n    raise NotImplementedError\n\n  def _get_widths(self):\n    return [PIL.Image.open(self.image_path_at(i)).size[0]\n            for i in range(self.num_images)]\n\n  def append_flipped_images(self):\n    num_images = self.num_images\n    widths = self._get_widths()\n    for i in range(num_images):\n      boxes = self.roidb[i]['boxes'].copy()\n      oldx1 = boxes[:, 0].copy()\n      oldx2 = boxes[:, 2].copy()\n      boxes[:, 0] = widths[i] - oldx2 - 1\n      boxes[:, 2] = widths[i] - oldx1 - 1\n      assert (boxes[:, 2] >= boxes[:, 0]).all()\n\n\n      entry = {'boxes': boxes,\n               'gt_overlaps': self.roidb[i]['gt_overlaps'],\n               'gt_classes': self.roidb[i]['gt_classes'],\n               'flipped': True}\n      
self.roidb.append(entry)\n    self._image_index = self._image_index * 2\n\n  def evaluate_recall(self, candidate_boxes=None, thresholds=None,\n                      area='all', limit=None):\n    \"\"\"Evaluate detection proposal recall metrics.\n\n    Returns:\n        results: dictionary of results with keys\n            'ar': average recall\n            'recalls': vector recalls at each IoU overlap threshold\n            'thresholds': vector of IoU overlap thresholds\n            'gt_overlaps': vector of all ground-truth overlaps\n    \"\"\"\n    # Record max overlap value for each gt box\n    # Return vector of overlap values\n    areas = {'all': 0, 'small': 1, 'medium': 2, 'large': 3,\n             '96-128': 4, '128-256': 5, '256-512': 6, '512-inf': 7}\n    area_ranges = [[0 ** 2, 1e5 ** 2],  # all\n                   [0 ** 2, 32 ** 2],  # small\n                   [32 ** 2, 96 ** 2],  # medium\n                   [96 ** 2, 1e5 ** 2],  # large\n                   [96 ** 2, 128 ** 2],  # 96-128\n                   [128 ** 2, 256 ** 2],  # 128-256\n                   [256 ** 2, 512 ** 2],  # 256-512\n                   [512 ** 2, 1e5 ** 2],  # 512-inf\n                   ]\n    assert area in areas, 'unknown area range: {}'.format(area)\n    area_range = area_ranges[areas[area]]\n    gt_overlaps = np.zeros(0)\n    num_pos = 0\n    for i in range(self.num_images):\n      # Checking for max_overlaps == 1 avoids including crowd annotations\n      # (...pretty hacking :/)\n      max_gt_overlaps = self.roidb[i]['gt_overlaps'].toarray().max(axis=1)\n      gt_inds = np.where((self.roidb[i]['gt_classes'] > 0) &\n                         (max_gt_overlaps == 1))[0]\n      gt_boxes = self.roidb[i]['boxes'][gt_inds, :]\n      gt_areas = self.roidb[i]['seg_areas'][gt_inds]\n      valid_gt_inds = np.where((gt_areas >= area_range[0]) &\n                               (gt_areas <= area_range[1]))[0]\n      gt_boxes = gt_boxes[valid_gt_inds, :]\n      num_pos += 
len(valid_gt_inds)\n\n      if candidate_boxes is None:\n        # If candidate_boxes is not supplied, the default is to use the\n        # non-ground-truth boxes from this roidb\n        non_gt_inds = np.where(self.roidb[i]['gt_classes'] == 0)[0]\n        boxes = self.roidb[i]['boxes'][non_gt_inds, :]\n      else:\n        boxes = candidate_boxes[i]\n      if boxes.shape[0] == 0:\n        continue\n      if limit is not None and boxes.shape[0] > limit:\n        boxes = boxes[:limit, :]\n\n      overlaps = bbox_overlaps(boxes.astype(np.float),\n                               gt_boxes.astype(np.float))\n\n      _gt_overlaps = np.zeros((gt_boxes.shape[0]))\n      for j in range(gt_boxes.shape[0]):\n        # find which proposal box maximally covers each gt box\n        argmax_overlaps = overlaps.argmax(axis=0)\n        # and get the iou amount of coverage for each gt box\n        max_overlaps = overlaps.max(axis=0)\n        # find which gt box is 'best' covered (i.e. 'best' = most iou)\n        gt_ind = max_overlaps.argmax()\n        gt_ovr = max_overlaps.max()\n        assert (gt_ovr >= 0)\n        # find the proposal box that covers the best covered gt box\n        box_ind = argmax_overlaps[gt_ind]\n        # record the iou coverage of this gt box\n        _gt_overlaps[j] = overlaps[box_ind, gt_ind]\n        assert (_gt_overlaps[j] == gt_ovr)\n        # mark the proposal box and the gt box as used\n        overlaps[box_ind, :] = -1\n        overlaps[:, gt_ind] = -1\n      # append recorded iou coverage level\n      gt_overlaps = np.hstack((gt_overlaps, _gt_overlaps))\n\n    gt_overlaps = np.sort(gt_overlaps)\n    if thresholds is None:\n      step = 0.05\n      thresholds = np.arange(0.5, 0.95 + 1e-5, step)\n    recalls = np.zeros_like(thresholds)\n    # compute recall for each iou threshold\n    for i, t in enumerate(thresholds):\n      recalls[i] = (gt_overlaps >= t).sum() / float(num_pos)\n    # ar = 2 * np.trapz(recalls, thresholds)\n    ar = recalls.mean()\n   
 return {'ar': ar, 'recalls': recalls, 'thresholds': thresholds,\n            'gt_overlaps': gt_overlaps}\n\n  def create_roidb_from_box_list(self, box_list, gt_roidb):\n    assert len(box_list) == self.num_images, \\\n      'Number of boxes must match number of ground-truth images'\n    roidb = []\n    for i in range(self.num_images):\n      boxes = box_list[i]\n      num_boxes = boxes.shape[0]\n      overlaps = np.zeros((num_boxes, self.num_classes), dtype=np.float32)\n\n      if gt_roidb is not None and gt_roidb[i]['boxes'].size > 0:\n        gt_boxes = gt_roidb[i]['boxes']\n        gt_classes = gt_roidb[i]['gt_classes']\n        gt_overlaps = bbox_overlaps(boxes.astype(np.float),\n                                    gt_boxes.astype(np.float))\n        argmaxes = gt_overlaps.argmax(axis=1)\n        maxes = gt_overlaps.max(axis=1)\n        I = np.where(maxes > 0)[0]\n        overlaps[I, gt_classes[argmaxes[I]]] = maxes[I]\n\n      overlaps = scipy.sparse.csr_matrix(overlaps)\n      roidb.append({\n        'boxes': boxes,\n        'gt_classes': np.zeros((num_boxes,), dtype=np.int32),\n        'gt_overlaps': overlaps,\n        'flipped': False,\n        'seg_areas': np.zeros((num_boxes,), dtype=np.float32),\n      })\n    return roidb\n\n  @staticmethod\n  def merge_roidbs(a, b):\n    assert len(a) == len(b)\n    for i in range(len(a)):\n      a[i]['boxes'] = np.vstack((a[i]['boxes'], b[i]['boxes']))\n      a[i]['gt_classes'] = np.hstack((a[i]['gt_classes'],\n                                      b[i]['gt_classes']))\n      a[i]['gt_overlaps'] = scipy.sparse.vstack([a[i]['gt_overlaps'],\n                                                 b[i]['gt_overlaps']])\n      a[i]['seg_areas'] = np.hstack((a[i]['seg_areas'],\n                                     b[i]['seg_areas']))\n    return a\n\n  def competition_mode(self, on):\n    \"\"\"Turn competition mode on or off.\"\"\"\n    pass\n"
  },
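For context on `imdb.evaluate_recall` above: it greedily pairs each ground-truth box with its best-overlapping, not-yet-used proposal, records that IoU, then retires both before computing recall over IoU thresholds 0.5 to 0.95. A minimal numpy sketch of that matching loop, assuming inclusive `[x1, y1, x2, y2]` pixel coordinates (the `iou_matrix` helper is illustrative, not part of the repo, which uses the compiled `bbox_overlaps`):

```python
import numpy as np

def iou_matrix(boxes, gt_boxes):
    # Pairwise IoU between proposals (N,4) and ground truth (K,4),
    # with inclusive pixel coordinates (hence the +1 in widths/heights).
    n, k = boxes.shape[0], gt_boxes.shape[0]
    ious = np.zeros((n, k))
    for i in range(n):
        for j in range(k):
            x1 = max(boxes[i, 0], gt_boxes[j, 0])
            y1 = max(boxes[i, 1], gt_boxes[j, 1])
            x2 = min(boxes[i, 2], gt_boxes[j, 2])
            y2 = min(boxes[i, 3], gt_boxes[j, 3])
            inter = max(0, x2 - x1 + 1) * max(0, y2 - y1 + 1)
            area_a = (boxes[i, 2] - boxes[i, 0] + 1) * (boxes[i, 3] - boxes[i, 1] + 1)
            area_b = (gt_boxes[j, 2] - gt_boxes[j, 0] + 1) * (gt_boxes[j, 3] - gt_boxes[j, 1] + 1)
            ious[i, j] = inter / float(area_a + area_b - inter)
    return ious

def greedy_gt_coverage(boxes, gt_boxes):
    # Mirror of the loop in evaluate_recall: repeatedly take the
    # best-covered remaining gt box, record the IoU of its best remaining
    # proposal, then retire both by masking their row/column with -1.
    overlaps = iou_matrix(boxes, gt_boxes)
    covered = np.zeros(gt_boxes.shape[0])
    for j in range(gt_boxes.shape[0]):
        gt_ind = overlaps.max(axis=0).argmax()   # best-covered gt box
        box_ind = overlaps[:, gt_ind].argmax()   # its best proposal
        covered[j] = overlaps[box_ind, gt_ind]
        overlaps[box_ind, :] = -1                # proposal used up
        overlaps[:, gt_ind] = -1                 # gt box matched
    return covered
```

The retirement step is what makes the matching one-to-one: a single proposal covering two ground-truth boxes can only count for one of them, which is why `evaluate_recall` recomputes the argmax inside the loop rather than once up front.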
  {
    "path": "lib/datasets/metadata.py",
"content": "# --------------------------------------------------------\n# Pytorch Meta R-CNN\n# Written by Anny Xu, Xiaopeng Yan, based on the code from Jianwei Yang\n# --------------------------------------------------------\nimport os\nimport os.path\nimport sys\nimport torch.utils.data as data\nimport cv2\nimport torch\nimport random\nimport numpy as np\nif sys.version_info[0] == 2:\n    import xml.etree.cElementTree as ET\nelse:\n    import xml.etree.ElementTree as ET\nfrom model.utils.config import cfg\nimport collections\n\nclass MetaDataset(data.Dataset):\n\n    \"\"\"Meta Dataset\n    Arguments:\n        root (string): filepath to the VOCdevkit folder.\n        image_sets (list): (year, set name) pairs to use (e.g. ('2007', 'trainval'))\n        metaclass (list): the class names to sample shots for\n        img_size (int): the PRN network input size\n        shots (int): the number of instances sampled per class\n        shuffle (bool): whether to shuffle the image ids before sampling\n        phase (int): training phase; in phase 2, three times as many shots are kept\n    \"\"\"\n\n    def __init__(self, root, image_sets, metaclass, img_size, shots=1, shuffle=False, phase=1):\n        self.root = root\n        self.image_set = image_sets\n        self.img_size = img_size\n        self.metaclass = metaclass\n        self.shots = shots\n        if phase == 2:\n            self.shots = shots * 3\n        self.shuffle = shuffle\n        self._annopath = os.path.join('%s', 'Annotations', '%s.xml')\n        self._imgpath = os.path.join('%s', 'JPEGImages', '%s.jpg')\n        self.shot_path = open(os.path.join(self.root, 'VOC2007', 'ImageSets/Main/shots.txt'), 'w')  # the default save path\n        self.ids = list()\n        for (year, name) in image_sets:\n            self._year = year\n            rootpath = os.path.join(self.root, 'VOC' + year)\n            for line in open(os.path.join(rootpath, 'ImageSets', 'Main', name + '.txt')):\n                self.ids.append((rootpath, line.strip()))\n\n        class_to_idx = dict(zip(self.metaclass, range(len(self.metaclass))))  # class to index mapping\n\n        self.prndata = []\n        self.prncls = []\n   
     prn_image, prn_mask = self.get_prndata()\n        for i in range(shots):\n            cls = []\n            data = []\n            for n, key in enumerate(list(prn_image.keys())):\n                img = torch.from_numpy(np.array(prn_image[key][i]))\n                img = img.unsqueeze(0)\n                mask = torch.from_numpy(np.array(prn_mask[key][i]))\n                mask = mask.unsqueeze(0)\n                mask = mask.unsqueeze(3)\n                imgmask = torch.cat([img, mask], dim=3)\n                data.append(imgmask.permute(0, 3, 1, 2).contiguous())\n                cls.append(class_to_idx[key])\n            self.prncls.append(cls)\n            self.prndata.append(torch.cat(data, dim=0))\n\n    def __getitem__(self, index):\n        return self.prndata[index], self.prncls[index]\n\n    def get_prndata(self):\n        '''\n        :return: the constructed PRN input data: per-class lists of resized images and binary object masks\n        '''\n        if self.shuffle:\n            random.shuffle(self.ids)\n        prn_image = collections.defaultdict(list)\n        prn_mask = collections.defaultdict(list)\n        classes = collections.defaultdict(int)\n        for cls in self.metaclass:\n            classes[cls] = 0\n        for img_id in self.ids:\n            target = ET.parse(self._annopath % img_id).getroot()\n            img = cv2.imread(self._imgpath % img_id, cv2.IMREAD_COLOR)\n            img = img[:, :, ::-1]\n            img = img.astype(np.float32, copy=False)\n            img -= cfg.PIXEL_MEANS\n            height, width, _ = img.shape\n            mask = np.zeros((self.img_size, self.img_size), dtype=np.float32)\n            h, w, _ = img.shape\n            y_ratio = float(h) / self.img_size\n            x_ratio = float(w) / self.img_size\n            img_resize = cv2.resize(img, (self.img_size, self.img_size), interpolation=cv2.INTER_LINEAR)\n            for obj in target.iter('object'):\n                difficult = int(obj.find('difficult').text) == 1\n                if difficult:\n                    continue\n                name = obj.find('name').text.strip()\n                if name not in self.metaclass:\n                    continue\n                if classes[name] >= self.shots:\n                    break\n                classes[name] += 1\n                bbox = obj.find('bndbox')\n                pts = ['xmin', 'ymin', 'xmax', 'ymax']\n                bndbox = []\n                for i, pt in enumerate(pts):\n                    cur_pt = int(float(bbox.find(pt).text)) - 1\n                    if i % 2 == 0:\n                        cur_pt = int(cur_pt / x_ratio)\n                        bndbox.append(cur_pt)\n                    elif i % 2 == 1:\n                        cur_pt = int(cur_pt / y_ratio)\n                        bndbox.append(cur_pt)\n                mask[bndbox[1]:bndbox[3], bndbox[0]:bndbox[2]] = 1\n                prn_image[name].append(img_resize)\n                prn_mask[name].append(mask)\n                self.shot_path.write(str(img_id[1])+'\\n')\n                break\n            if len(classes) > 0 and min(classes.values()) == self.shots:\n                break\n        self.shot_path.close()\n        return prn_image, prn_mask\n\n    def __len__(self):\n        return len(self.prndata)\n"
  },
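`MetaDataset` above feeds the PRN a 4-channel input per shot: the mean-subtracted, resized RGB image concatenated with a binary mask marking one object instance, permuted from NHWC to NCHW. A numpy-only sketch of that tensor layout (the function name is illustrative; the repo builds this with torch `unsqueeze`/`cat`/`permute` in `__init__`):

```python
import numpy as np

def build_prn_input(img_resize, mask):
    # img_resize: (S, S, 3) float image, already mean-subtracted.
    # mask: (S, S) binary foreground map for one sampled instance.
    # Returns (1, 4, S, S): RGB plus mask as a 4th channel, NCHW layout.
    img = img_resize[None, ...]                 # (1, S, S, 3)
    m = mask[None, :, :, None]                  # (1, S, S, 1)
    imgmask = np.concatenate([img, m], axis=3)  # (1, S, S, 4)
    return np.ascontiguousarray(imgmask.transpose(0, 3, 1, 2))
```

One such array is built per class and per shot; `MetaDataset.__init__` then concatenates them along the batch axis so each `__getitem__` returns one (num_classes, 4, S, S) batch plus the matching class indices.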
  {
    "path": "lib/datasets/pascal_voc.py",
    "content": "from __future__ import print_function\nfrom __future__ import absolute_import\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport xml.dom.minidom as minidom\n\nimport os\n# import PIL\nimport numpy as np\nimport scipy.sparse\nimport subprocess\nimport math\nimport glob\nimport uuid\nimport scipy.io as sio\nimport xml.etree.ElementTree as ET\nimport pickle\nfrom .imdb import imdb\nfrom .imdb import ROOT_DIR\nfrom . import ds_utils\nfrom .voc_eval import voc_eval\nimport  random\n# TODO: make fast_rcnn irrelevant\n# >>>> obsolete, because it depends on sth outside of this project\nfrom model.utils.config import cfg\n\ntry:\n    xrange          # Python 2\nexcept NameError:\n    xrange = range  # Python 3\n\n# <<<< obsolete\n\n\nclass pascal_voc(imdb):\n    def __init__(self, image_set, year, devkit_path=None):\n        imdb.__init__(self, 'voc_' + year + '_' + image_set)\n        self._year = year\n        self._image_set = image_set\n        self._devkit_path = self._get_default_path() if devkit_path is None \\\n            else devkit_path\n        self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)\n        #first split\n        if cfg.TRAIN.META_TYPE == 1:\n            self._classes = ['__background__'] + cfg.TRAIN.ALLCLASSES_FIRST\n        #second split\n        if cfg.TRAIN.META_TYPE == 2:\n            self._classes = ['__background__'] + cfg.TRAIN.ALLCLASSES_SECOND\n        #third split\n        if cfg.TRAIN.META_TYPE == 3:\n            self._classes = ['__background__'] + cfg.TRAIN.ALLCLASSES_THIRD\n\n        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))\n        self._image_ext = '.jpg'\n        self._image_index = self._load_image_set_index()\n        # Default to roidb handler\n        # 
self._roidb_handler = self.selective_search_roidb\n        self._roidb_handler = self.gt_roidb\n        self._salt = str(uuid.uuid4())\n        self._comp_id = 'comp4'\n\n        # PASCAL specific config options\n        self.config = {'cleanup': True,\n                       'use_salt': True,\n                       'use_diff': False,\n                       'matlab_eval': False,\n                       'rpn_file': None,\n                       'min_size': 2}\n\n        assert os.path.exists(self._devkit_path), \\\n            'VOCdevkit path does not exist: {}'.format(self._devkit_path)\n        assert os.path.exists(self._data_path), \\\n            'Path does not exist: {}'.format(self._data_path)\n\n    def image_path_at(self, i):\n        \"\"\"\n        Return the absolute path to image i in the image sequence.\n        \"\"\"\n        return self.image_path_from_index(self._image_index[i])\n\n    def image_id_at(self, i):\n        \"\"\"\n        Return the id of image i in the image sequence (here, simply the index i).\n        \"\"\"\n        return i\n\n    def image_path_from_index(self, index):\n        \"\"\"\n        Construct an image path from the image's \"index\" identifier.\n        \"\"\"\n        # file_name = str(index).zfill(6)\n        image_path = os.path.join(self._data_path, 'JPEGImages',\n                                  index + self._image_ext)\n        assert os.path.exists(image_path), \\\n            'Path does not exist: {}'.format(image_path)\n        return image_path\n\n    def _load_image_set_index(self):\n        \"\"\"\n        Load the indexes listed in this dataset's image set file.\n        \"\"\"\n        # Example path to image set file:\n        # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt\n        image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',\n                                      self._image_set + '.txt')\n        assert os.path.exists(image_set_file), \\\n            'Path does not 
exist: {}'.format(image_set_file)\n        with open(image_set_file) as f:\n            image_index = [x.strip() for x in f.readlines()]\n        return image_index\n\n    def _get_default_path(self):\n        \"\"\"\n        Return the default path where PASCAL VOC is expected to be installed.\n        \"\"\"\n        return os.path.join(cfg.DATA_DIR, 'VOCdevkit' + self._year)\n\n    def gt_roidb(self):\n        \"\"\"\n        Return the database of ground-truth regions of interest.\n\n        This function loads/saves from/to a cache file to speed up future calls.\n        \"\"\"\n        cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')\n        if os.path.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = pickle.load(fid)\n            print('{} gt roidb loaded from {}'.format(self.name, cache_file))\n            return roidb\n\n        gt_roidb = [self._load_pascal_annotation(index)\n                    for index in self.image_index]\n        with open(cache_file, 'wb') as fid:\n            pickle.dump(gt_roidb, fid, pickle.HIGHEST_PROTOCOL)\n        print('wrote gt roidb to {}'.format(cache_file))\n\n        return gt_roidb\n\n    def selective_search_roidb(self):\n        \"\"\"\n        Return the database of selective search regions of interest.\n        Ground-truth ROIs are also included.\n\n        This function loads/saves from/to a cache file to speed up future calls.\n        \"\"\"\n        cache_file = os.path.join(self.cache_path,\n                                  self.name + '_selective_search_roidb.pkl')\n\n        if os.path.exists(cache_file):\n            with open(cache_file, 'rb') as fid:\n                roidb = pickle.load(fid)\n            print('{} ss roidb loaded from {}'.format(self.name, cache_file))\n            return roidb\n\n        if int(self._year) == 2007 or self._image_set != 'test':\n            gt_roidb = self.gt_roidb()\n            ss_roidb = 
self._load_selective_search_roidb(gt_roidb)\n            roidb = imdb.merge_roidbs(gt_roidb, ss_roidb)\n        else:\n            roidb = self._load_selective_search_roidb(None)\n        with open(cache_file, 'wb') as fid:\n            pickle.dump(roidb, fid, pickle.HIGHEST_PROTOCOL)\n        print('wrote ss roidb to {}'.format(cache_file))\n\n        return roidb\n\n    def rpn_roidb(self):\n        if int(self._year) == 2007 or self._image_set != 'test':\n            gt_roidb = self.gt_roidb()\n            rpn_roidb = self._load_rpn_roidb(gt_roidb)\n            roidb = imdb.merge_roidbs(gt_roidb, rpn_roidb)\n        else:\n            roidb = self._load_rpn_roidb(None)\n\n        return roidb\n\n    def _load_rpn_roidb(self, gt_roidb):\n        filename = self.config['rpn_file']\n        print('loading {}'.format(filename))\n        assert os.path.exists(filename), \\\n            'rpn data not found at: {}'.format(filename)\n        with open(filename, 'rb') as f:\n            box_list = pickle.load(f)\n        return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n    def _load_selective_search_roidb(self, gt_roidb):\n        filename = os.path.abspath(os.path.join(cfg.DATA_DIR,\n                                                'selective_search_data',\n                                                self.name + '.mat'))\n        assert os.path.exists(filename), \\\n            'Selective search data not found at: {}'.format(filename)\n        raw_data = sio.loadmat(filename)['boxes'].ravel()\n\n        box_list = []\n        for i in xrange(raw_data.shape[0]):\n            boxes = raw_data[i][:, (1, 0, 3, 2)] - 1\n            keep = ds_utils.unique_boxes(boxes)\n            boxes = boxes[keep, :]\n            keep = ds_utils.filter_small_boxes(boxes, self.config['min_size'])\n            boxes = boxes[keep, :]\n            box_list.append(boxes)\n\n        return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n    def 
_load_pascal_annotation(self, index):\n        \"\"\"\n        Load image and bounding boxes info from XML file in the PASCAL VOC\n        format.\n        \"\"\"\n        filename = os.path.join(self._data_path, 'Annotations', index + '.xml')\n        tree = ET.parse(filename)\n        objs = tree.findall('object')\n        # if not self.config['use_diff']:\n        #     # Exclude the samples labeled as difficult\n        #     non_diff_objs = [\n        #         obj for obj in objs if int(obj.find('difficult').text) == 0]\n        #     # if len(non_diff_objs) != len(objs):\n        #     #     print 'Removed {} difficult objects'.format(\n        #     #         len(objs) - len(non_diff_objs))\n        #     objs = non_diff_objs\n        num_objs = len(objs)\n\n        boxes = np.zeros((num_objs, 4), dtype=np.uint16)\n        gt_classes = np.zeros((num_objs), dtype=np.int32)\n        overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)\n        # \"Seg\" area for pascal is just the box area\n        seg_areas = np.zeros((num_objs), dtype=np.float32)\n        ishards = np.zeros((num_objs), dtype=np.int32)\n\n        # Load object bounding boxes into a data frame.\n        for ix, obj in enumerate(objs):\n            bbox = obj.find('bndbox')\n            # Make pixel indexes 0-based\n            x1 = float(bbox.find('xmin').text) - 1\n            y1 = float(bbox.find('ymin').text) - 1\n            x2 = float(bbox.find('xmax').text) - 1\n            y2 = float(bbox.find('ymax').text) - 1\n\n            diffc = obj.find('difficult')\n            difficult = 0 if diffc == None else int(diffc.text)\n            ishards[ix] = difficult\n            if obj.find('name').text.lower().strip() not in self._classes:\n                continue\n            cls = self._class_to_ind[obj.find('name').text.lower().strip()]\n            boxes[ix, :] = [x1, y1, x2, y2]\n            gt_classes[ix] = cls\n            overlaps[ix, cls] = 1.0\n            seg_areas[ix] 
= (x2 - x1 + 1) * (y2 - y1 + 1)\n\n        overlaps = scipy.sparse.csr_matrix(overlaps)\n\n        return {'boxes': boxes,\n                'gt_classes': gt_classes,\n                'gt_ishard': ishards,\n                'gt_overlaps': overlaps,\n                'flipped': False,\n                'seg_areas': seg_areas}\n\n    def _get_comp_id(self):\n        comp_id = (self._comp_id + '_' + self._salt if self.config['use_salt']\n                   else self._comp_id)\n        return comp_id\n\n    def _get_voc_results_file_template(self):\n        # VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt\n        filename = self._get_comp_id() + '_det_' + self._image_set + '_{:s}.txt'\n        filedir = os.path.join(self._devkit_path, 'results', 'VOC' + self._year, 'Main')\n        if not os.path.exists(filedir):\n            os.makedirs(filedir)\n        path = os.path.join(filedir, filename)\n        return path\n\n    def _write_voc_results_file(self, all_boxes):\n        for cls_ind, cls in enumerate(self.classes):\n            if cls == '__background__':\n                continue\n            print('Writing {} VOC results file'.format(cls))\n            filename = self._get_voc_results_file_template().format(cls)\n            with open(filename, 'wt') as f:\n                for im_ind, index in enumerate(self.image_index):\n                    dets = all_boxes[cls_ind][im_ind]\n                    if dets == []:\n                        continue\n                    # the VOCdevkit expects 1-based indices\n                    for k in xrange(dets.shape[0]):\n                        f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\\n'.\n                                format(index, dets[k, -1],\n                                       dets[k, 0] + 1, dets[k, 1] + 1,\n                                       dets[k, 2] + 1, dets[k, 3] + 1))\n\n    def _do_python_eval(self, output_dir='output', **kwargs):\n        annopath = os.path.join(\n            
self._devkit_path,\n            'VOC' + self._year,\n            'Annotations',\n            '{:s}.xml')\n        imagesetfile = os.path.join(\n            self._devkit_path,\n            'VOC' + self._year,\n            'ImageSets',\n            'Main',\n            self._image_set + '.txt')\n        cachedir = os.path.join(self._devkit_path, 'annotations_cache')\n        aps = []\n        import time,csv\n        now = time.strftime(\"%Y-%m-%d-%H-%M-%S\")\n        ############################### changed by xan 2019/1/31 begin################################\n        save_dir = 'results'\n        if not os.path.exists(save_dir):\n            os.mkdir(save_dir)\n        path = str(now) + '_'\n        for k in kwargs:\n            if k in ('checksession','checkepoch','checkpoint','meta_test','shots'):\n                path = path + k + '_' + str(kwargs[k])\n        csvfile = open(os.path.join(save_dir, path + '.csv'), 'w')\n        ############################## end ###########################################################\n        writer = csv.writer(csvfile)\n\n        # The PASCAL VOC metric changed in 2010\n        use_07_metric = True if int(self._year) < 2010 else False\n        print('VOC07 metric? 
' + ('Yes' if use_07_metric else 'No'))\n        if not os.path.isdir(output_dir):\n            os.mkdir(output_dir)\n        cls_names = []\n        ap_values = []\n        for i, cls in enumerate(self._classes):\n            if cls == '__background__':\n                continue\n            filename = self._get_voc_results_file_template().format(cls)\n            rec, prec, ap = voc_eval(\n                filename, annopath, imagesetfile, cls, cachedir, ovthresh=0.5,\n                use_07_metric=use_07_metric)\n            aps += [ap]\n            print('AP for {} = {:.3f}'.format(cls, ap))\n            cls_names.append(cls)\n            ap_values.append((\"%.1f\" % (ap*100)))\n            if i == 15:\n                cls_names.append('mean')\n                tmp = np.mean(aps)*100\n                ap_values.append((\"%.1f\" % tmp))\n            if i == 20:\n                cls_names.append('mean')\n                tmp = np.mean(aps[-5:])*100\n                ap_values.append((\"%.1f\" % tmp))\n            with open(os.path.join(output_dir, cls + '_pr.pkl'), 'wb') as f:\n                pickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)\n        print('Mean AP = {:.4f}'.format(np.mean(aps)))\n        writer.writerow(cls_names)\n        writer.writerow(ap_values)\n        csvfile.close()\n        print('~~~~~~~~')\n        print('Results:')\n        for ap in aps:\n            print('{:.3f}'.format(ap))\n        print('{:.3f}'.format(np.mean(aps)))\n        print('~~~~~~~~')\n        print('')\n        print('--------------------------------------------------------------')\n        print('Results computed with the **unofficial** Python eval code.')\n        print('Results should be very close to the official MATLAB eval code.')\n        print('Recompute with `./tools/reval.py --matlab ...` for your paper.')\n        print('-- Thanks, The Management')\n        print('--------------------------------------------------------------')\n\n    def 
_do_matlab_eval(self, output_dir='output'):\n        print('-----------------------------------------------------')\n        print('Computing results with the official MATLAB eval code.')\n        print('-----------------------------------------------------')\n        path = os.path.join(cfg.ROOT_DIR, 'lib', 'datasets',\n                            'VOCdevkit-matlab-wrapper')\n        cmd = 'cd {} && '.format(path)\n        cmd += '{:s} -nodisplay -nodesktop '.format(cfg.MATLAB)\n        cmd += '-r \"dbstop if error; '\n        cmd += 'voc_eval(\\'{:s}\\',\\'{:s}\\',\\'{:s}\\',\\'{:s}\\'); quit;\"' \\\n            .format(self._devkit_path, self._get_comp_id(),\n                    self._image_set, output_dir)\n        print('Running:\\n{}'.format(cmd))\n        status = subprocess.call(cmd, shell=True)\n\n    def evaluate_detections(self, all_boxes, output_dir, **kwargs):\n        self._write_voc_results_file(all_boxes)\n        self._do_python_eval(output_dir, **kwargs)\n        if self.config['matlab_eval']:\n            self._do_matlab_eval(output_dir)\n        if self.config['cleanup']:\n            for cls in self._classes:\n                if cls == '__background__':\n                    continue\n                filename = self._get_voc_results_file_template().format(cls)\n                os.remove(filename)\n\n    def competition_mode(self, on):\n        if on:\n            self.config['use_salt'] = False\n            self.config['cleanup'] = False\n        else:\n            self.config['use_salt'] = True\n            self.config['cleanup'] = True\n\n\nif __name__ == '__main__':\n    d = pascal_voc('trainval', '2007')\n    res = d.roidb\n    from IPython import embed;\n\n    embed()\n"
  },
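`_load_pascal_annotation` above converts VOC's 1-based, inclusive pixel coordinates to 0-based ones by subtracting 1, and `_write_voc_results_file` adds the 1 back when writing detections for the devkit. A self-contained sketch of that parsing step (the XML snippet and helper name are illustrative, not taken from a real annotation file):

```python
import xml.etree.ElementTree as ET

VOC_XML = """
<annotation>
  <object>
    <name>dog</name>
    <difficult>0</difficult>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

def parse_boxes(xml_text):
    # Parse objects the way _load_pascal_annotation does: subtract 1 to
    # make VOC's 1-based pixel indexes 0-based, and compute the "seg
    # area" as the inclusive box area (x2 - x1 + 1) * (y2 - y1 + 1).
    root = ET.fromstring(xml_text)
    out = []
    for obj in root.findall('object'):
        b = obj.find('bndbox')
        x1 = float(b.find('xmin').text) - 1
        y1 = float(b.find('ymin').text) - 1
        x2 = float(b.find('xmax').text) - 1
        y2 = float(b.find('ymax').text) - 1
        area = (x2 - x1 + 1) * (y2 - y1 + 1)
        out.append((obj.find('name').text.strip(), (x1, y1, x2, y2), area))
    return out
```

Keeping the two conventions paired matters: evaluating 0-based boxes against the devkit's 1-based ground truth without the round trip would shift every box by one pixel.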
  {
    "path": "lib/datasets/pascal_voc_rbg.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Xinlei Chen\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nfrom datasets.imdb import imdb\nimport scipy.sparse\nimport pickle\nimport subprocess\nimport uuid\nfrom .voc_eval import voc_eval\nfrom model.utils.config import cfg\nimport pdb\nimport os\nimport os.path\nimport sys\nimport torch.utils.data as data\nimport cv2\nimport numpy as np\nif sys.version_info[0] == 2:\n    import xml.etree.cElementTree as ET\nelse:\n    import xml.etree.ElementTree as ET\n\nclass VOCDetection(data.Dataset):\n  \"\"\"VOC Detection Dataset Object\n\n  input is image, target is annotation\n\n  Arguments:\n      root (string): filepath to VOCdevkit folder.\n      image_set (string): imageset to use (eg. 
'train', 'val', 'test')\n      transform (callable, optional): transformation to perform on the\n          input image\n      target_transform (callable, optional): transformation to perform on the\n          target `annotation`\n          (eg: take in caption string, return tensor of word indices)\n      dataset_name (string, optional): which dataset to load\n          (default: 'VOC2007')\n  \"\"\"\n\n  def __init__(self, root, image_sets, img_size, preproc=None, target_transform=None,\n               dataset_name='VOC0712'):\n    self.root = root\n    self.image_set = image_sets\n    self.img_size = img_size\n    self.preproc = preproc\n    self.target_transform = target_transform\n    self.name = dataset_name\n    self._annopath = os.path.join('%s', 'Annotations', '%s.xml')\n    self._imgpath = os.path.join('%s', 'JPEGImages', '%s.jpg')\n    self.ids = list()\n    for (year, name) in image_sets:\n      self._year = year\n      rootpath = os.path.join(self.root, 'VOC' + year)\n      for line in open(os.path.join(rootpath, 'ImageSets', 'Main', name + '.txt')):\n        self.ids.append((rootpath, line.strip()))\n\n  def __getitem__(self, index):\n    img_id = self.ids[index]\n    target = ET.parse(self._annopath % img_id).getroot()\n    img = cv2.imread(self._imgpath % img_id, cv2.IMREAD_COLOR)\n    height, width, _ = img.shape\n    mask = np.zeros((self.img_size, self.img_size), dtype=np.uint8)\n    h, w, _ = img.shape\n    y_ration = float(h) / self.img_size\n    x_ration = float(w) / self.img_size\n    img_resize = cv2.resize(img, (self.img_size, self.img_size))\n    labels = []\n    for obj in target.iter('object'):\n      difficult = int(obj.find('difficult').text) == 1\n      if difficult:\n        continue\n      name = obj.find('name').text.strip()\n      labels.append(name)\n      bbox = obj.find('bndbox')\n      pts = ['xmin', 'ymin', 'xmax', 'ymax']\n      bndbox = []\n\n      for i, pt in enumerate(pts):\n        cur_pt = int(float(bbox.find(pt).text)) 
- 1\n        if i % 2 == 0:\n          cur_pt = int(cur_pt / x_ration)\n          bndbox.append(cur_pt)\n        elif i % 2 == 1:\n          cur_pt = int(cur_pt / y_ration)\n          bndbox.append(cur_pt)\n      # index rows by y and columns by x, matching the mask built in metadata.py\n      mask[bndbox[1]:bndbox[3], bndbox[0]:bndbox[2]] = 1\n    return img_resize, mask, labels\n\n  def __len__(self):\n    return len(self.ids)\n\n\nclass pascal_voc(imdb):\n  def __init__(self, image_set, year, devkit_path=None):\n    imdb.__init__(self, 'voc_' + year + '_' + image_set)\n    self._year = year\n    self._image_set = image_set\n    self._devkit_path = self._get_default_path() if devkit_path is None \\\n      else devkit_path\n\n    \n    self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)\n    self._classes = ('__background__',  # always index 0\n                     'aeroplane', 'bicycle', 'bird', 'boat',\n                     'bottle', 'bus', 'car', 'cat', 'chair',\n                     'cow', 'diningtable', 'dog', 'horse',\n                     'motorbike', 'person', 'pottedplant',\n                     'sheep', 'sofa', 'train', 'tvmonitor')\n    self._class_to_ind = dict(list(zip(self.classes, list(range(self.num_classes)))))\n    self._image_ext = '.jpg'\n    self._image_index = self._load_image_set_index()\n    # Default to roidb handler\n    self._roidb_handler = self.gt_roidb\n    self._salt = str(uuid.uuid4())\n    self._comp_id = 'comp4'\n\n    # PASCAL specific config options\n    self.config = {'cleanup': True,\n                   'use_salt': True,\n                   'use_diff': False,\n                   'matlab_eval': False,\n                   'rpn_file': None}\n\n    assert os.path.exists(self._devkit_path), \\\n      'VOCdevkit path does not exist: {}'.format(self._devkit_path)\n    assert os.path.exists(self._data_path), \\\n      'Path does not exist: {}'.format(self._data_path)\n\n  def image_path_at(self, i):\n    \"\"\"\n    Return the absolute path to image i in the image sequence.\n    \"\"\"\n    return 
self.image_path_from_index(self._image_index[i])\n\n  def image_path_from_index(self, index):\n    \"\"\"\n    Construct an image path from the image's \"index\" identifier.\n    \"\"\"\n    image_path = os.path.join(self._data_path, 'JPEGImages',\n                              index + self._image_ext)\n    assert os.path.exists(image_path), \\\n      'Path does not exist: {}'.format(image_path)\n    return image_path\n\n  def _load_image_set_index(self):\n    \"\"\"\n    Load the indexes listed in this dataset's image set file.\n    \"\"\"\n    # Example path to image set file:\n    # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt\n    image_set_file = os.path.join(self._data_path, 'ImageSets', 'Main',\n                                  self._image_set + '.txt')\n    \n    assert os.path.exists(image_set_file), \\\n      'Path does not exist: {}'.format(image_set_file)\n    with open(image_set_file) as f:\n      image_index = [x.strip() for x in f.readlines()]\n    return image_index\n\n  def _get_default_path(self):\n    \"\"\"\n    Return the default path where PASCAL VOC is expected to be installed.\n    \"\"\"\n    return os.path.join(cfg.DATA_DIR, 'VOCdevkit' + self._year)\n\n  def gt_roidb(self):\n    \"\"\"\n    Return the database of ground-truth regions of interest.\n\n    This function loads/saves from/to a cache file to speed up future calls.\n    \"\"\"\n    cache_file = os.path.join(self.cache_path, self.name + '_gt_roidb.pkl')\n    if os.path.exists(cache_file):\n      with open(cache_file, 'rb') as fid:\n        try:\n          roidb = pickle.load(fid)\n        except:\n          roidb = pickle.load(fid, encoding='bytes')\n      print('{} gt roidb loaded from {}'.format(self.name, cache_file))\n      return roidb\n\n    gt_roidb = [self._load_pascal_annotation(index)\n                for index in self.image_index]\n    with open(cache_file, 'wb') as fid:\n      pickle.dump(gt_roidb, fid, pickle.HIGHEST_PROTOCOL)\n    print('wrote 
gt roidb to {}'.format(cache_file))\n\n    return gt_roidb\n\n  def rpn_roidb(self):\n    if int(self._year) == 2007 or self._image_set != 'test':\n      gt_roidb = self.gt_roidb()\n      rpn_roidb = self._load_rpn_roidb(gt_roidb)\n      roidb = imdb.merge_roidbs(gt_roidb, rpn_roidb)\n    else:\n      roidb = self._load_rpn_roidb(None)\n\n    return roidb\n\n  def _load_rpn_roidb(self, gt_roidb):\n    filename = self.config['rpn_file']\n    print('loading {}'.format(filename))\n    assert os.path.exists(filename), \\\n      'rpn data not found at: {}'.format(filename)\n    with open(filename, 'rb') as f:\n      box_list = pickle.load(f)\n    return self.create_roidb_from_box_list(box_list, gt_roidb)\n\n  def _load_pascal_annotation(self, index):\n    \"\"\"\n    Load image and bounding boxes info from XML file in the PASCAL VOC\n    format.\n    \"\"\"\n    filename = os.path.join(self._data_path, 'Annotations', index + '.xml')\n    tree = ET.parse(filename)\n    objs = tree.findall('object')\n    if not self.config['use_diff']:\n      # Exclude the samples labeled as difficult\n      non_diff_objs = [\n        obj for obj in objs if int(obj.find('difficult').text) == 0]\n      # if len(non_diff_objs) != len(objs):\n      #     print 'Removed {} difficult objects'.format(\n      #         len(objs) - len(non_diff_objs))\n      objs = non_diff_objs\n    num_objs = len(objs)\n\n    boxes = np.zeros((num_objs, 4), dtype=np.uint16)\n    gt_classes = np.zeros((num_objs), dtype=np.int32)\n    overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)\n    # \"Seg\" area for pascal is just the box area\n    seg_areas = np.zeros((num_objs), dtype=np.float32)\n\n    # Load object bounding boxes into a data frame.\n    for ix, obj in enumerate(objs):\n      bbox = obj.find('bndbox')\n      # Make pixel indexes 0-based\n      x1 = float(bbox.find('xmin').text) - 1\n      y1 = float(bbox.find('ymin').text) - 1\n      x2 = float(bbox.find('xmax').text) - 1\n      y2 = 
float(bbox.find('ymax').text) - 1\n      cls = self._class_to_ind[obj.find('name').text.lower().strip()]\n      boxes[ix, :] = [x1, y1, x2, y2]\n      gt_classes[ix] = cls\n      overlaps[ix, cls] = 1.0\n      seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)\n\n    overlaps = scipy.sparse.csr_matrix(overlaps)\n\n    return {'boxes': boxes,\n            'gt_classes': gt_classes,\n            'gt_overlaps': overlaps,\n            'flipped': False,\n            'seg_areas': seg_areas}\n\n  def _get_comp_id(self):\n    comp_id = (self._comp_id + '_' + self._salt if self.config['use_salt']\n               else self._comp_id)\n    return comp_id\n\n  def _get_voc_results_file_template(self):\n    # VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt\n    filename = self._get_comp_id() + '_det_' + self._image_set + '_{:s}.txt'\n    path = os.path.join(\n      self._devkit_path,\n      'results',\n      'VOC' + self._year,\n      'Main',\n      filename)\n    return path\n\n  def _write_voc_results_file(self, all_boxes):\n    for cls_ind, cls in enumerate(self.classes):\n      if cls == '__background__':\n        continue\n      print('Writing {} VOC results file'.format(cls))\n      filename = self._get_voc_results_file_template().format(cls)\n      with open(filename, 'wt') as f:\n        for im_ind, index in enumerate(self.image_index):\n          dets = all_boxes[cls_ind][im_ind]\n          if dets == []:\n            continue\n          # the VOCdevkit expects 1-based indices\n          for k in range(dets.shape[0]):\n            f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\\n'.\n                    format(index, dets[k, -1],\n                           dets[k, 0] + 1, dets[k, 1] + 1,\n                           dets[k, 2] + 1, dets[k, 3] + 1))\n\n  def _do_python_eval(self, output_dir='output'):\n    annopath = os.path.join(\n      self._devkit_path,\n      'VOC' + self._year,\n      'Annotations',\n      '{:s}.xml')\n    imagesetfile = os.path.join(\n   
   self._devkit_path,\n      'VOC' + self._year,\n      'ImageSets',\n      'Main',\n      self._image_set + '.txt')\n    cachedir = os.path.join(self._devkit_path, 'annotations_cache')\n    aps = []\n    # The PASCAL VOC metric changed in 2010\n    use_07_metric = True if int(self._year) < 2010 else False\n    print('VOC07 metric? ' + ('Yes' if use_07_metric else 'No'))\n    if not os.path.isdir(output_dir):\n      os.mkdir(output_dir)\n    for i, cls in enumerate(self._classes):\n      if cls == '__background__':\n        continue\n      filename = self._get_voc_results_file_template().format(cls)\n      rec, prec, ap = voc_eval(\n        filename, annopath, imagesetfile, cls, cachedir, ovthresh=0.5,\n        use_07_metric=use_07_metric)\n      aps += [ap]\n      print(('AP for {} = {:.4f}'.format(cls, ap)))\n      with open(os.path.join(output_dir, cls + '_pr.pkl'), 'wb') as f:\n        pickle.dump({'rec': rec, 'prec': prec, 'ap': ap}, f)\n    print(('Mean AP = {:.4f}'.format(np.mean(aps))))\n    print('~~~~~~~~')\n    print('Results:')\n    for ap in aps:\n      print(('{:.3f}'.format(ap)))\n    print(('{:.3f}'.format(np.mean(aps))))\n    print('~~~~~~~~')\n    print('')\n    print('--------------------------------------------------------------')\n    print('Results computed with the **unofficial** Python eval code.')\n    print('Results should be very close to the official MATLAB eval code.')\n    print('Recompute with `./tools/reval.py --matlab ...` for your paper.')\n    print('-- Thanks, The Management')\n    print('--------------------------------------------------------------')\n\n  def _do_matlab_eval(self, output_dir='output'):\n    print('-----------------------------------------------------')\n    print('Computing results with the official MATLAB eval code.')\n    print('-----------------------------------------------------')\n    path = os.path.join(cfg.ROOT_DIR, 'lib', 'datasets',\n                        'VOCdevkit-matlab-wrapper')\n    cmd = 'cd 
{} && '.format(path)\n    cmd += '{:s} -nodisplay -nodesktop '.format(cfg.MATLAB)\n    cmd += '-r \"dbstop if error; '\n    cmd += 'voc_eval(\\'{:s}\\',\\'{:s}\\',\\'{:s}\\',\\'{:s}\\'); quit;\"' \\\n      .format(self._devkit_path, self._get_comp_id(),\n              self._image_set, output_dir)\n    print(('Running:\\n{}'.format(cmd)))\n    status = subprocess.call(cmd, shell=True)\n\n  def evaluate_detections(self, all_boxes, output_dir):\n    self._write_voc_results_file(all_boxes)\n    self._do_python_eval(output_dir)\n    if self.config['matlab_eval']:\n      self._do_matlab_eval(output_dir)\n    if self.config['cleanup']:\n      for cls in self._classes:\n        if cls == '__background__':\n          continue\n        filename = self._get_voc_results_file_template().format(cls)\n        os.remove(filename)\n\n  def competition_mode(self, on):\n    if on:\n      self.config['use_salt'] = False\n      self.config['cleanup'] = False\n    else:\n      self.config['use_salt'] = True\n      self.config['cleanup'] = True\n\n\nif __name__ == '__main__':\n  from datasets.pascal_voc import pascal_voc\n\n  d = pascal_voc('trainval', '2007')\n  res = d.roidb\n  from IPython import embed\n\n  embed()\n"
  },
  {
    "path": "lib/datasets/tools/compute_prior.py",
    "content": "import numpy as np\nimport pickle\nimport os\nimport sys\n\nNUM_ATTR_REL = 200\ndef cout_w(prob, num=NUM_ATTR_REL,dim=1):\n    prob_weight = prob[:, :num]\n    sum_value = np.sum(prob_weight, keepdims=True, axis=dim) + 0.1\n    prob_weight = prob_weight / np.repeat(sum_value, prob_weight.shape[dim], axis=dim)\n    return prob_weight\n\ndef cp_kl(a, b):\n    # compute kl diverse\n    if np.sum(a) == 0 or np.sum(b) == 0:\n        return 1\n    sum_ = a * np.log(a / b)\n    all_value = [x for x in sum_ if str(x) != 'nan' and str(x) != 'inf']\n    kl = np.sum(all_value)\n    return kl\n\ndef compute_js(attr_prob):\n    cls_num = attr_prob.shape[0]\n    similarity = np.zeros((cls_num, cls_num))\n    similarity[0, 1:] = 1\n    similarity[1:, 0] = 1\n    for i in range(1, cls_num):\n        if i % 50 == 0:\n            print('had proccessed {} cls...\\n'.format(i))\n        for j in range(1, cls_num):\n            if i == j:\n                similarity[i,j] = 0\n            else:\n                similarity[i,j] = 0.5 * (cp_kl(attr_prob[i, :], 0.5*(attr_prob[i, :] + attr_prob[j,:]))\n                                         + cp_kl(attr_prob[j, :], 0.5*(attr_prob[i, :] + attr_prob[j, :])))\n    return similarity\n\nif __name__=='__main__':\n    data_path = '/data/VisualGenome/graph/'\n    dim_ = 1000\n    ## Compute attribute knowledge by JS-diversion\n    graph_a = pickle.load(open(data_path + 'vg_attr_frequency_1000.pkl', 'rb'))\n\n    ## You can get part of graph_a and match name with your datasets\n    #  We give an example of compute graph of VisualGenome with 1000 classes\n    #  first line of graph_a is background\n    graph_a = cout_w(graph_a, num=len(graph_a))\n    graph_a = compute_js(graph_a)\n    graph_a = 1 - graph_a\n    pickle.dump(graph_a, open(data_path + 'vg_graph_a.pkl', 'wb'))\n\n    ## Compute relation knowledge\n    graph_r = pickle.load(open(data_path + 'vg_pair_frequency_1000.pkl', 'rb'))\n    ## You can get part of graph_a and 
match name with your datasets\n    #  We give an example of compute graph of VisualGenome with 1000 classes\n    relation_matrix = np.zeros((dim_, dim_))\n    relation_matrix = graph_r + graph_r.transpose()\n    relation_matrix_row_sum = relation_matrix.sum(1)\n    for i in range(dim_):\n        relation_matrix[i, i] = relation_matrix_row_sum[i] + 1.\n    prob_relation_matrix = np.zeros((dim_, dim_))\n    for i in range(dim_):\n        for j in range(dim_):\n            prob_relation_matrix[i, j] = relation_matrix[i, j] / (\n                        np.sqrt(relation_matrix[i, i]) * np.sqrt(relation_matrix[j, j]))\n    prob_relation_matrix_ba = np.zeros((dim_ + 1, dim_ + 1))\n    prob_relation_matrix_ba[1:, 1:] = prob_relation_matrix\n    print(prob_relation_matrix_ba.shape)\n    pickle.dump(prob_relation_matrix_ba, open(data_path + 'vg_graph_r.pkl', 'wb'))\n"
  },
  {
    "path": "lib/datasets/tools/mcg_munge.py",
    "content": "from __future__ import print_function\nimport os\nimport sys\n\n\"\"\"Hacky tool to convert file system layout of MCG boxes downloaded from\nhttp://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/\nso that it's consistent with those computed by Jan Hosang (see:\nhttp://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-\n  computing/research/object-recognition-and-scene-understanding/how-\n  good-are-detection-proposals-really/)\n\nNB: Boxes from the MCG website are in (y1, x1, y2, x2) order.\nBoxes from Hosang et al. are in (x1, y1, x2, y2) order.\n\"\"\"\n\ndef munge(src_dir):\n    # stored as: ./MCG-COCO-val2014-boxes/COCO_val2014_000000193401.mat\n    # want:      ./MCG/mat/COCO_val2014_0/COCO_val2014_000000141/COCO_val2014_000000141334.mat\n\n    files = os.listdir(src_dir)\n    for fn in files:\n        base, ext = os.path.splitext(fn)\n        # first 14 chars / first 22 chars / all chars + .mat\n        # COCO_val2014_0/COCO_val2014_000000447/COCO_val2014_000000447991.mat\n        first = base[:14]\n        second = base[:22]\n        dst_dir = os.path.join('MCG', 'mat', first, second)\n        if not os.path.exists(dst_dir):\n            os.makedirs(dst_dir)\n        src = os.path.join(src_dir, fn)\n        dst = os.path.join(dst_dir, fn)\n        print('MV: {} -> {}'.format(src, dst))\n        os.rename(src, dst)\n\nif __name__ == '__main__':\n    # src_dir should look something like:\n    #  src_dir = 'MCG-COCO-val2014-boxes'\n    src_dir = sys.argv[1]\n    munge(src_dir)\n"
  },
  {
    "path": "lib/datasets/voc_eval.py",
    "content": "# --------------------------------------------------------\n# Fast/er R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Bharath Hariharan\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport xml.etree.ElementTree as ET\nimport os\nimport pickle\nimport numpy as np\n\ndef parse_rec(filename):\n  \"\"\" Parse a PASCAL VOC xml file \"\"\"\n  tree = ET.parse(filename)\n  objects = []\n  for obj in tree.findall('object'):\n    obj_struct = {}\n    obj_struct['name'] = obj.find('name').text\n    obj_struct['pose'] = obj.find('pose').text\n    obj_struct['truncated'] = int(obj.find('truncated').text)\n    obj_struct['difficult'] = int(obj.find('difficult').text)\n    bbox = obj.find('bndbox')\n    obj_struct['bbox'] = [int(bbox.find('xmin').text),\n                          int(bbox.find('ymin').text),\n                          int(bbox.find('xmax').text),\n                          int(bbox.find('ymax').text)]\n    objects.append(obj_struct)\n\n  return objects\n\n\ndef voc_ap(rec, prec, use_07_metric=False):\n  \"\"\" ap = voc_ap(rec, prec, [use_07_metric])\n  Compute VOC AP given precision and recall.\n  If use_07_metric is true, uses the\n  VOC 07 11 point method (default:False).\n  \"\"\"\n  if use_07_metric:\n    # 11 point metric\n    ap = 0.\n    for t in np.arange(0., 1.1, 0.1):\n      if np.sum(rec >= t) == 0:\n        p = 0\n      else:\n        p = np.max(prec[rec >= t])\n      ap = ap + p / 11.\n  else:\n    # correct AP calculation\n    # first append sentinel values at the end\n    mrec = np.concatenate(([0.], rec, [1.]))\n    mpre = np.concatenate(([0.], prec, [0.]))\n\n    # compute the precision envelope\n    for i in range(mpre.size - 1, 0, -1):\n      mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])\n\n    # to calculate area under PR curve, look for points\n    # where X axis 
(recall) changes value\n    i = np.where(mrec[1:] != mrec[:-1])[0]\n\n    # and sum (\\Delta recall) * prec\n    ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])\n  return ap\n\n\ndef voc_eval(detpath,\n             annopath,\n             imagesetfile,\n             classname,\n             cachedir,\n             ovthresh=0.5,\n             use_07_metric=False):\n  \"\"\"rec, prec, ap = voc_eval(detpath,\n                              annopath,\n                              imagesetfile,\n                              classname,\n                              [ovthresh],\n                              [use_07_metric])\n\n  Top level function that does the PASCAL VOC evaluation.\n\n  detpath: Path to detections\n      detpath.format(classname) should produce the detection results file.\n  annopath: Path to annotations\n      annopath.format(imagename) should be the xml annotations file.\n  imagesetfile: Text file containing the list of images, one image per line.\n  classname: Category name (duh)\n  cachedir: Directory for caching the annotations\n  [ovthresh]: Overlap threshold (default = 0.5)\n  [use_07_metric]: Whether to use VOC07's 11 point AP computation\n      (default False)\n  \"\"\"\n  # assumes detections are in detpath.format(classname)\n  # assumes annotations are in annopath.format(imagename)\n  # assumes imagesetfile is a text file with each line an image name\n  # cachedir caches the annotations in a pickle file\n\n  # first load gt\n  if not os.path.isdir(cachedir):\n    os.mkdir(cachedir)\n  cachefile = os.path.join(cachedir, 'annots.pkl')\n\n  # read list of images\n  with open(imagesetfile, 'r') as f:\n    lines = f.readlines()\n  imagenames = [x.strip() for x in lines]\n\n  if not os.path.isfile(cachefile):\n    # load annotations\n    recs = {}\n    for i, imagename in enumerate(imagenames):\n      recs[imagename] = parse_rec(annopath.format(imagename))\n      if i % 100 == 0:\n        print('Reading annotation for {:d}/{:d}'.format(\n     
     i + 1, len(imagenames)))\n    # save\n    print('Saving cached annotations to {:s}'.format(cachefile))\n    with open(cachefile, 'wb') as f:\n      pickle.dump(recs, f)\n  else:\n    # load\n    with open(cachefile, 'rb') as f:\n      try:\n        recs = pickle.load(f)\n      except:\n        recs = pickle.load(f, encoding='bytes')\n\n  # extract gt objects for this class\n  class_recs = {}\n  npos = 0\n  for imagename in imagenames:\n    R = [obj for obj in recs[imagename] if obj['name'] == classname]\n    bbox = np.array([x['bbox'] for x in R])\n    difficult = np.array([x['difficult'] for x in R]).astype(np.bool)\n    det = [False] * len(R)\n    npos = npos + sum(~difficult)\n    class_recs[imagename] = {'bbox': bbox,\n                             'difficult': difficult,\n                             'det': det}\n\n  # read dets\n  detfile = detpath.format(classname)\n  with open(detfile, 'r') as f:\n    lines = f.readlines()\n\n  if len(lines) == 0:\n    # no detections for this class; callers unpack (rec, prec, ap)\n    return 0., 0., 0.\n\n  splitlines = [x.strip().split(' ') for x in lines]\n  image_ids = [x[0] for x in splitlines]\n  confidence = np.array([float(x[1]) for x in splitlines])\n  BB = np.array([[float(z) for z in x[2:]] for x in splitlines])\n\n  nd = len(image_ids)\n  tp = np.zeros(nd)\n  fp = np.zeros(nd)\n\n  if BB.shape[0] > 0:\n    # sort by confidence\n    sorted_ind = np.argsort(-confidence)\n    sorted_scores = np.sort(-confidence)\n    BB = BB[sorted_ind, :]\n    image_ids = [image_ids[x] for x in sorted_ind]\n\n    # go down dets and mark TPs and FPs\n    for d in range(nd):\n      R = class_recs[image_ids[d]]\n      bb = BB[d, :].astype(float)\n      ovmax = -np.inf\n      BBGT = R['bbox'].astype(float)\n\n      if BBGT.size > 0:\n        # compute overlaps\n        # intersection\n        ixmin = np.maximum(BBGT[:, 0], bb[0])\n        iymin = np.maximum(BBGT[:, 1], bb[1])\n        ixmax = np.minimum(BBGT[:, 2], bb[2])\n        iymax = 
np.minimum(BBGT[:, 3], bb[3])\n        iw = np.maximum(ixmax - ixmin + 1., 0.)\n        ih = np.maximum(iymax - iymin + 1., 0.)\n        inters = iw * ih\n\n        # union\n        uni = ((bb[2] - bb[0] + 1.) * (bb[3] - bb[1] + 1.) +\n               (BBGT[:, 2] - BBGT[:, 0] + 1.) *\n               (BBGT[:, 3] - BBGT[:, 1] + 1.) - inters)\n\n        overlaps = inters / uni\n        ovmax = np.max(overlaps)\n        jmax = np.argmax(overlaps)\n\n      if ovmax > ovthresh:\n        if not R['difficult'][jmax]:\n          if not R['det'][jmax]:\n            tp[d] = 1.\n            R['det'][jmax] = 1\n          else:\n            fp[d] = 1.\n      else:\n        fp[d] = 1.\n\n  # compute precision recall\n  fp = np.cumsum(fp)\n  tp = np.cumsum(tp)\n  rec = tp / float(npos)\n  # avoid divide by zero in case the first detection matches a difficult\n  # ground truth\n  prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)\n  ap = voc_ap(rec, prec, use_07_metric)\n\n  return rec, prec, ap\n"
  },
  {
    "path": "lib/model/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/faster_rcnn/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/faster_rcnn/faster_rcnn.py",
    "content": "# --------------------------------------------------------\n# Pytorch Meta R-CNN\n# Written by Anny Xu, Xiaopeng Yan, based on the code from Jianwei Yang\n# --------------------------------------------------------\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport torchvision.models as models\nfrom torch.autograd import Variable\nimport numpy as np\nfrom model.utils.config import cfg\nfrom model.rpn.rpn import _RPN\nfrom model.roi_pooling.modules.roi_pool import _RoIPooling\nfrom model.roi_crop.modules.roi_crop import _RoICrop\nfrom model.roi_align.modules.roi_align import RoIAlignAvg\nfrom model.rpn.proposal_target_layer_cascade import _ProposalTargetLayer\nimport time\nimport pdb\nfrom model.utils.net_utils import _smooth_l1_loss, _crop_pool_layer, _affine_grid_gen, _affine_theta\nimport pickle\n\n\nclass _fasterRCNN(nn.Module):\n    \"\"\" faster RCNN \"\"\"\n\n    def __init__(self, classes, class_agnostic, meta_train, meta_test=None, meta_loss=None):\n        super(_fasterRCNN, self).__init__()\n        self.classes = classes\n        self.n_classes = len(classes)\n        self.class_agnostic = class_agnostic\n        self.meta_train = meta_train\n        self.meta_test = meta_test\n        self.meta_loss = meta_loss\n        # loss\n        self.RCNN_loss_cls = 0\n        self.RCNN_loss_bbox = 0\n\n        # define rpn\n        self.RCNN_rpn = _RPN(self.dout_base_model)\n        self.RCNN_proposal_target = _ProposalTargetLayer(self.n_classes)\n        self.RCNN_roi_pool = _RoIPooling(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0 / 16.0)\n        self.RCNN_roi_align = RoIAlignAvg(cfg.POOLING_SIZE, cfg.POOLING_SIZE, 1.0 / 16.0)\n\n        self.grid_size = cfg.POOLING_SIZE * 2 if cfg.CROP_RESIZE_WITH_MAX_POOL else cfg.POOLING_SIZE\n        self.RCNN_roi_crop = _RoICrop()\n\n\n    def forward(self, im_data_list, im_info_list, gt_boxes_list, num_boxes_list, 
average_shot=None,\n                mean_class_attentions=None):\n        # return attentions for testing\n        if average_shot:\n            prn_data = im_data_list[0]  # len(metaclass)*4*224*224\n            attentions = self.prn_network(prn_data)\n            return attentions\n        # extract attentions for training\n        if self.meta_train and self.training:\n            prn_data = im_data_list[0]  # len(metaclass)*4*224*224\n            # feed prn data to prn_network\n            attentions = self.prn_network(prn_data)\n            prn_cls = im_info_list[0]  # len(metaclass)\n\n        im_data = im_data_list[-1]\n        im_info = im_info_list[-1]\n        gt_boxes = gt_boxes_list[-1]\n        num_boxes = num_boxes_list[-1]\n\n        batch_size = im_data.size(0)\n        im_info = im_info.data\n        gt_boxes = gt_boxes.data\n        num_boxes = num_boxes.data\n\n        # feed image data to base model to obtain base feature map\n        base_feat = self.RCNN_base(self.rcnn_conv1(im_data))\n\n        # feed base feature map to the RPN to obtain rois\n        rois, rpn_loss_cls, rpn_loss_bbox = self.RCNN_rpn(base_feat, im_info, gt_boxes, num_boxes)\n\n        # in the training phase, use ground truth bboxes for refining\n        if self.training:\n            roi_data = self.RCNN_proposal_target(rois, gt_boxes, num_boxes)\n            rois, rois_label, rois_target, rois_inside_ws, rois_outside_ws = roi_data\n            rois_label = Variable(rois_label.view(-1).long())\n            rois_target = Variable(rois_target.view(-1, rois_target.size(2)))\n            rois_inside_ws = Variable(rois_inside_ws.view(-1, rois_inside_ws.size(2)))\n            rois_outside_ws = Variable(rois_outside_ws.view(-1, rois_outside_ws.size(2)))\n        else:\n            rois_label = None\n            rois_target = None\n            rois_inside_ws = None\n            rois_outside_ws = None\n            rpn_loss_cls = 0\n            rpn_loss_bbox = 0\n\n        rois = 
Variable(rois)\n\n        # do roi pooling based on predicted rois\n        if cfg.POOLING_MODE == 'crop':\n            # pooled_feat_anchor = _crop_pool_layer(base_feat, rois.view(-1, 5))\n            grid_xy = _affine_grid_gen(rois.view(-1, 5), base_feat.size()[2:], self.grid_size)\n            grid_yx = torch.stack([grid_xy.data[:, :, :, 1], grid_xy.data[:, :, :, 0]], 3).contiguous()\n            pooled_feat = self.RCNN_roi_crop(base_feat, Variable(grid_yx).detach())\n            if cfg.CROP_RESIZE_WITH_MAX_POOL:\n                pooled_feat = F.max_pool2d(pooled_feat, 2, 2)\n        elif cfg.POOLING_MODE == 'align':\n            pooled_feat = self.RCNN_roi_align(base_feat, rois.view(-1, 5))  # (b*128)*1024*7*7\n        elif cfg.POOLING_MODE == 'pool':\n            pooled_feat = self.RCNN_roi_pool(base_feat, rois.view(-1, 5))\n\n        # feed pooled features to top model\n        pooled_feat = self._head_to_tail(pooled_feat)  # (b*128)*2048\n\n        # meta training phase\n        if self.meta_train:\n            rcnn_loss_cls = []\n            rcnn_loss_bbox = []\n            # pooled feature maps need to operate channel-wise multiplication with the corresponding class's attentions of every roi of image\n            for b in range(batch_size):\n                zero = Variable(torch.FloatTensor([0]).cuda())\n                proposal_labels = rois_label[b * 128:(b + 1) * 128].data.cpu().numpy()[0]\n                unique_labels = list(np.unique(proposal_labels)) # the unique rois labels of the input image\n                for i in range(attentions.size(0)):  # attentions len(attentions)*2048\n                    if prn_cls[i].numpy()[0] + 1 not in unique_labels:\n                        rcnn_loss_cls.append(zero)\n                        rcnn_loss_bbox.append(zero)\n                        continue\n                    channel_wise_feat = pooled_feat[b * cfg.TRAIN.BATCH_SIZE:(b + 1) * cfg.TRAIN.BATCH_SIZE, :] * \\\n                                        
attentions[i]  # 128x2048 channel-wise multiplication\n                    bbox_pred = self.RCNN_bbox_pred(channel_wise_feat)  # 128 * 4\n                    if self.training and not self.class_agnostic:\n                        # select the corresponding columns according to roi labels\n                        rois_label_b = rois_label[b * cfg.TRAIN.BATCH_SIZE:(b + 1) * cfg.TRAIN.BATCH_SIZE]\n                        bbox_pred_view = bbox_pred.view(bbox_pred.size(0), int(bbox_pred.size(1) / 4), 4)\n                        bbox_pred_select = torch.gather(\n                            bbox_pred_view, 1,\n                            rois_label_b.view(rois_label_b.size(0), 1, 1).expand(rois_label_b.size(0), 1, 4))\n                        bbox_pred = bbox_pred_select.squeeze(1)\n                    # compute object classification probability\n                    cls_score = self.RCNN_cls_score(channel_wise_feat)  # 128 * 21\n\n                    if self.training:\n                        # classification loss\n                        RCNN_loss_cls = F.cross_entropy(cls_score, rois_label[b * 128:(b + 1) * 128])\n                        rcnn_loss_cls.append(RCNN_loss_cls)\n                        # bounding box regression L1 loss\n                        RCNN_loss_bbox = _smooth_l1_loss(bbox_pred, rois_target[b * 128:(b + 1) * 128],\n
                                            rois_inside_ws[b * 128:(b + 1) * 128],\n                                                         rois_outside_ws[b * 128:(b + 1) * 128])\n\n                        rcnn_loss_bbox.append(RCNN_loss_bbox)\n            # meta attentions loss\n            if self.meta_loss:\n                attentions_score = self.Meta_cls_score(attentions)\n                meta_loss = F.cross_entropy(attentions_score, Variable(torch.cat(prn_cls,dim=0).cuda()))\n            else:\n                meta_loss = 0\n\n            return rois, rpn_loss_cls, rpn_loss_bbox, rcnn_loss_cls, rcnn_loss_bbox, rois_label, 0, 0, meta_loss\n\n        elif self.meta_test:\n            cls_prob_list = []\n            bbox_pred_list = []\n            for i in range(len(mean_class_attentions)):\n                mean_attentions = mean_class_attentions[i]\n                channel_wise_feat = pooled_feat * mean_attentions\n                # compute bbox offset\n                bbox_pred = self.RCNN_bbox_pred(channel_wise_feat)\n                if self.training and not self.class_agnostic:\n                    # select the corresponding columns according to roi labels\n                    bbox_pred_view = bbox_pred.view(bbox_pred.size(0), int(bbox_pred.size(1) / 4), 4)\n                    bbox_pred_select = torch.gather(bbox_pred_view, 1,\n                                                    rois_label.view(rois_label.size(0), 1, 1).expand(rois_label.size(0),\n                                                                                                     1, 4))\n                    bbox_pred = bbox_pred_select.squeeze(1)\n\n                # compute object classification probability\n                cls_score = self.RCNN_cls_score(channel_wise_feat)\n                cls_prob = F.softmax(cls_score)\n\n                RCNN_loss_cls = 0\n                RCNN_loss_bbox = 0\n\n                if self.training:\n                    # classification loss\n              
      RCNN_loss_cls = F.cross_entropy(cls_score, rois_label)\n                    # bounding box regression L1 loss\n                    RCNN_loss_bbox = _smooth_l1_loss(bbox_pred, rois_target, rois_inside_ws, rois_outside_ws)\n\n                cls_prob = cls_prob.view(batch_size, rois.size(1), -1)\n                bbox_pred = bbox_pred.view(batch_size, rois.size(1), -1)\n                cls_prob_list.append(cls_prob)\n                bbox_pred_list.append(bbox_pred)\n\n            return rois, rpn_loss_cls, rpn_loss_bbox, RCNN_loss_cls, RCNN_loss_bbox, rois_label, cls_prob_list, bbox_pred_list, 0\n        else:\n            bbox_pred = self.RCNN_bbox_pred(pooled_feat)\n            if self.training and not self.class_agnostic:\n                # select the corresponding columns according to roi labels\n                bbox_pred_view = bbox_pred.view(bbox_pred.size(0), int(bbox_pred.size(1) / 4), 4)\n                bbox_pred_select = torch.gather(bbox_pred_view, 1,\n                                                rois_label.view(rois_label.size(0), 1, 1).expand(rois_label.size(0), 1,\n                                                                                                 4))\n                bbox_pred = bbox_pred_select.squeeze(1)\n\n            # compute object classification probability\n            cls_score = self.RCNN_cls_score(pooled_feat)  # 128 * 1001\n            cls_prob = F.softmax(cls_score)\n\n            RCNN_loss_cls = 0\n            RCNN_loss_bbox = 0\n\n            if self.training:\n                # classification loss\n                RCNN_loss_cls = F.cross_entropy(cls_score, rois_label)\n\n                # bounding box regression L1 loss\n                RCNN_loss_bbox = _smooth_l1_loss(bbox_pred, rois_target, rois_inside_ws, rois_outside_ws)\n\n            cls_prob = cls_prob.view(batch_size, rois.size(1), -1)\n            bbox_pred = bbox_pred.view(batch_size, rois.size(1), -1)\n\n        return rois, rpn_loss_cls, rpn_loss_bbox, 
RCNN_loss_cls, RCNN_loss_bbox, rois_label, cls_prob, bbox_pred, 0\n\n    def _init_weights(self):\n        def normal_init(m, mean, stddev, truncated=False):\n            \"\"\"\n            weight initalizer: truncated normal and random normal.\n            \"\"\"\n            # x is a parameter\n            if truncated:\n                m.weight.data.normal_().fmod_(2).mul_(stddev).add_(mean)  # not a perfect approximation\n            else:\n                m.weight.data.normal_(mean, stddev)\n                m.bias.data.zero_()\n\n        normal_init(self.RCNN_rpn.RPN_Conv, 0, 0.01, cfg.TRAIN.TRUNCATED)\n        normal_init(self.RCNN_rpn.RPN_cls_score, 0, 0.01, cfg.TRAIN.TRUNCATED)\n        normal_init(self.RCNN_rpn.RPN_bbox_pred, 0, 0.01, cfg.TRAIN.TRUNCATED)\n        normal_init(self.RCNN_cls_score, 0, 0.01, cfg.TRAIN.TRUNCATED)\n        normal_init(self.RCNN_bbox_pred, 0, 0.001, cfg.TRAIN.TRUNCATED)\n\n    def create_architecture(self):\n        self._init_modules()\n        self._init_weights()\n"
  },
  {
    "path": "lib/model/faster_rcnn/resnet.py",
    "content": "# --------------------------------------------------------\n# Pytorch Meta R-CNN\n# Written by Anny Xu, Xiaopeng Yan, based on code from Jianwei Yang\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom model.utils.config import cfg\nfrom model.faster_rcnn.faster_rcnn import _fasterRCNN\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.nn import init\nimport math\nimport torch.utils.model_zoo as model_zoo\nimport pdb\n\n__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',\n       'resnet152']\n\n\nmodel_urls = {\n  'resnet18': 'https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth',\n  'resnet34': 'https://s3.amazonaws.com/pytorch/models/resnet34-333f7ec4.pth',\n  'resnet50': 'https://s3.amazonaws.com/pytorch/models/resnet50-19c8e357.pth',\n  'resnet101': 'https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth',\n  'resnet152': 'https://s3.amazonaws.com/pytorch/models/resnet152-b121ed2d.pth',\n}\n\ndef conv3x3(in_planes, out_planes, stride=1):\n  \"3x3 convolution with padding\"\n  return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n           padding=1, bias=False)\n\ndef init_conv(conv,glu=True):\n  init.xavier_uniform(conv.weight)\n  if conv.bias is not None:\n    conv.bias.data.zero_()\n\ndef init_linear(linear):\n  init.constant(linear.weight,0)\n  init.constant(linear.bias, 1)\n\nclass BasicBlock(nn.Module):\n  expansion = 1\n\n  def __init__(self, inplanes, planes, stride=1, downsample=None):\n    super(BasicBlock, self).__init__()\n    self.conv1 = conv3x3(inplanes, planes, stride)\n    self.bn1 = nn.BatchNorm2d(planes)\n    self.relu = nn.ReLU(inplace=True)\n    self.conv2 = conv3x3(planes, planes)\n    self.bn2 = nn.BatchNorm2d(planes)\n    self.downsample = downsample\n    self.stride = 
stride\n\n  def forward(self, x):\n    residual = x\n\n    out = self.conv1(x)\n    out = self.bn1(out)\n    out = self.relu(out)\n\n    out = self.conv2(out)\n    out = self.bn2(out)\n\n    if self.downsample is not None:\n      residual = self.downsample(x)\n\n    out += residual\n    out = self.relu(out)\n\n    return out\n\n\nclass Bottleneck(nn.Module):\n  expansion = 4\n\n  def __init__(self, inplanes, planes, stride=1, downsample=None):\n    super(Bottleneck, self).__init__()\n    self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False) # change\n    self.bn1 = nn.BatchNorm2d(planes)\n    self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, # change\n                 padding=1, bias=False)\n    self.bn2 = nn.BatchNorm2d(planes)\n    self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n    self.bn3 = nn.BatchNorm2d(planes * 4)\n    self.relu = nn.ReLU(inplace=True)\n    self.downsample = downsample\n    self.stride = stride\n\n  def forward(self, x):\n    residual = x\n\n    out = self.conv1(x)\n    out = self.bn1(out)\n    out = self.relu(out)\n\n    out = self.conv2(out)\n    out = self.bn2(out)\n    out = self.relu(out)\n\n    out = self.conv3(out)\n    out = self.bn3(out)\n\n    if self.downsample is not None:\n      residual = self.downsample(x)\n\n    out += residual\n    out = self.relu(out)\n\n    return out\n\n\nclass ResNet(nn.Module):\n  def __init__(self, block, layers, num_classes=1000):\n    self.inplanes = 64\n    super(ResNet, self).__init__()\n    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                 bias=False)\n    self.bn1 = nn.BatchNorm2d(64)\n    self.relu = nn.ReLU(inplace=True)\n    self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=0, ceil_mode=True) # change\n    self.layer1 = self._make_layer(block, 64, layers[0])\n    self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n    self.layer3 = self._make_layer(block, 256, layers[2], 
stride=2)\n    self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n    # it is slightly better whereas slower to set stride = 1\n    # self.layer4 = self._make_layer(block, 512, layers[3], stride=1)\n    self.avgpool = nn.AvgPool2d(7)\n    self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n    for m in self.modules():\n      if isinstance(m, nn.Conv2d):\n        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n        m.weight.data.normal_(0, math.sqrt(2. / n))\n      elif isinstance(m, nn.BatchNorm2d):\n        m.weight.data.fill_(1)\n        m.bias.data.zero_()\n\n  def _make_layer(self, block, planes, blocks, stride=1):\n    downsample = None\n    if stride != 1 or self.inplanes != planes * block.expansion:\n      downsample = nn.Sequential(\n        nn.Conv2d(self.inplanes, planes * block.expansion,\n              kernel_size=1, stride=stride, bias=False),\n        nn.BatchNorm2d(planes * block.expansion),\n      )\n\n    layers = []\n    layers.append(block(self.inplanes, planes, stride, downsample))\n    self.inplanes = planes * block.expansion\n    for i in range(1, blocks):\n      layers.append(block(self.inplanes, planes))\n\n    return nn.Sequential(*layers)\n\n  def forward(self, x):\n    x = self.conv1(x)\n    x = self.bn1(x)\n    x = self.relu(x)\n    x = self.maxpool(x)\n\n    x = self.layer1(x)\n    x = self.layer2(x)\n    x = self.layer3(x)\n    x = self.layer4(x)\n\n    x = self.avgpool(x)\n    x = x.view(x.size(0), -1)\n    x = self.fc(x)\n\n    return x\n\n\ndef resnet18(pretrained=False):\n  \"\"\"Constructs a ResNet-18 model.\n  Args:\n    pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(BasicBlock, [2, 2, 2, 2])\n  if pretrained:\n    model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))\n  return model\n\n\ndef resnet34(pretrained=False):\n  \"\"\"Constructs a ResNet-34 model.\n  Args:\n    pretrained (bool): If True, returns a model pre-trained on 
ImageNet\n  \"\"\"\n  model = ResNet(BasicBlock, [3, 4, 6, 3])\n  if pretrained:\n    model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))\n  return model\n\n\ndef resnet50(pretrained=False):\n  \"\"\"Constructs a ResNet-50 model.\n  Args:\n    pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 4, 6, 3])\n  if pretrained:\n    model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))\n  return model\n\n\ndef resnet101(pretrained=False):\n  \"\"\"Constructs a ResNet-101 model.\n  Args:\n    pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 4, 23, 3])\n  if pretrained:\n    model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))\n  return model\n\n\ndef resnet152(pretrained=False):\n  \"\"\"Constructs a ResNet-152 model.\n  Args:\n    pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 8, 36, 3])\n  if pretrained:\n    model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))\n  return model\n\nclass resnet(_fasterRCNN):\n  def __init__(self, classes, num_layers=101, pretrained=False, class_agnostic=False,meta_train=True,meta_test=None,meta_loss=None):\n    self.model_path = 'data/pretrained_model/resnet101_caffe.pth'\n    self.dout_base_model = 1024\n    self.pretrained = pretrained\n    self.class_agnostic = class_agnostic\n    self.meta_train = meta_train\n    self.meta_test = meta_test\n    self.meta_loss = meta_loss\n\n    _fasterRCNN.__init__(self, classes, class_agnostic,meta_train,meta_test,meta_loss)\n\n  def _init_modules(self):\n    resnet = resnet101()\n\n    if self.pretrained == True:\n      print(\"Loading pretrained weights from %s\" %(self.model_path))\n      state_dict = torch.load(self.model_path)\n      resnet.load_state_dict({k:v for k,v in state_dict.items() if k in resnet.state_dict()})\n\n    # Build resnet.\n    
self.meta_conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)\n    self.rcnn_conv1 = resnet.conv1\n\n    self.RCNN_base = nn.Sequential(resnet.bn1,resnet.relu,\n      resnet.maxpool,resnet.layer1,resnet.layer2,resnet.layer3)\n\n    self.RCNN_top = nn.Sequential(resnet.layer4)\n\n    self.sigmoid = nn.Sigmoid()\n    self.max_pooled = nn.MaxPool2d(2)\n\n    self.RCNN_cls_score = nn.Linear(2048, self.n_classes)\n\n    if self.meta_loss:\n      self.Meta_cls_score = nn.Linear(2048, self.n_classes)\n\n    if self.class_agnostic:\n      self.RCNN_bbox_pred = nn.Linear(2048, 4) # x,y,w,h\n    else:\n      self.RCNN_bbox_pred = nn.Linear(2048, 4 * self.n_classes)\n\n\n    # Fix blocks\n    for p in self.rcnn_conv1.parameters(): p.requires_grad=False\n    for p in self.RCNN_base[0].parameters(): p.requires_grad=False\n\n\n    assert (0 <= cfg.RESNET.FIXED_BLOCKS < 5)\n    if cfg.RESNET.FIXED_BLOCKS >= 4:\n      for p in self.RCNN_top.parameters(): p.requires_grad = False\n    if cfg.RESNET.FIXED_BLOCKS >= 3:\n      for p in self.RCNN_base[5].parameters(): p.requires_grad=False\n    if cfg.RESNET.FIXED_BLOCKS >= 2:\n      for p in self.RCNN_base[4].parameters(): p.requires_grad=False\n    if cfg.RESNET.FIXED_BLOCKS >= 1:\n      for p in self.RCNN_base[3].parameters(): p.requires_grad=False\n\n    def set_bn_fix(m):\n      classname = m.__class__.__name__\n      if classname.find('BatchNorm') != -1:\n        for p in m.parameters(): p.requires_grad=False\n\n    self.RCNN_base.apply(set_bn_fix)\n    self.RCNN_top.apply(set_bn_fix)\n\n  def train(self, mode=True):\n    # Override train so that the training mode is set as we want\n    nn.Module.train(self, mode)\n    if mode:\n      # Set fixed blocks to be in eval mode; set_bn_eval below keeps BatchNorm frozen\n      self.RCNN_base.eval()\n      self.RCNN_base[4].train()\n      self.RCNN_base[5].train()\n\n      def set_bn_eval(m):\n        classname = m.__class__.__name__\n        if classname.find('BatchNorm') != 
-1:\n          m.eval()\n\n      self.RCNN_base.apply(set_bn_eval)\n      self.RCNN_top.apply(set_bn_eval)\n\n  def _head_to_tail(self, pool5):\n    fc7 = self.RCNN_top(pool5).mean(3).mean(2)\n    return fc7\n\n  def prn_network(self,im_data):\n    '''\n    the Predictor-head Remodeling Network (PRN)\n    :param im_data:\n    :return attention vectors:\n    '''\n    base_feat = self.RCNN_base(self.meta_conv1(im_data))\n    feature = self._head_to_tail(self.max_pooled(base_feat))\n    attentions = self.sigmoid(feature)\n    return  attentions\n\n"
  },
  {
    "path": "lib/model/faster_rcnn/trail.py",
    "content": "# --------------------------------------------------------\n# Pytorch multi-GPU Faster R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Jiasen Lu, Jianwei Yang, based on code from Ross Girshick\n# --------------------------------------------------------\nimport _init_paths\nimport os\nimport sys\nimport numpy as np\nimport argparse\nimport pprint\nimport pdb\nimport time\n\nimport torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import Sampler\n\nfrom roi_data_layer.roidb import combined_roidb\nfrom roi_data_layer.roibatchLoader import roibatchLoader\nfrom model.utils.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom model.utils.net_utils import weights_normal_init, save_net, load_net, \\\n      adjust_learning_rate, save_checkpoint, clip_gradient, _affine_grid_gen\n\nfrom model.faster_rcnn.vgg16 import vgg16\nfrom model.faster_rcnn.resnet_GNN import resnet\n\nimport pickle\n\ndef cout_w(prob):\n    prob_weight = prob\n    sum_value = np.sum(prob_weight)\n    prob_weight = prob_weight / sum_value\n    mean_value = np.mean(prob_weight)\n\n    prob_weight = prob_weight - mean_value + 1\n    prob_weight = np.log(prob_weight)\n    return prob_weight\n\ndef cout_a(prob, NUM_ATTR_REL):\n    prob_weight = prob[:, : NUM_ATTR_REL]\n    sum_value = np.sum(prob_weight, keepdims=True, axis=1)\n    zero_inds = np.nonzero(sum_value == 0)[0]\n    sum_value[zero_inds,:] = 1.\n\n    prob_weight = prob_weight / np.repeat(sum_value, prob_weight.shape[1], axis=1)\n\n    mean_value = np.mean(prob_weight, keepdims=True, axis=1)\n    mean_value = np.repeat(mean_value, prob_weight.shape[1], axis=1)\n    prob_weight = prob_weight - mean_value + 1\n    prob_weight = np.log(prob_weight)\n\n    cls_cls_a = np.zeros((1001, 1001))\n    for row in range(prob_weight.shape[0]):\n        temp = np.zeros(1001)\n        for col in 
range(prob_weight.shape[1]):\n            xx = prob_weight[row, col] * prob_weight[:, col]\n            temp += xx\n        cls_cls_a[row] = temp\n    pickle.dump(cls_cls_a, open('data/vg/clscls_attr.pkl', 'wb'))\n\n    return cls_cls_a\n\ndef parse_args():\n  \"\"\"\n  Parse input arguments\n  \"\"\"\n  parser = argparse.ArgumentParser(description='Train a Fast R-CNN network')\n  parser.add_argument('--dataset', dest='dataset',\n                      help='training dataset',\n                      default='vg', type=str)\n  parser.add_argument('--net', dest='net',\n                    help='vgg16, res101',\n                    default='baseline', type=str)\n  parser.add_argument('--start_epoch', dest='start_epoch',\n                      help='starting epoch',\n                      default=1, type=int)\n  parser.add_argument('--epochs', dest='max_epochs',\n                      help='number of epochs to train',\n                      default=20, type=int)\n  parser.add_argument('--disp_interval', dest='disp_interval',\n                      help='number of iterations to display',\n                      default=100, type=int)\n  parser.add_argument('--checkpoint_interval', dest='checkpoint_interval',\n                      help='number of iterations between checkpoints',\n                      default=10000, type=int)\n\n  parser.add_argument('--save_dir', dest='save_dir',\n                      help='directory to save models', default=\"exps/baseline/models\",\n                      nargs=argparse.REMAINDER)\n  parser.add_argument('--nw', dest='num_workers',\n                      help='number of workers to load data',\n                      default=2, type=int)\n  parser.add_argument('--cuda', dest='cuda',default=True, type=bool,\n                      help='whether to use CUDA')\n  parser.add_argument('--ls', dest='large_scale',\n                      help='whether to use a large image scale',\n                      action='store_true')\n  parser.add_argument('--ms', 
dest='multi_scale',\n                      help='whether to use multi scale training',\n                      action='store_true')\n  parser.add_argument('--mGPUs', dest='mGPUs',\n                      help='whether to use multiple GPUs',\n                      action='store_true')\n  parser.add_argument('--bs', dest='batch_size',\n                      help='batch_size',\n                      default=2, type=int)\n  parser.add_argument('--cag', dest='class_agnostic',default=False, type=bool,\n                      help='whether to perform class_agnostic bbox regression')\n\n# config optimization\n  parser.add_argument('--o', dest='optimizer',\n                      help='training optimizer',\n                      default=\"sgd\", type=str)\n  parser.add_argument('--lr', dest='lr',\n                      help='starting learning rate',\n                      default=0.001, type=float)\n  parser.add_argument('--lr_decay_step', dest='lr_decay_step',\n                      help='step to do learning rate decay, unit is epoch',\n                      default=4, type=int)\n  parser.add_argument('--lr_decay_gamma', dest='lr_decay_gamma',\n                      help='learning rate decay ratio',\n                      default=0.1, type=float)\n\n# set training session\n  parser.add_argument('--s', dest='session',\n                      help='training session',\n                      default=1, type=int)\n\n# resume trained model\n  parser.add_argument('--r', dest='resume',\n                      help='resume checkpoint or not',\n                      default=False, type=bool)\n  parser.add_argument('--checksession', dest='checksession',\n                      help='checksession to load model',\n                      default=1, type=int)\n  parser.add_argument('--checkepoch', dest='checkepoch',\n                      help='checkepoch to load model',\n                      default=1, type=int)\n  parser.add_argument('--checkpoint', dest='checkpoint',\n                      
help='checkpoint to load model',\n                      default=0, type=int)\n# log and display\n  parser.add_argument('--use_tfboard', dest='use_tfboard',\n                      help='whether to use tensorboard',\n                      default=False, type=bool)\n  parser.add_argument('--log_dir', dest='log_dir',\n                      help='directory to save logs', default='logs',\n                      type=str)\n\n  args = parser.parse_args()\n  return args\n\n\nclass sampler(Sampler):\n  def __init__(self, train_size, batch_size):\n    self.num_data = train_size\n    self.num_per_batch = int(train_size / batch_size)\n    self.batch_size = batch_size\n    self.range = torch.arange(0,batch_size).view(1, batch_size).long()\n    self.leftover_flag = False\n    if train_size % batch_size:\n      self.leftover = torch.arange(self.num_per_batch*batch_size, train_size).long()\n      self.leftover_flag = True\n\n  def __iter__(self):\n    rand_num = torch.randperm(self.num_per_batch).view(-1,1) * self.batch_size\n    self.rand_num = rand_num.expand(self.num_per_batch, self.batch_size) + self.range\n    self.rand_num_view = self.rand_num.view(-1)\n\n    if self.leftover_flag:\n      self.rand_num_view = torch.cat((self.rand_num_view, self.leftover),0)\n\n    return iter(self.rand_num_view)\n\n  def __len__(self):\n    return self.num_data\n\n\nif __name__ == '__main__':\n\n  args = parse_args()\n\n  print('Called with args:')\n  print(args)\n\n  if args.use_tfboard:\n    from model.utils.logger import Logger\n    # Set the logger\n    logger = Logger(args.log_dir)\n\n  if args.dataset == \"pascal_voc\":\n      args.imdb_name = \"voc_2007_trainval\"\n      args.imdbval_name = \"voc_2007_test\"\n      args.set_cfgs = ['ANCHOR_SCALES', '[8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']\n  elif args.dataset == \"pascal_voc_0712\":\n      args.imdb_name = \"voc_2007_trainval+voc_2012_trainval\"\n      args.imdbval_name = \"voc_2007_test\"\n      args.set_cfgs = 
['ANCHOR_SCALES', '[8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']\n  elif args.dataset == \"coco\":\n      args.imdb_name = \"coco_2014_train+coco_2014_valminusminival\"\n      args.imdbval_name = \"coco_2014_minival\"\n      args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '50']\n  elif args.dataset == \"imagenet\":\n      args.imdb_name = \"imagenet_train\"\n      args.imdbval_name = \"imagenet_val\"\n      args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '30']\n  elif args.dataset == \"vg\":\n      # train sizes: train, smalltrain, minitrain\n      # train scale: ['150-50-20', '150-50-50', '500-150-80', '750-250-150', '1750-700-450', '1600-400-20']\n      args.imdb_name = \"vg_train\"\n      args.imdbval_name = \"vg_val\"\n      args.set_cfgs = ['ANCHOR_SCALES', '[2, 4, 8, 16, 32]', 'MAX_NUM_GT_BOXES', '50']\n      # args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '50']\n\n  # args.cfg_file = \"cfgs/{}_ls.yml\".format(args.net) if args.large_scale else \"cfgs/{}.yml\".format(args.net)\n  args.cfg_file = \"cfgs/res101_ms.yml\"#.format(args.net + \"_ms\" if args.multi_scale else \"\")\n\n  if args.cfg_file is not None:\n    cfg_from_file(args.cfg_file)\n  if args.set_cfgs is not None:\n    cfg_from_list(args.set_cfgs)\n\n  print('Using config:')\n  pprint.pprint(cfg)\n  np.random.seed(cfg.RNG_SEED)\n\n  #torch.backends.cudnn.benchmark = True\n  if torch.cuda.is_available() and not args.cuda:\n    print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n\n  # train set\n  # -- Note: Use validation set and disable the flipped to enable faster loading.\n  cfg.TRAIN.USE_FLIPPED = True\n  cfg.USE_GPU_NMS = args.cuda\n  imdb, roidb, ratio_list, ratio_index = combined_roidb(args.imdb_name)\n  train_size = len(roidb)\n\n  print('{:d} roidb 
entries'.format(len(roidb)))\n  sys.stdout.flush()\n\n  output_dir = args.save_dir[0] + \"/\" + args.net + \"/\" + args.dataset\n  if not os.path.exists(output_dir):\n    os.makedirs(output_dir)\n\n  # log_f = open(os.path.join(args.log_dir, time.strftime(\"%Y-%m-%d-%H:%M.txt\", time.localtime())), \"w\")\n  # old_stdout = sys.stdout\n  # sys.stdout = log_f\n\n  sampler_batch = sampler(train_size, args.batch_size)\n\n  dataset = roibatchLoader(roidb, ratio_list, ratio_index, args.batch_size, imdb.num_classes, training=True)\n\n  dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,\n                            sampler=sampler_batch, num_workers=args.num_workers, pin_memory=False)\n\n  # initialize the tensor holder here.\n  im_data = torch.FloatTensor(1)\n  im_info = torch.FloatTensor(1)\n  num_boxes = torch.LongTensor(1)\n  gt_boxes = torch.FloatTensor(1)\n#  attr_prob = torch.FloatTensor(imdb._clscls_attr)\n  # attr_prob = torch.FloatTensor(cout_a(imdb._class_to_attr, 200))\n#  pair_prob = torch.FloatTensor(cout_w(imdb.pair))\n#  pair_prob = pair_prob.unsqueeze(0).repeat(args.batch_size, 1, 1)\n\n\n  # ship to cuda\n  if args.cuda:\n    im_data = im_data.cuda()\n    im_info = im_info.cuda()\n    num_boxes = num_boxes.cuda()\n    gt_boxes = gt_boxes.cuda()\n\n\n  # make variable\n  im_data = Variable(im_data)\n  im_info = Variable(im_info)\n  num_boxes = Variable(num_boxes)\n  gt_boxes = Variable(gt_boxes)\n#  pair_prob = Variable(pair_prob)\n#  attr_prob = Variable(attr_prob)\n\n  if args.cuda:\n    cfg.CUDA = True\n\n  # initialize the network here.\n  if args.net == 'baseline':\n    fasterRCNN = resnet(imdb.classes, 101, pretrained=True, class_agnostic=args.class_agnostic)\n  elif args.net == 'relationloss':\n    fasterRCNN = resnet(imdb.classes, 101, pretrained=False, class_agnostic=args.class_agnostic, pair_prob=pair_prob)\n  elif args.net == 'attributeloss':\n    fasterRCNN = resnet(imdb.classes, 101, pretrained=True, 
class_agnostic=args.class_agnostic, attr_prob=attr_prob)\n\n\n  fasterRCNN.create_architecture()\n\n  lr = cfg.TRAIN.LEARNING_RATE\n  lr = args.lr\n  #tr_momentum = cfg.TRAIN.MOMENTUM\n  #tr_momentum = args.momentum\n\n  params = []\n  for key, value in dict(fasterRCNN.named_parameters()).items():\n    if value.requires_grad:\n      if 'bias' in key:\n        params += [{'params':[value],'lr':lr*(cfg.TRAIN.DOUBLE_BIAS + 1), \\\n                'weight_decay': cfg.TRAIN.BIAS_DECAY and cfg.TRAIN.WEIGHT_DECAY or 0}]\n      else:\n        params += [{'params':[value],'lr':lr, 'weight_decay': cfg.TRAIN.WEIGHT_DECAY}]\n\n  if args.optimizer == \"adam\":\n    lr = lr * 0.1\n    optimizer = torch.optim.Adam(params)\n\n  elif args.optimizer == \"sgd\":\n    optimizer = torch.optim.SGD(params, momentum=cfg.TRAIN.MOMENTUM)\n\n  if args.resume:\n    load_name = os.path.join(output_dir,\n      'faster_rcnn_{}_{}_{}.pth'.format(args.checksession, args.checkepoch, args.checkpoint))\n    print(\"loading checkpoint %s\" % (load_name))\n    checkpoint = torch.load(load_name)\n    args.session = checkpoint['session']\n    args.start_epoch = checkpoint['epoch']\n    fasterRCNN.load_state_dict(checkpoint['model'])\n    optimizer.load_state_dict(checkpoint['optimizer'])\n    lr = optimizer.param_groups[0]['lr']\n    if 'pooling_mode' in checkpoint.keys():\n      cfg.POOLING_MODE = checkpoint['pooling_mode']\n    print(\"loaded checkpoint %s\" % (load_name))\n\n  if args.mGPUs:\n    fasterRCNN = nn.DataParallel(fasterRCNN)\n\n  if args.cuda:\n    fasterRCNN.cuda()\n\n  iters_per_epoch = int(train_size / args.batch_size)\n\n\nfasterRCNN = resnet(imdb.classes, 101, pretrained=True, class_agnostic=args.class_agnostic)\nfasterRCNN.create_architecture()\nfasterRCNN.cuda()\nfasterRCNN.train()\ndata_iter = iter(dataloader)\n\ndata = 
next(data_iter)\nim_data.data.resize_(data[0].size()).copy_(data[0])\nim_info.data.resize_(data[1].size()).copy_(data[1])\ngt_boxes.data.resize_(data[2].size()).copy_(data[2])\nnum_boxes.data.resize_(data[3].size()).copy_(data[3])\n\nfasterRCNN.zero_grad()\n\n\n\nbatch_size = im_data.size(0)\n\nim_info = im_info.data\ngt_boxes = gt_boxes.data\nnum_boxes = num_boxes.data\n\n# feed image data to base model to obtain base feature map\nbase_feat = fasterRCNN.RCNN_base(im_data)\n\n# feed base feature map to RPN to obtain rois\nrois, rpn_loss_cls, rpn_loss_bbox = fasterRCNN.RCNN_rpn(base_feat, im_info, gt_boxes, num_boxes)\n\n# if it is the training phase, then use ground-truth bboxes for refining\nif fasterRCNN.training:\n    roi_data = fasterRCNN.RCNN_proposal_target(rois, gt_boxes, num_boxes)\n    rois, rois_label, rois_target, rois_inside_ws, rois_outside_ws = roi_data\n\n    rois_label = Variable(rois_label.view(-1).long())\n    rois_target = Variable(rois_target.view(-1, rois_target.size(2)))\n    rois_inside_ws = Variable(rois_inside_ws.view(-1, rois_inside_ws.size(2)))\n    rois_outside_ws = Variable(rois_outside_ws.view(-1, rois_outside_ws.size(2)))\nelse:\n    rois_label = None\n    rois_target = None\n    rois_inside_ws = None\n    rois_outside_ws = None\n    rpn_loss_cls = 0\n    rpn_loss_bbox = 0\n\nrois = Variable(rois)\n# do roi pooling based on predicted rois\n\nif cfg.POOLING_MODE == 'crop':\n    # pdb.set_trace()\n    # pooled_feat_anchor = _crop_pool_layer(base_feat, rois.view(-1, 5))\n    grid_xy = _affine_grid_gen(rois.view(-1, 5), base_feat.size()[2:], fasterRCNN.grid_size)\n    grid_yx = torch.stack([grid_xy.data[:,:,:,1], grid_xy.data[:,:,:,0]], 3).contiguous()\n    pooled_feat = fasterRCNN.RCNN_roi_crop(base_feat, Variable(grid_yx).detach())\n    if cfg.CROP_RESIZE_WITH_MAX_POOL:\n        pooled_feat = F.max_pool2d(pooled_feat, 2, 2)\nelif cfg.POOLING_MODE == 'align':\n    pooled_feat = fasterRCNN.RCNN_roi_align(base_feat, rois.view(-1, 5))\nelif 
cfg.POOLING_MODE == 'pool':\n    pooled_feat = fasterRCNN.RCNN_roi_pool(base_feat, rois.view(-1,5))"
  },
  {
    "path": "lib/model/faster_rcnn/vgg16.py",
    "content": "# --------------------------------------------------------\n# Tensorflow Faster R-CNN\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Xinlei Chen\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport math\nimport torchvision.models as models\nfrom model.faster_rcnn.faster_rcnn import _fasterRCNN\nimport pdb\n\nclass vgg16(_fasterRCNN):\n  def __init__(self, classes, pretrained=False, class_agnostic=False):\n    self.model_path = 'data/pretrained_model/vgg16_caffe.pth'\n    self.dout_base_model = 512\n    self.pretrained = pretrained\n    self.class_agnostic = class_agnostic\n\n    _fasterRCNN.__init__(self, classes, class_agnostic)\n\n  def _init_modules(self):\n    vgg = models.vgg16()\n    if self.pretrained:\n        print(\"Loading pretrained weights from %s\" %(self.model_path))\n        state_dict = torch.load(self.model_path)\n        vgg.load_state_dict({k:v for k,v in state_dict.items() if k in vgg.state_dict()})\n\n    vgg.classifier = nn.Sequential(*list(vgg.classifier._modules.values())[:-1])\n\n    # not using the last maxpool layer\n    self.RCNN_base = nn.Sequential(*list(vgg.features._modules.values())[:-1])\n\n    # Fix the layers before conv3:\n    for layer in range(10):\n      for p in self.RCNN_base[layer].parameters(): p.requires_grad = False\n\n    # self.RCNN_base = _RCNN_base(vgg.features, self.classes, self.dout_base_model)\n\n    self.RCNN_top = vgg.classifier\n\n    # not using the last maxpool layer\n    self.RCNN_cls_score = nn.Linear(4096, self.n_classes)\n\n    if self.class_agnostic:\n      self.RCNN_bbox_pred = nn.Linear(4096, 4)\n    else:\n      self.RCNN_bbox_pred = nn.Linear(4096, 4 * self.n_classes)      \n\n  def _head_to_tail(self, pool5):\n    \n    
pool5_flat = pool5.view(pool5.size(0), -1)\n    fc7 = self.RCNN_top(pool5_flat)\n\n    return fc7\n\n"
  },
  {
    "path": "lib/model/nms/.gitignore",
    "content": "*.c\n*.cpp\n*.so\n"
  },
  {
    "path": "lib/model/nms/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/nms/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/nms/_ext/nms/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._nms import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        if callable(fn):\n            locals[symbol] = _wrap_function(fn, _ffi)\n        else:\n            locals[symbol] = fn\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/model/nms/build.py",
    "content": "from __future__ import print_function\nimport os\nimport torch\nfrom torch.utils.ffi import create_extension\n\n#this_file = os.path.dirname(__file__)\n\nsources = []\nheaders = []\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/nms_cuda.c']\n    headers += ['src/nms_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/nms_cuda_kernel.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\nprint(extra_objects)\n\nffi = create_extension(\n    '_ext.nms',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/model/nms/nms_cpu.py",
    "content": "from __future__ import absolute_import\n\nimport numpy as np\nimport torch\n\ndef nms_cpu(dets, thresh):\n    dets = dets.cpu().numpy()\n    x1 = dets[:, 0]\n    y1 = dets[:, 1]\n    x2 = dets[:, 2]\n    y2 = dets[:, 3]\n    scores = dets[:, 4]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order.item(0)\n        keep.append(i)\n        xx1 = np.maximum(x1[i], x1[order[1:]])\n        yy1 = np.maximum(y1[i], y1[order[1:]])\n        xx2 = np.minimum(x2[i], x2[order[1:]])\n        yy2 = np.minimum(y2[i], y2[order[1:]])\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n        ovr = inter / (areas[i] + areas[order[1:]] - inter)\n\n        inds = np.where(ovr <= thresh)[0]\n        order = order[inds + 1]\n\n    return torch.IntTensor(keep)\n\n\n\n\ndef nms_cpu_np(dets, thresh):\n    x1 = dets[:, 0]\n    y1 = dets[:, 1]\n    x2 = dets[:, 2]\n    y2 = dets[:, 3]\n    scores = dets[:, 4]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    keep = []\n    while order.size > 0:\n        i = order.item(0)\n        keep.append(i)\n        xx1 = np.maximum(x1[i], x1[order[1:]])\n        yy1 = np.maximum(y1[i], y1[order[1:]])\n        xx2 = np.minimum(x2[i], x2[order[1:]])\n        yy2 = np.minimum(y2[i], y2[order[1:]])\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n        ovr = inter / (areas[i] + areas[order[1:]] - inter)\n\n        inds = np.where(ovr <= thresh)[0]\n        order = order[inds + 1]\n\n    return keep\n\n\n\ndef soft_nms_cpu(dets, threshold=0.001, Nt=0.3, method=1, sigma=0.5):\n    boxes = dets.cpu().numpy()\n    N = dets.shape[0]\n    pos = 0\n    maxscore = 0\n    maxpos = 0\n    \n    for i in range(N):\n        maxscore = boxes[i, 4]\n        maxpos = i\n        \n        tx1 = boxes[i,0]\n        
ty1 = boxes[i,1]\n        tx2 = boxes[i,2]\n        ty2 = boxes[i,3]\n        ts = boxes[i,4]\n        \n        pos = i + 1\n        # get max box\n        while pos < N:\n            if maxscore < boxes[pos, 4]:\n                maxscore = boxes[pos, 4]\n                maxpos = pos\n            pos = pos + 1\n        \n        # add max box as a detection\n        boxes[i,0] = boxes[maxpos,0]\n        boxes[i,1] = boxes[maxpos,1]\n        boxes[i,2] = boxes[maxpos,2]\n        boxes[i,3] = boxes[maxpos,3]\n        boxes[i,4] = boxes[maxpos,4]\n        \n        # swap ith box with position of max box\n        boxes[maxpos,0] = tx1\n        boxes[maxpos,1] = ty1\n        boxes[maxpos,2] = tx2\n        boxes[maxpos,3] = ty2\n        boxes[maxpos,4] = ts\n        \n        tx1 = boxes[i,0]\n        ty1 = boxes[i,1]\n        tx2 = boxes[i,2]\n        ty2 = boxes[i,3]\n        ts = boxes[i,4]\n        \n        pos = i + 1\n        # NMS iterations, note that N changes if detection boxes fall below threshold\n        while pos < N:\n            x1 = boxes[pos, 0]\n            y1 = boxes[pos, 1]\n            x2 = boxes[pos, 2]\n            y2 = boxes[pos, 3]\n            s = boxes[pos, 4]\n        \n            area = (x2 - x1 + 1) * (y2 - y1 + 1)\n            iw = (min(tx2, x2) - max(tx1, x1) + 1)\n            if iw > 0:\n                ih = (min(ty2, y2) - max(ty1, y1) + 1)\n                if ih > 0:\n                    ua = float((tx2 - tx1 + 1) * (ty2 - ty1 + 1) + area - iw * ih)\n                    ov = iw * ih / ua #iou between max box and detection box\n        \n                    if method == 1: # linear\n                        if ov > Nt: \n                            weight = 1 - ov\n                        else:\n                            weight = 1\n                    elif method == 2: # gaussian\n                        weight = np.exp(-(ov * ov)/sigma)\n                    else: # original NMS\n                        if ov > Nt: \n              
              weight = 0\n                        else:\n                            weight = 1\n        \n                    boxes[pos, 4] = weight*boxes[pos, 4]\n\n                    # if box score falls below threshold, discard the box by swapping with last box\n                    # update N\n                    if boxes[pos, 4] < threshold:\n                        boxes[pos,0] = boxes[N-1, 0]\n                        boxes[pos,1] = boxes[N-1, 1]\n                        boxes[pos,2] = boxes[N-1, 2]\n                        boxes[pos,3] = boxes[N-1, 3]\n                        boxes[pos,4] = boxes[N-1, 4]\n                        N = N - 1\n                        pos = pos - 1\n        \n            pos = pos + 1\n\n    keep = list(range(N))\n    return keep, boxes\n\n\ndef nms_domain(dets, dets_small, thresh_small=0.85, thresh_big=0.5):\n#    dets = dets.cpu().numpy()\n#    dets_small = dets_small.cpu().numpy()\n    x1 = dets[:, 0]\n    y1 = dets[:, 1]\n    x2 = dets[:, 2]\n    y2 = dets[:, 3]\n    scores = dets[:, 4]\n\n    x21 = dets_small[:, 0]\n    y21 = dets_small[:, 1]\n    x22 = dets_small[:, 2]\n    y22 = dets_small[:, 3]\n    scores2 = dets_small[:, 4]\n\n    areas = (x2 - x1 + 1) * (y2 - y1 + 1)\n    order = scores.argsort()[::-1]\n\n    areas2 = (x22 - x21 + 1) * (y22 - y21 + 1)\n    order2 = scores2.argsort()[::-1]\n\n    throw = set()\n    keep = set(range(len(dets_small)))\n    for i in range(len(dets)):\n        xx1 = np.maximum(x1[i], x21)\n        yy1 = np.maximum(y1[i], y21)\n        xx2 = np.minimum(x2[i], x22)\n        yy2 = np.minimum(y2[i], y22)\n\n        w = np.maximum(0.0, xx2 - xx1 + 1)\n        h = np.maximum(0.0, yy2 - yy1 + 1)\n        inter = w * h\n\n        ovr_1 = inter / (areas[i])\n        ovr_2 = inter / (areas2)\n\n        throw_array = np.where((ovr_2 > thresh_small) & (ovr_1 < thresh_big))[0].tolist()\n        throw.update(throw_array)\n    keep = list(keep - throw)\n    return keep\n
\n    "
  },
  {
    "path": "lib/model/nms/nms_gpu.py",
    "content": "from __future__ import absolute_import\nimport torch\nfrom ._ext import nms\n\ndef nms_gpu(dets, thresh):\n    keep = dets.new(dets.size(0), 1).zero_().int()\n    num_out = dets.new(1).zero_().int()\n    nms.nms_cuda(keep, dets, num_out, thresh)\n    keep = keep[:num_out[0]]\n    return keep\n"
  },
  {
    "path": "lib/model/nms/nms_kernel.cu",
    "content": "// ------------------------------------------------------------------\n// Faster R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Shaoqing Ren\n// ------------------------------------------------------------------\n\n#include \"gpu_nms.hpp\"\n#include <vector>\n#include <iostream>\n\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    if (error != cudaSuccess) { \\\n      std::cout << cudaGetErrorString(error) << std::endl; \\\n    } \\\n  } while (0)\n\n#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n__device__ inline float devIoU(float const * const a, float const * const b) {\n  float left = max(a[0], b[0]), right = min(a[2], b[2]);\n  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);\n  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);\n  float interS = width * height;\n  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);\n  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);\n  return interS / (Sa + Sb - interS);\n}\n\n__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,\n                           const float *dev_boxes, unsigned long long *dev_mask) {\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  __shared__ float block_boxes[threadsPerBlock * 5];\n  if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 5 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];\n    block_boxes[threadIdx.x * 5 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) 
* 5 + 1];\n    block_boxes[threadIdx.x * 5 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];\n    block_boxes[threadIdx.x * 5 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];\n    block_boxes[threadIdx.x * 5 + 4] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;\n    const float *cur_box = dev_boxes + cur_box_idx * 5;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = DIVUP(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\nvoid _set_device(int device_id) {\n  int current_device;\n  CUDA_CHECK(cudaGetDevice(&current_device));\n  if (current_device == device_id) {\n    return;\n  }\n  // The call to cudaSetDevice must come before any calls to Get, which\n  // may perform initialization using the GPU.\n  CUDA_CHECK(cudaSetDevice(device_id));\n}\n\nvoid _nms(int* keep_out, int* num_out, const float* boxes_host, int boxes_num,\n          int boxes_dim, float nms_overlap_thresh, int device_id) {\n  _set_device(device_id);\n\n  float* boxes_dev = NULL;\n  unsigned long long* mask_dev = NULL;\n\n  const int col_blocks = DIVUP(boxes_num, threadsPerBlock);\n\n  CUDA_CHECK(cudaMalloc(&boxes_dev,\n                        boxes_num * boxes_dim * sizeof(float)));\n  CUDA_CHECK(cudaMemcpy(boxes_dev,\n                        boxes_host,\n                        boxes_num * boxes_dim * sizeof(float),\n                        cudaMemcpyHostToDevice));\n\n  CUDA_CHECK(cudaMalloc(&mask_dev,\n                        boxes_num * col_blocks * sizeof(unsigned 
long long)));\n\n  dim3 blocks(DIVUP(boxes_num, threadsPerBlock),\n              DIVUP(boxes_num, threadsPerBlock));\n  dim3 threads(threadsPerBlock);\n  nms_kernel<<<blocks, threads>>>(boxes_num,\n                                  nms_overlap_thresh,\n                                  boxes_dev,\n                                  mask_dev);\n\n  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n  CUDA_CHECK(cudaMemcpy(&mask_host[0],\n                        mask_dev,\n                        sizeof(unsigned long long) * boxes_num * col_blocks,\n                        cudaMemcpyDeviceToHost));\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  int num_to_keep = 0;\n  for (int i = 0; i < boxes_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      keep_out[num_to_keep++] = i;\n      unsigned long long *p = &mask_host[0] + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n  *num_out = num_to_keep;\n\n  CUDA_CHECK(cudaFree(boxes_dev));\n  CUDA_CHECK(cudaFree(mask_dev));\n}\n"
  },
  {
    "path": "lib/model/nms/nms_wrapper.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\nimport torch\nfrom model.utils.config import cfg\nif torch.cuda.is_available():\n    from model.nms.nms_gpu import nms_gpu\nfrom model.nms.nms_cpu import nms_cpu\n\ndef nms(dets, thresh, force_cpu=False):\n    \"\"\"Dispatch to either the CPU or GPU NMS implementation.\"\"\"\n    if dets.shape[0] == 0:\n        return []\n    # original numpy version: return gpu_nms(dets, thresh, device_id=cfg.GPU_ID)\n    # nms_gpu is only imported when CUDA is available, so fall back to the CPU\n    # implementation both when CUDA is missing and when the caller requests it\n    if force_cpu or not torch.cuda.is_available():\n        return nms_cpu(dets, thresh)\n    return nms_gpu(dets, thresh)\n"
  },
  {
    "path": "lib/model/nms/src/nms_cuda.h",
    "content": "// int nms_cuda(THCudaTensor *keep_out, THCudaTensor *num_out,\n//             THCudaTensor *boxes_host, THCudaTensor *nms_overlap_thresh);\n\nint nms_cuda(THCudaIntTensor *keep_out, THCudaTensor *boxes_host,\n             THCudaIntTensor *num_out, float nms_overlap_thresh);\n"
  },
  {
    "path": "lib/model/nms/src/nms_cuda_kernel.cu",
    "content": "// ------------------------------------------------------------------\n// Faster R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Shaoqing Ren\n// ------------------------------------------------------------------\n\n#include <stdbool.h>\n#include <stdio.h>\n#include <vector>\n#include <iostream>\n#include \"nms_cuda_kernel.h\"\n\n#define CUDA_WARN(XXX) \\\n    do { if (XXX != cudaSuccess) std::cout << \"CUDA Error: \" << \\\n        cudaGetErrorString(XXX) << \", at line \" << __LINE__ \\\n<< std::endl; cudaDeviceSynchronize(); } while (0)\n\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    if (error != cudaSuccess) { \\\n      std::cout << cudaGetErrorString(error) << std::endl; \\\n    } \\\n  } while (0)\n\n#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n__device__ inline float devIoU(float const * const a, float const * const b) {\n  float left = max(a[0], b[0]), right = min(a[2], b[2]);\n  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);\n  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);\n  float interS = width * height;\n  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);\n  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);\n  return interS / (Sa + Sb - interS);\n}\n\n__global__ void nms_kernel(int n_boxes, float nms_overlap_thresh,\n                           float *dev_boxes, unsigned long long *dev_mask) {\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  __shared__ float block_boxes[threadsPerBlock * 5];\n 
 if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 5 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 0];\n    block_boxes[threadIdx.x * 5 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 1];\n    block_boxes[threadIdx.x * 5 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 2];\n    block_boxes[threadIdx.x * 5 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 3];\n    block_boxes[threadIdx.x * 5 + 4] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 5 + 4];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;\n    const float *cur_box = dev_boxes + cur_box_idx * 5;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      if (devIoU(cur_box, block_boxes + i * 5) > nms_overlap_thresh) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = DIVUP(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\nvoid nms_cuda_compute(int* keep_out, int *num_out, float* boxes_host, int boxes_num,\n          int boxes_dim, float nms_overlap_thresh) {\n\n  float* boxes_dev = NULL;\n  unsigned long long* mask_dev = NULL;\n\n  const int col_blocks = DIVUP(boxes_num, threadsPerBlock);\n\n  CUDA_CHECK(cudaMalloc(&boxes_dev,\n                        boxes_num * boxes_dim * sizeof(float)));\n  CUDA_CHECK(cudaMemcpy(boxes_dev,\n                        boxes_host,\n                        boxes_num * boxes_dim * sizeof(float),\n                        cudaMemcpyHostToDevice));\n\n  CUDA_CHECK(cudaMalloc(&mask_dev,\n                        boxes_num * col_blocks * sizeof(unsigned long long)));\n\n  dim3 blocks(DIVUP(boxes_num, threadsPerBlock),\n              DIVUP(boxes_num, threadsPerBlock));\n  
dim3 threads(threadsPerBlock);\n\n  nms_kernel<<<blocks, threads>>>(boxes_num,\n                                  nms_overlap_thresh,\n                                  boxes_dev,\n                                  mask_dev);\n\n  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n  CUDA_CHECK(cudaMemcpy(&mask_host[0],\n                        mask_dev,\n                        sizeof(unsigned long long) * boxes_num * col_blocks,\n                        cudaMemcpyDeviceToHost));\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  // keep_out lives in GPU memory here, so build the keep list in a host\n  // buffer first and copy it back to the device afterwards\n  int* keep_out_cpu = new int[boxes_num];\n\n  int num_to_keep = 0;\n  for (int i = 0; i < boxes_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      // original: keep_out[num_to_keep++] = i;\n      keep_out_cpu[num_to_keep++] = i;\n      unsigned long long *p = &mask_host[0] + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n\n  // copy keep_out_cpu to keep_out on the gpu\n  CUDA_WARN(cudaMemcpy(keep_out, keep_out_cpu, boxes_num * sizeof(int), cudaMemcpyHostToDevice));\n\n  // original: *num_out = num_to_keep;\n  // copy num_to_keep to num_out on the gpu\n  CUDA_WARN(cudaMemcpy(num_out, &num_to_keep, 1 * sizeof(int), cudaMemcpyHostToDevice));\n\n  // release cuda memory\n  CUDA_CHECK(cudaFree(boxes_dev));\n  CUDA_CHECK(cudaFree(mask_dev));\n  // release cpu memory\n  delete[] keep_out_cpu;\n}\n"
  },
  {
    "path": "lib/model/nms/src/nms_cuda_kernel.h",
    "content": "#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid nms_cuda_compute(int* keep_out, int *num_out, float* boxes_host, int boxes_num,\n          int boxes_dim, float nms_overlap_thresh);\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "lib/model/roi_align/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_align/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_align/_ext/roi_align/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._roi_align import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        if callable(fn):\n            locals[symbol] = _wrap_function(fn, _ffi)\n        else:\n            locals[symbol] = fn\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/model/roi_align/build.py",
    "content": "from __future__ import print_function\nimport os\nimport torch\nfrom torch.utils.ffi import create_extension\n\n# sources = ['src/roi_align.c']\n# headers = ['src/roi_align.h']\nsources = []\nheaders = []\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/roi_align_cuda.c']\n    headers += ['src/roi_align_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/roi_align_kernel.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n    '_ext.roi_align',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/model/roi_align/functions/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_align/functions/roi_align.py",
    "content": "import torch\nfrom torch.autograd import Function\nfrom .._ext import roi_align\n\n\n# TODO use save_for_backward instead\nclass RoIAlignFunction(Function):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n        self.rois = None\n        self.feature_size = None\n\n    def forward(self, features, rois):\n        self.rois = rois\n        self.feature_size = features.size()\n\n        batch_size, num_channels, data_height, data_width = features.size()\n        num_rois = rois.size(0)\n\n        output = features.new(num_rois, num_channels, self.aligned_height, self.aligned_width).zero_()\n        if features.is_cuda:\n            roi_align.roi_align_forward_cuda(self.aligned_height,\n                                             self.aligned_width,\n                                             self.spatial_scale, features,\n                                             rois, output)\n        else:\n            raise NotImplementedError\n\n        return output\n\n    def backward(self, grad_output):\n        assert(self.feature_size is not None and grad_output.is_cuda)\n\n        batch_size, num_channels, data_height, data_width = self.feature_size\n\n        grad_input = self.rois.new(batch_size, num_channels, data_height,\n                                  data_width).zero_()\n        roi_align.roi_align_backward_cuda(self.aligned_height,\n                                          self.aligned_width,\n                                          self.spatial_scale, grad_output,\n                                          self.rois, grad_input)\n\n        # print grad_input\n\n        return grad_input, None\n"
  },
  {
    "path": "lib/model/roi_align/modules/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_align/modules/roi_align.py",
    "content": "from torch.nn.modules.module import Module\nfrom torch.nn.functional import avg_pool2d, max_pool2d\nfrom ..functions.roi_align import RoIAlignFunction\n\n\nclass RoIAlign(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlign, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        return RoIAlignFunction(self.aligned_height, self.aligned_width,\n                                self.spatial_scale)(features, rois)\n\nclass RoIAlignAvg(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlignAvg, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        x =  RoIAlignFunction(self.aligned_height+1, self.aligned_width+1,\n                                self.spatial_scale)(features, rois)\n        return avg_pool2d(x, kernel_size=2, stride=1)\n\nclass RoIAlignMax(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlignMax, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        x =  RoIAlignFunction(self.aligned_height+1, self.aligned_width+1,\n                                self.spatial_scale)(features, rois)\n        return max_pool2d(x, kernel_size=2, stride=1)\n"
  },
  {
    "path": "lib/model/roi_align/src/roi_align_cuda.c",
    "content": "#include <THC/THC.h>\n#include <math.h>\n#include \"roi_align_kernel.h\"\n\nextern THCState *state;\n\nint roi_align_forward_cuda(int aligned_height, int aligned_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output)\n{\n    // Grab the input tensor\n    float * data_flat = THCudaTensor_data(state, features);\n    float * rois_flat = THCudaTensor_data(state, rois);\n\n    float * output_flat = THCudaTensor_data(state, output);\n\n    // Number of ROIs\n    int num_rois = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // data height\n    int data_height = THCudaTensor_size(state, features, 2);\n    // data width\n    int data_width = THCudaTensor_size(state, features, 3);\n    // Number of channels\n    int num_channels = THCudaTensor_size(state, features, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    ROIAlignForwardLaucher(\n        data_flat, spatial_scale, num_rois, data_height,\n        data_width, num_channels, aligned_height,\n        aligned_width, rois_flat,\n        output_flat, stream);\n\n    return 1;\n}\n\nint roi_align_backward_cuda(int aligned_height, int aligned_width, float spatial_scale,\n                        THCudaTensor * top_grad, THCudaTensor * rois, THCudaTensor * bottom_grad)\n{\n    // Grab the input tensor\n    float * top_grad_flat = THCudaTensor_data(state, top_grad);\n    float * rois_flat = THCudaTensor_data(state, rois);\n\n    float * bottom_grad_flat = THCudaTensor_data(state, bottom_grad);\n\n    // Number of ROIs\n    int num_rois = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // batch size\n    int batch_size = THCudaTensor_size(state, bottom_grad, 0);\n    // data height\n    int data_height = 
THCudaTensor_size(state, bottom_grad, 2);\n    // data width\n    int data_width = THCudaTensor_size(state, bottom_grad, 3);\n    // Number of channels\n    int num_channels = THCudaTensor_size(state, bottom_grad, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    ROIAlignBackwardLaucher(\n        top_grad_flat, spatial_scale, batch_size, num_rois, data_height,\n        data_width, num_channels, aligned_height,\n        aligned_width, rois_flat,\n        bottom_grad_flat, stream);\n\n    return 1;\n}\n"
  },
  {
    "path": "lib/model/roi_align/src/roi_align_cuda.h",
    "content": "int roi_align_forward_cuda(int aligned_height, int aligned_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output);\n\nint roi_align_backward_cuda(int aligned_height, int aligned_width, float spatial_scale,\n                        THCudaTensor * top_grad, THCudaTensor * rois, THCudaTensor * bottom_grad);\n"
  },
  {
    "path": "lib/model/roi_align/src/roi_align_kernel.cu",
    "content": "#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n#include <math.h>\n#include <float.h>\n#include \"roi_align_kernel.h\"\n\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n            i += blockDim.x * gridDim.x)\n\n\n    __global__ void ROIAlignForward(const int nthreads, const float* bottom_data, const float spatial_scale, const int height, const int width,\n                                    const int channels, const int aligned_height, const int aligned_width, const float* bottom_rois, float* top_data) {\n        CUDA_1D_KERNEL_LOOP(index, nthreads) {\n            // (n, c, ph, pw) is an element in the aligned output\n            // int n = index;\n            // int pw = n % aligned_width;\n            // n /= aligned_width;\n            // int ph = n % aligned_height;\n            // n /= aligned_height;\n            // int c = n % channels;\n            // n /= channels;\n\n            int pw = index % aligned_width;\n            int ph = (index / aligned_width) % aligned_height;\n            int c  = (index / aligned_width / aligned_height) % channels;\n            int n  = index / aligned_width / aligned_height / channels;\n\n            // bottom_rois += n * 5;\n            float roi_batch_ind = bottom_rois[n * 5 + 0];\n            float roi_start_w = bottom_rois[n * 5 + 1] * spatial_scale;\n            float roi_start_h = bottom_rois[n * 5 + 2] * spatial_scale;\n            float roi_end_w = bottom_rois[n * 5 + 3] * spatial_scale;\n            float roi_end_h = bottom_rois[n * 5 + 4] * spatial_scale;\n\n            // Force malformed ROIs to be 1x1\n            float roi_width = fmaxf(roi_end_w - roi_start_w + 1., 0.);\n            float roi_height = fmaxf(roi_end_h - roi_start_h + 1., 0.);\n            float bin_size_h = roi_height / (aligned_height - 1.);\n            float bin_size_w = roi_width / (aligned_width - 1.);\n\n            
float h = (float)(ph) * bin_size_h + roi_start_h;\n            float w = (float)(pw) * bin_size_w + roi_start_w;\n\n            int hstart = fminf(floor(h), height - 2);\n            int wstart = fminf(floor(w), width - 2);\n\n            int img_start = roi_batch_ind * channels * height * width;\n\n            // bilinear interpolation\n            if (h < 0 || h >= height || w < 0 || w >= width) {\n                top_data[index] = 0.;\n            } else {\n                float h_ratio = h - (float)(hstart);\n                float w_ratio = w - (float)(wstart);\n                int upleft = img_start + (c * height + hstart) * width + wstart;\n                int upright = upleft + 1;\n                int downleft = upleft + width;\n                int downright = downleft + 1;\n\n                top_data[index] = bottom_data[upleft] * (1. - h_ratio) * (1. - w_ratio)\n                    + bottom_data[upright] * (1. - h_ratio) * w_ratio\n                    + bottom_data[downleft] * h_ratio * (1. 
- w_ratio)\n                    + bottom_data[downright] * h_ratio * w_ratio;\n            }\n        }\n    }\n\n\n    int ROIAlignForwardLaucher(const float* bottom_data, const float spatial_scale, const int num_rois, const int height, const int width,\n                               const int channels, const int aligned_height, const int aligned_width, const float* bottom_rois, float* top_data, cudaStream_t stream) {\n        const int kThreadsPerBlock = 1024;\n        const int output_size = num_rois * aligned_height * aligned_width * channels;\n        cudaError_t err;\n\n\n        ROIAlignForward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>(\n          output_size, bottom_data, spatial_scale, height, width, channels,\n          aligned_height, aligned_width, bottom_rois, top_data);\n\n        err = cudaGetLastError();\n        if(cudaSuccess != err) {\n            fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n            exit( -1 );\n        }\n\n        return 1;\n    }\n\n\n    __global__ void ROIAlignBackward(const int nthreads, const float* top_diff, const float spatial_scale, const int height, const int width,\n                                     const int channels, const int aligned_height, const int aligned_width, float* bottom_diff, const float* bottom_rois) {\n        CUDA_1D_KERNEL_LOOP(index, nthreads) {\n\n            // (n, c, ph, pw) is an element in the aligned output\n            int pw = index % aligned_width;\n            int ph = (index / aligned_width) % aligned_height;\n            int c  = (index / aligned_width / aligned_height) % channels;\n            int n  = index / aligned_width / aligned_height / channels;\n\n            float roi_batch_ind = bottom_rois[n * 5 + 0];\n            float roi_start_w = bottom_rois[n * 5 + 1] * spatial_scale;\n            float roi_start_h = bottom_rois[n * 5 + 2] * spatial_scale;\n            float roi_end_w = 
bottom_rois[n * 5 + 3] * spatial_scale;\n            float roi_end_h = bottom_rois[n * 5 + 4] * spatial_scale;\n            /* int roi_start_w = round(bottom_rois[1] * spatial_scale); */\n            /* int roi_start_h = round(bottom_rois[2] * spatial_scale); */\n            /* int roi_end_w = round(bottom_rois[3] * spatial_scale); */\n            /* int roi_end_h = round(bottom_rois[4] * spatial_scale); */\n\n            // Force malformed ROIs to be 1x1\n            float roi_width = fmaxf(roi_end_w - roi_start_w + 1., 0.);\n            float roi_height = fmaxf(roi_end_h - roi_start_h + 1., 0.);\n            float bin_size_h = roi_height / (aligned_height - 1.);\n            float bin_size_w = roi_width / (aligned_width - 1.);\n\n            float h = (float)(ph) * bin_size_h + roi_start_h;\n            float w = (float)(pw) * bin_size_w + roi_start_w;\n\n            int hstart = fminf(floor(h), height - 2);\n            int wstart = fminf(floor(w), width - 2);\n\n            int img_start = roi_batch_ind * channels * height * width;\n\n            // bilinear interpolation\n            if (!(h < 0 || h >= height || w < 0 || w >= width)) {\n                float h_ratio = h - (float)(hstart);\n                float w_ratio = w - (float)(wstart);\n                int upleft = img_start + (c * height + hstart) * width + wstart;\n                int upright = upleft + 1;\n                int downleft = upleft + width;\n                int downright = downleft + 1;\n\n                atomicAdd(bottom_diff + upleft, top_diff[index] * (1. - h_ratio) * (1 - w_ratio));\n                atomicAdd(bottom_diff + upright, top_diff[index] * (1. 
- h_ratio) * w_ratio);\n                atomicAdd(bottom_diff + downleft, top_diff[index] * h_ratio * (1 - w_ratio));\n                atomicAdd(bottom_diff + downright, top_diff[index] * h_ratio * w_ratio);\n            }\n        }\n    }\n\n    int ROIAlignBackwardLaucher(const float* top_diff, const float spatial_scale, const int batch_size, const int num_rois, const int height, const int width,\n                                const int channels, const int aligned_height, const int aligned_width, const float* bottom_rois, float* bottom_diff, cudaStream_t stream) {\n        const int kThreadsPerBlock = 1024;\n        const int output_size = num_rois * aligned_height * aligned_width * channels;\n        cudaError_t err;\n\n        ROIAlignBackward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>(\n          output_size, top_diff, spatial_scale, height, width, channels,\n          aligned_height, aligned_width, bottom_diff, bottom_rois);\n\n        err = cudaGetLastError();\n        if(cudaSuccess != err) {\n            fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n            exit( -1 );\n        }\n\n        return 1;\n    }\n\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "lib/model/roi_align/src/roi_align_kernel.h",
    "content": "#ifndef _ROI_ALIGN_KERNEL\n#define _ROI_ALIGN_KERNEL\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n__global__ void ROIAlignForward(const int nthreads, const float* bottom_data,\n    const float spatial_scale, const int height, const int width,\n    const int channels, const int aligned_height, const int aligned_width,\n    const float* bottom_rois, float* top_data);\n\nint ROIAlignForwardLaucher(\n    const float* bottom_data, const float spatial_scale, const int num_rois, const int height,\n    const int width, const int channels, const int aligned_height,\n    const int aligned_width, const float* bottom_rois,\n    float* top_data, cudaStream_t stream);\n\n__global__ void ROIAlignBackward(const int nthreads, const float* top_diff,\n    const float spatial_scale, const int height, const int width,\n    const int channels, const int aligned_height, const int aligned_width,\n    float* bottom_diff, const float* bottom_rois);\n\nint ROIAlignBackwardLaucher(const float* top_diff, const float spatial_scale, const int batch_size, const int num_rois,\n    const int height, const int width, const int channels, const int aligned_height,\n    const int aligned_width, const float* bottom_rois,\n    float* bottom_diff, cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lib/model/roi_crop/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_crop/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_crop/_ext/roi_crop/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._roi_crop import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        if callable(fn):\n            locals[symbol] = _wrap_function(fn, _ffi)\n        else:\n            locals[symbol] = fn\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/model/roi_crop/build.py",
    "content": "from __future__ import print_function\nimport os\nimport torch\nfrom torch.utils.ffi import create_extension\n\n#this_file = os.path.dirname(__file__)\n\nsources = ['src/roi_crop.c']\nheaders = ['src/roi_crop.h']\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/roi_crop_cuda.c']\n    headers += ['src/roi_crop_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/roi_crop_cuda_kernel.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n    '_ext.roi_crop',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/model/roi_crop/functions/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_crop/functions/crop_resize.py",
    "content": "# functions/add.py\nimport torch\nfrom torch.autograd import Function\nfrom .._ext import roi_crop\nfrom cffi import FFI\nffi = FFI()\n\nclass RoICropFunction(Function):\n    def forward(self, input1, input2):\n        self.input1 = input1\n        self.input2 = input2\n        self.device_c = ffi.new(\"int *\")\n        output = torch.zeros(input2.size()[0], input1.size()[1], input2.size()[1], input2.size()[2])\n        #print('decice %d' % torch.cuda.current_device())\n        if input1.is_cuda:\n            self.device = torch.cuda.current_device()\n        else:\n            self.device = -1\n        self.device_c[0] = self.device\n        if not input1.is_cuda:\n            roi_crop.BilinearSamplerBHWD_updateOutput(input1, input2, output)\n        else:\n            output = output.cuda(self.device)\n            roi_crop.BilinearSamplerBHWD_updateOutput_cuda(input1, input2, output)\n        return output\n\n    def backward(self, grad_output):\n        grad_input1 = torch.zeros(self.input1.size())\n        grad_input2 = torch.zeros(self.input2.size())\n        #print('backward decice %d' % self.device)\n        if not grad_output.is_cuda:\n            roi_crop.BilinearSamplerBHWD_updateGradInput(self.input1, self.input2, grad_input1, grad_input2, grad_output)\n        else:\n            grad_input1 = grad_input1.cuda(self.device)\n            grad_input2 = grad_input2.cuda(self.device)\n            roi_crop.BilinearSamplerBHWD_updateGradInput_cuda(self.input1, self.input2, grad_input1, grad_input2, grad_output)\n        return grad_input1, grad_input2\n"
  },
  {
    "path": "lib/model/roi_crop/functions/gridgen.py",
    "content": "# functions/add.py\nimport torch\nfrom torch.autograd import Function\nimport numpy as np\n\n\nclass AffineGridGenFunction(Function):\n    def __init__(self, height, width,lr=1):\n        super(AffineGridGenFunction, self).__init__()\n        self.lr = lr\n        self.height, self.width = height, width\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/(self.height)), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/(self.width)), 0), repeats = self.height, axis = 0), 0)\n        # self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/(self.height - 1)), 0), repeats = self.width, axis = 0).T, 0)\n        # self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/(self.width - 1)), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n        #print(self.grid)\n\n    def forward(self, input1):\n        self.input1 = input1\n        output = input1.new(torch.Size([input1.size(0)]) + self.grid.size()).zero_()\n        self.batchgrid = input1.new(torch.Size([input1.size(0)]) + self.grid.size()).zero_()\n        for i in range(input1.size(0)):\n            self.batchgrid[i] = self.grid.astype(self.batchgrid[i])\n\n        # if input1.is_cuda:\n        #    self.batchgrid = self.batchgrid.cuda()\n        #    output = output.cuda()\n\n        for i in range(input1.size(0)):\n            output = torch.bmm(self.batchgrid.view(-1, self.height*self.width, 3), torch.transpose(input1, 1, 2)).view(-1, self.height, self.width, 2)\n\n        return output\n\n    def backward(self, grad_output):\n\n        grad_input1 = self.input1.new(self.input1.size()).zero_()\n\n        # if grad_output.is_cuda:\n        
#    self.batchgrid = self.batchgrid.cuda()\n        #    grad_input1 = grad_input1.cuda()\n\n        grad_input1 = torch.baddbmm(grad_input1, torch.transpose(grad_output.view(-1, self.height*self.width, 2), 1,2), self.batchgrid.view(-1, self.height*self.width, 3))\n        return grad_input1\n"
  },
  {
    "path": "lib/model/roi_crop/functions/roi_crop.py",
    "content": "# functions/add.py\nimport torch\nfrom torch.autograd import Function\nfrom .._ext import roi_crop\nimport pdb\n\nclass RoICropFunction(Function):\n    def forward(self, input1, input2):\n        self.input1 = input1.clone()\n        self.input2 = input2.clone()\n        output = input2.new(input2.size()[0], input1.size()[1], input2.size()[1], input2.size()[2]).zero_()\n        assert output.get_device() == input1.get_device(), \"output and input1 must on the same device\"\n        assert output.get_device() == input2.get_device(), \"output and input2 must on the same device\"\n        roi_crop.BilinearSamplerBHWD_updateOutput_cuda(input1, input2, output)\n        return output\n\n    def backward(self, grad_output):\n        grad_input1 = self.input1.new(self.input1.size()).zero_()\n        grad_input2 = self.input2.new(self.input2.size()).zero_()\n        roi_crop.BilinearSamplerBHWD_updateGradInput_cuda(self.input1, self.input2, grad_input1, grad_input2, grad_output)\n        return grad_input1, grad_input2\n"
  },
  {
    "path": "lib/model/roi_crop/modules/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_crop/modules/gridgen.py",
    "content": "from torch.nn.modules.module import Module\nimport torch\nfrom torch.autograd import Variable\nimport numpy as np\nfrom ..functions.gridgen import AffineGridGenFunction\n\nimport pyximport\npyximport.install(setup_args={\"include_dirs\":np.get_include()},\n                  reload_support=True)\n\n\nclass _AffineGridGen(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(_AffineGridGen, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.f = AffineGridGenFunction(self.height, self.width, lr=lr)\n        self.lr = lr\n    def forward(self, input):\n        # if not self.aux_loss:\n        return self.f(input)\n        # else:\n        #     identity = torch.from_numpy(np.array([[1,0,0], [0,1,0]], dtype=np.float32))\n        #     batch_identity = torch.zeros([input.size(0), 2,3])\n        #     for i in range(input.size(0)):\n        #         batch_identity[i] = identity\n        #     batch_identity = Variable(batch_identity)\n        #     loss = torch.mul(input - batch_identity, input - batch_identity)\n        #     loss = torch.sum(loss,1)\n        #     loss = torch.sum(loss,2)\n\n        #       return self.f(input), loss.view(-1,1)\n\nclass CylinderGridGen(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(CylinderGridGen, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.f = CylinderGridGenFunction(self.height, self.width, lr=lr)\n        self.lr = lr\n    def forward(self, input):\n\n        if not self.aux_loss:\n            return self.f(input)\n        else:\n            return self.f(input), torch.mul(input, input).view(-1,1)\n\n\nclass AffineGridGenV2(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(AffineGridGenV2, self).__init__()\n        self.height, self.width = height, width\n        
self.aux_loss = aux_loss\n        self.lr = lr\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n\n    def forward(self, input1):\n        self.batchgrid = torch.zeros(torch.Size([input1.size(0)]) + self.grid.size())\n\n        for i in range(input1.size(0)):\n            self.batchgrid[i] = self.grid\n        self.batchgrid = Variable(self.batchgrid)\n\n        if input1.is_cuda:\n            self.batchgrid = self.batchgrid.cuda()\n\n        output = torch.bmm(self.batchgrid.view(-1, self.height*self.width, 3), torch.transpose(input1, 1, 2)).view(-1, self.height, self.width, 2)\n\n        return output\n\n\nclass CylinderGridGenV2(Module):\n    def __init__(self, height, width, lr = 1):\n        super(CylinderGridGenV2, self).__init__()\n        self.height, self.width = height, width\n        self.lr = lr\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n    def forward(self, input):\n        self.batchgrid = torch.zeros(torch.Size([input.size(0)]) + self.grid.size() )\n        #print(self.batchgrid.size())\n        for i in range(input.size(0)):\n            self.batchgrid[i,:,:,:] = 
self.grid\n        self.batchgrid = Variable(self.batchgrid)\n\n        #print(self.batchgrid.size())\n\n        input_u = input.view(-1,1,1,1).repeat(1,self.height, self.width,1)\n        #print(input_u.requires_grad, self.batchgrid)\n\n        output0 = self.batchgrid[:,:,:,0:1]\n        output1 = torch.atan(torch.tan(np.pi/2.0*(self.batchgrid[:,:,:,1:2] + self.batchgrid[:,:,:,2:] * input_u[:,:,:,:])))  /(np.pi/2)\n        #print(output0.size(), output1.size())\n\n        output = torch.cat([output0, output1], 3)\n        return output\n\n\nclass DenseAffineGridGen(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(DenseAffineGridGen, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.lr = lr\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n\n    def forward(self, input1):\n        self.batchgrid = torch.zeros(torch.Size([input1.size(0)]) + self.grid.size())\n\n        for i in range(input1.size(0)):\n            self.batchgrid[i] = self.grid\n\n        self.batchgrid = Variable(self.batchgrid)\n        #print self.batchgrid,  input1[:,:,:,0:3]\n        #print self.batchgrid,  input1[:,:,:,4:6]\n        x = torch.mul(self.batchgrid, input1[:,:,:,0:3])\n        y = torch.mul(self.batchgrid, input1[:,:,:,3:6])\n\n        output = torch.cat([torch.sum(x,3),torch.sum(y,3)], 3)\n        return output\n\n\n\n\nclass DenseAffine3DGridGen(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        
super(DenseAffine3DGridGen, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.lr = lr\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n        self.theta = self.grid[:,:,0] * np.pi/2 + np.pi/2\n        self.phi = self.grid[:,:,1] * np.pi\n\n        self.x = torch.sin(self.theta) * torch.cos(self.phi)\n        self.y = torch.sin(self.theta) * torch.sin(self.phi)\n        self.z = torch.cos(self.theta)\n\n        self.grid3d = torch.from_numpy(np.zeros( [self.height, self.width, 4], dtype=np.float32))\n\n        self.grid3d[:,:,0] = self.x\n        self.grid3d[:,:,1] = self.y\n        self.grid3d[:,:,2] = self.z\n        self.grid3d[:,:,3] = self.grid[:,:,2]\n\n\n    def forward(self, input1):\n        self.batchgrid3d = torch.zeros(torch.Size([input1.size(0)]) + self.grid3d.size())\n\n        for i in range(input1.size(0)):\n            self.batchgrid3d[i] = self.grid3d\n\n        self.batchgrid3d = Variable(self.batchgrid3d)\n        #print(self.batchgrid3d)\n\n        x = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,0:4]), 3)\n        y = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,4:8]), 3)\n        z = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,8:]), 3)\n        #print(x)\n        r = torch.sqrt(x**2 + y**2 + z**2) + 1e-5\n\n        #print(r)\n        theta = torch.acos(z/r)/(np.pi/2)  - 1\n        #phi = torch.atan(y/x)\n        phi = torch.atan(y/(x + 1e-5))  + np.pi * x.lt(0).type(torch.FloatTensor) * (y.ge(0).type(torch.FloatTensor) - 
y.lt(0).type(torch.FloatTensor))\n        phi = phi/np.pi\n\n\n        output = torch.cat([theta,phi], 3)\n\n        return output\n\n\n\n\n\nclass DenseAffine3DGridGen_rotate(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(DenseAffine3DGridGen_rotate, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.lr = lr\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n        self.theta = self.grid[:,:,0] * np.pi/2 + np.pi/2\n        self.phi = self.grid[:,:,1] * np.pi\n\n        self.x = torch.sin(self.theta) * torch.cos(self.phi)\n        self.y = torch.sin(self.theta) * torch.sin(self.phi)\n        self.z = torch.cos(self.theta)\n\n        self.grid3d = torch.from_numpy(np.zeros( [self.height, self.width, 4], dtype=np.float32))\n\n        self.grid3d[:,:,0] = self.x\n        self.grid3d[:,:,1] = self.y\n        self.grid3d[:,:,2] = self.z\n        self.grid3d[:,:,3] = self.grid[:,:,2]\n\n\n    def forward(self, input1, input2):\n        self.batchgrid3d = torch.zeros(torch.Size([input1.size(0)]) + self.grid3d.size())\n\n        for i in range(input1.size(0)):\n            self.batchgrid3d[i] = self.grid3d\n\n        self.batchgrid3d = Variable(self.batchgrid3d)\n\n        self.batchgrid = torch.zeros(torch.Size([input1.size(0)]) + self.grid.size())\n\n        for i in range(input1.size(0)):\n            self.batchgrid[i] = self.grid\n\n        self.batchgrid = Variable(self.batchgrid)\n\n        #print(self.batchgrid3d)\n\n      
  x = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,0:4]), 3)\n        y = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,4:8]), 3)\n        z = torch.sum(torch.mul(self.batchgrid3d, input1[:,:,:,8:]), 3)\n        #print(x)\n        r = torch.sqrt(x**2 + y**2 + z**2) + 1e-5\n\n        #print(r)\n        theta = torch.acos(z/r)/(np.pi/2)  - 1\n        #phi = torch.atan(y/x)\n        phi = torch.atan(y/(x + 1e-5))  + np.pi * x.lt(0).type(torch.FloatTensor) * (y.ge(0).type(torch.FloatTensor) - y.lt(0).type(torch.FloatTensor))\n        phi = phi/np.pi\n\n        input_u = input2.view(-1,1,1,1).repeat(1,self.height, self.width,1)\n\n        output = torch.cat([theta,phi], 3)\n\n        output1 = torch.atan(torch.tan(np.pi/2.0*(output[:,:,:,1:2] + self.batchgrid[:,:,:,2:] * input_u[:,:,:,:])))  /(np.pi/2)\n        output2 = torch.cat([output[:,:,:,0:1], output1], 3)\n\n        return output2\n\n\nclass Depth3DGridGen(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False):\n        super(Depth3DGridGen, self).__init__()\n        self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.lr = lr\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n        self.theta = self.grid[:,:,0] * np.pi/2 + np.pi/2\n        self.phi = self.grid[:,:,1] * np.pi\n\n        self.x = torch.sin(self.theta) * torch.cos(self.phi)\n        self.y = torch.sin(self.theta) * torch.sin(self.phi)\n        self.z = torch.cos(self.theta)\n\n        self.grid3d = torch.from_numpy(np.zeros( [self.height, 
self.width, 4], dtype=np.float32))\n\n        self.grid3d[:,:,0] = self.x\n        self.grid3d[:,:,1] = self.y\n        self.grid3d[:,:,2] = self.z\n        self.grid3d[:,:,3] = self.grid[:,:,2]\n\n\n    def forward(self, depth, trans0, trans1, rotate):\n        self.batchgrid3d = torch.zeros(torch.Size([depth.size(0)]) + self.grid3d.size())\n\n        for i in range(depth.size(0)):\n            self.batchgrid3d[i] = self.grid3d\n\n        self.batchgrid3d = Variable(self.batchgrid3d)\n\n        self.batchgrid = torch.zeros(torch.Size([depth.size(0)]) + self.grid.size())\n\n        for i in range(depth.size(0)):\n            self.batchgrid[i] = self.grid\n\n        self.batchgrid = Variable(self.batchgrid)\n\n        x = self.batchgrid3d[:,:,:,0:1] * depth + trans0.view(-1,1,1,1).repeat(1, self.height, self.width, 1)\n\n        y = self.batchgrid3d[:,:,:,1:2] * depth + trans1.view(-1,1,1,1).repeat(1, self.height, self.width, 1)\n        z = self.batchgrid3d[:,:,:,2:3] * depth\n        #print(x.size(), y.size(), z.size())\n        r = torch.sqrt(x**2 + y**2 + z**2) + 1e-5\n\n        #print(r)\n        theta = torch.acos(z/r)/(np.pi/2)  - 1\n        #phi = torch.atan(y/x)\n        phi = torch.atan(y/(x + 1e-5))  + np.pi * x.lt(0).type(torch.FloatTensor) * (y.ge(0).type(torch.FloatTensor) - y.lt(0).type(torch.FloatTensor))\n        phi = phi/np.pi\n\n        #print(theta.size(), phi.size())\n\n\n        input_u = rotate.view(-1,1,1,1).repeat(1,self.height, self.width,1)\n\n        output = torch.cat([theta,phi], 3)\n        #print(output.size())\n\n        output1 = torch.atan(torch.tan(np.pi/2.0*(output[:,:,:,1:2] + self.batchgrid[:,:,:,2:] * input_u[:,:,:,:])))  /(np.pi/2)\n        output2 = torch.cat([output[:,:,:,0:1], output1], 3)\n\n        return output2\n\n\n\n\n\nclass Depth3DGridGen_with_mask(Module):\n    def __init__(self, height, width, lr = 1, aux_loss = False, ray_tracing = False):\n        super(Depth3DGridGen_with_mask, self).__init__()\n        
self.height, self.width = height, width\n        self.aux_loss = aux_loss\n        self.lr = lr\n        self.ray_tracing = ray_tracing\n\n        self.grid = np.zeros( [self.height, self.width, 3], dtype=np.float32)\n        self.grid[:,:,0] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.height), 0), repeats = self.width, axis = 0).T, 0)\n        self.grid[:,:,1] = np.expand_dims(np.repeat(np.expand_dims(np.arange(-1, 1, 2.0/self.width), 0), repeats = self.height, axis = 0), 0)\n        self.grid[:,:,2] = np.ones([self.height, width])\n        self.grid = torch.from_numpy(self.grid.astype(np.float32))\n\n        self.theta = self.grid[:,:,0] * np.pi/2 + np.pi/2\n        self.phi = self.grid[:,:,1] * np.pi\n\n        self.x = torch.sin(self.theta) * torch.cos(self.phi)\n        self.y = torch.sin(self.theta) * torch.sin(self.phi)\n        self.z = torch.cos(self.theta)\n\n        self.grid3d = torch.from_numpy(np.zeros( [self.height, self.width, 4], dtype=np.float32))\n\n        self.grid3d[:,:,0] = self.x\n        self.grid3d[:,:,1] = self.y\n        self.grid3d[:,:,2] = self.z\n        self.grid3d[:,:,3] = self.grid[:,:,2]\n\n\n    def forward(self, depth, trans0, trans1, rotate):\n        self.batchgrid3d = torch.zeros(torch.Size([depth.size(0)]) + self.grid3d.size())\n\n        for i in range(depth.size(0)):\n            self.batchgrid3d[i] = self.grid3d\n\n        self.batchgrid3d = Variable(self.batchgrid3d)\n\n        self.batchgrid = torch.zeros(torch.Size([depth.size(0)]) + self.grid.size())\n\n        for i in range(depth.size(0)):\n            self.batchgrid[i] = self.grid\n\n        self.batchgrid = Variable(self.batchgrid)\n\n        if depth.is_cuda:\n            self.batchgrid = self.batchgrid.cuda()\n            self.batchgrid3d = self.batchgrid3d.cuda()\n\n\n        x_ = self.batchgrid3d[:,:,:,0:1] * depth + trans0.view(-1,1,1,1).repeat(1, self.height, self.width, 1)\n\n        y_ = self.batchgrid3d[:,:,:,1:2] * depth + 
trans1.view(-1,1,1,1).repeat(1, self.height, self.width, 1)\n        z = self.batchgrid3d[:,:,:,2:3] * depth\n        #print(x.size(), y.size(), z.size())\n\n        rotate_z = rotate.view(-1,1,1,1).repeat(1,self.height, self.width,1) * np.pi\n\n        x = x_ * torch.cos(rotate_z) - y_ * torch.sin(rotate_z)\n        y = x_ * torch.sin(rotate_z) + y_ * torch.cos(rotate_z)\n\n\n        r = torch.sqrt(x**2 + y**2 + z**2) + 1e-5\n\n        #print(r)\n        theta = torch.acos(z/r)/(np.pi/2)  - 1\n        #phi = torch.atan(y/x)\n\n        if depth.is_cuda:\n            phi = torch.atan(y/(x + 1e-5))  + np.pi * x.lt(0).type(torch.cuda.FloatTensor) * (y.ge(0).type(torch.cuda.FloatTensor) - y.lt(0).type(torch.cuda.FloatTensor))\n        else:\n            phi = torch.atan(y/(x + 1e-5))  + np.pi * x.lt(0).type(torch.FloatTensor) * (y.ge(0).type(torch.FloatTensor) - y.lt(0).type(torch.FloatTensor))\n\n\n        phi = phi/np.pi\n\n        output = torch.cat([theta,phi], 3)\n        return output\n"
  },
  {
    "path": "lib/model/roi_crop/modules/roi_crop.py",
    "content": "from torch.nn.modules.module import Module\nfrom ..functions.roi_crop import RoICropFunction\n\nclass _RoICrop(Module):\n    def __init__(self, layout = 'BHWD'):\n        super(_RoICrop, self).__init__()\n    def forward(self, input1, input2):\n        return RoICropFunction()(input1, input2)\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop.c",
    "content": "#include <TH/TH.h>\n#include <stdbool.h>\n#include <stdio.h>\n\n#define real float\n\nint BilinearSamplerBHWD_updateOutput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *output)\n{\n\n  int batchsize = inputImages->size[0];\n  int inputImages_height = inputImages->size[1];\n  int inputImages_width = inputImages->size[2];\n  int output_height = output->size[1];\n  int output_width = output->size[2];\n  int inputImages_channels = inputImages->size[3];\n\n  int output_strideBatch = output->stride[0];\n  int output_strideHeight = output->stride[1];\n  int output_strideWidth = output->stride[2];\n\n  int inputImages_strideBatch = inputImages->stride[0];\n  int inputImages_strideHeight = inputImages->stride[1];\n  int inputImages_strideWidth = inputImages->stride[2];\n\n  int grids_strideBatch = grids->stride[0];\n  int grids_strideHeight = grids->stride[1];\n  int grids_strideWidth = grids->stride[2];\n\n\n  real *inputImages_data, *output_data, *grids_data;\n  inputImages_data = THFloatTensor_data(inputImages);\n  output_data = THFloatTensor_data(output);\n  grids_data = THFloatTensor_data(grids);\n\n  int b, yOut, xOut;\n\n  for(b=0; b < batchsize; b++)\n  {\n    for(yOut=0; yOut < output_height; yOut++)\n    {\n      for(xOut=0; xOut < output_width; xOut++)\n      {\n        //read the grid\n        real yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n        real xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + 1];\n\n        // get the weights for interpolation\n        int yInTopLeft, xInTopLeft;\n        real yWeightTopLeft, xWeightTopLeft;\n\n        real xcoord = (xf + 1) * (inputImages_width - 1) / 2;\n        xInTopLeft = floor(xcoord);\n        xWeightTopLeft = 1 - (xcoord - xInTopLeft);\n\n        real ycoord = (yf + 1) * (inputImages_height - 1) / 2;\n        yInTopLeft = floor(ycoord);\n        yWeightTopLeft = 1 - (ycoord - 
yInTopLeft);\n\n\n\n        const int outAddress = output_strideBatch * b + output_strideHeight * yOut + output_strideWidth * xOut;\n        const int inTopLeftAddress = inputImages_strideBatch * b + inputImages_strideHeight * yInTopLeft + inputImages_strideWidth * xInTopLeft;\n        const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n        const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n        const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n        real v=0;\n        real inTopLeft=0;\n        real inTopRight=0;\n        real inBottomLeft=0;\n        real inBottomRight=0;\n\n        // we are careful with the boundaries\n        bool topLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool topRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool bottomLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n        bool bottomRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n\n        int t;\n        // interpolation happens here\n        for(t=0; t<inputImages_channels; t++)\n        {\n           if(topLeftIsIn) inTopLeft = inputImages_data[inTopLeftAddress + t];\n           if(topRightIsIn) inTopRight = inputImages_data[inTopRightAddress + t];\n           if(bottomLeftIsIn) inBottomLeft = inputImages_data[inBottomLeftAddress + t];\n           if(bottomRightIsIn) inBottomRight = inputImages_data[inBottomRightAddress + t];\n\n           v = xWeightTopLeft * yWeightTopLeft * inTopLeft\n             + (1 - xWeightTopLeft) * yWeightTopLeft * inTopRight\n             + xWeightTopLeft * (1 - yWeightTopLeft) * inBottomLeft\n             + (1 - xWeightTopLeft) * 
(1 - yWeightTopLeft) * inBottomRight;\n\n           output_data[outAddress + t] = v;\n        }\n\n      }\n    }\n  }\n\n  return 1;\n}\n\n\n\nint BilinearSamplerBHWD_updateGradInput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *gradInputImages,\n                                        THFloatTensor *gradGrids, THFloatTensor *gradOutput)\n{\n  bool onlyGrid=false;\n\n  int batchsize = inputImages->size[0];\n  int inputImages_height = inputImages->size[1];\n  int inputImages_width = inputImages->size[2];\n  int gradOutput_height = gradOutput->size[1];\n  int gradOutput_width = gradOutput->size[2];\n  int inputImages_channels = inputImages->size[3];\n\n  int gradOutput_strideBatch = gradOutput->stride[0];\n  int gradOutput_strideHeight = gradOutput->stride[1];\n  int gradOutput_strideWidth = gradOutput->stride[2];\n\n  int inputImages_strideBatch = inputImages->stride[0];\n  int inputImages_strideHeight = inputImages->stride[1];\n  int inputImages_strideWidth = inputImages->stride[2];\n\n  int gradInputImages_strideBatch = gradInputImages->stride[0];\n  int gradInputImages_strideHeight = gradInputImages->stride[1];\n  int gradInputImages_strideWidth = gradInputImages->stride[2];\n\n  int grids_strideBatch = grids->stride[0];\n  int grids_strideHeight = grids->stride[1];\n  int grids_strideWidth = grids->stride[2];\n\n  int gradGrids_strideBatch = gradGrids->stride[0];\n  int gradGrids_strideHeight = gradGrids->stride[1];\n  int gradGrids_strideWidth = gradGrids->stride[2];\n\n  real *inputImages_data, *gradOutput_data, *grids_data, *gradGrids_data, *gradInputImages_data;\n  inputImages_data = THFloatTensor_data(inputImages);\n  gradOutput_data = THFloatTensor_data(gradOutput);\n  grids_data = THFloatTensor_data(grids);\n  gradGrids_data = THFloatTensor_data(gradGrids);\n  gradInputImages_data = THFloatTensor_data(gradInputImages);\n\n  int b, yOut, xOut;\n\n  for(b=0; b < batchsize; b++)\n  {\n    for(yOut=0; yOut < gradOutput_height; yOut++)\n    
{\n      for(xOut=0; xOut < gradOutput_width; xOut++)\n      {\n        //read the grid\n        real yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n        real xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + 1];\n\n        // get the weights for interpolation\n        int yInTopLeft, xInTopLeft;\n        real yWeightTopLeft, xWeightTopLeft;\n\n        real xcoord = (xf + 1) * (inputImages_width - 1) / 2;\n        xInTopLeft = floor(xcoord);\n        xWeightTopLeft = 1 - (xcoord - xInTopLeft);\n\n        real ycoord = (yf + 1) * (inputImages_height - 1) / 2;\n        yInTopLeft = floor(ycoord);\n        yWeightTopLeft = 1 - (ycoord - yInTopLeft);\n\n\n        const int inTopLeftAddress = inputImages_strideBatch * b + inputImages_strideHeight * yInTopLeft + inputImages_strideWidth * xInTopLeft;\n        const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n        const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n        const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n        const int gradInputImagesTopLeftAddress = gradInputImages_strideBatch * b + gradInputImages_strideHeight * yInTopLeft + gradInputImages_strideWidth * xInTopLeft;\n        const int gradInputImagesTopRightAddress = gradInputImagesTopLeftAddress + gradInputImages_strideWidth;\n        const int gradInputImagesBottomLeftAddress = gradInputImagesTopLeftAddress + gradInputImages_strideHeight;\n        const int gradInputImagesBottomRightAddress = gradInputImagesBottomLeftAddress + gradInputImages_strideWidth;\n\n        const int gradOutputAddress = gradOutput_strideBatch * b + gradOutput_strideHeight * yOut + gradOutput_strideWidth * xOut;\n\n        real topLeftDotProduct = 0;\n        real topRightDotProduct = 0;\n        real bottomLeftDotProduct = 0;\n        real bottomRightDotProduct = 0;\n\n        real v=0;\n        real 
inTopLeft=0;\n        real inTopRight=0;\n        real inBottomLeft=0;\n        real inBottomRight=0;\n\n        // we are careful with the boundaries\n        bool topLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool topRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool bottomLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n        bool bottomRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n\n        int t;\n\n        for(t=0; t<inputImages_channels; t++)\n        {\n           real gradOutValue = gradOutput_data[gradOutputAddress + t];\n           if(topLeftIsIn)\n           {\n              real inTopLeft = inputImages_data[inTopLeftAddress + t];\n              topLeftDotProduct += inTopLeft * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesTopLeftAddress + t] += xWeightTopLeft * yWeightTopLeft * gradOutValue;\n           }\n\n           if(topRightIsIn)\n           {\n              real inTopRight = inputImages_data[inTopRightAddress + t];\n              topRightDotProduct += inTopRight * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesTopRightAddress + t] += (1 - xWeightTopLeft) * yWeightTopLeft * gradOutValue;\n           }\n\n           if(bottomLeftIsIn)\n           {\n              real inBottomLeft = inputImages_data[inBottomLeftAddress + t];\n              bottomLeftDotProduct += inBottomLeft * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesBottomLeftAddress + t] += xWeightTopLeft * (1 - yWeightTopLeft) * gradOutValue;\n           }\n\n           if(bottomRightIsIn)\n           {\n              real inBottomRight = 
inputImages_data[inBottomRightAddress + t];\n              bottomRightDotProduct += inBottomRight * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesBottomRightAddress + t] += (1 - xWeightTopLeft) * (1 - yWeightTopLeft) * gradOutValue;\n           }\n        }\n\n        yf = - xWeightTopLeft * topLeftDotProduct + xWeightTopLeft * bottomLeftDotProduct - (1-xWeightTopLeft) * topRightDotProduct + (1-xWeightTopLeft) * bottomRightDotProduct;\n        xf = - yWeightTopLeft * topLeftDotProduct + yWeightTopLeft * topRightDotProduct - (1-yWeightTopLeft) * bottomLeftDotProduct + (1-yWeightTopLeft) * bottomRightDotProduct;\n\n        gradGrids_data[b*gradGrids_strideBatch + yOut*gradGrids_strideHeight + xOut*gradGrids_strideWidth] = yf * (inputImages_height-1) / 2;\n        gradGrids_data[b*gradGrids_strideBatch + yOut*gradGrids_strideHeight + xOut*gradGrids_strideWidth + 1] = xf * (inputImages_width-1) / 2;\n\n      }\n    }\n  }\n\n  return 1;\n}\n\n\nint BilinearSamplerBCHW_updateOutput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *output)\n{\n\n  int batchsize = inputImages->size[0];\n  int inputImages_height = inputImages->size[2];\n  int inputImages_width = inputImages->size[3];\n  \n  int output_height = output->size[2];\n  int output_width = output->size[3];\n  int inputImages_channels = inputImages->size[1];\n\n  int output_strideBatch = output->stride[0];\n  int output_strideHeight = output->stride[2];\n  int output_strideWidth = output->stride[3];  \n  int output_strideChannel = output->stride[1];\n    \n\n  int inputImages_strideBatch = inputImages->stride[0];\n  int inputImages_strideHeight = inputImages->stride[2];\n  int inputImages_strideWidth = inputImages->stride[3];\n  int inputImages_strideChannel = inputImages->stride[1];\n\n  int grids_strideBatch = grids->stride[0];\n  int grids_strideHeight = grids->stride[2];\n  int grids_strideWidth = grids->stride[3];\n  int grids_strideChannel = 
grids->stride[1];\n\n\n  real *inputImages_data, *output_data, *grids_data;\n  inputImages_data = THFloatTensor_data(inputImages);\n  output_data = THFloatTensor_data(output);\n  grids_data = THFloatTensor_data(grids);\n\n  int b, yOut, xOut;\n\n  for(b=0; b < batchsize; b++)\n  {\n    for(yOut=0; yOut < output_height; yOut++)\n    {\n      for(xOut=0; xOut < output_width; xOut++)\n      {\n        //read the grid\n        \n        real xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + grids_strideChannel];\n        real yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n\n        // get the weights for interpolation\n        int yInTopLeft, xInTopLeft;\n        real yWeightTopLeft, xWeightTopLeft;\n\n        real xcoord = (xf + 1) * (inputImages_width - 1) / 2;\n        xInTopLeft = floor(xcoord);\n        xWeightTopLeft = 1 - (xcoord - xInTopLeft);\n\n        real ycoord = (yf + 1) * (inputImages_height - 1) / 2;\n        yInTopLeft = floor(ycoord);\n        yWeightTopLeft = 1 - (ycoord - yInTopLeft);\n\n\n\n        const int outAddress = output_strideBatch * b + output_strideHeight * yOut + output_strideWidth * xOut;\n        const int inTopLeftAddress = inputImages_strideBatch * b + inputImages_strideHeight * yInTopLeft + inputImages_strideWidth * xInTopLeft;\n        const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n        const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n        const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n        real v=0;\n        real inTopLeft=0;\n        real inTopRight=0;\n        real inBottomLeft=0;\n        real inBottomRight=0;\n\n        // we are careful with the boundaries\n        bool topLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool topRightIsIn = xInTopLeft+1 >= 0 && 
xInTopLeft+1 <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool bottomLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n        bool bottomRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n\n        int t;\n        // interpolation happens here\n        for(t=0; t<inputImages_channels; t++)\n        {\n           if(topLeftIsIn) inTopLeft = inputImages_data[inTopLeftAddress + t * inputImages_strideChannel];\n           if(topRightIsIn) inTopRight = inputImages_data[inTopRightAddress + t * inputImages_strideChannel];\n           if(bottomLeftIsIn) inBottomLeft = inputImages_data[inBottomLeftAddress + t * inputImages_strideChannel];\n           if(bottomRightIsIn) inBottomRight = inputImages_data[inBottomRightAddress + t * inputImages_strideChannel];\n\n           v = xWeightTopLeft * yWeightTopLeft * inTopLeft\n             + (1 - xWeightTopLeft) * yWeightTopLeft * inTopRight\n             + xWeightTopLeft * (1 - yWeightTopLeft) * inBottomLeft\n             + (1 - xWeightTopLeft) * (1 - yWeightTopLeft) * inBottomRight;\n\n           output_data[outAddress + t * output_strideChannel] = v;\n        }\n\n      }\n    }\n  }\n\n  return 1;\n}\n\n\n\nint BilinearSamplerBCHW_updateGradInput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *gradInputImages,\n                                        THFloatTensor *gradGrids, THFloatTensor *gradOutput)\n{\n  bool onlyGrid=false;\n\n  int batchsize = inputImages->size[0];\n  int inputImages_height = inputImages->size[2];\n  int inputImages_width = inputImages->size[3];\n  int gradOutput_height = gradOutput->size[2];\n  int gradOutput_width = gradOutput->size[3];\n  int inputImages_channels = inputImages->size[1];\n\n  int gradOutput_strideBatch = gradOutput->stride[0];\n  int gradOutput_strideHeight = 
gradOutput->stride[2];\n  int gradOutput_strideWidth = gradOutput->stride[3];\n  int gradOutput_strideChannel = gradOutput->stride[1];\n\n  int inputImages_strideBatch = inputImages->stride[0];\n  int inputImages_strideHeight = inputImages->stride[2];\n  int inputImages_strideWidth = inputImages->stride[3];\n  int inputImages_strideChannel = inputImages->stride[1];\n    \n\n  int gradInputImages_strideBatch = gradInputImages->stride[0];\n  int gradInputImages_strideHeight = gradInputImages->stride[2];\n  int gradInputImages_strideWidth = gradInputImages->stride[3];\n  int gradInputImages_strideChannel = gradInputImages->stride[1];\n\n  int grids_strideBatch = grids->stride[0];\n  int grids_strideHeight = grids->stride[2];\n  int grids_strideWidth = grids->stride[3];\n  int grids_strideChannel = grids->stride[1];\n\n  int gradGrids_strideBatch = gradGrids->stride[0];\n  int gradGrids_strideHeight = gradGrids->stride[2];\n  int gradGrids_strideWidth = gradGrids->stride[3];\n  int gradGrids_strideChannel = gradGrids->stride[1];\n\n  real *inputImages_data, *gradOutput_data, *grids_data, *gradGrids_data, *gradInputImages_data;\n  inputImages_data = THFloatTensor_data(inputImages);\n  gradOutput_data = THFloatTensor_data(gradOutput);\n  grids_data = THFloatTensor_data(grids);\n  gradGrids_data = THFloatTensor_data(gradGrids);\n  gradInputImages_data = THFloatTensor_data(gradInputImages);\n\n  int b, yOut, xOut;\n\n  for(b=0; b < batchsize; b++)\n  {\n    for(yOut=0; yOut < gradOutput_height; yOut++)\n    {\n      for(xOut=0; xOut < gradOutput_width; xOut++)\n      {\n        //read the grid\n        real xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + grids_strideChannel];\n        real yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n        \n        // get the weights for interpolation\n        int yInTopLeft, xInTopLeft;\n        real yWeightTopLeft, xWeightTopLeft;\n\n        real 
xcoord = (xf + 1) * (inputImages_width - 1) / 2;\n        xInTopLeft = floor(xcoord);\n        xWeightTopLeft = 1 - (xcoord - xInTopLeft);\n\n        real ycoord = (yf + 1) * (inputImages_height - 1) / 2;\n        yInTopLeft = floor(ycoord);\n        yWeightTopLeft = 1 - (ycoord - yInTopLeft);\n\n\n        const int inTopLeftAddress = inputImages_strideBatch * b + inputImages_strideHeight * yInTopLeft + inputImages_strideWidth * xInTopLeft;\n        const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n        const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n        const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n        const int gradInputImagesTopLeftAddress = gradInputImages_strideBatch * b + gradInputImages_strideHeight * yInTopLeft + gradInputImages_strideWidth * xInTopLeft;\n        const int gradInputImagesTopRightAddress = gradInputImagesTopLeftAddress + gradInputImages_strideWidth;\n        const int gradInputImagesBottomLeftAddress = gradInputImagesTopLeftAddress + gradInputImages_strideHeight;\n        const int gradInputImagesBottomRightAddress = gradInputImagesBottomLeftAddress + gradInputImages_strideWidth;\n\n        const int gradOutputAddress = gradOutput_strideBatch * b + gradOutput_strideHeight * yOut + gradOutput_strideWidth * xOut;\n\n        real topLeftDotProduct = 0;\n        real topRightDotProduct = 0;\n        real bottomLeftDotProduct = 0;\n        real bottomRightDotProduct = 0;\n\n        real v=0;\n        real inTopLeft=0;\n        real inTopRight=0;\n        real inBottomLeft=0;\n        real inBottomRight=0;\n\n        // we are careful with the boundaries\n        bool topLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool topRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft >= 0 && yInTopLeft <= inputImages_height-1;\n        bool 
bottomLeftIsIn = xInTopLeft >= 0 && xInTopLeft <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n        bool bottomRightIsIn = xInTopLeft+1 >= 0 && xInTopLeft+1 <= inputImages_width-1 && yInTopLeft+1 >= 0 && yInTopLeft+1 <= inputImages_height-1;\n\n        int t;\n\n        for(t=0; t<inputImages_channels; t++)\n        {\n           real gradOutValue = gradOutput_data[gradOutputAddress + t * gradOutput_strideChannel];\n           if(topLeftIsIn)\n           {\n              real inTopLeft = inputImages_data[inTopLeftAddress + t * inputImages_strideChannel];\n              topLeftDotProduct += inTopLeft * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesTopLeftAddress + t * gradInputImages_strideChannel] += xWeightTopLeft * yWeightTopLeft * gradOutValue;\n           }\n\n           if(topRightIsIn)\n           {\n              real inTopRight = inputImages_data[inTopRightAddress + t * inputImages_strideChannel];\n              topRightDotProduct += inTopRight * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesTopRightAddress + t * gradInputImages_strideChannel] += (1 - xWeightTopLeft) * yWeightTopLeft * gradOutValue;\n           }\n\n           if(bottomLeftIsIn)\n           {\n              real inBottomLeft = inputImages_data[inBottomLeftAddress + t * inputImages_strideChannel];\n              bottomLeftDotProduct += inBottomLeft * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesBottomLeftAddress + t * gradInputImages_strideChannel] += xWeightTopLeft * (1 - yWeightTopLeft) * gradOutValue;\n           }\n\n           if(bottomRightIsIn)\n           {\n              real inBottomRight = inputImages_data[inBottomRightAddress + t * inputImages_strideChannel];\n              bottomRightDotProduct += inBottomRight * gradOutValue;\n              if(!onlyGrid) gradInputImages_data[gradInputImagesBottomRightAddress + t * 
gradInputImages_strideChannel] += (1 - xWeightTopLeft) * (1 - yWeightTopLeft) * gradOutValue;\n           }\n        }\n\n        xf = - yWeightTopLeft * topLeftDotProduct + yWeightTopLeft * topRightDotProduct - (1-yWeightTopLeft) * bottomLeftDotProduct + (1-yWeightTopLeft) * bottomRightDotProduct;\n          \n        yf = - xWeightTopLeft * topLeftDotProduct + xWeightTopLeft * bottomLeftDotProduct - (1-xWeightTopLeft) * topRightDotProduct + (1-xWeightTopLeft) * bottomRightDotProduct;\n        \n\n        gradGrids_data[b*gradGrids_strideBatch + yOut*gradGrids_strideHeight + xOut*gradGrids_strideWidth + gradGrids_strideChannel] = xf * (inputImages_width-1) / 2;\n          \n        gradGrids_data[b*gradGrids_strideBatch + yOut*gradGrids_strideHeight + xOut*gradGrids_strideWidth] = yf * (inputImages_height-1) / 2;\n        \n\n      }\n    }\n  }\n\n  return 1;\n}\n\n\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop.h",
    "content": "int BilinearSamplerBHWD_updateOutput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *output);\n\nint BilinearSamplerBHWD_updateGradInput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *gradInputImages,\n                                        THFloatTensor *gradGrids, THFloatTensor *gradOutput);\n\n\n\nint BilinearSamplerBCHW_updateOutput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *output);\n\nint BilinearSamplerBCHW_updateGradInput(THFloatTensor *inputImages, THFloatTensor *grids, THFloatTensor *gradInputImages,\n                                        THFloatTensor *gradGrids, THFloatTensor *gradOutput);\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop_cuda.c",
    "content": "#include <THC/THC.h>\n#include <stdbool.h>\n#include <stdio.h>\n#include \"roi_crop_cuda_kernel.h\"\n\n#define real float\n\n// this symbol will be resolved automatically from PyTorch libs\nextern THCState *state;\n\n// Bilinear sampling is done in BHWD (coalescing is not obvious in BDHW)\n// we assume BHWD format in inputImages\n// we assume BHW(YX) format on grids\n\nint BilinearSamplerBHWD_updateOutput_cuda(THCudaTensor *inputImages, THCudaTensor *grids, THCudaTensor *output){\n//  THCState *state = getCutorchState(L);\n//  THCudaTensor *inputImages = (THCudaTensor *)luaT_checkudata(L, 2, \"torch.CudaTensor\");\n//  THCudaTensor *grids = (THCudaTensor *)luaT_checkudata(L, 3, \"torch.CudaTensor\");\n//  THCudaTensor *output = (THCudaTensor *)luaT_checkudata(L, 4, \"torch.CudaTensor\");\n\n  int success = 0;\n  success = BilinearSamplerBHWD_updateOutput_cuda_kernel(output->size[1],\n                                               output->size[3],\n                                               output->size[2],\n                                               output->size[0],\n                                               THCudaTensor_size(state, inputImages, 1),\n                                               THCudaTensor_size(state, inputImages, 2),\n                                               THCudaTensor_size(state, inputImages, 3),\n                                               THCudaTensor_size(state, inputImages, 0),\n                                               THCudaTensor_data(state, inputImages),\n                                               THCudaTensor_stride(state, inputImages, 0),\n                                               THCudaTensor_stride(state, inputImages, 1),\n                                               THCudaTensor_stride(state, inputImages, 2),\n                                               THCudaTensor_stride(state, inputImages, 3),\n                                               THCudaTensor_data(state, 
grids),\n                                               THCudaTensor_stride(state, grids, 0),\n                                               THCudaTensor_stride(state, grids, 3),\n                                               THCudaTensor_stride(state, grids, 1),\n                                               THCudaTensor_stride(state, grids, 2),\n                                               THCudaTensor_data(state, output),\n                                               THCudaTensor_stride(state, output, 0),\n                                               THCudaTensor_stride(state, output, 1),\n                                               THCudaTensor_stride(state, output, 2),\n                                               THCudaTensor_stride(state, output, 3),\n                                               THCState_getCurrentStream(state));\n\n  //check for errors\n  if (!success) {\n    THError(\"aborting\");\n  }\n  return 1;\n}\n\nint BilinearSamplerBHWD_updateGradInput_cuda(THCudaTensor *inputImages, THCudaTensor *grids, THCudaTensor *gradInputImages,\n                                        THCudaTensor *gradGrids, THCudaTensor *gradOutput)\n{\n//  THCState *state = getCutorchState(L);\n//  THCudaTensor *inputImages = (THCudaTensor *)luaT_checkudata(L, 2, \"torch.CudaTensor\");\n//  THCudaTensor *grids = (THCudaTensor *)luaT_checkudata(L, 3, \"torch.CudaTensor\");\n//  THCudaTensor *gradInputImages = (THCudaTensor *)luaT_checkudata(L, 4, \"torch.CudaTensor\");\n//  THCudaTensor *gradGrids = (THCudaTensor *)luaT_checkudata(L, 5, \"torch.CudaTensor\");\n//  THCudaTensor *gradOutput = (THCudaTensor *)luaT_checkudata(L, 6, \"torch.CudaTensor\");\n\n  int success = 0;\n  success = BilinearSamplerBHWD_updateGradInput_cuda_kernel(gradOutput->size[1],\n                                                  gradOutput->size[3],\n                                                  gradOutput->size[2],\n                                                  
gradOutput->size[0],\n                                                  THCudaTensor_size(state, inputImages, 1),\n                                                  THCudaTensor_size(state, inputImages, 2),\n                                                  THCudaTensor_size(state, inputImages, 3),\n                                                  THCudaTensor_size(state, inputImages, 0),\n                                                  THCudaTensor_data(state, inputImages),\n                                                  THCudaTensor_stride(state, inputImages, 0),\n                                                  THCudaTensor_stride(state, inputImages, 1),\n                                                  THCudaTensor_stride(state, inputImages, 2),\n                                                  THCudaTensor_stride(state, inputImages, 3),\n                                                  THCudaTensor_data(state, grids),\n                                                  THCudaTensor_stride(state, grids, 0),\n                                                  THCudaTensor_stride(state, grids, 3),\n                                                  THCudaTensor_stride(state, grids, 1),\n                                                  THCudaTensor_stride(state, grids, 2),\n                                                  THCudaTensor_data(state, gradInputImages),\n                                                  THCudaTensor_stride(state, gradInputImages, 0),\n                                                  THCudaTensor_stride(state, gradInputImages, 1),\n                                                  THCudaTensor_stride(state, gradInputImages, 2),\n                                                  THCudaTensor_stride(state, gradInputImages, 3),\n                                                  THCudaTensor_data(state, gradGrids),\n                                                  THCudaTensor_stride(state, gradGrids, 0),\n                         
                         THCudaTensor_stride(state, gradGrids, 3),\n                                                  THCudaTensor_stride(state, gradGrids, 1),\n                                                  THCudaTensor_stride(state, gradGrids, 2),\n                                                  THCudaTensor_data(state, gradOutput),\n                                                  THCudaTensor_stride(state, gradOutput, 0),\n                                                  THCudaTensor_stride(state, gradOutput, 1),\n                                                  THCudaTensor_stride(state, gradOutput, 2),\n                                                  THCudaTensor_stride(state, gradOutput, 3),\n                                                  THCState_getCurrentStream(state));\n\n  //check for errors\n  if (!success) {\n    THError(\"aborting\");\n  }\n  return 1;\n}\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop_cuda.h",
    "content": "// Bilinear sampling is done in BHWD (coalescing is not obvious in BDHW)\n// we assume BHWD format in inputImages\n// we assume BHW(YX) format on grids\n\nint BilinearSamplerBHWD_updateOutput_cuda(THCudaTensor *inputImages, THCudaTensor *grids, THCudaTensor *output);\n\nint BilinearSamplerBHWD_updateGradInput_cuda(THCudaTensor *inputImages, THCudaTensor *grids, THCudaTensor *gradInputImages,\n                                        THCudaTensor *gradGrids, THCudaTensor *gradOutput);\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop_cuda_kernel.cu",
    "content": "#include <stdbool.h>\n#include <stdio.h>\n#include \"roi_crop_cuda_kernel.h\"\n\n#define real float\n\n// Bilinear sampling is done in BHWD (coalescing is not obvious in BDHW)\n// we assume BHWD format in inputImages\n// we assume BHW(YX) format on grids\n\n__device__ void getTopLeft(float x, int width, int& point, float& weight)\n{\n   /* for interpolation :\n      stores in point and weight :\n      - the x-coordinate of the pixel on the left (or y-coordinate of the upper pixel)\n      - the weight for interpolating\n   */\n\n   float xcoord = (x + 1) * (width - 1) / 2;\n   point = floor(xcoord);\n   weight = 1 - (xcoord - point);\n}\n\n__device__ bool between(int value, int lowerBound, int upperBound)\n{\n   return (value >= lowerBound && value <= upperBound);\n}\n\n__device__ void sumReduceShMem(volatile float s[])\n{\n   /* obviously only works for 32 elements */\n   /* sums up a shared memory array of 32 elements, stores it in s[0] */\n   /* whole warp can then read first element (broadcasting) */\n   if(threadIdx.x<16) { s[threadIdx.x] = s[threadIdx.x] + s[threadIdx.x+16]; }\n   if(threadIdx.x<8) { s[threadIdx.x] = s[threadIdx.x] + s[threadIdx.x+8]; }\n   if(threadIdx.x<4) { s[threadIdx.x] = s[threadIdx.x] + s[threadIdx.x+4]; }\n   if(threadIdx.x<2) { s[threadIdx.x] = s[threadIdx.x] + s[threadIdx.x+2]; }\n   if(threadIdx.x<1) { s[threadIdx.x] = s[threadIdx.x] + s[threadIdx.x+1]; }\n}\n\n// CUDA: grid stride looping\n#define CUDA_KERNEL_LOOP(i, n) \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; \\\n       i < (n); \\\n       i += blockDim.x * gridDim.x)\n\n__global__ void bilinearSamplingFromGrid(const int nthreads, float* inputImages_data, int inputImages_strideBatch, int inputImages_strideChannels, int inputImages_strideHeight, int inputImages_strideWidth,\n                                         float* grids_data, int grids_strideBatch, int grids_strideYX, int grids_strideHeight, int grids_strideWidth,\n                           
              float* output_data, int output_strideBatch, int output_strideChannels, int output_strideHeight, int output_strideWidth,\n                                         int inputImages_channels, int inputImages_height, int inputImages_width,\n                                         int output_channels, int output_height, int output_width, int output_batchsize,\n                                         int roiPerImage)\n{\n   CUDA_KERNEL_LOOP(index, nthreads)\n   {\n       const int xOut = index % output_width;\n       const int yOut = (index / output_width) % output_height;\n       const int cOut  = (index / output_width / output_height) % output_channels;\n       const int b = index / output_width / output_height / output_channels;\n\n       const int width = inputImages_width;\n       const int height = inputImages_height;\n\n       const int b_input = b / roiPerImage;\n\n       float yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n       float xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + 1];\n\n       int yInTopLeft, xInTopLeft;\n       float yWeightTopLeft, xWeightTopLeft;\n       getTopLeft(xf, inputImages_width, xInTopLeft, xWeightTopLeft);\n       getTopLeft(yf, inputImages_height, yInTopLeft, yWeightTopLeft);\n\n       // const int outAddress = output_strideBatch * b + output_strideHeight * yOut + output_strideWidth * xOut;\n       const int outAddress = output_strideBatch * b + output_strideChannels * cOut + output_strideHeight * yOut + xOut;\n\n       const int inTopLeftAddress = inputImages_strideBatch * b_input + inputImages_strideChannels * cOut + inputImages_strideHeight * yInTopLeft + xInTopLeft;\n       const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n       const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n       const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n       
float v=0;\n       float inTopLeft=0;\n       float inTopRight=0;\n       float inBottomLeft=0;\n       float inBottomRight=0;\n\n       bool topLeftIsIn = between(xInTopLeft, 0, width-1) && between(yInTopLeft, 0, height-1);\n       bool topRightIsIn = between(xInTopLeft+1, 0, width-1) && between(yInTopLeft, 0, height-1);\n       bool bottomLeftIsIn = between(xInTopLeft, 0, width-1) && between(yInTopLeft+1, 0, height-1);\n       bool bottomRightIsIn = between(xInTopLeft+1, 0, width-1) && between(yInTopLeft+1, 0, height-1);\n\n       if (!topLeftIsIn && !topRightIsIn && !bottomLeftIsIn && !bottomRightIsIn)\n         continue;\n\n       if(topLeftIsIn) inTopLeft = inputImages_data[inTopLeftAddress];\n       if(topRightIsIn) inTopRight = inputImages_data[inTopRightAddress];\n       if(bottomLeftIsIn) inBottomLeft = inputImages_data[inBottomLeftAddress];\n       if(bottomRightIsIn) inBottomRight = inputImages_data[inBottomRightAddress];\n\n       v = xWeightTopLeft * yWeightTopLeft * inTopLeft\n         + (1 - xWeightTopLeft) * yWeightTopLeft * inTopRight\n         + xWeightTopLeft * (1 - yWeightTopLeft) * inBottomLeft\n         + (1 - xWeightTopLeft) * (1 - yWeightTopLeft) * inBottomRight;\n\n       output_data[outAddress] = v;\n   }\n\n}\n\n__global__ void backwardBilinearSampling(const int nthreads, float* inputImages_data, int inputImages_strideBatch, int inputImages_strideChannels, int inputImages_strideHeight, int inputImages_strideWidth,\n                                         float* gradInputImages_data, int gradInputImages_strideBatch, int gradInputImages_strideChannels, int gradInputImages_strideHeight, int gradInputImages_strideWidth,\n                                         float* grids_data, int grids_strideBatch, int grids_strideYX, int grids_strideHeight, int grids_strideWidth,\n                                         float* gradGrids_data, int gradGrids_strideBatch, int gradGrids_strideYX, int gradGrids_strideHeight, int gradGrids_strideWidth,\n     
                                    float* gradOutput_data, int gradOutput_strideBatch, int gradOutput_strideChannels, int gradOutput_strideHeight, int gradOutput_strideWidth,\n                                         int inputImages_channels, int inputImages_height, int inputImages_width,\n                                         int gradOutput_channels, int gradOutput_height, int gradOutput_width, int gradOutput_batchsize,\n                                         int roiPerImage)\n{\n\n  CUDA_KERNEL_LOOP(index, nthreads)\n  {\n      const int xOut = index % gradOutput_width;\n      const int yOut = (index / gradOutput_width) % gradOutput_height;\n      const int cOut  = (index / gradOutput_width / gradOutput_height) % gradOutput_channels;\n      const int b = index / gradOutput_width / gradOutput_height / gradOutput_channels;\n\n      const int b_input = b / roiPerImage;\n\n      const int width = inputImages_width;\n      const int height = inputImages_height;\n\n      float yf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth];\n      float xf = grids_data[b*grids_strideBatch + yOut*grids_strideHeight + xOut*grids_strideWidth + 1];\n\n      int yInTopLeft, xInTopLeft;\n      float yWeightTopLeft, xWeightTopLeft;\n      getTopLeft(xf, inputImages_width, xInTopLeft, xWeightTopLeft);\n      getTopLeft(yf, inputImages_height, yInTopLeft, yWeightTopLeft);\n\n      const int inTopLeftAddress = inputImages_strideBatch * b_input + inputImages_strideChannels * cOut + inputImages_strideHeight * yInTopLeft + xInTopLeft;\n      const int inTopRightAddress = inTopLeftAddress + inputImages_strideWidth;\n      const int inBottomLeftAddress = inTopLeftAddress + inputImages_strideHeight;\n      const int inBottomRightAddress = inBottomLeftAddress + inputImages_strideWidth;\n\n      const int gradInputImagesTopLeftAddress = gradInputImages_strideBatch * b_input + gradInputImages_strideChannels * cOut\n                                            
  + gradInputImages_strideHeight * yInTopLeft + xInTopLeft;\n      const int gradInputImagesTopRightAddress = gradInputImagesTopLeftAddress + gradInputImages_strideWidth;\n      const int gradInputImagesBottomLeftAddress = gradInputImagesTopLeftAddress + gradInputImages_strideHeight;\n      const int gradInputImagesBottomRightAddress = gradInputImagesBottomLeftAddress + gradInputImages_strideWidth;\n\n      const int gradOutputAddress = gradOutput_strideBatch * b + gradOutput_strideChannels * cOut + gradOutput_strideHeight * yOut + xOut;\n\n      float topLeftDotProduct = 0;\n      float topRightDotProduct = 0;\n      float bottomLeftDotProduct = 0;\n      float bottomRightDotProduct = 0;\n\n      bool topLeftIsIn = between(xInTopLeft, 0, width-1) && between(yInTopLeft, 0, height-1);\n      bool topRightIsIn = between(xInTopLeft+1, 0, width-1) && between(yInTopLeft, 0, height-1);\n      bool bottomLeftIsIn = between(xInTopLeft, 0, width-1) && between(yInTopLeft+1, 0, height-1);\n      bool bottomRightIsIn = between(xInTopLeft+1, 0, width-1) && between(yInTopLeft+1, 0, height-1);\n\n      float gradOutValue = gradOutput_data[gradOutputAddress];\n      // bool between(int value, int lowerBound, int upperBound)\n      if(topLeftIsIn)\n      {\n         float inTopLeft = inputImages_data[inTopLeftAddress];\n         topLeftDotProduct += inTopLeft * gradOutValue;\n         atomicAdd(&gradInputImages_data[gradInputImagesTopLeftAddress], xWeightTopLeft * yWeightTopLeft * gradOutValue);\n      }\n\n      if(topRightIsIn)\n      {\n         float inTopRight = inputImages_data[inTopRightAddress];\n         topRightDotProduct += inTopRight * gradOutValue;\n         atomicAdd(&gradInputImages_data[gradInputImagesTopRightAddress], (1 - xWeightTopLeft) * yWeightTopLeft * gradOutValue);\n      }\n\n      if(bottomLeftIsIn)\n      {\n         float inBottomLeft = inputImages_data[inBottomLeftAddress];\n         bottomLeftDotProduct += inBottomLeft * gradOutValue;\n         
atomicAdd(&gradInputImages_data[gradInputImagesBottomLeftAddress], xWeightTopLeft * (1 - yWeightTopLeft) * gradOutValue);\n      }\n\n      if(bottomRightIsIn)\n      {\n         float inBottomRight = inputImages_data[inBottomRightAddress];\n         bottomRightDotProduct += inBottomRight * gradOutValue;\n         atomicAdd(&gradInputImages_data[gradInputImagesBottomRightAddress], (1 - xWeightTopLeft) * (1 - yWeightTopLeft) * gradOutValue);\n      }\n  }\n}\n\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nint BilinearSamplerBHWD_updateOutput_cuda_kernel(/*output->size[1]*/int oc,\n                                                 /*output->size[3]*/int ow,\n                                                 /*output->size[2]*/int oh,\n                                                 /*output->size[0]*/int ob,\n                                                 /*THCudaTensor_size(state, inputImages, 1)*/int ic,\n                                                 /*THCudaTensor_size(state, inputImages, 2)*/int ih,\n                                                 /*THCudaTensor_size(state, inputImages, 3)*/int iw,\n                                                 /*THCudaTensor_size(state, inputImages, 0)*/int ib,\n                                                 /*THCudaTensor *inputImages*/float *inputImages, int isb, int isc, int ish, int isw,\n                                                 /*THCudaTensor *grids*/float *grids, int gsb, int gsc, int gsh, int gsw,\n                                                 /*THCudaTensor *output*/float *output, int osb, int osc, int osh, int osw,\n                                                 /*THCState_getCurrentStream(state)*/cudaStream_t stream)\n{\n   const int kThreadsPerBlock = 1024;\n   int output_size = ob * oh * ow * oc;\n   cudaError_t err;\n   int roiPerImage = ob / ib;\n\n   // printf(\"forward pass\\n\");\n\n   bilinearSamplingFromGrid<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 
0, stream>>>(\n     output_size,\n     /*THCudaTensor_data(state, inputImages)*/inputImages,\n     /*THCudaTensor_stride(state, inputImages, 0)*/isb,\n     /*THCudaTensor_stride(state, inputImages, 3)*/isc,\n     /*THCudaTensor_stride(state, inputImages, 1)*/ish,\n     /*THCudaTensor_stride(state, inputImages, 2)*/isw,\n     /*THCudaTensor_data(state, grids)*/grids,\n     /*THCudaTensor_stride(state, grids, 0)*/gsb,\n     /*THCudaTensor_stride(state, grids, 3)*/gsc,\n     /*THCudaTensor_stride(state, grids, 1)*/gsh,\n     /*THCudaTensor_stride(state, grids, 2)*/gsw,\n     /*THCudaTensor_data(state, output)*/output,\n     /*THCudaTensor_stride(state, output, 0)*/osb,\n     /*THCudaTensor_stride(state, output, 3)*/osc,\n     /*THCudaTensor_stride(state, output, 1)*/osh,\n     /*THCudaTensor_stride(state, output, 2)*/osw,\n     /*THCudaTensor_size(state, inputImages, 3)*/ic,\n     /*THCudaTensor_size(state, inputImages, 1)*/ih,\n     /*THCudaTensor_size(state, inputImages, 2)*/iw,\n     /*THCudaTensor_size(state, output, 3)*/oc,\n     /*THCudaTensor_size(state, output, 1)*/oh,\n     /*THCudaTensor_size(state, output, 2)*/ow,\n     /*THCudaTensor_size(state, output, 0)*/ob,\n     /*Number of rois per image*/roiPerImage);\n\n   // check for errors\n   err = cudaGetLastError();\n   if (err != cudaSuccess) {\n     printf(\"error in BilinearSampler.updateOutput: %s\\n\", cudaGetErrorString(err));\n     //THError(\"aborting\");\n     return 0;\n   }\n   return 1;\n}\n\nint BilinearSamplerBHWD_updateGradInput_cuda_kernel(/*gradOutput->size[1]*/int goc,\n                                                    /*gradOutput->size[3]*/int gow,\n                                                    /*gradOutput->size[2]*/int goh,\n                                                    /*gradOutput->size[0]*/int gob,\n                                                    /*THCudaTensor_size(state, inputImages, 1)*/int ic,\n                                                    
/*THCudaTensor_size(state, inputImages, 2)*/int ih,\n                                                    /*THCudaTensor_size(state, inputImages, 3)*/int iw,\n                                                    /*THCudaTensor_size(state, inputImages, 0)*/int ib,\n                                                    /*THCudaTensor *inputImages*/float *inputImages, int isb, int isc, int ish, int isw,\n                                                    /*THCudaTensor *grids*/float *grids, int gsb, int gsc, int gsh, int gsw,\n                                                    /*THCudaTensor *gradInputImages*/float *gradInputImages, int gisb, int gisc, int gish, int gisw,\n                                                    /*THCudaTensor *gradGrids*/float *gradGrids, int ggsb, int ggsc, int ggsh, int ggsw,\n                                                    /*THCudaTensor *gradOutput*/float *gradOutput, int gosb, int gosc, int gosh, int gosw,\n                                                    /*THCState_getCurrentStream(state)*/cudaStream_t stream)\n{\n\n  const int kThreadsPerBlock = 1024;\n  int output_size = gob * goh * gow * goc;\n  cudaError_t err;\n  int roiPerImage = gob / ib;\n\n  // printf(\"%d %d %d %d\\n\", gob, goh, gow, goc);\n  // printf(\"%d %d %d %d\\n\", ib, ih, iw, ic);\n  // printf(\"backward pass\\n\");\n\n  backwardBilinearSampling<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>(\n    output_size,\n    /*THCudaTensor_data(state, inputImages)*/inputImages,\n    /*THCudaTensor_stride(state, inputImages, 0)*/isb,\n    /*THCudaTensor_stride(state, inputImages, 3)*/isc,\n    /*THCudaTensor_stride(state, inputImages, 1)*/ish,\n    /*THCudaTensor_stride(state, inputImages, 2)*/isw,\n    /*THCudaTensor_data(state, gradInputImages)*/gradInputImages,\n    /*THCudaTensor_stride(state, gradInputImages, 0)*/gisb,\n    /*THCudaTensor_stride(state, gradInputImages, 3)*/gisc,\n    /*THCudaTensor_stride(state, 
gradInputImages, 1)*/gish,\n    /*THCudaTensor_stride(state, gradInputImages, 2)*/gisw,\n    /*THCudaTensor_data(state, grids)*/grids,\n    /*THCudaTensor_stride(state, grids, 0)*/gsb,\n    /*THCudaTensor_stride(state, grids, 3)*/gsc,\n    /*THCudaTensor_stride(state, grids, 1)*/gsh,\n    /*THCudaTensor_stride(state, grids, 2)*/gsw,\n    /*THCudaTensor_data(state, gradGrids)*/gradGrids,\n    /*THCudaTensor_stride(state, gradGrids, 0)*/ggsb,\n    /*THCudaTensor_stride(state, gradGrids, 3)*/ggsc,\n    /*THCudaTensor_stride(state, gradGrids, 1)*/ggsh,\n    /*THCudaTensor_stride(state, gradGrids, 2)*/ggsw,\n    /*THCudaTensor_data(state, gradOutput)*/gradOutput,\n    /*THCudaTensor_stride(state, gradOutput, 0)*/gosb,\n    /*THCudaTensor_stride(state, gradOutput, 3)*/gosc,\n    /*THCudaTensor_stride(state, gradOutput, 1)*/gosh,\n    /*THCudaTensor_stride(state, gradOutput, 2)*/gosw,\n    /*THCudaTensor_size(state, inputImages, 3)*/ic,\n    /*THCudaTensor_size(state, inputImages, 1)*/ih,\n    /*THCudaTensor_size(state, inputImages, 2)*/iw,\n    /*THCudaTensor_size(state, gradOutput, 3)*/goc,\n    /*THCudaTensor_size(state, gradOutput, 1)*/goh,\n    /*THCudaTensor_size(state, gradOutput, 2)*/gow,\n    /*THCudaTensor_size(state, gradOutput, 0)*/gob,\n    /*Number of rois per image*/roiPerImage);\n\n  // check for errors\n  err = cudaGetLastError();\n  if (err != cudaSuccess) {\n    printf(\"error in BilinearSampler.updateGradInput: %s\\n\", cudaGetErrorString(err));\n    //THError(\"aborting\");\n    return 0;\n  }\n  return 1;\n}\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "lib/model/roi_crop/src/roi_crop_cuda_kernel.h",
    "content": "#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n\nint BilinearSamplerBHWD_updateOutput_cuda_kernel(/*output->size[3]*/int oc,\n                                                 /*output->size[2]*/int ow,\n                                                 /*output->size[1]*/int oh,\n                                                 /*output->size[0]*/int ob,\n                                                 /*THCudaTensor_size(state, inputImages, 3)*/int ic,\n                                                 /*THCudaTensor_size(state, inputImages, 1)*/int ih,\n                                                 /*THCudaTensor_size(state, inputImages, 2)*/int iw,\n                                                 /*THCudaTensor_size(state, inputImages, 0)*/int ib,\n                                                 /*THCudaTensor *inputImages*/float *inputImages, int isb, int isc, int ish, int isw,\n                                                 /*THCudaTensor *grids*/float *grids, int gsb, int gsc, int gsh, int gsw,\n                                                 /*THCudaTensor *output*/float *output, int osb, int osc, int osh, int osw,\n                                                 /*THCState_getCurrentStream(state)*/cudaStream_t stream);\n\nint BilinearSamplerBHWD_updateGradInput_cuda_kernel(/*gradOutput->size[3]*/int goc,\n                                                    /*gradOutput->size[2]*/int gow,\n                                                    /*gradOutput->size[1]*/int goh,\n                                                    /*gradOutput->size[0]*/int gob,\n                                                    /*THCudaTensor_size(state, inputImages, 3)*/int ic,\n                                                    /*THCudaTensor_size(state, inputImages, 1)*/int ih,\n                                                    /*THCudaTensor_size(state, inputImages, 2)*/int iw,\n                                                    
/*THCudaTensor_size(state, inputImages, 0)*/int ib,\n                                                    /*THCudaTensor *inputImages*/float *inputImages, int isb, int isc, int ish, int isw,\n                                                    /*THCudaTensor *grids*/float *grids, int gsb, int gsc, int gsh, int gsw,\n                                                    /*THCudaTensor *gradInputImages*/float *gradInputImages, int gisb, int gisc, int gish, int gisw,\n                                                    /*THCudaTensor *gradGrids*/float *gradGrids, int ggsb, int ggsc, int ggsh, int ggsw,\n                                                    /*THCudaTensor *gradOutput*/float *gradOutput, int gosb, int gosc, int gosh, int gosw,\n                                                    /*THCState_getCurrentStream(state)*/cudaStream_t stream);\n\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "lib/model/roi_pooling/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_pooling/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_pooling/_ext/roi_pooling/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._roi_pooling import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        if callable(fn):\n            locals[symbol] = _wrap_function(fn, _ffi)\n        else:\n            locals[symbol] = fn\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/model/roi_pooling/build.py",
    "content": "from __future__ import print_function\nimport os\nimport torch\nfrom torch.utils.ffi import create_extension\n\n\nsources = ['src/roi_pooling.c']\nheaders = ['src/roi_pooling.h']\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/roi_pooling_cuda.c']\n    headers += ['src/roi_pooling_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/roi_pooling.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n    '_ext.roi_pooling',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/model/roi_pooling/functions/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_pooling/functions/roi_pool.py",
    "content": "import torch\nfrom torch.autograd import Function\nfrom .._ext import roi_pooling\nimport pdb\n\nclass RoIPoolFunction(Function):\n    def __init__(ctx, pooled_height, pooled_width, spatial_scale):\n        ctx.pooled_width = pooled_width\n        ctx.pooled_height = pooled_height\n        ctx.spatial_scale = spatial_scale\n        ctx.feature_size = None\n\n    def forward(ctx, features, rois): \n        ctx.feature_size = features.size()           \n        batch_size, num_channels, data_height, data_width = ctx.feature_size\n        num_rois = rois.size(0)\n        output = features.new(num_rois, num_channels, ctx.pooled_height, ctx.pooled_width).zero_()\n        ctx.argmax = features.new(num_rois, num_channels, ctx.pooled_height, ctx.pooled_width).zero_().int()\n        ctx.rois = rois\n        if not features.is_cuda:\n            _features = features.permute(0, 2, 3, 1)\n            roi_pooling.roi_pooling_forward(ctx.pooled_height, ctx.pooled_width, ctx.spatial_scale,\n                                            _features, rois, output)\n        else:\n            roi_pooling.roi_pooling_forward_cuda(ctx.pooled_height, ctx.pooled_width, ctx.spatial_scale,\n                                                 features, rois, output, ctx.argmax)\n\n        return output\n\n    def backward(ctx, grad_output):\n        assert(ctx.feature_size is not None and grad_output.is_cuda)\n        batch_size, num_channels, data_height, data_width = ctx.feature_size\n        grad_input = grad_output.new(batch_size, num_channels, data_height, data_width).zero_()\n\n        roi_pooling.roi_pooling_backward_cuda(ctx.pooled_height, ctx.pooled_width, ctx.spatial_scale,\n                                              grad_output, ctx.rois, grad_input, ctx.argmax)\n\n        return grad_input, None\n"
  },
  {
    "path": "lib/model/roi_pooling/modules/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/roi_pooling/modules/roi_pool.py",
    "content": "from torch.nn.modules.module import Module\nfrom ..functions.roi_pool import RoIPoolFunction\n\n\nclass _RoIPooling(Module):\n    def __init__(self, pooled_height, pooled_width, spatial_scale):\n        super(_RoIPooling, self).__init__()\n\n        self.pooled_width = int(pooled_width)\n        self.pooled_height = int(pooled_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        return RoIPoolFunction(self.pooled_height, self.pooled_width, self.spatial_scale)(features, rois)\n"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling.c",
    "content": "#include <TH/TH.h>\n#include <math.h>\n\nint roi_pooling_forward(int pooled_height, int pooled_width, float spatial_scale,\n                        THFloatTensor * features, THFloatTensor * rois, THFloatTensor * output)\n{\n    // Grab the input tensor\n    float * data_flat = THFloatTensor_data(features);\n    float * rois_flat = THFloatTensor_data(rois);\n\n    float * output_flat = THFloatTensor_data(output);\n\n    // Number of ROIs\n    int num_rois = THFloatTensor_size(rois, 0);\n    int size_rois = THFloatTensor_size(rois, 1);\n    // batch size\n    int batch_size = THFloatTensor_size(features, 0);\n    if(batch_size != 1)\n    {\n        return 0;\n    }\n    // data height\n    int data_height = THFloatTensor_size(features, 1);\n    // data width\n    int data_width = THFloatTensor_size(features, 2);\n    // Number of channels\n    int num_channels = THFloatTensor_size(features, 3);\n\n    // Initialize all elements of the output tensor to -1 (used as the running max below).\n    THFloatStorage_fill(THFloatTensor_storage(output), -1);\n\n    // For each ROI R = [batch_index x1 y1 x2 y2]: max pool over R\n    int index_roi = 0;\n    int index_output = 0;\n    int n;\n    for (n = 0; n < num_rois; ++n)\n    {\n        int roi_batch_ind = rois_flat[index_roi + 0];\n        int roi_start_w = round(rois_flat[index_roi + 1] * spatial_scale);\n        int roi_start_h = round(rois_flat[index_roi + 2] * spatial_scale);\n        int roi_end_w = round(rois_flat[index_roi + 3] * spatial_scale);\n        int roi_end_h = round(rois_flat[index_roi + 4] * spatial_scale);\n        //      CHECK_GE(roi_batch_ind, 0);\n        //      CHECK_LT(roi_batch_ind, batch_size);\n\n        int roi_height = fmaxf(roi_end_h - roi_start_h + 1, 1);\n        int roi_width = fmaxf(roi_end_w - roi_start_w + 1, 1);\n        float bin_size_h = (float)(roi_height) / (float)(pooled_height);\n        float bin_size_w = (float)(roi_width) / (float)(pooled_width);\n\n        int index_data = roi_batch_ind * 
data_height * data_width * num_channels;\n        const int output_area = pooled_width * pooled_height;\n\n        int c, ph, pw;\n        for (ph = 0; ph < pooled_height; ++ph)\n        {\n            for (pw = 0; pw < pooled_width; ++pw)\n            {\n                int hstart = (floor((float)(ph) * bin_size_h));\n                int wstart = (floor((float)(pw) * bin_size_w));\n                int hend = (ceil((float)(ph + 1) * bin_size_h));\n                int wend = (ceil((float)(pw + 1) * bin_size_w));\n\n                hstart = fminf(fmaxf(hstart + roi_start_h, 0), data_height);\n                hend = fminf(fmaxf(hend + roi_start_h, 0), data_height);\n                wstart = fminf(fmaxf(wstart + roi_start_w, 0), data_width);\n                wend = fminf(fmaxf(wend + roi_start_w, 0), data_width);\n\n                const int pool_index = index_output + (ph * pooled_width + pw);\n                int is_empty = (hend <= hstart) || (wend <= wstart);\n                if (is_empty)\n                {\n                    for (c = 0; c < num_channels * output_area; c += output_area)\n                    {\n                        output_flat[pool_index + c] = 0;\n                    }\n                }\n                else\n                {\n                    int h, w, c;\n                    for (h = hstart; h < hend; ++h)\n                    {\n                        for (w = wstart; w < wend; ++w)\n                        {\n                            for (c = 0; c < num_channels; ++c)\n                            {\n                                const int index = (h * data_width + w) * num_channels + c;\n                                if (data_flat[index_data + index] > output_flat[pool_index + c * output_area])\n                                {\n                                    output_flat[pool_index + c * output_area] = data_flat[index_data + index];\n                                }\n                            }\n                      
  }\n                    }\n                }\n            }\n        }\n\n        // Increment ROI index\n        index_roi += size_rois;\n        index_output += pooled_height * pooled_width * num_channels;\n    }\n    return 1;\n}"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling.h",
    "content": "int roi_pooling_forward(int pooled_height, int pooled_width, float spatial_scale,\n                        THFloatTensor * features, THFloatTensor * rois, THFloatTensor * output);"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling_cuda.c",
    "content": "#include <THC/THC.h>\n#include <math.h>\n#include \"roi_pooling_kernel.h\"\n\nextern THCState *state;\n\nint roi_pooling_forward_cuda(int pooled_height, int pooled_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output, THCudaIntTensor * argmax)\n{\n    // Grab the input tensor\n    float * data_flat = THCudaTensor_data(state, features);\n    float * rois_flat = THCudaTensor_data(state, rois);\n\n    float * output_flat = THCudaTensor_data(state, output);\n    int * argmax_flat = THCudaIntTensor_data(state, argmax);\n\n    // Number of ROIs\n    int num_rois = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // batch size\n    // int batch_size = THCudaTensor_size(state, features, 0);\n    // if (batch_size != 1)\n    // {\n    //     return 0;\n    // }\n    // data height\n    int data_height = THCudaTensor_size(state, features, 2);\n    // data width\n    int data_width = THCudaTensor_size(state, features, 3);\n    // Number of channels\n    int num_channels = THCudaTensor_size(state, features, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    ROIPoolForwardLaucher(\n        data_flat, spatial_scale, num_rois, data_height,\n        data_width, num_channels, pooled_height,\n        pooled_width, rois_flat,\n        output_flat, argmax_flat, stream);\n\n    return 1;\n}\n\nint roi_pooling_backward_cuda(int pooled_height, int pooled_width, float spatial_scale,\n                        THCudaTensor * top_grad, THCudaTensor * rois, THCudaTensor * bottom_grad, THCudaIntTensor * argmax)\n{\n    // Grab the input tensor\n    float * top_grad_flat = THCudaTensor_data(state, top_grad);\n    float * rois_flat = THCudaTensor_data(state, rois);\n\n    float * bottom_grad_flat = THCudaTensor_data(state, bottom_grad);\n    int * argmax_flat = 
THCudaIntTensor_data(state, argmax);\n\n    // Number of ROIs\n    int num_rois = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // batch size\n    int batch_size = THCudaTensor_size(state, bottom_grad, 0);\n    // if (batch_size != 1)\n    // {\n    //     return 0;\n    // }\n    // data height\n    int data_height = THCudaTensor_size(state, bottom_grad, 2);\n    // data width\n    int data_width = THCudaTensor_size(state, bottom_grad, 3);\n    // Number of channels\n    int num_channels = THCudaTensor_size(state, bottom_grad, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    ROIPoolBackwardLaucher(\n        top_grad_flat, spatial_scale, batch_size, num_rois, data_height,\n        data_width, num_channels, pooled_height,\n        pooled_width, rois_flat,\n        bottom_grad_flat, argmax_flat, stream);\n\n    return 1;\n}\n"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling_cuda.h",
    "content": "int roi_pooling_forward_cuda(int pooled_height, int pooled_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output, THCudaIntTensor * argmax);\n\nint roi_pooling_backward_cuda(int pooled_height, int pooled_width, float spatial_scale,\n                        THCudaTensor * top_grad, THCudaTensor * rois, THCudaTensor * bottom_grad, THCudaIntTensor * argmax);"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling_kernel.cu",
    "content": "// #ifdef __cplusplus\n// extern \"C\" {\n// #endif\n\n#include <stdio.h>\n#include <vector>\n#include <math.h>\n#include <float.h>\n#include \"roi_pooling_kernel.h\"\n\n\n// Ceiling division: number of blocks needed to cover m items with n per block.\n#define DIVUP(m, n) ((m) / (n) + ((m) % (n) > 0))\n\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n       i += blockDim.x * gridDim.x)\n\n// CUDA: grid stride looping\n#define CUDA_KERNEL_LOOP(i, n) \\\n  for (int i = blockIdx.x * blockDim.x + threadIdx.x; \\\n       i < (n); \\\n       i += blockDim.x * gridDim.x)\n\n__global__ void ROIPoolForward(const int nthreads, const float* bottom_data,\n    const float spatial_scale, const int height, const int width,\n    const int channels, const int pooled_height, const int pooled_width,\n    const float* bottom_rois, float* top_data, int* argmax_data)\n{\n    CUDA_KERNEL_LOOP(index, nthreads)\n    {\n        // (n, c, ph, pw) is an element in the pooled output\n        // int n = index;\n        // int pw = n % pooled_width;\n        // n /= pooled_width;\n        // int ph = n % pooled_height;\n        // n /= pooled_height;\n        // int c = n % channels;\n        // n /= channels;\n        int pw = index % pooled_width;\n        int ph = (index / pooled_width) % pooled_height;\n        int c  = (index / pooled_width / pooled_height) % channels;\n        int n  = index / pooled_width / pooled_height / channels;\n\n        // bottom_rois += n * 5;\n        int roi_batch_ind = bottom_rois[n * 5 + 0];\n        int roi_start_w = round(bottom_rois[n * 5 + 1] * spatial_scale);\n        int roi_start_h = round(bottom_rois[n * 5 + 2] * spatial_scale);\n        int roi_end_w = round(bottom_rois[n * 5 + 3] * spatial_scale);\n        int roi_end_h = round(bottom_rois[n * 5 + 4] * spatial_scale);\n\n        // Force malformed ROIs to be 1x1\n        int roi_width = fmaxf(roi_end_w - roi_start_w + 1, 1);\n        int roi_height = fmaxf(roi_end_h - roi_start_h + 
1, 1);\n        float bin_size_h = (float)(roi_height) / (float)(pooled_height);\n        float bin_size_w = (float)(roi_width) / (float)(pooled_width);\n\n        int hstart = (int)(floor((float)(ph) * bin_size_h));\n        int wstart = (int)(floor((float)(pw) * bin_size_w));\n        int hend = (int)(ceil((float)(ph + 1) * bin_size_h));\n        int wend = (int)(ceil((float)(pw + 1) * bin_size_w));\n\n        // Add roi offsets and clip to input boundaries\n        hstart = fminf(fmaxf(hstart + roi_start_h, 0), height);\n        hend = fminf(fmaxf(hend + roi_start_h, 0), height);\n        wstart = fminf(fmaxf(wstart + roi_start_w, 0), width);\n        wend = fminf(fmaxf(wend + roi_start_w, 0), width);\n        bool is_empty = (hend <= hstart) || (wend <= wstart);\n\n        // Define an empty pooling region to be zero\n        float maxval = is_empty ? 0 : -FLT_MAX;\n        // If nothing is pooled, argmax = -1 causes nothing to be backprop'd\n        int maxidx = -1;\n        // bottom_data += roi_batch_ind * channels * height * width;\n\n        int bottom_data_batch_offset = roi_batch_ind * channels * height * width;\n        int bottom_data_offset = bottom_data_batch_offset + c * height * width;\n\n        for (int h = hstart; h < hend; ++h) {\n            for (int w = wstart; w < wend; ++w) {\n                // int bottom_index = (h * width + w) * channels + c;\n                // int bottom_index = (c * height + h) * width + w;\n                int bottom_index = h * width + w;\n                if (bottom_data[bottom_data_offset + bottom_index] > maxval) {\n                    maxval = bottom_data[bottom_data_offset + bottom_index];\n                    maxidx = bottom_data_offset + bottom_index;\n                }\n            }\n        }\n        top_data[index] = maxval;\n        if (argmax_data != NULL)\n            argmax_data[index] = maxidx;\n    }\n}\n\nint ROIPoolForwardLaucher(\n    const float* bottom_data, const float spatial_scale, const int 
num_rois, const int height,\n    const int width, const int channels, const int pooled_height,\n    const int pooled_width, const float* bottom_rois,\n    float* top_data, int* argmax_data, cudaStream_t stream)\n{\n    const int kThreadsPerBlock = 1024;\n    int output_size = num_rois * pooled_height * pooled_width * channels;\n    cudaError_t err;\n\n    ROIPoolForward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>(\n      output_size, bottom_data, spatial_scale, height, width, channels, pooled_height,\n      pooled_width, bottom_rois, top_data, argmax_data);\n\n    // dim3 blocks(DIVUP(output_size, kThreadsPerBlock),\n    //             DIVUP(output_size, kThreadsPerBlock));\n    // dim3 threads(kThreadsPerBlock);\n    //\n    // ROIPoolForward<<<blocks, threads, 0, stream>>>(\n    //   output_size, bottom_data, spatial_scale, height, width, channels, pooled_height,\n    //   pooled_width, bottom_rois, top_data, argmax_data);\n\n    err = cudaGetLastError();\n    if(cudaSuccess != err)\n    {\n        fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n        exit( -1 );\n    }\n\n    return 1;\n}\n\n\n__global__ void ROIPoolBackward(const int nthreads, const float* top_diff,\n    const int* argmax_data, const int num_rois, const float spatial_scale,\n    const int height, const int width, const int channels,\n    const int pooled_height, const int pooled_width, float* bottom_diff,\n    const float* bottom_rois) {\n    CUDA_1D_KERNEL_LOOP(index, nthreads)\n    {\n\n        // (n, c, ph, pw) is an element in the pooled output\n        int n = index;\n        int w = n % width;\n        n /= width;\n        int h = n % height;\n        n /= height;\n        int c = n % channels;\n        n /= channels;\n\n        float gradient = 0;\n        // Accumulate gradient over all ROIs that pooled this element\n        for (int roi_n = 0; roi_n < num_rois; ++roi_n)\n        {\n            const 
float* offset_bottom_rois = bottom_rois + roi_n * 5;\n            int roi_batch_ind = offset_bottom_rois[0];\n            // Skip if ROI's batch index doesn't match n\n            if (n != roi_batch_ind) {\n                continue;\n            }\n\n            int roi_start_w = round(offset_bottom_rois[1] * spatial_scale);\n            int roi_start_h = round(offset_bottom_rois[2] * spatial_scale);\n            int roi_end_w = round(offset_bottom_rois[3] * spatial_scale);\n            int roi_end_h = round(offset_bottom_rois[4] * spatial_scale);\n\n            // Skip if ROI doesn't include (h, w)\n            const bool in_roi = (w >= roi_start_w && w <= roi_end_w &&\n                               h >= roi_start_h && h <= roi_end_h);\n            if (!in_roi) {\n                continue;\n            }\n\n            int offset = roi_n * pooled_height * pooled_width * channels;\n            const float* offset_top_diff = top_diff + offset;\n            const int* offset_argmax_data = argmax_data + offset;\n\n            // Compute feasible set of pooled units that could have pooled\n            // this bottom unit\n\n            // Force malformed ROIs to be 1x1\n            int roi_width = fmaxf(roi_end_w - roi_start_w + 1, 1);\n            int roi_height = fmaxf(roi_end_h - roi_start_h + 1, 1);\n\n            float bin_size_h = (float)(roi_height) / (float)(pooled_height);\n            float bin_size_w = (float)(roi_width) / (float)(pooled_width);\n\n            int phstart = floor((float)(h - roi_start_h) / bin_size_h);\n            int phend = ceil((float)(h - roi_start_h + 1) / bin_size_h);\n            int pwstart = floor((float)(w - roi_start_w) / bin_size_w);\n            int pwend = ceil((float)(w - roi_start_w + 1) / bin_size_w);\n\n            phstart = fminf(fmaxf(phstart, 0), pooled_height);\n            phend = fminf(fmaxf(phend, 0), pooled_height);\n            pwstart = fminf(fmaxf(pwstart, 0), pooled_width);\n            pwend = 
fminf(fmaxf(pwend, 0), pooled_width);\n\n            for (int ph = phstart; ph < phend; ++ph) {\n                for (int pw = pwstart; pw < pwend; ++pw) {\n                    if (offset_argmax_data[(c * pooled_height + ph) * pooled_width + pw] == index)\n                    {\n                        gradient += offset_top_diff[(c * pooled_height + ph) * pooled_width + pw];\n                    }\n                }\n            }\n        }\n        bottom_diff[index] = gradient;\n  }\n}\n\nint ROIPoolBackwardLaucher(const float* top_diff, const float spatial_scale, const int batch_size, const int num_rois,\n    const int height, const int width, const int channels, const int pooled_height,\n    const int pooled_width, const float* bottom_rois,\n    float* bottom_diff, const int* argmax_data, cudaStream_t stream)\n{\n    const int kThreadsPerBlock = 1024;\n    int output_size = batch_size * height * width * channels;\n    cudaError_t err;\n\n    ROIPoolBackward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>(\n      output_size, top_diff, argmax_data, num_rois, spatial_scale, height, width, channels, pooled_height,\n      pooled_width, bottom_diff, bottom_rois);\n\n    // dim3 blocks(DIVUP(output_size, kThreadsPerBlock),\n    //             DIVUP(output_size, kThreadsPerBlock));\n    // dim3 threads(kThreadsPerBlock);\n    //\n    // ROIPoolBackward<<<blocks, threads, 0, stream>>>(\n    //   output_size, top_diff, argmax_data, num_rois, spatial_scale, height, width, channels, pooled_height,\n    //   pooled_width, bottom_diff, bottom_rois);\n\n    err = cudaGetLastError();\n    if(cudaSuccess != err)\n    {\n        fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n        exit( -1 );\n    }\n\n    return 1;\n}\n\n\n// #ifdef __cplusplus\n// }\n// #endif\n"
  },
  {
    "path": "lib/model/roi_pooling/src/roi_pooling_kernel.h",
    "content": "#ifndef _ROI_POOLING_KERNEL\n#define _ROI_POOLING_KERNEL\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nint ROIPoolForwardLaucher(\n    const float* bottom_data, const float spatial_scale, const int num_rois, const int height,\n    const int width, const int channels, const int pooled_height,\n    const int pooled_width, const float* bottom_rois,\n    float* top_data, int* argmax_data, cudaStream_t stream);\n\n\nint ROIPoolBackwardLaucher(const float* top_diff, const float spatial_scale, const int batch_size, const int num_rois,\n    const int height, const int width, const int channels, const int pooled_height,\n    const int pooled_width, const float* bottom_rois,\n    float* bottom_diff, const int* argmax_data, cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lib/model/rpn/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/rpn/anchor_target_layer.py",
    "content": "from __future__ import absolute_import\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport numpy.random as npr\n\nfrom model.utils.config import cfg\nfrom .generate_anchors import generate_anchors\nfrom .bbox_transform import clip_boxes, bbox_overlaps_batch, bbox_transform_batch\n\nimport pdb\n\nDEBUG = False\n\ntry:\n    long        # Python 2\nexcept NameError:\n    long = int  # Python 3\n\n\nclass _AnchorTargetLayer(nn.Module):\n    \"\"\"\n        Assign anchors to ground-truth targets. Produces anchor classification\n        labels and bounding-box regression targets.\n    \"\"\"\n    def __init__(self, feat_stride, scales, ratios):\n        super(_AnchorTargetLayer, self).__init__()\n\n        self._feat_stride = feat_stride\n        self._scales = scales\n        anchor_scales = scales\n        self._anchors = torch.from_numpy(generate_anchors(scales=np.array(anchor_scales), ratios=np.array(ratios))).float()\n        self._num_anchors = self._anchors.size(0)\n\n        # allow boxes to sit over the edge by a small amount\n        self._allowed_border = 0  # default is 0\n\n    def forward(self, input):\n        # Algorithm:\n        #\n        # for each (H, W) location i\n        #   generate 9 anchor boxes centered on cell i\n        #   apply predicted bbox deltas at cell i to each of the 9 anchors\n        # filter out-of-image anchors\n\n        rpn_cls_score = input[0]\n        gt_boxes = input[1]\n        im_info = input[2]\n        num_boxes = input[3]\n\n        # map of shape 
(..., H, W)\n        height, width = rpn_cls_score.size(2), rpn_cls_score.size(3)\n\n        batch_size = gt_boxes.size(0)\n\n        feat_height, feat_width = rpn_cls_score.size(2), rpn_cls_score.size(3)\n        shift_x = np.arange(0, feat_width) * self._feat_stride\n        shift_y = np.arange(0, feat_height) * self._feat_stride\n        shift_x, shift_y = np.meshgrid(shift_x, shift_y)\n        shifts = torch.from_numpy(np.vstack((shift_x.ravel(), shift_y.ravel(),\n                                  shift_x.ravel(), shift_y.ravel())).transpose())\n        shifts = shifts.contiguous().type_as(rpn_cls_score).float()\n\n        A = self._num_anchors\n        K = shifts.size(0)\n\n        self._anchors = self._anchors.type_as(gt_boxes) # move anchors to the same device as gt_boxes.\n        all_anchors = self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)\n        all_anchors = all_anchors.view(K * A, 4)\n\n        total_anchors = int(K * A)\n\n        keep = ((all_anchors[:, 0] >= -self._allowed_border) &\n                (all_anchors[:, 1] >= -self._allowed_border) &\n                (all_anchors[:, 2] < long(im_info[0][1]) + self._allowed_border) &\n                (all_anchors[:, 3] < long(im_info[0][0]) + self._allowed_border))\n\n        inds_inside = torch.nonzero(keep).view(-1)\n\n        # keep only inside anchors\n        anchors = all_anchors[inds_inside, :]\n\n        # label: 1 is positive, 0 is negative, -1 is don't care\n        labels = gt_boxes.new(batch_size, inds_inside.size(0)).fill_(-1)\n        bbox_inside_weights = gt_boxes.new(batch_size, inds_inside.size(0)).zero_()\n        bbox_outside_weights = gt_boxes.new(batch_size, inds_inside.size(0)).zero_()\n\n        overlaps = bbox_overlaps_batch(anchors, gt_boxes)\n\n        max_overlaps, argmax_overlaps = torch.max(overlaps, 2)\n        gt_max_overlaps, _ = torch.max(overlaps, 1)\n\n        if not cfg.TRAIN.RPN_CLOBBER_POSITIVES:\n            labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0\n\n
        gt_max_overlaps[gt_max_overlaps==0] = 1e-5\n        keep = torch.sum(overlaps.eq(gt_max_overlaps.view(batch_size,1,-1).expand_as(overlaps)), 2)\n\n        if torch.sum(keep) > 0:\n            labels[keep>0] = 1\n\n        # fg label: above threshold IOU\n        labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1\n\n        if cfg.TRAIN.RPN_CLOBBER_POSITIVES:\n            labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0\n\n        num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCHSIZE)\n\n        sum_fg = torch.sum((labels == 1).int(), 1)\n        sum_bg = torch.sum((labels == 0).int(), 1)\n\n        for i in range(batch_size):\n            # subsample positive labels if we have too many\n            if sum_fg[i] > num_fg:\n                fg_inds = torch.nonzero(labels[i] == 1).view(-1)\n                # torch.randperm seems to have a bug in the multi-GPU setting that causes a segfault.\n                # See https://github.com/pytorch/pytorch/issues/1868 for more details.\n                # use numpy instead.\n                #rand_num = torch.randperm(fg_inds.size(0)).type_as(gt_boxes).long()\n                rand_num = torch.from_numpy(np.random.permutation(fg_inds.size(0))).type_as(gt_boxes).long()\n                disable_inds = fg_inds[rand_num[:fg_inds.size(0)-num_fg]]\n                labels[i][disable_inds] = -1\n\n            # fill the rest of the batch with negatives, counting only the fg actually kept\n            num_bg = cfg.TRAIN.RPN_BATCHSIZE - min(sum_fg[i], num_fg)\n\n            # subsample negative labels if we have too many\n            if sum_bg[i] > num_bg:\n                bg_inds = torch.nonzero(labels[i] == 0).view(-1)\n                #rand_num = torch.randperm(bg_inds.size(0)).type_as(gt_boxes).long()\n\n                rand_num = torch.from_numpy(np.random.permutation(bg_inds.size(0))).type_as(gt_boxes).long()\n                disable_inds = bg_inds[rand_num[:bg_inds.size(0)-num_bg]]\n                labels[i][disable_inds] = -1\n\n        offset = torch.arange(0, batch_size)*gt_boxes.size(1)\n\n        argmax_overlaps 
= argmax_overlaps + offset.view(batch_size, 1).type_as(argmax_overlaps)\n        bbox_targets = _compute_targets_batch(anchors, gt_boxes.view(-1,5)[argmax_overlaps.view(-1), :].view(batch_size, -1, 5))\n\n        # use a single value instead of 4 values for easy index.\n        bbox_inside_weights[labels==1] = cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS[0]\n\n        if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:\n            num_examples = torch.sum(labels[i] >= 0)\n            positive_weights = 1.0 / num_examples\n            negative_weights = 1.0 / num_examples\n        else:\n            assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &\n                    (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))\n            # weight positives and negatives separately (py-faster-rcnn style), so that\n            # both weights are defined on this branch as well.\n            positive_weights = cfg.TRAIN.RPN_POSITIVE_WEIGHT / torch.sum(labels == 1)\n            negative_weights = (1.0 - cfg.TRAIN.RPN_POSITIVE_WEIGHT) / torch.sum(labels == 0)\n\n        bbox_outside_weights[labels == 1] = positive_weights\n        bbox_outside_weights[labels == 0] = negative_weights\n\n        labels = _unmap(labels, total_anchors, inds_inside, batch_size, fill=-1)\n        bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, batch_size, fill=0)\n        bbox_inside_weights = _unmap(bbox_inside_weights, total_anchors, inds_inside, batch_size, fill=0)\n        bbox_outside_weights = _unmap(bbox_outside_weights, total_anchors, inds_inside, batch_size, fill=0)\n\n        outputs = []\n\n        labels = labels.view(batch_size, height, width, A).permute(0,3,1,2).contiguous()\n        labels = labels.view(batch_size, 1, A * height, width)\n        outputs.append(labels)\n\n        bbox_targets = bbox_targets.view(batch_size, height, width, A*4).permute(0,3,1,2).contiguous()\n        outputs.append(bbox_targets)\n\n        anchors_count = bbox_inside_weights.size(1)\n        bbox_inside_weights = bbox_inside_weights.view(batch_size,anchors_count,1).expand(batch_size, anchors_count, 4)\n\n        bbox_inside_weights = bbox_inside_weights.contiguous().view(batch_size, height, width, 4*A)\\\n                            .permute(0,3,1,2).contiguous()\n\n        outputs.append(bbox_inside_weights)\n\n        bbox_outside_weights = 
bbox_outside_weights.view(batch_size,anchors_count,1).expand(batch_size, anchors_count, 4)\n        bbox_outside_weights = bbox_outside_weights.contiguous().view(batch_size, height, width, 4*A)\\\n                            .permute(0,3,1,2).contiguous()\n        outputs.append(bbox_outside_weights)\n\n        return outputs\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\ndef _unmap(data, count, inds, batch_size, fill=0):\n    \"\"\"Unmap a subset of items (data) back to the original set of items (of\n    size count).\"\"\"\n\n    if data.dim() == 2:\n        ret = torch.Tensor(batch_size, count).fill_(fill).type_as(data)\n        ret[:, inds] = data\n    else:\n        ret = torch.Tensor(batch_size, count, data.size(2)).fill_(fill).type_as(data)\n        ret[:, inds,:] = data\n    return ret\n\n\ndef _compute_targets_batch(ex_rois, gt_rois):\n    \"\"\"Compute bounding-box regression targets for an image.\"\"\"\n\n    return bbox_transform_batch(ex_rois, gt_rois[:, :, :4])\n"
  },
  {
    "path": "lib/model/rpn/bbox_transform.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport numpy as np\nimport pdb\n\ndef bbox_transform(ex_rois, gt_rois):\n    ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0\n    ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0\n    ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths\n    ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights\n\n    gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0\n    gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0\n    gt_ctr_x = gt_rois[:, 0] + 0.5 * gt_widths\n    gt_ctr_y = gt_rois[:, 1] + 0.5 * gt_heights\n\n    targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths\n    targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights\n    targets_dw = torch.log(gt_widths / ex_widths)\n    targets_dh = torch.log(gt_heights / ex_heights)\n\n    targets = torch.stack(\n        (targets_dx, targets_dy, targets_dw, targets_dh),1)\n\n    return targets\n\ndef bbox_transform_batch(ex_rois, gt_rois):\n\n    if ex_rois.dim() == 2:\n        ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0\n        ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0\n        ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths\n        ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights\n\n        gt_widths = gt_rois[:, :, 2] - gt_rois[:, :, 0] + 1.0\n        gt_heights = gt_rois[:, :, 3] - gt_rois[:, :, 1] + 1.0\n        gt_ctr_x = gt_rois[:, :, 0] + 0.5 * gt_widths\n        gt_ctr_y = gt_rois[:, :, 1] + 0.5 * gt_heights\n\n        targets_dx = (gt_ctr_x - ex_ctr_x.view(1,-1).expand_as(gt_ctr_x)) / ex_widths\n        targets_dy = (gt_ctr_y - ex_ctr_y.view(1,-1).expand_as(gt_ctr_y)) / ex_heights\n        
targets_dw = torch.log(gt_widths / ex_widths.view(1,-1).expand_as(gt_widths))\n        targets_dh = torch.log(gt_heights / ex_heights.view(1,-1).expand_as(gt_heights))\n\n    elif ex_rois.dim() == 3:\n        ex_widths = ex_rois[:, :, 2] - ex_rois[:, :, 0] + 1.0\n        ex_heights = ex_rois[:,:, 3] - ex_rois[:,:, 1] + 1.0\n        ex_ctr_x = ex_rois[:, :, 0] + 0.5 * ex_widths\n        ex_ctr_y = ex_rois[:, :, 1] + 0.5 * ex_heights\n\n        gt_widths = gt_rois[:, :, 2] - gt_rois[:, :, 0] + 1.0\n        gt_heights = gt_rois[:, :, 3] - gt_rois[:, :, 1] + 1.0\n        gt_ctr_x = gt_rois[:, :, 0] + 0.5 * gt_widths\n        gt_ctr_y = gt_rois[:, :, 1] + 0.5 * gt_heights\n\n        targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths\n        targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights\n        targets_dw = torch.log(gt_widths / ex_widths)\n        targets_dh = torch.log(gt_heights / ex_heights)\n    else:\n        raise ValueError('ex_roi input dimension is not correct.')\n\n    targets = torch.stack(\n        (targets_dx, targets_dy, targets_dw, targets_dh),2)\n\n    return targets\n\ndef bbox_transform_inv(boxes, deltas, batch_size):\n    widths = boxes[:, :, 2] - boxes[:, :, 0] + 1.0\n    heights = boxes[:, :, 3] - boxes[:, :, 1] + 1.0\n    ctr_x = boxes[:, :, 0] + 0.5 * widths\n    ctr_y = boxes[:, :, 1] + 0.5 * heights\n\n    dx = deltas[:, :, 0::4]\n    dy = deltas[:, :, 1::4]\n    dw = deltas[:, :, 2::4]\n    dh = deltas[:, :, 3::4]\n\n    pred_ctr_x = dx * widths.unsqueeze(2) + ctr_x.unsqueeze(2)\n    pred_ctr_y = dy * heights.unsqueeze(2) + ctr_y.unsqueeze(2)\n    pred_w = torch.exp(dw) * widths.unsqueeze(2)\n    pred_h = torch.exp(dh) * heights.unsqueeze(2)\n\n    pred_boxes = deltas.clone()\n    # x1\n    pred_boxes[:, :, 0::4] = pred_ctr_x - 0.5 * pred_w\n    # y1\n    pred_boxes[:, :, 1::4] = pred_ctr_y - 0.5 * pred_h\n    # x2\n    pred_boxes[:, :, 2::4] = pred_ctr_x + 0.5 * pred_w\n    # y2\n    pred_boxes[:, :, 3::4] = pred_ctr_y + 0.5 * pred_h\n\n    
return pred_boxes\n\ndef clip_boxes_batch(boxes, im_shape, batch_size):\n    \"\"\"\n    Clip boxes to image boundaries.\n    \"\"\"\n    num_rois = boxes.size(1)\n\n    boxes[boxes < 0] = 0\n    # batch_x = (im_shape[:,0]-1).view(batch_size, 1).expand(batch_size, num_rois)\n    # batch_y = (im_shape[:,1]-1).view(batch_size, 1).expand(batch_size, num_rois)\n\n    batch_x = im_shape[:, 1] - 1\n    batch_y = im_shape[:, 0] - 1\n\n    boxes[:,:,0][boxes[:,:,0] > batch_x] = batch_x\n    boxes[:,:,1][boxes[:,:,1] > batch_y] = batch_y\n    boxes[:,:,2][boxes[:,:,2] > batch_x] = batch_x\n    boxes[:,:,3][boxes[:,:,3] > batch_y] = batch_y\n\n    return boxes\n\ndef clip_boxes(boxes, im_shape, batch_size):\n\n    for i in range(batch_size):\n        boxes[i,:,0::4].clamp_(0, im_shape[i, 1]-1)\n        boxes[i,:,1::4].clamp_(0, im_shape[i, 0]-1)\n        boxes[i,:,2::4].clamp_(0, im_shape[i, 1]-1)\n        boxes[i,:,3::4].clamp_(0, im_shape[i, 0]-1)\n\n    return boxes\n\n\ndef bbox_overlaps(anchors, gt_boxes):\n    \"\"\"\n    anchors: (N, 4) ndarray of float\n    gt_boxes: (K, 4) ndarray of float\n\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    N = anchors.size(0)\n    K = gt_boxes.size(0)\n\n    gt_boxes_area = ((gt_boxes[:,2] - gt_boxes[:,0] + 1) *\n                (gt_boxes[:,3] - gt_boxes[:,1] + 1)).view(1, K)\n\n    anchors_area = ((anchors[:,2] - anchors[:,0] + 1) *\n                (anchors[:,3] - anchors[:,1] + 1)).view(N, 1)\n\n    boxes = anchors.view(N, 1, 4).expand(N, K, 4)\n    query_boxes = gt_boxes.view(1, K, 4).expand(N, K, 4)\n\n    iw = (torch.min(boxes[:,:,2], query_boxes[:,:,2]) -\n        torch.max(boxes[:,:,0], query_boxes[:,:,0]) + 1)\n    iw[iw < 0] = 0\n\n    ih = (torch.min(boxes[:,:,3], query_boxes[:,:,3]) -\n        torch.max(boxes[:,:,1], query_boxes[:,:,1]) + 1)\n    ih[ih < 0] = 0\n\n    ua = anchors_area + gt_boxes_area - (iw * ih)\n    overlaps = iw * ih / ua\n\n    return overlaps\n\ndef 
bbox_overlaps_batch(anchors, gt_boxes):\n    \"\"\"\n    anchors: (N, 4) ndarray of float\n    gt_boxes: (b, K, 5) ndarray of float\n\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    batch_size = gt_boxes.size(0)\n\n\n    if anchors.dim() == 2:\n\n        N = anchors.size(0)\n        K = gt_boxes.size(1)\n\n        anchors = anchors.view(1, N, 4).expand(batch_size, N, 4).contiguous()\n        gt_boxes = gt_boxes[:,:,:4].contiguous()\n\n\n        gt_boxes_x = (gt_boxes[:,:,2] - gt_boxes[:,:,0] + 1)\n        gt_boxes_y = (gt_boxes[:,:,3] - gt_boxes[:,:,1] + 1)\n        gt_boxes_area = (gt_boxes_x * gt_boxes_y).view(batch_size, 1, K)\n\n        anchors_boxes_x = (anchors[:,:,2] - anchors[:,:,0] + 1)\n        anchors_boxes_y = (anchors[:,:,3] - anchors[:,:,1] + 1)\n        anchors_area = (anchors_boxes_x * anchors_boxes_y).view(batch_size, N, 1)\n\n        gt_area_zero = (gt_boxes_x == 1) & (gt_boxes_y == 1)\n        anchors_area_zero = (anchors_boxes_x == 1) & (anchors_boxes_y == 1)\n\n        boxes = anchors.view(batch_size, N, 1, 4).expand(batch_size, N, K, 4)\n        query_boxes = gt_boxes.view(batch_size, 1, K, 4).expand(batch_size, N, K, 4)\n\n        iw = (torch.min(boxes[:,:,:,2], query_boxes[:,:,:,2]) -\n            torch.max(boxes[:,:,:,0], query_boxes[:,:,:,0]) + 1)\n        iw[iw < 0] = 0\n\n        ih = (torch.min(boxes[:,:,:,3], query_boxes[:,:,:,3]) -\n            torch.max(boxes[:,:,:,1], query_boxes[:,:,:,1]) + 1)\n        ih[ih < 0] = 0\n        ua = anchors_area + gt_boxes_area - (iw * ih)\n        overlaps = iw * ih / ua\n\n        # mask the overlap here.\n        overlaps.masked_fill_(gt_area_zero.view(batch_size, 1, K).expand(batch_size, N, K), 0)\n        overlaps.masked_fill_(anchors_area_zero.view(batch_size, N, 1).expand(batch_size, N, K), -1)\n\n    elif anchors.dim() == 3:\n        N = anchors.size(1)\n        K = gt_boxes.size(1)\n\n        if anchors.size(2) == 4:\n            anchors = 
anchors[:,:,:4].contiguous()\n        else:\n            anchors = anchors[:,:,1:5].contiguous()\n\n        gt_boxes = gt_boxes[:,:,:4].contiguous()\n\n        gt_boxes_x = (gt_boxes[:,:,2] - gt_boxes[:,:,0] + 1)\n        gt_boxes_y = (gt_boxes[:,:,3] - gt_boxes[:,:,1] + 1)\n        gt_boxes_area = (gt_boxes_x * gt_boxes_y).view(batch_size, 1, K)\n\n        anchors_boxes_x = (anchors[:,:,2] - anchors[:,:,0] + 1)\n        anchors_boxes_y = (anchors[:,:,3] - anchors[:,:,1] + 1)\n        anchors_area = (anchors_boxes_x * anchors_boxes_y).view(batch_size, N, 1)\n\n        gt_area_zero = (gt_boxes_x == 1) & (gt_boxes_y == 1)\n        anchors_area_zero = (anchors_boxes_x == 1) & (anchors_boxes_y == 1)\n\n        boxes = anchors.view(batch_size, N, 1, 4).expand(batch_size, N, K, 4)\n        query_boxes = gt_boxes.view(batch_size, 1, K, 4).expand(batch_size, N, K, 4)\n\n        iw = (torch.min(boxes[:,:,:,2], query_boxes[:,:,:,2]) -\n            torch.max(boxes[:,:,:,0], query_boxes[:,:,:,0]) + 1)\n        iw[iw < 0] = 0\n\n        ih = (torch.min(boxes[:,:,:,3], query_boxes[:,:,:,3]) -\n            torch.max(boxes[:,:,:,1], query_boxes[:,:,:,1]) + 1)\n        ih[ih < 0] = 0\n        ua = anchors_area + gt_boxes_area - (iw * ih)\n\n        overlaps = iw * ih / ua\n\n        # mask the overlap here.\n        overlaps.masked_fill_(gt_area_zero.view(batch_size, 1, K).expand(batch_size, N, K), 0)\n        overlaps.masked_fill_(anchors_area_zero.view(batch_size, N, 1).expand(batch_size, N, K), -1)\n    else:\n        raise ValueError('anchors input dimension is not correct.')\n\n    return overlaps\n"
  },
  {
    "path": "lib/model/rpn/generate_anchors.py",
    "content": "from __future__ import print_function\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n\nimport numpy as np\nimport pdb\n\n# Verify that we compute the same anchors as Shaoqing's matlab implementation:\n#\n#    >> load output/rpn_cachedir/faster_rcnn_VOC2007_ZF_stage1_rpn/anchors.mat\n#    >> anchors\n#\n#    anchors =\n#\n#       -83   -39   100    56\n#      -175   -87   192   104\n#      -359  -183   376   200\n#       -55   -55    72    72\n#      -119  -119   136   136\n#      -247  -247   264   264\n#       -35   -79    52    96\n#       -79  -167    96   184\n#      -167  -343   184   360\n\n#array([[ -83.,  -39.,  100.,   56.],\n#       [-175.,  -87.,  192.,  104.],\n#       [-359., -183.,  376.,  200.],\n#       [ -55.,  -55.,   72.,   72.],\n#       [-119., -119.,  136.,  136.],\n#       [-247., -247.,  264.,  264.],\n#       [ -35.,  -79.,   52.,   96.],\n#       [ -79., -167.,   96.,  184.],\n#       [-167., -343.,  184.,  360.]])\n\ntry:\n    xrange          # Python 2\nexcept NameError:\n    xrange = range  # Python 3\n\n\ndef generate_anchors(base_size=16, ratios=[0.5, 1, 2],\n                     scales=2**np.arange(3, 6)):\n    \"\"\"\n    Generate anchor (reference) windows by enumerating aspect ratios X\n    scales wrt a reference (0, 0, 15, 15) window.\n    \"\"\"\n\n    base_anchor = np.array([1, 1, base_size, base_size]) - 1\n    ratio_anchors = _ratio_enum(base_anchor, ratios)\n    anchors = np.vstack([_scale_enum(ratio_anchors[i, :], scales)\n                         for i in xrange(ratio_anchors.shape[0])])\n    return anchors\n\ndef _whctrs(anchor):\n    \"\"\"\n    Return width, height, x center, and y center for an anchor (window).\n    \"\"\"\n\n    w = anchor[2] - anchor[0] + 1\n    h = anchor[3] - 
anchor[1] + 1\n    x_ctr = anchor[0] + 0.5 * (w - 1)\n    y_ctr = anchor[1] + 0.5 * (h - 1)\n    return w, h, x_ctr, y_ctr\n\ndef _mkanchors(ws, hs, x_ctr, y_ctr):\n    \"\"\"\n    Given a vector of widths (ws) and heights (hs) around a center\n    (x_ctr, y_ctr), output a set of anchors (windows).\n    \"\"\"\n\n    ws = ws[:, np.newaxis]\n    hs = hs[:, np.newaxis]\n    anchors = np.hstack((x_ctr - 0.5 * (ws - 1),\n                         y_ctr - 0.5 * (hs - 1),\n                         x_ctr + 0.5 * (ws - 1),\n                         y_ctr + 0.5 * (hs - 1)))\n    return anchors\n\ndef _ratio_enum(anchor, ratios):\n    \"\"\"\n    Enumerate a set of anchors for each aspect ratio wrt an anchor.\n    \"\"\"\n\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    size = w * h\n    size_ratios = size / ratios\n    ws = np.round(np.sqrt(size_ratios))\n    hs = np.round(ws * ratios)\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n\ndef _scale_enum(anchor, scales):\n    \"\"\"\n    Enumerate a set of anchors for each scale wrt an anchor.\n    \"\"\"\n\n    w, h, x_ctr, y_ctr = _whctrs(anchor)\n    ws = w * scales\n    hs = h * scales\n    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n    return anchors\n\nif __name__ == '__main__':\n    import time\n    t = time.time()\n    a = generate_anchors()\n    print(time.time() - t)\n    print(a)\n    from IPython import embed; embed()\n"
  },
  {
    "path": "lib/model/rpn/proposal_layer.py",
    "content": "from __future__ import absolute_import\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport math\nimport yaml\nfrom model.utils.config import cfg\nfrom .generate_anchors import generate_anchors\nfrom .bbox_transform import bbox_transform_inv, clip_boxes, clip_boxes_batch\nfrom model.nms.nms_wrapper import nms\n\nimport pdb\n\nDEBUG = False\n\nclass _ProposalLayer(nn.Module):\n    \"\"\"\n    Outputs object detection proposals by applying estimated bounding-box\n    transformations to a set of regular boxes (called \"anchors\").\n    \"\"\"\n\n    def __init__(self, feat_stride, scales, ratios):\n        super(_ProposalLayer, self).__init__()\n\n        self._feat_stride = feat_stride\n        self._anchors = torch.from_numpy(generate_anchors(scales=np.array(scales), \n            ratios=np.array(ratios))).float()\n        self._num_anchors = self._anchors.size(0)\n\n        # rois blob: holds R regions of interest, each is a 5-tuple\n        # (n, x1, y1, x2, y2) specifying an image batch index n and a\n        # rectangle (x1, y1, x2, y2)\n        # top[0].reshape(1, 5)\n        #\n        # # scores blob: holds scores for R regions of interest\n        # if len(top) > 1:\n        #     top[1].reshape(1, 1, 1, 1)\n\n    def forward(self, input):\n\n        # Algorithm:\n        #\n        # for each (H, W) location i\n        #   generate A anchor boxes centered on cell i\n        #   apply predicted bbox deltas at cell i to each of the A anchors\n        # clip predicted boxes to image\n        
# remove predicted boxes with either height or width < threshold\n        # sort all (proposal, score) pairs by score from highest to lowest\n        # take top pre_nms_topN proposals before NMS\n        # apply NMS with threshold 0.7 to remaining proposals\n        # take after_nms_topN proposals after NMS\n        # return the top proposals (-> RoIs top, scores top)\n\n\n        # the first set of _num_anchors channels are bg probs\n        # the second set are the fg probs\n        scores = input[0][:, self._num_anchors:, :, :]\n        bbox_deltas = input[1]\n        im_info = input[2]\n        cfg_key = input[3]\n\n        pre_nms_topN  = cfg[cfg_key].RPN_PRE_NMS_TOP_N\n        post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N\n        nms_thresh    = cfg[cfg_key].RPN_NMS_THRESH\n        min_size      = cfg[cfg_key].RPN_MIN_SIZE\n\n        batch_size = bbox_deltas.size(0)\n\n        feat_height, feat_width = scores.size(2), scores.size(3)\n        shift_x = np.arange(0, feat_width) * self._feat_stride\n        shift_y = np.arange(0, feat_height) * self._feat_stride\n        shift_x, shift_y = np.meshgrid(shift_x, shift_y)\n        shifts = torch.from_numpy(np.vstack((shift_x.ravel(), shift_y.ravel(),\n                                  shift_x.ravel(), shift_y.ravel())).transpose())\n        shifts = shifts.contiguous().type_as(scores).float()\n\n        A = self._num_anchors\n        K = shifts.size(0)\n\n        self._anchors = self._anchors.type_as(scores)\n        # anchors = self._anchors.view(1, A, 4) + shifts.view(1, K, 4).permute(1, 0, 2).contiguous()\n        anchors = self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)\n        anchors = anchors.view(1, K * A, 4).expand(batch_size, K * A, 4)\n\n        # Transpose and reshape predicted bbox transformations to get them\n        # into the same order as the anchors:\n\n        bbox_deltas = bbox_deltas.permute(0, 2, 3, 1).contiguous()\n        bbox_deltas = bbox_deltas.view(batch_size, -1, 4)\n\n        # 
Same story for the scores:\n        scores = scores.permute(0, 2, 3, 1).contiguous()\n        scores = scores.view(batch_size, -1)\n\n        # Convert anchors into proposals via bbox transformations\n        proposals = bbox_transform_inv(anchors, bbox_deltas, batch_size)\n\n        # 2. clip predicted boxes to image\n        proposals = clip_boxes(proposals, im_info, batch_size)\n        # proposals = clip_boxes_batch(proposals, im_info, batch_size)\n\n        # assign a score of 0 to proposals that are not kept.\n        # keep = self._filter_boxes(proposals, min_size * im_info[:, 2])\n\n        # trim the keep index so it is equal across the batch\n        # keep_idx = torch.cat(tuple(keep_idx), 0)\n\n        # scores_keep = scores.view(-1)[keep_idx].view(batch_size, trim_size)\n        # proposals_keep = proposals.view(-1, 4)[keep_idx, :].contiguous().view(batch_size, trim_size, 4)\n        \n        # _, order = torch.sort(scores_keep, 1, True)\n        \n        scores_keep = scores\n        proposals_keep = proposals\n        _, order = torch.sort(scores_keep, 1, True)\n\n        output = scores.new(batch_size, post_nms_topN, 5).zero_()\n        for i in range(batch_size):\n            # # 3. remove predicted boxes with either height or width < threshold\n            # # (NOTE: convert min_size to input image scale stored in im_info[2])\n            proposals_single = proposals_keep[i]\n            scores_single = scores_keep[i]\n\n            # # 4. sort all (proposal, score) pairs by score from highest to lowest\n            # # 5. take top pre_nms_topN (e.g. 6000)\n            order_single = order[i]\n\n            if pre_nms_topN > 0 and pre_nms_topN < scores_keep.numel():\n                order_single = order_single[:pre_nms_topN]\n\n            proposals_single = proposals_single[order_single, :]\n            scores_single = scores_single[order_single].view(-1,1)\n\n            # 6. apply nms (e.g. threshold = 0.7)\n            # 7. take after_nms_topN (e.g. 
300)\n            # 8. return the top proposals (-> RoIs top)\n\n            keep_idx_i = nms(torch.cat((proposals_single, scores_single), 1), nms_thresh)\n            keep_idx_i = keep_idx_i.long().view(-1)\n\n            if post_nms_topN > 0:\n                keep_idx_i = keep_idx_i[:post_nms_topN]\n            proposals_single = proposals_single[keep_idx_i, :]\n            scores_single = scores_single[keep_idx_i, :]\n\n            # padding 0 at the end.\n            num_proposal = proposals_single.size(0)\n            output[i,:,0] = i\n            output[i,:num_proposal,1:] = proposals_single\n\n        return output\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n    def _filter_boxes(self, boxes, min_size):\n        \"\"\"Remove all boxes with any side smaller than min_size.\"\"\"\n        ws = boxes[:, :, 2] - boxes[:, :, 0] + 1\n        hs = boxes[:, :, 3] - boxes[:, :, 1] + 1\n        keep = ((ws >= min_size.view(-1,1).expand_as(ws)) & (hs >= min_size.view(-1,1).expand_as(hs)))\n        return keep\n"
  },
  {
    "path": "lib/model/rpn/proposal_layer_region.py",
    "content": "from __future__ import absolute_import\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport math\nimport yaml\nfrom model.utils.config import cfg\nfrom .generate_anchors import generate_anchors\nfrom .bbox_transform import bbox_transform_inv, clip_boxes, clip_boxes_batch\nfrom model.nms.nms_wrapper import nms\n\nimport pdb\n\nDEBUG = False\n\nclass _ProposalLayer(nn.Module):\n    \"\"\"\n    Outputs object detection proposals by applying estimated bounding-box\n    transformations to a set of regular boxes (called \"anchors\").\n    \"\"\"\n\n    def __init__(self, feat_stride, scales, ratios):\n        super(_ProposalLayer, self).__init__()\n\n        self._feat_stride = feat_stride\n        self._anchors = torch.from_numpy(generate_anchors(scales=np.array(scales), \n            ratios=np.array(ratios))).float()\n        self._num_anchors = self._anchors.size(0)\n\n        # rois blob: holds R regions of interest, each is a 5-tuple\n        # (n, x1, y1, x2, y2) specifying an image batch index n and a\n        # rectangle (x1, y1, x2, y2)\n        # top[0].reshape(1, 5)\n        #\n        # # scores blob: holds scores for R regions of interest\n        # if len(top) > 1:\n        #     top[1].reshape(1, 1, 1, 1)\n\n    def forward(self, input):\n\n        # Algorithm:\n        #\n        # for each (H, W) location i\n        #   generate A anchor boxes centered on cell i\n        #   apply predicted bbox deltas at cell i to each of the A anchors\n        # clip predicted boxes to image\n        
# remove predicted boxes with either height or width < threshold\n        # sort all (proposal, score) pairs by score from highest to lowest\n        # take top pre_nms_topN proposals before NMS\n        # apply NMS with threshold 0.7 to remaining proposals\n        # take after_nms_topN proposals after NMS\n        # return the top proposals (-> RoIs top, scores top)\n\n\n        # the first set of _num_anchors channels are bg probs\n        # the second set are the fg probs\n        scores = input[0][:, self._num_anchors:, :, :]\n        bbox_deltas = input[1]\n        im_info = input[2]\n        cfg_key = input[3]\n\n        pre_nms_topN  = cfg[cfg_key].RPN_PRE_NMS_TOP_N\n        post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N\n        nms_thresh    = cfg[cfg_key].RPN_NMS_THRESH\n        min_size      = cfg[cfg_key].RPN_MIN_SIZE\n\n        batch_size = bbox_deltas.size(0)\n\n        feat_height, feat_width = scores.size(2), scores.size(3)\n        shift_x = np.arange(0, feat_width) * self._feat_stride\n        shift_y = np.arange(0, feat_height) * self._feat_stride\n        shift_x, shift_y = np.meshgrid(shift_x, shift_y)\n        shifts = torch.from_numpy(np.vstack((shift_x.ravel(), shift_y.ravel(),\n                                  shift_x.ravel(), shift_y.ravel())).transpose())\n        shifts = shifts.contiguous().type_as(scores).float()\n\n        A = self._num_anchors\n        K = shifts.size(0)\n\n        self._anchors = self._anchors.type_as(scores)\n        # anchors = self._anchors.view(1, A, 4) + shifts.view(1, K, 4).permute(1, 0, 2).contiguous()\n        anchors = self._anchors.view(1, A, 4) + shifts.view(K, 1, 4)\n        anchors = anchors.view(1, K * A, 4).expand(batch_size, K * A, 4)\n\n        # Transpose and reshape predicted bbox transformations to get them\n        # into the same order as the anchors:\n\n        bbox_deltas = bbox_deltas.permute(0, 2, 3, 1).contiguous()\n        bbox_deltas = bbox_deltas.view(batch_size, -1, 4)\n\n        # 
Same story for the scores:\n        scores = scores.permute(0, 2, 3, 1).contiguous()\n        scores = scores.view(batch_size, -1)\n\n        # Convert anchors into proposals via bbox transformations\n        proposals = bbox_transform_inv(anchors, bbox_deltas, batch_size)\n\n        # 2. clip predicted boxes to image\n        proposals = clip_boxes(proposals, im_info, batch_size)\n        # proposals = clip_boxes_batch(proposals, im_info, batch_size)\n\n        # assign the score to 0 if it's not kept.\n        # keep = self._filter_boxes(proposals, min_size * im_info[:, 2])\n\n        # trim keep index to make it equal over the batch\n        # keep_idx = torch.cat(tuple(keep_idx), 0)\n\n        # scores_keep = scores.view(-1)[keep_idx].view(batch_size, trim_size)\n        # proposals_keep = proposals.view(-1, 4)[keep_idx, :].contiguous().view(batch_size, trim_size, 4)\n        \n        # _, order = torch.sort(scores_keep, 1, True)\n        \n        scores_keep = scores\n        proposals_keep = proposals\n        _, order = torch.sort(scores_keep, 1, True)\n\n        output = scores.new(batch_size, post_nms_topN, 5).zero_()\n        output_cls_score = scores.new(batch_size, post_nms_topN, 2).zero_()\n        for i in range(batch_size):\n            # # 3. remove predicted boxes with either height or width < threshold\n            # # (NOTE: convert min_size to input image scale stored in im_info[2])\n            proposals_single = proposals_keep[i]\n            scores_single = scores_keep[i]\n\n            # # 4. sort all (proposal, score) pairs by score from highest to lowest\n            # # 5. take top pre_nms_topN (e.g. 6000)\n            order_single = order[i]\n\n            if pre_nms_topN > 0 and pre_nms_topN < scores_keep.numel():\n                order_single = order_single[:pre_nms_topN]\n\n            proposals_single = proposals_single[order_single, :]\n            scores_single = scores_single[order_single].view(-1,1)\n\n            # 6. 
apply nms (e.g. threshold = 0.7)\n            # 7. take after_nms_topN (e.g. 300)\n            # 8. return the top proposals (-> RoIs top)\n\n            keep_idx_i = nms(torch.cat((proposals_single, scores_single), 1), nms_thresh)\n            keep_idx_i = keep_idx_i.long().view(-1)\n\n            if post_nms_topN > 0:\n                keep_idx_i = keep_idx_i[:post_nms_topN]\n            proposals_single = proposals_single[keep_idx_i, :]\n            scores_single = scores_single[keep_idx_i, :]\n\n            # padding 0 at the end.\n            num_proposal = proposals_single.size(0)\n            output[i,:,0] = i\n            output[i,:num_proposal,1:] = proposals_single\n            output_cls_score[i,:,0] = i\n            output_cls_score[i,:num_proposal,1] = scores_single\n\n        return output, output_cls_score\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n    def _filter_boxes(self, boxes, min_size):\n        \"\"\"Remove all boxes with any side smaller than min_size.\"\"\"\n        ws = boxes[:, :, 2] - boxes[:, :, 0] + 1\n        hs = boxes[:, :, 3] - boxes[:, :, 1] + 1\n        keep = ((ws >= min_size.view(-1,1).expand_as(ws)) & (hs >= min_size.view(-1,1).expand_as(hs)))\n        return keep\n"
  },
  {
    "path": "lib/model/rpn/proposal_target_layer_cascade.py",
    "content": "from __future__ import absolute_import\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport numpy.random as npr\nfrom ..utils.config import cfg\nfrom .bbox_transform import bbox_overlaps_batch, bbox_transform_batch\nimport pdb\n\nclass _ProposalTargetLayer(nn.Module):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    \"\"\"\n\n    def __init__(self, nclasses):\n        super(_ProposalTargetLayer, self).__init__()\n        self._num_classes = nclasses\n        self.BBOX_NORMALIZE_MEANS = torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS)\n        self.BBOX_NORMALIZE_STDS = torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_STDS)\n        self.BBOX_INSIDE_WEIGHTS = torch.FloatTensor(cfg.TRAIN.BBOX_INSIDE_WEIGHTS)\n\n    def forward(self, all_rois, gt_boxes, num_boxes):\n\n        self.BBOX_NORMALIZE_MEANS = self.BBOX_NORMALIZE_MEANS.type_as(gt_boxes)\n        self.BBOX_NORMALIZE_STDS = self.BBOX_NORMALIZE_STDS.type_as(gt_boxes)\n        self.BBOX_INSIDE_WEIGHTS = self.BBOX_INSIDE_WEIGHTS.type_as(gt_boxes)\n\n        num_images = 1\n        rois_per_image = int(cfg.TRAIN.BATCH_SIZE / num_images)\n        fg_rois_per_image = int(np.round(cfg.TRAIN.FG_FRACTION * rois_per_image))\n        fg_rois_per_image = 1 if fg_rois_per_image == 0 else fg_rois_per_image\n\n        labels, rois, bbox_targets, bbox_inside_weights = self._sample_rois_pytorch(\n            all_rois, gt_boxes, fg_rois_per_image,\n    
        rois_per_image, self._num_classes)\n\n        bbox_outside_weights = (bbox_inside_weights > 0).float()\n\n        return rois, labels, bbox_targets, bbox_inside_weights, bbox_outside_weights\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n    def _get_bbox_regression_labels_pytorch(self, bbox_target_data, labels_batch, num_classes):\n        \"\"\"Bounding-box regression targets (bbox_target_data) are stored in a\n        compact form b x N x (class, tx, ty, tw, th)\n\n        This function expands those targets into the 4-of-4*K representation used\n        by the network (i.e. only one class has non-zero targets).\n\n        Returns:\n            bbox_target (ndarray): b x N x 4K blob of regression targets\n            bbox_inside_weights (ndarray): b x N x 4K blob of loss weights\n        \"\"\"\n        batch_size = labels_batch.size(0)\n        rois_per_image = labels_batch.size(1)\n        clss = labels_batch\n        bbox_targets = bbox_target_data.new(batch_size, rois_per_image, 4).zero_()\n        bbox_inside_weights = bbox_target_data.new(bbox_targets.size()).zero_()\n\n        for b in range(batch_size):\n            # assert clss[b].sum() > 0\n            if clss[b].sum() == 0:\n                continue\n            inds = torch.nonzero(clss[b] > 0).view(-1)\n            for i in range(inds.numel()):\n                ind = inds[i]\n                bbox_targets[b, ind, :] = bbox_target_data[b, ind, :]\n                bbox_inside_weights[b, ind, :] = self.BBOX_INSIDE_WEIGHTS\n\n        return bbox_targets, bbox_inside_weights\n\n\n    def _compute_targets_pytorch(self, ex_rois, gt_rois):\n        \"\"\"Compute bounding-box regression targets for an image.\"\"\"\n\n        assert ex_rois.size(1) == gt_rois.size(1)\n        assert 
ex_rois.size(2) == 4\n        assert gt_rois.size(2) == 4\n\n        batch_size = ex_rois.size(0)\n        rois_per_image = ex_rois.size(1)\n\n        targets = bbox_transform_batch(ex_rois, gt_rois)\n\n        if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n            # Optionally normalize targets by a precomputed mean and stdev\n            targets = ((targets - self.BBOX_NORMALIZE_MEANS.expand_as(targets))\n                        / self.BBOX_NORMALIZE_STDS.expand_as(targets))\n\n        return targets\n\n\n    def _sample_rois_pytorch(self, all_rois, gt_boxes, fg_rois_per_image, rois_per_image, num_classes):\n        \"\"\"Generate a random sample of RoIs comprising foreground and background\n        examples.\n        \"\"\"\n        # overlaps: (rois x gt_boxes)\n\n        overlaps = bbox_overlaps_batch(all_rois, gt_boxes)\n        # max_overlaps = max overlap of (candidate rois with gt_rois)\n        max_overlaps, gt_assignment = torch.max(overlaps, 2)\n        batch_size = overlaps.size(0)\n        num_proposal = overlaps.size(1)\n        num_boxes_per_img = overlaps.size(2)\n\n        offset = torch.arange(0, batch_size)*gt_boxes.size(1)\n        offset = offset.view(-1, 1).type_as(gt_assignment) + gt_assignment\n        \n        labels = gt_boxes[:,:,4].contiguous().view(-1).index(offset.view(-1))\\\n                                                            .view(batch_size, -1)\n\n        labels_batch = labels.new(batch_size, rois_per_image).zero_()\n        rois_batch  = all_rois.new(batch_size, rois_per_image, 5).zero_() # get rois_per_image front of rois\n        gt_rois_batch = all_rois.new(batch_size, rois_per_image, 5).zero_()\n        # Guard against the case when an image has fewer than max_fg_rois_per_image\n        # foreground RoIs\n        for i in range(batch_size):\n\n            fg_inds = torch.nonzero(max_overlaps[i] >= cfg.TRAIN.FG_THRESH).view(-1)\n            fg_num_rois = fg_inds.numel()\n\n            # Select background RoIs 
as those within [BG_THRESH_LO, BG_THRESH_HI)\n            bg_inds = torch.nonzero((max_overlaps[i] < cfg.TRAIN.BG_THRESH_HI) &\n                                    (max_overlaps[i] >= cfg.TRAIN.BG_THRESH_LO)).view(-1)\n            bg_num_rois = bg_inds.numel()\n\n            if fg_num_rois > 0 and bg_num_rois > 0:\n                # sampling fg\n                fg_rois_per_this_image = min(fg_rois_per_image, fg_num_rois)\n                \n                # torch.randperm seems to have a bug in the multi-GPU setting that causes a segfault.\n                # See https://github.com/pytorch/pytorch/issues/1868 for more details.\n                # use numpy instead.\n                #rand_num = torch.randperm(fg_num_rois).long().cuda()\n                rand_num = torch.from_numpy(np.random.permutation(fg_num_rois)).type_as(gt_boxes).long()\n                fg_inds = fg_inds[rand_num[:fg_rois_per_this_image]]\n\n                # sampling bg\n                bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\n\n                # torch.rand seems to have a bug: it can generate very large numbers and cause an error.\n                # We use numpy rand instead. 
\n                #rand_num = (torch.rand(bg_rois_per_this_image) * bg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(bg_rois_per_this_image) * bg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n                bg_inds = bg_inds[rand_num]\n\n            elif fg_num_rois > 0 and bg_num_rois == 0:\n                # sampling fg\n                #rand_num = torch.floor(torch.rand(rois_per_image) * fg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(rois_per_image) * fg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n                fg_inds = fg_inds[rand_num]\n                fg_rois_per_this_image = rois_per_image\n                bg_rois_per_this_image = 0\n            elif bg_num_rois > 0 and fg_num_rois == 0:\n                # sampling bg\n                #rand_num = torch.floor(torch.rand(rois_per_image) * bg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(rois_per_image) * bg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n\n                bg_inds = bg_inds[rand_num]\n                bg_rois_per_this_image = rois_per_image\n                fg_rois_per_this_image = 0\n            else:\n                raise ValueError(\"bg_num_rois = 0 and fg_num_rois = 0, this should not happen!\")\n                \n            # The indices that we're selecting (both fg and bg)\n            keep_inds = torch.cat([fg_inds, bg_inds], 0)\n\n            # Select sampled values from various arrays:\n            labels_batch[i].copy_(labels[i][keep_inds])\n\n            # Clamp labels for the background RoIs to 0\n            if fg_rois_per_this_image < rois_per_image:\n                labels_batch[i][fg_rois_per_this_image:] = 0\n\n            rois_batch[i] = all_rois[i][keep_inds]\n            rois_batch[i,:,0] = i\n\n            gt_rois_batch[i] = 
gt_boxes[i][gt_assignment[i][keep_inds]]\n\n        bbox_target_data = self._compute_targets_pytorch(\n                rois_batch[:,:,1:5], gt_rois_batch[:,:,:4])\n\n        bbox_targets, bbox_inside_weights = \\\n                self._get_bbox_regression_labels_pytorch(bbox_target_data, labels_batch, num_classes)\n\n        return labels_batch, rois_batch, bbox_targets, bbox_inside_weights\n"
  },
  {
    "path": "lib/model/rpn/proposal_target_layer_cascade_region.py",
    "content": "from __future__ import absolute_import\n# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\n# --------------------------------------------------------\n# Reorganized and modified by Jianwei Yang and Jiasen Lu\n# --------------------------------------------------------\n\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport numpy.random as npr\nfrom ..utils.config import cfg\nfrom .bbox_transform import bbox_overlaps_batch, bbox_transform_batch\nimport pdb\n\nclass _ProposalTargetLayer(nn.Module):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    \"\"\"\n\n    def __init__(self, nclasses):\n        super(_ProposalTargetLayer, self).__init__()\n        self._num_classes = nclasses\n        self.BBOX_NORMALIZE_MEANS = torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS)\n        self.BBOX_NORMALIZE_STDS = torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_STDS)\n        self.BBOX_INSIDE_WEIGHTS = torch.FloatTensor(cfg.TRAIN.BBOX_INSIDE_WEIGHTS)\n\n    def forward(self, all_rois, gt_boxes, num_boxes,output_cls_score):\n\n        self.BBOX_NORMALIZE_MEANS = self.BBOX_NORMALIZE_MEANS.type_as(gt_boxes)\n        self.BBOX_NORMALIZE_STDS = self.BBOX_NORMALIZE_STDS.type_as(gt_boxes)\n        self.BBOX_INSIDE_WEIGHTS = self.BBOX_INSIDE_WEIGHTS.type_as(gt_boxes)\n\n        gt_boxes_append = gt_boxes.new(gt_boxes.size()).zero_()\n        gt_boxes_append[:,:,1:5] = gt_boxes[:,:,:4]\n        all_score_append = output_cls_score.new(gt_boxes.size()[0],gt_boxes.size()[1],2).zero_()+1\n\n        # Include ground-truth boxes in the set of candidate rois\n        all_rois = torch.cat([all_rois, gt_boxes_append], 1)\n        all_score = 
torch.cat([output_cls_score, all_score_append], 1)\n\n        num_images = 1\n        rois_per_image = int(cfg.TRAIN.BATCH_SIZE / num_images)\n        fg_rois_per_image = int(np.round(cfg.TRAIN.FG_FRACTION * rois_per_image))\n        fg_rois_per_image = 1 if fg_rois_per_image == 0 else fg_rois_per_image\n\n        labels, rois, bbox_targets, bbox_inside_weights, output_bg_score= self._sample_rois_pytorch(\n            all_rois, gt_boxes, fg_rois_per_image,\n            rois_per_image, self._num_classes, all_score)\n\n        bbox_outside_weights = (bbox_inside_weights > 0).float()\n\n        return rois, labels, bbox_targets, bbox_inside_weights, bbox_outside_weights, output_bg_score\n\n    def backward(self, top, propagate_down, bottom):\n        \"\"\"This layer does not propagate gradients.\"\"\"\n        pass\n\n    def reshape(self, bottom, top):\n        \"\"\"Reshaping happens during the call to forward.\"\"\"\n        pass\n\n    def _get_bbox_regression_labels_pytorch(self, bbox_target_data, labels_batch, num_classes):\n        \"\"\"Bounding-box regression targets (bbox_target_data) are stored in a\n        compact form b x N x (class, tx, ty, tw, th)\n\n        This function expands those targets into the 4-of-4*K representation used\n        by the network (i.e. 
only one class has non-zero targets).\n\n        Returns:\n            bbox_target (ndarray): b x N x 4K blob of regression targets\n            bbox_inside_weights (ndarray): b x N x 4K blob of loss weights\n        \"\"\"\n        batch_size = labels_batch.size(0)\n        rois_per_image = labels_batch.size(1)\n        clss = labels_batch\n        bbox_targets = bbox_target_data.new(batch_size, rois_per_image, 4).zero_()\n        bbox_inside_weights = bbox_target_data.new(bbox_targets.size()).zero_()\n\n        for b in range(batch_size):\n            # assert clss[b].sum() > 0\n            if clss[b].sum() == 0:\n                continue\n            inds = torch.nonzero(clss[b] > 0).view(-1)\n            for i in range(inds.numel()):\n                ind = inds[i]\n                bbox_targets[b, ind, :] = bbox_target_data[b, ind, :]\n                bbox_inside_weights[b, ind, :] = self.BBOX_INSIDE_WEIGHTS\n\n        return bbox_targets, bbox_inside_weights\n\n\n    def _compute_targets_pytorch(self, ex_rois, gt_rois):\n        \"\"\"Compute bounding-box regression targets for an image.\"\"\"\n\n        assert ex_rois.size(1) == gt_rois.size(1)\n        assert ex_rois.size(2) == 4\n        assert gt_rois.size(2) == 4\n\n        batch_size = ex_rois.size(0)\n        rois_per_image = ex_rois.size(1)\n\n        targets = bbox_transform_batch(ex_rois, gt_rois)\n\n        if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n            # Optionally normalize targets by a precomputed mean and stdev\n            targets = ((targets - self.BBOX_NORMALIZE_MEANS.expand_as(targets))\n                        / self.BBOX_NORMALIZE_STDS.expand_as(targets))\n\n        return targets\n\n\n    def _sample_rois_pytorch(self, all_rois, gt_boxes, fg_rois_per_image, rois_per_image, num_classes, all_score):\n        \"\"\"Generate a random sample of RoIs comprising foreground and background\n        examples.\n        \"\"\"\n        # overlaps: (rois x gt_boxes)\n\n        overlaps = 
bbox_overlaps_batch(all_rois, gt_boxes)\n        # max_overlaps = max overlap of (candidate rois with gt_rois)\n        max_overlaps, gt_assignment = torch.max(overlaps, 2)\n        batch_size = overlaps.size(0)\n        num_proposal = overlaps.size(1)\n        num_boxes_per_img = overlaps.size(2)\n\n        offset = torch.arange(0, batch_size)*gt_boxes.size(1)\n        offset = offset.view(-1, 1).type_as(gt_assignment) + gt_assignment\n\n        labels = gt_boxes[:,:,4].contiguous().view(-1).index(offset.view(-1))\\\n                                                            .view(batch_size, -1)\n\n        labels_batch = labels.new(batch_size, rois_per_image).zero_()\n        rois_batch  = all_rois.new(batch_size, rois_per_image, 5).zero_() # get rois_per_image front of rois\n        gt_rois_batch = all_rois.new(batch_size, rois_per_image, 5).zero_()\n        output_bg_score  =  all_score.new(batch_size, rois_per_image, 2).zero_()\n        # Guard against the case when an image has fewer than max_fg_rois_per_image\n        # foreground RoIs\n        for i in range(batch_size):\n\n            fg_inds = torch.nonzero(max_overlaps[i] >= cfg.TRAIN.FG_THRESH).view(-1)\n            fg_num_rois = fg_inds.numel()\n\n            # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\n            bg_inds = torch.nonzero((max_overlaps[i] < cfg.TRAIN.BG_THRESH_HI) &\n                                    (max_overlaps[i] >= cfg.TRAIN.BG_THRESH_LO)).view(-1)\n            bg_num_rois = bg_inds.numel()\n\n            if fg_num_rois > 0 and bg_num_rois > 0:\n                # sampling fg\n                fg_rois_per_this_image = min(fg_rois_per_image, fg_num_rois)\n                \n                # torch.randperm seems to have a bug in the multi-GPU setting that causes a segfault.
\n                # See https://github.com/pytorch/pytorch/issues/1868 for more details.\n                # use numpy instead.\n                #rand_num = torch.randperm(fg_num_rois).long().cuda()\n                rand_num = torch.from_numpy(np.random.permutation(fg_num_rois)).type_as(gt_boxes).long()\n                fg_inds = fg_inds[rand_num[:fg_rois_per_this_image]]\n\n                # sampling bg\n                bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\n\n                # torch.rand seems to have a bug: it can generate very large numbers and cause an error.\n                # We use numpy rand instead. \n                #rand_num = (torch.rand(bg_rois_per_this_image) * bg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(bg_rois_per_this_image) * bg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n                bg_inds = bg_inds[rand_num]\n\n            elif fg_num_rois > 0 and bg_num_rois == 0:\n                # sampling fg\n                #rand_num = torch.floor(torch.rand(rois_per_image) * fg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(rois_per_image) * fg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n                fg_inds = fg_inds[rand_num]\n                fg_rois_per_this_image = rois_per_image\n                bg_rois_per_this_image = 0\n            elif bg_num_rois > 0 and fg_num_rois == 0:\n                # sampling bg\n                #rand_num = torch.floor(torch.rand(rois_per_image) * bg_num_rois).long().cuda()\n                rand_num = np.floor(np.random.rand(rois_per_image) * bg_num_rois)\n                rand_num = torch.from_numpy(rand_num).type_as(gt_boxes).long()\n\n                bg_inds = bg_inds[rand_num]\n                bg_rois_per_this_image = rois_per_image\n                fg_rois_per_this_image = 0\n            else:\n                raise 
ValueError(\"bg_num_rois = 0 and fg_num_rois = 0, this should not happen!\")\n                \n            # The indices that we're selecting (both fg and bg)\n            keep_inds = torch.cat([fg_inds, bg_inds], 0)\n\n            # Select sampled values from various arrays:\n            labels_batch[i].copy_(labels[i][keep_inds])\n\n            # Clamp labels for the background RoIs to 0\n            if fg_rois_per_this_image < rois_per_image:\n                labels_batch[i][fg_rois_per_this_image:] = 0\n\n            rois_batch[i] = all_rois[i][keep_inds]\n            rois_batch[i,:,0] = i\n            #adding the score\n            output_bg_score[i] = all_score[i][keep_inds]\n            output_bg_score[i,:,0] = i\n\n            gt_rois_batch[i] = gt_boxes[i][gt_assignment[i][keep_inds]]\n\n        bbox_target_data = self._compute_targets_pytorch(\n                rois_batch[:,:,1:5], gt_rois_batch[:,:,:4])\n\n        bbox_targets, bbox_inside_weights = \\\n                self._get_bbox_regression_labels_pytorch(bbox_target_data, labels_batch, num_classes)\n\n        return labels_batch, rois_batch, bbox_targets, bbox_inside_weights, output_bg_score\n"
  },
  {
    "path": "lib/model/rpn/rpn.py",
"content": "from __future__ import absolute_import\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfrom model.utils.config import cfg\nfrom .proposal_layer import _ProposalLayer\nfrom .anchor_target_layer import _AnchorTargetLayer\nfrom model.utils.net_utils import _smooth_l1_loss\n\nimport numpy as np\nimport math\nimport pdb\nimport time\n\nclass _RPN(nn.Module):\n    \"\"\" region proposal network \"\"\"\n    def __init__(self, din):\n        super(_RPN, self).__init__()\n        \n        self.din = din  # get depth of input feature map, e.g., 512\n        self.anchor_scales = cfg.ANCHOR_SCALES\n        self.anchor_ratios = cfg.ANCHOR_RATIOS\n        self.feat_stride = cfg.FEAT_STRIDE[0]\n\n        # define the convrelu layers processing input feature map\n        self.RPN_Conv = nn.Conv2d(self.din, 512, 3, 1, 1, bias=True)\n\n        # define bg/fg classification score layer\n        self.nc_score_out = len(self.anchor_scales) * len(self.anchor_ratios) * 2 # 2(bg/fg) * 9 (anchors)\n        self.RPN_cls_score = nn.Conv2d(512, self.nc_score_out, 1, 1, 0)\n\n        # define anchor box offset prediction layer\n        self.nc_bbox_out = len(self.anchor_scales) * len(self.anchor_ratios) * 4 # 4(coords) * 9 (anchors)\n        self.RPN_bbox_pred = nn.Conv2d(512, self.nc_bbox_out, 1, 1, 0)\n\n        # define proposal layer\n        self.RPN_proposal = _ProposalLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        # define anchor target layer\n        self.RPN_anchor_target = _AnchorTargetLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n    @staticmethod\n    def reshape(x, d):\n        input_shape = x.size()\n        x = x.view(\n            input_shape[0],\n            int(d),\n            int(float(input_shape[1] * input_shape[2]) / float(d)),\n            input_shape[3]\n        )\n        return 
x\n\n    def forward(self, base_feat, im_info, gt_boxes, num_boxes):\n\n        batch_size = base_feat.size(0)\n\n        # return feature map after convrelu layer\n        rpn_conv1 = F.relu(self.RPN_Conv(base_feat), inplace=True)\n        # get rpn classification score\n        rpn_cls_score = self.RPN_cls_score(rpn_conv1)\n\n        rpn_cls_score_reshape = self.reshape(rpn_cls_score, 2)\n        rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape,dim=1)\n        rpn_cls_prob = self.reshape(rpn_cls_prob_reshape, self.nc_score_out)\n\n        # get rpn offsets to the anchor boxes\n        rpn_bbox_pred = self.RPN_bbox_pred(rpn_conv1)\n\n        # proposal layer\n        cfg_key = 'TRAIN' if self.training else 'TEST'\n\n        rois = self.RPN_proposal((rpn_cls_prob.data, rpn_bbox_pred.data,\n                                 im_info, cfg_key))\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n        # generating training labels and build the rpn loss\n        if self.training:\n            assert gt_boxes is not None\n\n            rpn_data = self.RPN_anchor_target((rpn_cls_score.data, gt_boxes, im_info, num_boxes))\n\n            # compute classification loss\n            rpn_cls_score = rpn_cls_score_reshape.permute(0, 2, 3, 1).contiguous().view(batch_size, -1, 2)\n            rpn_label = rpn_data[0].view(batch_size, -1)\n\n            rpn_keep = Variable(rpn_label.view(-1).ne(-1).nonzero().view(-1))\n            rpn_cls_score = torch.index_select(rpn_cls_score.view(-1,2), 0, rpn_keep)\n            rpn_label = torch.index_select(rpn_label.view(-1), 0, rpn_keep.data)\n            rpn_label = Variable(rpn_label.long())\n            self.rpn_loss_cls = F.cross_entropy(rpn_cls_score, rpn_label)\n            fg_cnt = torch.sum(rpn_label.data.ne(0))\n\n            rpn_bbox_targets, rpn_bbox_inside_weights, rpn_bbox_outside_weights = rpn_data[1:]\n\n            # compute bbox regression loss\n            rpn_bbox_inside_weights = 
Variable(rpn_bbox_inside_weights)\n            rpn_bbox_outside_weights = Variable(rpn_bbox_outside_weights)\n            rpn_bbox_targets = Variable(rpn_bbox_targets)\n\n            self.rpn_loss_box = _smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,\n                                                            rpn_bbox_outside_weights, sigma=3, dim=[1,2,3])\n\n        return rois, self.rpn_loss_cls, self.rpn_loss_box\n\n\nclass _RPN_out_pred_label(nn.Module):\n    \"\"\" region proposal network \"\"\"\n    def __init__(self, din):\n        super(_RPN_out_pred_label, self).__init__()\n        \n        self.din = din  # get depth of input feature map, e.g., 512\n        self.anchor_scales = cfg.ANCHOR_SCALES\n        self.anchor_ratios = cfg.ANCHOR_RATIOS\n        self.feat_stride = cfg.FEAT_STRIDE[0]\n\n        # define the convrelu layers processing input feature map\n        self.RPN_Conv = nn.Conv2d(self.din, 512, 3, 1, 1, bias=True)\n\n        # define bg/fg classification score layer\n        self.nc_score_out = len(self.anchor_scales) * len(self.anchor_ratios) * 2 # 2(bg/fg) * 9 (anchors)\n        self.RPN_cls_score = nn.Conv2d(512, self.nc_score_out, 1, 1, 0)\n\n        # define anchor box offset prediction layer\n        self.nc_bbox_out = len(self.anchor_scales) * len(self.anchor_ratios) * 4 # 4(coords) * 9 (anchors)\n        self.RPN_bbox_pred = nn.Conv2d(512, self.nc_bbox_out, 1, 1, 0)\n\n        # define proposal layer\n        self.RPN_proposal = _ProposalLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        # define anchor target layer\n        self.RPN_anchor_target = _AnchorTargetLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n    @staticmethod\n    def reshape(x, d):\n        input_shape = x.size()\n        x = x.view(\n            input_shape[0],\n            int(d),\n            int(float(input_shape[1] * 
input_shape[2]) / float(d)),\n            input_shape[3]\n        )\n        return x\n\n    def forward(self, base_feat, im_info, gt_boxes, num_boxes):\n\n        batch_size = base_feat.size(0)\n\n        # return feature map after convrelu layer\n        rpn_conv1 = F.relu(self.RPN_Conv(base_feat), inplace=True)\n        # get rpn classification score\n        rpn_cls_score = self.RPN_cls_score(rpn_conv1)\n\n        rpn_cls_score_reshape = self.reshape(rpn_cls_score, 2)\n        rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape,dim=1)\n        rpn_cls_prob = self.reshape(rpn_cls_prob_reshape, self.nc_score_out)\n\n        # get rpn offsets to the anchor boxes\n        rpn_bbox_pred = self.RPN_bbox_pred(rpn_conv1)\n\n        # proposal layer\n        cfg_key = 'TRAIN' if self.training else 'TEST'\n\n        rois = self.RPN_proposal((rpn_cls_prob.data, rpn_bbox_pred.data,\n                                 im_info, cfg_key))\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n        # generating training labels and build the rpn loss\n        if self.training:\n            assert gt_boxes is not None\n\n            rpn_data = self.RPN_anchor_target((rpn_cls_score.data, gt_boxes, im_info, num_boxes))\n\n            # compute classification loss\n            rpn_cls_score = rpn_cls_score_reshape.permute(0, 2, 3, 1).contiguous().view(batch_size, -1, 2)\n            rpn_label = rpn_data[0].view(batch_size, -1)\n\n            rpn_keep = Variable(rpn_label.view(-1).ne(-1).nonzero().view(-1))\n            rpn_cls_score = torch.index_select(rpn_cls_score.view(-1,2), 0, rpn_keep)\n            rpn_label = torch.index_select(rpn_label.view(-1), 0, rpn_keep.data)\n            rpn_label = Variable(rpn_label.long())\n            self.rpn_loss_cls = F.cross_entropy(rpn_cls_score, rpn_label)\n            fg_cnt = torch.sum(rpn_label.data.ne(0))\n\n            rpn_bbox_targets, rpn_bbox_inside_weights, rpn_bbox_outside_weights = rpn_data[1:]\n\n            # compute 
bbox regression loss\n            rpn_bbox_inside_weights = Variable(rpn_bbox_inside_weights)\n            rpn_bbox_outside_weights = Variable(rpn_bbox_outside_weights)\n            rpn_bbox_targets = Variable(rpn_bbox_targets)\n\n            self.rpn_loss_box = _smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,\n                                                            rpn_bbox_outside_weights, sigma=3, dim=[1,2,3])\n\n        return rois, rpn_cls_prob, self.rpn_loss_cls, self.rpn_loss_box"
  },
  {
    "path": "lib/model/rpn/rpn_region.py",
    "content": "from __future__ import absolute_import\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfrom model.utils.config import cfg\nfrom .proposal_layer_region import _ProposalLayer\nfrom .anchor_target_layer import _AnchorTargetLayer\nfrom model.utils.net_utils import _smooth_l1_loss\n\nimport numpy as np\nimport math\nimport pdb\nimport time\n\nclass _RPN(nn.Module):\n    \"\"\" region proposal network \"\"\"\n    def __init__(self, din):\n        super(_RPN, self).__init__()\n        \n        self.din = din  # get depth of input feature map, e.g., 512\n        self.anchor_scales = cfg.ANCHOR_SCALES\n        self.anchor_ratios = cfg.ANCHOR_RATIOS\n        self.feat_stride = cfg.FEAT_STRIDE[0]\n\n        # define the convrelu layers processing input feature map\n        self.RPN_Conv = nn.Conv2d(self.din, 512, 3, 1, 1, bias=True)\n\n        # define bg/fg classifcation score layer\n        self.nc_score_out = len(self.anchor_scales) * len(self.anchor_ratios) * 2 # 2(bg/fg) * 9 (anchors)\n        self.RPN_cls_score = nn.Conv2d(512, self.nc_score_out, 1, 1, 0)\n\n        # define anchor box offset prediction layer\n        self.nc_bbox_out = len(self.anchor_scales) * len(self.anchor_ratios) * 4 # 4(coords) * 9 (anchors)\n        self.RPN_bbox_pred = nn.Conv2d(512, self.nc_bbox_out, 1, 1, 0)\n\n        # define proposal layer\n        self.RPN_proposal = _ProposalLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        # define anchor target layer\n        self.RPN_anchor_target = _AnchorTargetLayer(self.feat_stride, self.anchor_scales, self.anchor_ratios)\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n    @staticmethod\n    def reshape(x, d):\n        input_shape = x.size()\n        x = x.view(\n            input_shape[0],\n            int(d),\n            int(float(input_shape[1] * input_shape[2]) / float(d)),\n            input_shape[3]\n        )\n        
return x\n\n    def forward(self, base_feat, im_info, gt_boxes, num_boxes):\n\n        batch_size = base_feat.size(0)\n\n        # return feature map after convrelu layer\n        rpn_conv1 = F.relu(self.RPN_Conv(base_feat), inplace=True)\n        # get rpn classification score\n        rpn_cls_score = self.RPN_cls_score(rpn_conv1)\n\n        rpn_cls_score_reshape = self.reshape(rpn_cls_score, 2)\n        rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape,dim=1)\n        rpn_cls_prob = self.reshape(rpn_cls_prob_reshape, self.nc_score_out)\n\n        # get rpn offsets to the anchor boxes\n        rpn_bbox_pred = self.RPN_bbox_pred(rpn_conv1)\n\n        # proposal layer\n        cfg_key = 'TRAIN' if self.training else 'TEST'\n\n        rois, output_cls_score= self.RPN_proposal((rpn_cls_prob.data, rpn_bbox_pred.data,\n                                 im_info, cfg_key))\n\n        self.rpn_loss_cls = 0\n        self.rpn_loss_box = 0\n\n        # generating training labels and build the rpn loss\n        if self.training:\n            assert gt_boxes is not None\n\n            rpn_data = self.RPN_anchor_target((rpn_cls_score.data, gt_boxes, im_info, num_boxes))\n\n            # compute classification loss\n            rpn_cls_score = rpn_cls_score_reshape.permute(0, 2, 3, 1).contiguous().view(batch_size, -1, 2)\n            rpn_label = rpn_data[0].view(batch_size, -1)\n\n            rpn_keep = Variable(rpn_label.view(-1).ne(-1).nonzero().view(-1))\n            rpn_cls_score = torch.index_select(rpn_cls_score.view(-1,2), 0, rpn_keep)\n            rpn_label = torch.index_select(rpn_label.view(-1), 0, rpn_keep.data)\n            rpn_label = Variable(rpn_label.long())\n            self.rpn_loss_cls = F.cross_entropy(rpn_cls_score, rpn_label)\n            fg_cnt = torch.sum(rpn_label.data.ne(0))\n\n            rpn_bbox_targets, rpn_bbox_inside_weights, rpn_bbox_outside_weights = rpn_data[1:]\n\n            # compute bbox regression loss\n            
rpn_bbox_inside_weights = Variable(rpn_bbox_inside_weights)\n            rpn_bbox_outside_weights = Variable(rpn_bbox_outside_weights)\n            rpn_bbox_targets = Variable(rpn_bbox_targets)\n\n            self.rpn_loss_box = _smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,\n                                                            rpn_bbox_outside_weights, sigma=3, dim=[1,2,3])\n\n        return rois, output_cls_score, self.rpn_loss_cls, self.rpn_loss_box\n\n\n"
  },
  {
    "path": "lib/model/utils/.gitignore",
    "content": "*.c\n*.cpp\n*.so\n"
  },
  {
    "path": "lib/model/utils/__init__.py",
    "content": ""
  },
  {
    "path": "lib/model/utils/bbox.pyx",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Sergey Karayev\n# --------------------------------------------------------\n\ncimport cython\nimport numpy as np\ncimport numpy as np\n\nDTYPE = np.float\nctypedef np.float_t DTYPE_t\n\ndef bbox_overlaps(np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    return bbox_overlaps_c(boxes, query_boxes)\n\ncdef np.ndarray[DTYPE_t, ndim=2] bbox_overlaps_c(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    \"\"\"\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = boxes.shape[0]\n    cdef unsigned int K = query_boxes.shape[0]\n    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)\n    cdef DTYPE_t iw, ih, box_area\n    cdef DTYPE_t ua\n    cdef unsigned int k, n\n    for k in range(K):\n        box_area = (\n            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n            (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n        )\n        for n in range(N):\n            iw = (\n                min(boxes[n, 2], query_boxes[k, 2]) -\n                max(boxes[n, 0], query_boxes[k, 0]) + 1\n            )\n            if iw > 0:\n                ih = (\n                    min(boxes[n, 3], query_boxes[k, 3]) -\n                    max(boxes[n, 1], query_boxes[k, 1]) + 1\n                )\n                if ih > 0:\n                    ua = float(\n                        (boxes[n, 2] - boxes[n, 0] + 1) *\n                        (boxes[n, 3] - boxes[n, 1] + 1) +\n                        box_area - iw * ih\n                    )\n                    overlaps[n, k] = iw * ih / ua\n  
  return overlaps\n\n\ndef bbox_intersections(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    return bbox_intersections_c(boxes, query_boxes)\n\n\ncdef np.ndarray[DTYPE_t, ndim=2] bbox_intersections_c(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    \"\"\"\n    For each query box compute the intersection ratio covered by boxes\n    ----------\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of intersec between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = boxes.shape[0]\n    cdef unsigned int K = query_boxes.shape[0]\n    cdef np.ndarray[DTYPE_t, ndim=2] intersec = np.zeros((N, K), dtype=DTYPE)\n    cdef DTYPE_t iw, ih, box_area\n    cdef DTYPE_t ua\n    cdef unsigned int k, n\n    for k in range(K):\n        box_area = (\n            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n            (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n        )\n        for n in range(N):\n            iw = (\n                min(boxes[n, 2], query_boxes[k, 2]) -\n                max(boxes[n, 0], query_boxes[k, 0]) + 1\n            )\n            if iw > 0:\n                ih = (\n                    min(boxes[n, 3], query_boxes[k, 3]) -\n                    max(boxes[n, 1], query_boxes[k, 1]) + 1\n                )\n                if ih > 0:\n                    intersec[n, k] = iw * ih / box_area\n    return intersec"
  },
  {
    "path": "lib/model/utils/blob.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\n\"\"\"Blob helper functions.\"\"\"\n\nimport numpy as np\n# from scipy.misc import imread, imresize\nimport cv2\n\ntry:\n    xrange          # Python 2\nexcept NameError:\n    xrange = range  # Python 3\n\n\ndef im_list_to_blob(ims):\n    \"\"\"Convert a list of images into a network input.\n\n    Assumes images are already prepared (means subtracted, BGR order, ...).\n    \"\"\"\n    max_shape = np.array([im.shape for im in ims]).max(axis=0)\n    num_images = len(ims)\n    blob = np.zeros((num_images, max_shape[0], max_shape[1], 3),\n                    dtype=np.float32)\n    for i in xrange(num_images):\n        im = ims[i]\n        blob[i, 0:im.shape[0], 0:im.shape[1], :] = im\n\n    return blob\n\ndef prep_im_for_blob(im, pixel_means, target_size, max_size):\n    \"\"\"Mean subtract and scale an image for use in a blob.\"\"\"\n\n    im = im.astype(np.float32, copy=False)\n    im -= pixel_means\n    # im = im[:, :, ::-1]\n    im_shape = im.shape\n    im_size_min = np.min(im_shape[0:2])\n    im_size_max = np.max(im_shape[0:2])\n    im_scale = float(target_size) / float(im_size_min)\n    # Prevent the biggest axis from being more than MAX_SIZE\n    # if np.round(im_scale * im_size_max) > max_size:\n    #     im_scale = float(max_size) / float(im_size_max)\n    # im = imresize(im, im_scale)\n    im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale,\n                    interpolation=cv2.INTER_LINEAR)\n\n    return im, im_scale\n"
  },
  {
    "path": "lib/model/utils/config.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport numpy as np\n# `pip install easydict` if you don't have it\nfrom easydict import EasyDict as edict\n\n__C = edict()\n# Consumers can get config by:\n#   from fast_rcnn_config import cfg\ncfg = __C\n\n#\n# Training options\n#\n__C.TRAIN = edict()\n\n# Initial learning rate\n__C.TRAIN.LEARNING_RATE = 0.001\n\n__C.TRAIN.META_TYPE = 1\n\n##### set classes ###\n__C.TRAIN.ALLCLASSES_FIRST = ['aeroplane', 'bicycle', 'boat', 'bottle', 'car', 'cat', 'chair', 'diningtable', 'dog',\n                         'horse', 'person', 'pottedplant', 'sheep', 'train', 'tvmonitor', 'bird', 'bus', 'cow',\n                         'motorbike', 'sofa']\n\n__C.TRAIN.BASECLASSES_FIRST = ['aeroplane', 'bicycle', 'boat',\n                             'bottle', 'car', 'cat', 'chair',\n                             'diningtable', 'dog', 'horse',\n                             'person', 'pottedplant',\n                             'sheep', 'train', 'tvmonitor']\n\n__C.TRAIN.ALLCLASSES_SECOND = ['bicycle', 'bird', 'boat', 'bus', 'car', 'cat', 'chair', 'diningtable', 'dog', 'motorbike',\n                         'person', 'pottedplant', 'sheep', 'train', 'tvmonitor', 'aeroplane', 'bottle', 'cow',\n                         'horse', 'sofa']\n\n__C.TRAIN.BASECLASSES_SECOND = ['bicycle', 'bird', 'boat', 'bus',\n                             'car','cat', 'chair','diningtable',\n                             'dog', 'motorbike','person', 'pottedplant',\n                            'sheep','train', 'tvmonitor']\n\n__C.TRAIN.ALLCLASSES_THIRD = ['aeroplane', 'bicycle', 'bird', 'bottle', 'bus', 'car', 'chair', 'cow', 'diningtable',\n                         'dog', 'horse', 'person', 'pottedplant', 'train', 'tvmonitor', 'boat', 'cat', 'motorbike',\n                         'sheep', 'sofa']\n\n__C.TRAIN.BASECLASSES_THIRD = ['aeroplane', 'bicycle', 
'bird','bottle',\n                             'bus', 'car', 'chair','cow', 'diningtable',\n                             'dog', 'horse','person', 'pottedplant', 'train', 'tvmonitor']\n####\n# Momentum\n__C.TRAIN.MOMENTUM = 0.9\n\n# Weight decay, for regularization\n__C.TRAIN.WEIGHT_DECAY = 0.0005\n\n# Factor for reducing the learning rate\n__C.TRAIN.GAMMA = 0.1\n\n# Step size for reducing the learning rate, currently only support one step\n__C.TRAIN.STEPSIZE = [30000]\n\n# Iteration intervals for showing the loss during training, on command line interface\n__C.TRAIN.DISPLAY = 10\n\n# Whether to double the learning rate for bias\n__C.TRAIN.DOUBLE_BIAS = True\n\n# Whether to initialize the weights with truncated normal distribution\n__C.TRAIN.TRUNCATED = False\n\n# Whether to have weight decay on bias as well\n__C.TRAIN.BIAS_DECAY = False\n\n# Whether to add ground truth boxes to the pool when sampling regions\n__C.TRAIN.USE_GT = False\n\n# Whether to use aspect-ratio grouping of training images, introduced merely for saving\n# GPU memory\n__C.TRAIN.ASPECT_GROUPING = False\n\n# The number of snapshots kept, older ones are deleted to save space\n__C.TRAIN.SNAPSHOT_KEPT = 3\n\n# The time interval for saving tensorflow summaries\n__C.TRAIN.SUMMARY_INTERVAL = 180\n\n# Scale to use during training (can list multiple scales)\n# The scale is the pixel size of an image's shortest side\n__C.TRAIN.SCALES = (600,)\n\n# Max pixel size of the longest side of a scaled input image\n__C.TRAIN.MAX_SIZE = 1000\n\n# Trim size for input images to create minibatch\n__C.TRAIN.TRIM_HEIGHT = 600\n__C.TRAIN.TRIM_WIDTH = 600\n\n# Images to use per minibatch\n__C.TRAIN.IMS_PER_BATCH = 1\n\n# Minibatch size (number of regions of interest [ROIs])\n__C.TRAIN.BATCH_SIZE = 128\n\n# Fraction of minibatch that is labeled foreground (i.e. 
class > 0)\n__C.TRAIN.FG_FRACTION = 0.25\n\n# Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n__C.TRAIN.FG_THRESH = 0.5\n\n# Overlap threshold for a ROI to be considered background (class = 0 if\n# overlap in [LO, HI))\n__C.TRAIN.BG_THRESH_HI = 0.5\n__C.TRAIN.BG_THRESH_LO = 0.1\n\n# Use horizontally-flipped images during training?\n__C.TRAIN.USE_FLIPPED = True\n\n# Train bounding-box regressors\n__C.TRAIN.BBOX_REG = True\n__C.TRAIN.RCNN_BBOX_WEIGHT = 1\n\n# Overlap required between a ROI and ground-truth box in order for that ROI to\n# be used as a bounding-box regression training example\n__C.TRAIN.BBOX_THRESH = 0.5\n\n# Iterations between snapshots\n__C.TRAIN.SNAPSHOT_ITERS = 5000\n\n# solver.prototxt specifies the snapshot path prefix, this adds an optional\n# infix to yield the path: <prefix>[_<infix>]_iters_XYZ.caffemodel\n__C.TRAIN.SNAPSHOT_PREFIX = 'res101_faster_rcnn'\n# __C.TRAIN.SNAPSHOT_INFIX = ''\n\n# Use a prefetch thread in roi_data_layer.layer\n# So far I haven't found this useful; likely more engineering work is required\n# __C.TRAIN.USE_PREFETCH = False\n\n# Normalize the targets (subtract empirical mean, divide by empirical stddev)\n__C.TRAIN.BBOX_NORMALIZE_TARGETS = True\n# Deprecated (inside weights)\n__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)\n# Normalize the targets using \"precomputed\" (or made up) means and stdevs\n# (BBOX_NORMALIZE_TARGETS must also be True)\n__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = True\n__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)\n__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)\n\n# Train using these proposals\n__C.TRAIN.PROPOSAL_METHOD = 'gt'\n\n# Make minibatches from images that have similar aspect ratios (i.e. 
both\n# tall and thin or both short and wide) in order to avoid wasting computation\n# on zero-padding.\n\n# Use RPN to detect objects\n__C.TRAIN.HAS_RPN = True\n# IOU >= thresh: positive example\n__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7\n# IOU < thresh: negative example\n__C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3\n# If an anchor satisfies both the positive and negative conditions, set it to negative\n__C.TRAIN.RPN_CLOBBER_POSITIVES = False\n# Max number of foreground examples\n__C.TRAIN.RPN_FG_FRACTION = 0.5\n# Total number of examples\n__C.TRAIN.RPN_BATCHSIZE = 256\n# NMS threshold used on RPN proposals\n__C.TRAIN.RPN_NMS_THRESH = 0.7\n# Number of top scoring boxes to keep before applying NMS to RPN proposals\n__C.TRAIN.RPN_PRE_NMS_TOP_N = 12000\n# Number of top scoring boxes to keep after applying NMS to RPN proposals\n__C.TRAIN.RPN_POST_NMS_TOP_N = 2000\n# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)\n__C.TRAIN.RPN_MIN_SIZE = 8\n# Deprecated (outside weights)\n__C.TRAIN.RPN_BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)\n# Give the positive RPN examples weight of p * 1 / {num positives}\n# and give negatives a weight of (1 - p)\n# Set to -1.0 to use uniform example weighting\n__C.TRAIN.RPN_POSITIVE_WEIGHT = -1.0\n# Whether to use all ground truth bounding boxes for training,\n# For COCO, setting USE_ALL_GT to False will exclude boxes that are flagged as ''iscrowd''\n__C.TRAIN.USE_ALL_GT = True\n\n# Whether to tune the batch normalization parameters during training\n__C.TRAIN.BN_TRAIN = False\n\n#\n# Testing options\n#\n__C.TEST = edict()\n\n# Scale to use during testing (can NOT list multiple scales)\n# The scale is the pixel size of an image's shortest side\n__C.TEST.SCALES = (600,)\n\n# Max pixel size of the longest side of a scaled input image\n__C.TEST.MAX_SIZE = 1000\n\n# Overlap threshold used for non-maximum suppression (suppress boxes with\n# IoU >= this threshold)\n__C.TEST.NMS = 0.3\n\n# Experimental: treat the (K+1) units in the 
cls_score layer as linear\n# predictors (trained, eg, with one-vs-rest SVMs).\n__C.TEST.SVM = False\n\n# Test using bounding-box regressors\n__C.TEST.BBOX_REG = True\n\n# Propose boxes\n__C.TEST.HAS_RPN = False\n\n# Test using these proposals\n__C.TEST.PROPOSAL_METHOD = 'gt'\n\n## NMS threshold used on RPN proposals\n__C.TEST.RPN_NMS_THRESH = 0.7\n## Number of top scoring boxes to keep before apply NMS to RPN proposals\n__C.TEST.RPN_PRE_NMS_TOP_N = 6000\n\n## Number of top scoring boxes to keep after applying NMS to RPN proposals\n__C.TEST.RPN_POST_NMS_TOP_N = 300\n\n# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)\n__C.TEST.RPN_MIN_SIZE = 16\n\n# Testing mode, default to be 'nms', 'top' is slower but better\n# See report for details\n__C.TEST.MODE = 'nms'\n\n# Only useful when TEST.MODE is 'top', specifies the number of top proposals to select\n__C.TEST.RPN_TOP_N = 5000\n\n#\n# ResNet options\n#\n\n__C.RESNET = edict()\n\n# Option to set if max-pooling is appended after crop_and_resize.\n# if true, the region will be resized to a square of 2xPOOLING_SIZE,\n# then 2x2 max-pooling is applied; otherwise the region will be directly\n# resized to a square of POOLING_SIZE\n__C.RESNET.MAX_POOL = False\n\n# Number of fixed blocks during training, by default the FIRST of all 4 blocks is fixed\n# Range: 0 (none) to 4 (all)\n__C.RESNET.FIXED_BLOCKS = 2\n\n#\n# MobileNet options\n#\n\n__C.MOBILENET = edict()\n\n# Whether to regularize the depth-wise filters during training\n__C.MOBILENET.REGU_DEPTH = False\n\n# Number of fixed layers during training, by default the FIRST of all 14 layers is fixed\n# Range: 0 (none) to 12 (all)\n__C.MOBILENET.FIXED_LAYERS = 5\n\n# Weight decay for the mobilenet weights\n__C.MOBILENET.WEIGHT_DECAY = 0.00004\n\n# Depth multiplier\n__C.MOBILENET.DEPTH_MULTIPLIER = 1.\n\n#\n# MISC\n#\n\n# The mapping from image coordinates to feature map coordinates might cause\n# some boxes that are distinct in image 
space to become identical in feature\n# coordinates. If DEDUP_BOXES > 0, then DEDUP_BOXES is used as the scale factor\n# for identifying duplicate boxes.\n# 1/16 is correct for {Alex,Caffe}Net, VGG_CNN_M_1024, and VGG16\n__C.DEDUP_BOXES = 1. / 16.\n\n# Pixel mean values (BGR order) as a (1, 1, 3) array\n# We use the same pixel mean for all networks even though it's not exactly what\n# they were trained with\n__C.PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])\n\n# For reproducibility\n__C.RNG_SEED = 3\n\n# A small number that's used many times\n__C.EPS = 1e-14\n\n# Root directory of project\n__C.ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..', '..'))\n__C.ROOT_DATA = '/'\n# Data directory\n#__C.DATA_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'data'))\n__C.DATA_DIR = './data'\n\n# Name (or path to) the matlab executable\n__C.MATLAB = 'matlab'\n\n# Place outputs under an experiments directory\n__C.EXP_DIR = 'default'\n\n# Use GPU implementation of non-maximum suppression\n__C.USE_GPU_NMS = True\n\n# Default GPU device id\n__C.GPU_ID = 0\n\n__C.POOLING_MODE = 'align'\n\n# Size of the pooled region after RoI pooling\n__C.POOLING_SIZE = 7\n\n# Maximal number of gt rois in an image during Training\n__C.MAX_NUM_GT_BOXES = 20\n\n# Anchor scales for RPN\n__C.ANCHOR_SCALES = [2,4,8,16,32]#[8,16,32]\n\n# Anchor ratios for RPN\n__C.ANCHOR_RATIOS = [0.5,1,2]\n\n# Feature stride for RPN\n__C.FEAT_STRIDE = [16, ]\n\n__C.CUDA = False\n\n__C.CROP_RESIZE_WITH_MAX_POOL = True\n\nimport pdb\ndef get_output_dir(imdb, weights_filename):\n  \"\"\"Return the directory where experimental artifacts are placed.\n  If the directory does not exist, it is created.\n\n  A canonical path is built using the name from an imdb and a network\n  (if not None).\n  \"\"\"\n  outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'output', __C.EXP_DIR, imdb.name))\n  if weights_filename is None:\n    weights_filename = 'default'\n  outdir = osp.join(outdir, weights_filename)\n  if not 
os.path.exists(outdir):\n    os.makedirs(outdir)\n  return outdir\n\n\ndef get_output_tb_dir(imdb, weights_filename):\n  \"\"\"Return the directory where tensorflow summaries are placed.\n  If the directory does not exist, it is created.\n\n  A canonical path is built using the name from an imdb and a network\n  (if not None).\n  \"\"\"\n  outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'tensorboard', __C.EXP_DIR, imdb.name))\n  if weights_filename is None:\n    weights_filename = 'default'\n  outdir = osp.join(outdir, weights_filename)\n  if not os.path.exists(outdir):\n    os.makedirs(outdir)\n  return outdir\n\n\ndef _merge_a_into_b(a, b):\n  \"\"\"Merge config dictionary a into config dictionary b, clobbering the\n  options in b whenever they are also specified in a.\n  \"\"\"\n  if type(a) is not edict:\n    return\n\n  for k, v in a.items():\n    # a must specify keys that are in b\n    if k not in b:\n      raise KeyError('{} is not a valid config key'.format(k))\n\n    # the types must match, too\n    old_type = type(b[k])\n    if old_type is not type(v):\n      if isinstance(b[k], np.ndarray):\n        v = np.array(v, dtype=b[k].dtype)\n      else:\n        raise ValueError(('Type mismatch ({} vs. 
{}) '\n                          'for config key: {}').format(type(b[k]),\n                                                       type(v), k))\n\n    # recursively merge dicts\n    if type(v) is edict:\n      try:\n        _merge_a_into_b(a[k], b[k])\n      except:\n        print(('Error under config key: {}'.format(k)))\n        raise\n    else:\n      b[k] = v\n\n\ndef cfg_from_file(filename):\n  \"\"\"Load a config file and merge it into the default options.\"\"\"\n  import yaml\n  with open(filename, 'r') as f:\n    yaml_cfg = edict(yaml.load(f))\n\n  _merge_a_into_b(yaml_cfg, __C)\n\n\ndef cfg_from_list(cfg_list):\n  \"\"\"Set config keys via list (e.g., from command line).\"\"\"\n  from ast import literal_eval\n  assert len(cfg_list) % 2 == 0\n  for k, v in zip(cfg_list[0::2], cfg_list[1::2]):\n    key_list = k.split('.')\n    d = __C\n    for subkey in key_list[:-1]:\n      assert subkey in d\n      d = d[subkey]\n    subkey = key_list[-1]\n    assert subkey in d\n    try:\n      value = literal_eval(v)\n    except:\n      # handle the case when v is a string literal\n      value = v\n    assert type(value) == type(d[subkey]), \\\n      'type {} does not match original type {}'.format(\n        type(value), type(d[subkey]))\n    d[subkey] = value\n"
  },
  {
    "path": "lib/model/utils/net_utils.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nimport numpy as np\nimport torchvision.models as models\nfrom model.utils.config import cfg\nfrom model.roi_crop.functions.roi_crop import RoICropFunction\nimport cv2\nimport pdb\nimport random\n\ndef save_net(fname, net):\n    import h5py\n    h5f = h5py.File(fname, mode='w')\n    for k, v in net.state_dict().items():\n        h5f.create_dataset(k, data=v.cpu().numpy())\n\ndef load_net(fname, net):\n    import h5py\n    h5f = h5py.File(fname, mode='r')\n    for k, v in net.state_dict().items():\n        param = torch.from_numpy(np.asarray(h5f[k]))\n        v.copy_(param)\n\ndef weights_normal_init(model, dev=0.01):\n    if isinstance(model, list):\n        for m in model:\n            weights_normal_init(m, dev)\n    else:\n        for m in model.modules():\n            if isinstance(m, nn.Conv2d):\n                m.weight.data.normal_(0.0, dev)\n            elif isinstance(m, nn.Linear):\n                m.weight.data.normal_(0.0, dev)\n\n\ndef clip_gradient(model, clip_norm):\n    \"\"\"Computes a gradient clipping coefficient based on gradient norm.\"\"\"\n    totalnorm = 0\n    for p in model.parameters():\n        if p.requires_grad:\n            modulenorm = p.grad.data.norm()\n            totalnorm += modulenorm ** 2\n    totalnorm = np.sqrt(totalnorm)\n\n    norm = clip_norm / max(totalnorm, clip_norm)\n    for p in model.parameters():\n        if p.requires_grad:\n            p.grad.mul_(norm)\n\ndef vis_detections(im, class_name, dets, thresh=0.8):\n    \"\"\"Visual debugging of detections.\"\"\"\n    class_name = class_name.split('.')[0]\n    for i in range(np.minimum(10, dets.shape[0])):\n        bbox = tuple(int(np.round(x)) for x in dets[i, :4])\n        score = dets[i, -1]\n        if score > thresh:\n            cv2.rectangle(im, bbox[0:2], bbox[2:4], (255, 255, 0), 1)\n            text_size = cv2.getTextSize('%s: %.3f' % 
(class_name, score), cv2.FONT_HERSHEY_PLAIN, 0.8, 1)\n            point = (bbox[0] + text_size[0][0], bbox[1] + text_size[0][1] + text_size[1])\n            cv2.rectangle(im, bbox[0:2], point, (255, 255, 0), -1)\n            cv2.putText(im, '%s: %.3f' % (class_name, score), (bbox[0], bbox[1] + 10), cv2.FONT_HERSHEY_PLAIN,\n                        0.8, (0, 0, 255), thickness=1)\n            # cv2.putText(im, '%s' % (class_name), (bbox[0], bbox[1] + 10), cv2.FONT_HERSHEY_PLAIN,\n            #             0.8, (0, 0, 255), thickness=1)\n    return im\n\ndef vis_detections_label_only(im, class_name, dets, thresh=0.8):\n    \"\"\"Visual debugging of detections.\"\"\"\n    class_name = class_name.split('.')[0]\n    for i in range(np.minimum(10, dets.shape[0])):\n        bbox = tuple(int(np.round(x)) for x in dets[i, :4])\n        score = dets[i, -1]\n        if score > thresh:\n            cv2.rectangle(im, bbox[0:2], bbox[2:4], (0, 255, 0), 2)\n            text_size = cv2.getTextSize('%s' % (class_name), cv2.FONT_HERSHEY_COMPLEX, 0.5, 1)\n            point = (bbox[0] + text_size[0][0], bbox[1] + text_size[0][1] + text_size[1])\n            cv2.rectangle(im, bbox[0:2], point, (0, 255, 0), -1)\n            cv2.putText(im, '%s' % (class_name), (bbox[0], bbox[1] + 10), cv2.FONT_HERSHEY_COMPLEX,\n                        0.5, (0, 0, 0), thickness=1)\n            # cv2.putText(im, '%s' % (class_name), (bbox[0], bbox[1] + 10), cv2.FONT_HERSHEY_PLAIN,\n            #             0.8, (0, 0, 255), thickness=1)\n    return im\n\n\n\ndef adjust_learning_rate(optimizer, decay=0.1):\n    \"\"\"Scale the learning rate of every parameter group by the given decay factor.\"\"\"\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = decay * param_group['lr']\n\n\ndef save_checkpoint(state, filename):\n    torch.save(state, filename)\n\ndef _smooth_l1_loss(bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights, sigma=1.0, dim=[1]):\n    \n    sigma_2 = sigma ** 
2\n    box_diff = bbox_pred - bbox_targets\n    in_box_diff = bbox_inside_weights * box_diff\n    abs_in_box_diff = torch.abs(in_box_diff)\n    smoothL1_sign = (abs_in_box_diff < 1. / sigma_2).detach().float()\n    in_loss_box = torch.pow(in_box_diff, 2) * (sigma_2 / 2.) * smoothL1_sign \\\n                  + (abs_in_box_diff - (0.5 / sigma_2)) * (1. - smoothL1_sign)\n    out_loss_box = bbox_outside_weights * in_loss_box\n    loss_box = out_loss_box\n    for i in sorted(dim, reverse=True):\n      loss_box = loss_box.sum(i)\n    loss_box = loss_box.mean()\n    return loss_box\n\ndef _crop_pool_layer(bottom, rois, max_pool=True):\n    # code modified from \n    # https://github.com/ruotianluo/pytorch-faster-rcnn\n    # implement it using stn\n    # box to affine\n    # input (x1,y1,x2,y2)\n    \"\"\"\n    [  x2-x1             x1 + x2 - W + 1  ]\n    [  -----      0      ---------------  ]\n    [  W - 1                  W - 1       ]\n    [                                     ]\n    [           y2-y1    y1 + y2 - H + 1  ]\n    [    0      -----    ---------------  ]\n    [           H - 1         H - 1      ]\n    \"\"\"\n    rois = rois.detach()\n    batch_size = bottom.size(0)\n    D = bottom.size(1)\n    H = bottom.size(2)\n    W = bottom.size(3)\n    roi_per_batch = rois.size(0) / batch_size\n    x1 = rois[:, 1::4] / 16.0\n    y1 = rois[:, 2::4] / 16.0\n    x2 = rois[:, 3::4] / 16.0\n    y2 = rois[:, 4::4] / 16.0\n\n    height = bottom.size(2)\n    width = bottom.size(3)\n\n    # affine theta\n    zero = Variable(rois.data.new(rois.size(0), 1).zero_())\n    theta = torch.cat([\\\n      (x2 - x1) / (width - 1),\n      zero,\n      (x1 + x2 - width + 1) / (width - 1),\n      zero,\n      (y2 - y1) / (height - 1),\n      (y1 + y2 - height + 1) / (height - 1)], 1).view(-1, 2, 3)\n\n    if max_pool:\n      pre_pool_size = cfg.POOLING_SIZE * 2\n      grid = F.affine_grid(theta, torch.Size((rois.size(0), 1, pre_pool_size, pre_pool_size)))\n      bottom = bottom.view(1, 
batch_size, D, H, W).contiguous().expand(roi_per_batch, batch_size, D, H, W)\\\n                                                                .contiguous().view(-1, D, H, W)\n      crops = F.grid_sample(bottom, grid)\n      crops = F.max_pool2d(crops, 2, 2)\n    else:\n      grid = F.affine_grid(theta, torch.Size((rois.size(0), 1, cfg.POOLING_SIZE, cfg.POOLING_SIZE)))\n      bottom = bottom.view(1, batch_size, D, H, W).contiguous().expand(roi_per_batch, batch_size, D, H, W)\\\n                                                                .contiguous().view(-1, D, H, W)\n      crops = F.grid_sample(bottom, grid)\n    \n    return crops, grid\n\ndef _affine_grid_gen(rois, input_size, grid_size):\n\n    rois = rois.detach()\n    x1 = rois[:, 1::4] / 16.0\n    y1 = rois[:, 2::4] / 16.0\n    x2 = rois[:, 3::4] / 16.0\n    y2 = rois[:, 4::4] / 16.0\n\n    height = input_size[0]\n    width = input_size[1]\n\n    zero = Variable(rois.data.new(rois.size(0), 1).zero_())\n    theta = torch.cat([\\\n      (x2 - x1) / (width - 1),\n      zero,\n      (x1 + x2 - width + 1) / (width - 1),\n      zero,\n      (y2 - y1) / (height - 1),\n      (y1 + y2 - height + 1) / (height - 1)], 1).view(-1, 2, 3)\n\n    grid = F.affine_grid(theta, torch.Size((rois.size(0), 1, grid_size, grid_size)))\n\n    return grid\n\ndef _affine_theta(rois, input_size):\n\n    rois = rois.detach()\n    x1 = rois[:, 1::4] / 16.0\n    y1 = rois[:, 2::4] / 16.0\n    x2 = rois[:, 3::4] / 16.0\n    y2 = rois[:, 4::4] / 16.0\n\n    height = input_size[0]\n    width = input_size[1]\n\n    zero = Variable(rois.data.new(rois.size(0), 1).zero_())\n\n    # theta = torch.cat([\\\n    #   (x2 - x1) / (width - 1),\n    #   zero,\n    #   (x1 + x2 - width + 1) / (width - 1),\n    #   zero,\n    #   (y2 - y1) / (height - 1),\n    #   (y1 + y2 - height + 1) / (height - 1)], 1).view(-1, 2, 3)\n\n    theta = torch.cat([\\\n      (y2 - y1) / (height - 1),\n      zero,\n      (y1 + y2 - height + 1) / (height - 1),\n      
zero,\n      (x2 - x1) / (width - 1),\n      (x1 + x2 - width + 1) / (width - 1)], 1).view(-1, 2, 3)\n\n    return theta\n\ndef compare_grid_sample():\n    # do gradcheck\n    N = random.randint(1, 8)\n    C = 2 # random.randint(1, 8)\n    H = 5 # random.randint(1, 8)\n    W = 4 # random.randint(1, 8)\n    input = Variable(torch.randn(N, C, H, W).cuda(), requires_grad=True)\n    input_p = input.clone().data.contiguous()\n\n    grid = Variable(torch.randn(N, H, W, 2).cuda(), requires_grad=True)\n    grid_clone = grid.clone().contiguous()\n\n    out_official = F.grid_sample(input, grid)\n    grad_outputs = Variable(torch.rand(out_official.size()).cuda())\n    grad_outputs_clone = grad_outputs.clone().contiguous()\n    grad_inputs = torch.autograd.grad(out_official, (input, grid), grad_outputs.contiguous())\n    grad_input_off = grad_inputs[0]\n\n\n    crf = RoICropFunction()\n    grid_yx = torch.stack([grid_clone.data[:,:,:,1], grid_clone.data[:,:,:,0]], 3).contiguous().cuda()\n    out_stn = crf.forward(input_p, grid_yx)\n    grad_inputs = crf.backward(grad_outputs_clone.data)\n    grad_input_stn = grad_inputs[0]\n    pdb.set_trace()\n\n    delta = (grad_input_off.data - grad_input_stn).sum()\n"
  },
  {
    "path": "lib/pycocotools/UPSTREAM_REV",
    "content": "https://github.com/pdollar/coco/commit/3ac47c77ebd5a1ed4254a98b7fbf2ef4765a3574\n"
  },
  {
    "path": "lib/pycocotools/__init__.py",
    "content": "__author__ = 'tylin'\n"
  },
  {
    "path": "lib/pycocotools/_mask.c",
    "content": "/* Generated by Cython 0.29.12 */\n\n#define PY_SSIZE_T_CLEAN\n#include \"Python.h\"\n#ifndef Py_PYTHON_H\n    #error Python headers needed to compile C extensions, please install development version of Python.\n#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)\n    #error Cython requires Python 2.6+ or Python 3.3+.\n#else\n#define CYTHON_ABI \"0_29_12\"\n#define CYTHON_HEX_VERSION 0x001D0CF0\n#define CYTHON_FUTURE_DIVISION 0\n#include <stddef.h>\n#ifndef offsetof\n  #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )\n#endif\n#if !defined(WIN32) && !defined(MS_WINDOWS)\n  #ifndef __stdcall\n    #define __stdcall\n  #endif\n  #ifndef __cdecl\n    #define __cdecl\n  #endif\n  #ifndef __fastcall\n    #define __fastcall\n  #endif\n#endif\n#ifndef DL_IMPORT\n  #define DL_IMPORT(t) t\n#endif\n#ifndef DL_EXPORT\n  #define DL_EXPORT(t) t\n#endif\n#define __PYX_COMMA ,\n#ifndef HAVE_LONG_LONG\n  #if PY_VERSION_HEX >= 0x02070000\n    #define HAVE_LONG_LONG\n  #endif\n#endif\n#ifndef PY_LONG_LONG\n  #define PY_LONG_LONG LONG_LONG\n#endif\n#ifndef Py_HUGE_VAL\n  #define Py_HUGE_VAL HUGE_VAL\n#endif\n#ifdef PYPY_VERSION\n  #define CYTHON_COMPILING_IN_PYPY 1\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #undef CYTHON_USE_TYPE_SLOTS\n  #define CYTHON_USE_TYPE_SLOTS 0\n  #undef CYTHON_USE_PYTYPE_LOOKUP\n  #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #if PY_VERSION_HEX < 0x03050000\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_INTERNALS\n  #define CYTHON_USE_UNICODE_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #undef 
CYTHON_AVOID_BORROWED_REFS\n  #define CYTHON_AVOID_BORROWED_REFS 1\n  #undef CYTHON_ASSUME_SAFE_MACROS\n  #define CYTHON_ASSUME_SAFE_MACROS 0\n  #undef CYTHON_UNPACK_METHODS\n  #define CYTHON_UNPACK_METHODS 0\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n  #undef CYTHON_PEP489_MULTI_PHASE_INIT\n  #define CYTHON_PEP489_MULTI_PHASE_INIT 0\n  #undef CYTHON_USE_TP_FINALIZE\n  #define CYTHON_USE_TP_FINALIZE 0\n  #undef CYTHON_USE_DICT_VERSIONS\n  #define CYTHON_USE_DICT_VERSIONS 0\n  #undef CYTHON_USE_EXC_INFO_STACK\n  #define CYTHON_USE_EXC_INFO_STACK 0\n#elif defined(PYSTON_VERSION)\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 1\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #undef CYTHON_USE_PYTYPE_LOOKUP\n  #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n  #undef CYTHON_PEP489_MULTI_PHASE_INIT\n  #define CYTHON_PEP489_MULTI_PHASE_INIT 0\n  #undef CYTHON_USE_TP_FINALIZE\n  #define CYTHON_USE_TP_FINALIZE 0\n  #undef CYTHON_USE_DICT_VERSIONS\n  #define CYTHON_USE_DICT_VERSIONS 0\n  #undef 
CYTHON_USE_EXC_INFO_STACK\n  #define CYTHON_USE_EXC_INFO_STACK 0\n#else\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 1\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYTYPE_LOOKUP\n    #define CYTHON_USE_PYTYPE_LOOKUP 0\n  #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)\n    #define CYTHON_USE_PYTYPE_LOOKUP 1\n  #endif\n  #if PY_MAJOR_VERSION < 3\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYLONG_INTERNALS\n    #define CYTHON_USE_PYLONG_INTERNALS 0\n  #elif !defined(CYTHON_USE_PYLONG_INTERNALS)\n    #define CYTHON_USE_PYLONG_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_PYLIST_INTERNALS\n    #define CYTHON_USE_PYLIST_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #if PY_VERSION_HEX < 0x030300F0\n    #undef CYTHON_USE_UNICODE_WRITER\n    #define CYTHON_USE_UNICODE_WRITER 0\n  #elif !defined(CYTHON_USE_UNICODE_WRITER)\n    #define CYTHON_USE_UNICODE_WRITER 1\n  #endif\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #ifndef CYTHON_FAST_THREAD_STATE\n    #define CYTHON_FAST_THREAD_STATE 1\n  #endif\n  #ifndef CYTHON_FAST_PYCALL\n    #define CYTHON_FAST_PYCALL 1\n  #endif\n  #ifndef CYTHON_PEP489_MULTI_PHASE_INIT\n    #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)\n  #endif\n  #ifndef CYTHON_USE_TP_FINALIZE\n    #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)\n  #endif\n  #ifndef CYTHON_USE_DICT_VERSIONS\n    #define 
CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1)\n  #endif\n  #ifndef CYTHON_USE_EXC_INFO_STACK\n    #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)\n  #endif\n#endif\n#if !defined(CYTHON_FAST_PYCCALL)\n#define CYTHON_FAST_PYCCALL  (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)\n#endif\n#if CYTHON_USE_PYLONG_INTERNALS\n  #include \"longintrepr.h\"\n  #undef SHIFT\n  #undef BASE\n  #undef MASK\n  #ifdef SIZEOF_VOID_P\n    enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };\n  #endif\n#endif\n#ifndef __has_attribute\n  #define __has_attribute(x) 0\n#endif\n#ifndef __has_cpp_attribute\n  #define __has_cpp_attribute(x) 0\n#endif\n#ifndef CYTHON_RESTRICT\n  #if defined(__GNUC__)\n    #define CYTHON_RESTRICT __restrict__\n  #elif defined(_MSC_VER) && _MSC_VER >= 1400\n    #define CYTHON_RESTRICT __restrict\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_RESTRICT restrict\n  #else\n    #define CYTHON_RESTRICT\n  #endif\n#endif\n#ifndef CYTHON_UNUSED\n# if defined(__GNUC__)\n#   if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))\n#     define CYTHON_UNUSED __attribute__ ((__unused__))\n#   else\n#     define CYTHON_UNUSED\n#   endif\n# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))\n#   define CYTHON_UNUSED __attribute__ ((__unused__))\n# else\n#   define CYTHON_UNUSED\n# endif\n#endif\n#ifndef CYTHON_MAYBE_UNUSED_VAR\n#  if defined(__cplusplus)\n     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }\n#  else\n#    define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)\n#  endif\n#endif\n#ifndef CYTHON_NCP_UNUSED\n# if CYTHON_COMPILING_IN_CPYTHON\n#  define CYTHON_NCP_UNUSED\n# else\n#  define CYTHON_NCP_UNUSED CYTHON_UNUSED\n# endif\n#endif\n#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)\n#ifdef _MSC_VER\n    #ifndef _MSC_STDINT_H_\n        #if _MSC_VER < 1300\n           
typedef unsigned char     uint8_t;\n           typedef unsigned int      uint32_t;\n        #else\n           typedef unsigned __int8   uint8_t;\n           typedef unsigned __int32  uint32_t;\n        #endif\n    #endif\n#else\n   #include <stdint.h>\n#endif\n#ifndef CYTHON_FALLTHROUGH\n  #if defined(__cplusplus) && __cplusplus >= 201103L\n    #if __has_cpp_attribute(fallthrough)\n      #define CYTHON_FALLTHROUGH [[fallthrough]]\n    #elif __has_cpp_attribute(clang::fallthrough)\n      #define CYTHON_FALLTHROUGH [[clang::fallthrough]]\n    #elif __has_cpp_attribute(gnu::fallthrough)\n      #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]\n    #endif\n  #endif\n  #ifndef CYTHON_FALLTHROUGH\n    #if __has_attribute(fallthrough)\n      #define CYTHON_FALLTHROUGH __attribute__((fallthrough))\n    #else\n      #define CYTHON_FALLTHROUGH\n    #endif\n  #endif\n  #if defined(__clang__ ) && defined(__apple_build_version__)\n    #if __apple_build_version__ < 7000000\n      #undef  CYTHON_FALLTHROUGH\n      #define CYTHON_FALLTHROUGH\n    #endif\n  #endif\n#endif\n\n#ifndef CYTHON_INLINE\n  #if defined(__clang__)\n    #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))\n  #elif defined(__GNUC__)\n    #define CYTHON_INLINE __inline__\n  #elif defined(_MSC_VER)\n    #define CYTHON_INLINE __inline\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_INLINE inline\n  #else\n    #define CYTHON_INLINE\n  #endif\n#endif\n\n#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)\n  #define Py_OptimizeFlag 0\n#endif\n#define __PYX_BUILD_PY_SSIZE_T \"n\"\n#define CYTHON_FORMAT_SSIZE_T \"z\"\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_BUILTIN_MODULE_NAME \"__builtin__\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyClass_Type\n#else\n  #define 
__Pyx_BUILTIN_MODULE_NAME \"builtins\"\n#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n#else\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n#endif\n  #define __Pyx_DefaultClassType PyType_Type\n#endif\n#ifndef Py_TPFLAGS_CHECKTYPES\n  #define Py_TPFLAGS_CHECKTYPES 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_INDEX\n  #define Py_TPFLAGS_HAVE_INDEX 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_NEWBUFFER\n  #define Py_TPFLAGS_HAVE_NEWBUFFER 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_FINALIZE\n  #define Py_TPFLAGS_HAVE_FINALIZE 0\n#endif\n#ifndef METH_STACKLESS\n  #define METH_STACKLESS 0\n#endif\n#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)\n  #ifndef METH_FASTCALL\n     #define METH_FASTCALL 0x80\n  #endif\n  typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);\n  typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,\n                                                          Py_ssize_t nargs, PyObject *kwnames);\n#else\n  #define __Pyx_PyCFunctionFast _PyCFunctionFast\n  #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords\n#endif\n#if CYTHON_FAST_PYCCALL\n#define __Pyx_PyFastCFunction_Check(func)\\\n    ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))\n#else\n#define __Pyx_PyFastCFunction_Check(func) 0\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)\n  #define PyObject_Malloc(s)   PyMem_Malloc(s)\n  #define PyObject_Free(p)     PyMem_Free(p)\n  #define PyObject_Realloc(p)  PyMem_Realloc(p)\n#endif\n#if 
CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1\n  #define PyMem_RawMalloc(n)           PyMem_Malloc(n)\n  #define PyMem_RawRealloc(p, n)       PyMem_Realloc(p, n)\n  #define PyMem_RawFree(p)             PyMem_Free(p)\n#endif\n#if CYTHON_COMPILING_IN_PYSTON\n  #define __Pyx_PyCode_HasFreeVars(co)  PyCode_HasFreeVars(co)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)\n#else\n  #define __Pyx_PyCode_HasFreeVars(co)  (PyCode_GetNumFree(co) > 0)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno)  (frame)->f_lineno = (lineno)\n#endif\n#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000\n  #define __Pyx_PyThreadState_Current PyThreadState_GET()\n#elif PY_VERSION_HEX >= 0x03060000\n  #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()\n#elif PY_VERSION_HEX >= 0x03000000\n  #define __Pyx_PyThreadState_Current PyThreadState_GET()\n#else\n  #define __Pyx_PyThreadState_Current _PyThreadState_Current\n#endif\n#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)\n#include \"pythread.h\"\n#define Py_tss_NEEDS_INIT 0\ntypedef int Py_tss_t;\nstatic CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {\n  *key = PyThread_create_key();\n  return 0;\n}\nstatic CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {\n  Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));\n  *key = Py_tss_NEEDS_INIT;\n  return key;\n}\nstatic CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {\n  PyObject_Free(key);\n}\nstatic CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {\n  return *key != Py_tss_NEEDS_INIT;\n}\nstatic CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {\n  PyThread_delete_key(*key);\n  *key = Py_tss_NEEDS_INIT;\n}\nstatic CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {\n  return PyThread_set_key_value(*key, value);\n}\nstatic CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {\n  return 
PyThread_get_key_value(*key);\n}\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)\n#define __Pyx_PyDict_NewPresized(n)  ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))\n#else\n#define __Pyx_PyDict_NewPresized(n)  PyDict_New()\n#endif\n#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_TrueDivide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceTrueDivide(x,y)\n#else\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_Divide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceDivide(x,y)\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS\n#define __Pyx_PyDict_GetItemStr(dict, name)  _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)\n#else\n#define __Pyx_PyDict_GetItemStr(dict, name)  PyDict_GetItem(dict, name)\n#endif\n#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)\n  #define CYTHON_PEP393_ENABLED 1\n  #define __Pyx_PyUnicode_READY(op)       (likely(PyUnicode_IS_READY(op)) ?\\\n                                              0 : _PyUnicode_Ready((PyObject *)(op)))\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_LENGTH(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   PyUnicode_MAX_CHAR_VALUE(u)\n  #define __Pyx_PyUnicode_KIND(u)         PyUnicode_KIND(u)\n  #define __Pyx_PyUnicode_DATA(u)         PyUnicode_DATA(u)\n  #define __Pyx_PyUnicode_READ(k, d, i)   PyUnicode_READ(k, d, i)\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  PyUnicode_WRITE(k, d, i, ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != (likely(PyUnicode_IS_READY(u)) ? 
PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))\n#else\n  #define CYTHON_PEP393_ENABLED 0\n  #define PyUnicode_1BYTE_KIND  1\n  #define PyUnicode_2BYTE_KIND  2\n  #define PyUnicode_4BYTE_KIND  4\n  #define __Pyx_PyUnicode_READY(op)       (0)\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_SIZE(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)\n  #define __Pyx_PyUnicode_KIND(u)         (sizeof(Py_UNICODE))\n  #define __Pyx_PyUnicode_DATA(u)         ((void*)PyUnicode_AS_UNICODE(u))\n  #define __Pyx_PyUnicode_READ(k, d, i)   ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  (((void)(k)), ((Py_UNICODE*)d)[i] = ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != PyUnicode_GET_SIZE(u))\n#endif\n#if CYTHON_COMPILING_IN_PYPY\n  #define __Pyx_PyUnicode_Concat(a, b)      PyNumber_Add(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  PyNumber_Add(a, b)\n#else\n  #define __Pyx_PyUnicode_Concat(a, b)      PyUnicode_Concat(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\\\n      PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)\n  #define PyUnicode_Contains(u, s)  PySequence_Contains(u, s)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)\n  #define PyByteArray_Check(obj)  PyObject_TypeCheck(obj, &PyByteArray_Type)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)\n  #define PyObject_Format(obj, fmt)  PyObject_CallMethod(obj, \"__format__\", \"O\", fmt)\n#endif\n#define __Pyx_PyString_FormatSafe(a, b)   ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))\n#define __Pyx_PyUnicode_FormatSafe(a, b)  ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyString_Format(a, b)  PyUnicode_Format(a, b)\n#else\n  #define __Pyx_PyString_Format(a, b)  PyString_Format(a, b)\n#endif\n#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)\n  #define PyObject_ASCII(o)            PyObject_Repr(o)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBaseString_Type            PyUnicode_Type\n  #define PyStringObject               PyUnicodeObject\n  #define PyString_Type                PyUnicode_Type\n  #define PyString_Check               PyUnicode_Check\n  #define PyString_CheckExact          PyUnicode_CheckExact\n  #define PyObject_Unicode             PyObject_Str\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)\n  #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)\n#else\n  #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))\n  #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))\n#endif\n#ifndef PySet_CheckExact\n  #define PySet_CheckExact(obj)        (Py_TYPE(obj) == &PySet_Type)\n#endif\n#if CYTHON_ASSUME_SAFE_MACROS\n  #define __Pyx_PySequence_SIZE(seq)  Py_SIZE(seq)\n#else\n  #define __Pyx_PySequence_SIZE(seq)  PySequence_Size(seq)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyIntObject                  PyLongObject\n  #define PyInt_Type                   PyLong_Type\n  #define PyInt_Check(op)              PyLong_Check(op)\n  #define PyInt_CheckExact(op)         PyLong_CheckExact(op)\n  #define PyInt_FromString             PyLong_FromString\n  #define PyInt_FromUnicode            PyLong_FromUnicode\n  #define PyInt_FromLong               PyLong_FromLong\n  #define PyInt_FromSize_t             PyLong_FromSize_t\n 
 #define PyInt_FromSsize_t            PyLong_FromSsize_t\n  #define PyInt_AsLong                 PyLong_AsLong\n  #define PyInt_AS_LONG                PyLong_AS_LONG\n  #define PyInt_AsSsize_t              PyLong_AsSsize_t\n  #define PyInt_AsUnsignedLongMask     PyLong_AsUnsignedLongMask\n  #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask\n  #define PyNumber_Int                 PyNumber_Long\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBoolObject                 PyLongObject\n#endif\n#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY\n  #ifndef PyUnicode_InternFromString\n    #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)\n  #endif\n#endif\n#if PY_VERSION_HEX < 0x030200A4\n  typedef long Py_hash_t;\n  #define __Pyx_PyInt_FromHash_t PyInt_FromLong\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsLong\n#else\n  #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsSsize_t\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
PyMethod_New(func, self) : (Py_INCREF(func), func))\n#else\n  #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)\n#endif\n#if CYTHON_USE_ASYNC_SLOTS\n  #if PY_VERSION_HEX >= 0x030500B1\n    #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods\n    #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)\n  #else\n    #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))\n  #endif\n#else\n  #define __Pyx_PyType_AsAsync(obj) NULL\n#endif\n#ifndef __Pyx_PyAsyncMethodsStruct\n    typedef struct {\n        unaryfunc am_await;\n        unaryfunc am_aiter;\n        unaryfunc am_anext;\n    } __Pyx_PyAsyncMethodsStruct;\n#endif\n\n#if defined(WIN32) || defined(MS_WINDOWS)\n  #define _USE_MATH_DEFINES\n#endif\n#include <math.h>\n#ifdef NAN\n#define __PYX_NAN() ((float) NAN)\n#else\nstatic CYTHON_INLINE float __PYX_NAN() {\n  float value;\n  memset(&value, 0xFF, sizeof(value));\n  return value;\n}\n#endif\n#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)\n#define __Pyx_truncl trunc\n#else\n#define __Pyx_truncl truncl\n#endif\n\n\n#define __PYX_ERR(f_index, lineno, Ln_error) \\\n{ \\\n  __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \\\n}\n\n#ifndef __PYX_EXTERN_C\n  #ifdef __cplusplus\n    #define __PYX_EXTERN_C extern \"C\"\n  #else\n    #define __PYX_EXTERN_C extern\n  #endif\n#endif\n\n#define __PYX_HAVE__pycocotools___mask\n#define __PYX_HAVE_API__pycocotools___mask\n/* Early includes */\n#include <string.h>\n#include <stdio.h>\n#include \"numpy/arrayobject.h\"\n#include \"numpy/ufuncobject.h\"\n#include <stdlib.h>\n#include \"maskApi.h\"\n#ifdef _OPENMP\n#include <omp.h>\n#endif /* _OPENMP */\n\n#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)\n#define CYTHON_WITHOUT_ASSERTIONS\n#endif\n\ntypedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;\n                const char is_unicode; const char 
is_str; const char intern; } __Pyx_StringTabEntry;\n\n#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)\n#define __PYX_DEFAULT_STRING_ENCODING \"\"\n#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString\n#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#define __Pyx_uchar_cast(c) ((unsigned char)c)\n#define __Pyx_long_cast(x) ((long)x)\n#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\\\n    (sizeof(type) < sizeof(Py_ssize_t))  ||\\\n    (sizeof(type) > sizeof(Py_ssize_t) &&\\\n          likely(v < (type)PY_SSIZE_T_MAX ||\\\n                 v == (type)PY_SSIZE_T_MAX)  &&\\\n          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\\\n                                v == (type)PY_SSIZE_T_MIN)))  ||\\\n    (sizeof(type) == sizeof(Py_ssize_t) &&\\\n          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\\\n                               v == (type)PY_SSIZE_T_MAX)))  )\nstatic CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {\n    return (size_t) i < (size_t) limit;\n}\n#if defined (__cplusplus) && __cplusplus >= 201103L\n    #include <cstdlib>\n    #define __Pyx_sst_abs(value) std::abs(value)\n#elif SIZEOF_INT >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) abs(value)\n#elif SIZEOF_LONG >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) labs(value)\n#elif defined (_MSC_VER)\n    #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define __Pyx_sst_abs(value) llabs(value)\n#elif defined (__GNUC__)\n    #define __Pyx_sst_abs(value) __builtin_llabs(value)\n#else\n    #define __Pyx_sst_abs(value) ((value<0) ? 
-value : value)\n#endif\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);\n#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))\n#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)\n#define __Pyx_PyBytes_FromString        PyBytes_FromString\n#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);\n#if PY_MAJOR_VERSION < 3\n    #define __Pyx_PyStr_FromString        __Pyx_PyBytes_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#else\n    #define __Pyx_PyStr_FromString        __Pyx_PyUnicode_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize\n#endif\n#define __Pyx_PyBytes_AsWritableString(s)     ((char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsWritableSString(s)    ((signed char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsWritableUString(s)    ((unsigned char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsString(s)     ((const char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsSString(s)    ((const signed char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyBytes_AsUString(s)    ((const unsigned char*) PyBytes_AS_STRING(s))\n#define __Pyx_PyObject_AsWritableString(s)    ((char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsWritableSString(s)    ((signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsWritableUString(s)    ((unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsSString(s)    ((const signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsUString(s)    ((const unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_FromCString(s)  __Pyx_PyObject_FromString((const char*)s)\n#define __Pyx_PyBytes_FromCString(s)  
 __Pyx_PyBytes_FromString((const char*)s)\n#define __Pyx_PyByteArray_FromCString(s)   __Pyx_PyByteArray_FromString((const char*)s)\n#define __Pyx_PyStr_FromCString(s)     __Pyx_PyStr_FromString((const char*)s)\n#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)\nstatic CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {\n    const Py_UNICODE *u_end = u;\n    while (*u_end++) ;\n    return (size_t)(u_end - u - 1);\n}\n#define __Pyx_PyUnicode_FromUnicode(u)       PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))\n#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode\n#define __Pyx_PyUnicode_AsUnicode            PyUnicode_AsUnicode\n#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)\n#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)\nstatic CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);\n#define __Pyx_PySequence_Tuple(obj)\\\n    (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);\n#if CYTHON_ASSUME_SAFE_MACROS\n#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))\n#else\n#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)\n#endif\n#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))\n#else\n#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))\n#endif\n#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x))\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\nstatic int __Pyx_sys_getdefaultencoding_not_ascii;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    PyObject* ascii_chars_u = NULL;\n    PyObject* ascii_chars_b = NULL;\n    const char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    if (strcmp(default_encoding_c, \"ascii\") == 0) {\n        __Pyx_sys_getdefaultencoding_not_ascii = 0;\n    } else {\n        char ascii_chars[128];\n        int c;\n        for (c = 0; c < 128; c++) {\n            ascii_chars[c] = c;\n        }\n        __Pyx_sys_getdefaultencoding_not_ascii = 1;\n        ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);\n        if (!ascii_chars_u) goto bad;\n        ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);\n        if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {\n            PyErr_Format(\n                PyExc_ValueError,\n                \"This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.\",\n                default_encoding_c);\n            goto bad;\n        }\n        Py_DECREF(ascii_chars_u);\n        Py_DECREF(ascii_chars_b);\n    }\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    Py_XDECREF(ascii_chars_u);\n    Py_XDECREF(ascii_chars_b);\n    return -1;\n}\n#endif\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) 
PyUnicode_DecodeUTF8(c_str, size, NULL)\n#else\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\nstatic char* __PYX_DEFAULT_STRING_ENCODING;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) (const char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);\n    if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;\n    strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    return -1;\n}\n#endif\n#endif\n\n\n/* Test for GCC > 2.95 */\n#if defined(__GNUC__)     && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))\n  #define likely(x)   __builtin_expect(!!(x), 1)\n  #define unlikely(x) __builtin_expect(!!(x), 0)\n#else /* !__GNUC__ or GCC < 2.95 */\n  #define likely(x)   (x)\n  #define unlikely(x) (x)\n#endif /* __GNUC__ */\nstatic CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }\n\nstatic PyObject *__pyx_m = NULL;\nstatic PyObject *__pyx_d;\nstatic PyObject *__pyx_b;\nstatic PyObject *__pyx_cython_runtime = NULL;\nstatic PyObject *__pyx_empty_tuple;\nstatic PyObject *__pyx_empty_bytes;\nstatic PyObject *__pyx_empty_unicode;\nstatic int __pyx_lineno;\nstatic int __pyx_clineno = 0;\nstatic const char * __pyx_cfilenm= __FILE__;\nstatic const char *__pyx_filename;\n\n/* Header.proto */\n#if !defined(CYTHON_CCOMPLEX)\n  #if defined(__cplusplus)\n    #define CYTHON_CCOMPLEX 1\n  #elif 
defined(_Complex_I)\n    #define CYTHON_CCOMPLEX 1\n  #else\n    #define CYTHON_CCOMPLEX 0\n  #endif\n#endif\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    #include <complex>\n  #else\n    #include <complex.h>\n  #endif\n#endif\n#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)\n  #undef _Complex_I\n  #define _Complex_I 1.0fj\n#endif\n\n\nstatic const char *__pyx_f[] = {\n  \"pycocotools/_mask.pyx\",\n  \"stringsource\",\n  \"__init__.pxd\",\n  \"type.pxd\",\n};\n/* BufferFormatStructs.proto */\n#define IS_UNSIGNED(type) (((type) -1) > 0)\nstruct __Pyx_StructField_;\n#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)\ntypedef struct {\n  const char* name;\n  struct __Pyx_StructField_* fields;\n  size_t size;\n  size_t arraysize[8];\n  int ndim;\n  char typegroup;\n  char is_unsigned;\n  int flags;\n} __Pyx_TypeInfo;\ntypedef struct __Pyx_StructField_ {\n  __Pyx_TypeInfo* type;\n  const char* name;\n  size_t offset;\n} __Pyx_StructField;\ntypedef struct {\n  __Pyx_StructField* field;\n  size_t parent_offset;\n} __Pyx_BufFmt_StackElem;\ntypedef struct {\n  __Pyx_StructField root;\n  __Pyx_BufFmt_StackElem* head;\n  size_t fmt_offset;\n  size_t new_count, enc_count;\n  size_t struct_alignment;\n  int is_complex;\n  char enc_type;\n  char new_packmode;\n  char enc_packmode;\n  char is_valid_array;\n} __Pyx_BufFmt_Context;\n\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":776\n * # in Cython to enable them only on the right systems.\n * \n * ctypedef npy_int8       int8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n */\ntypedef npy_int8 __pyx_t_5numpy_int8_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":777\n * \n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int32      int32_t\n * ctypedef 
npy_int64      int64_t\n */\ntypedef npy_int16 __pyx_t_5numpy_int16_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":778\n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int64      int64_t\n * #ctypedef npy_int96      int96_t\n */\ntypedef npy_int32 __pyx_t_5numpy_int32_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":779\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_int96      int96_t\n * #ctypedef npy_int128     int128_t\n */\ntypedef npy_int64 __pyx_t_5numpy_int64_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":783\n * #ctypedef npy_int128     int128_t\n * \n * ctypedef npy_uint8      uint8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n */\ntypedef npy_uint8 __pyx_t_5numpy_uint8_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":784\n * \n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t\n */\ntypedef npy_uint16 __pyx_t_5numpy_uint16_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":785\n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint64     uint64_t\n * #ctypedef npy_uint96     uint96_t\n */\ntypedef npy_uint32 __pyx_t_5numpy_uint32_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":786\n * ctypedef 
npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_uint96     uint96_t\n * #ctypedef npy_uint128    uint128_t\n */\ntypedef npy_uint64 __pyx_t_5numpy_uint64_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":790\n * #ctypedef npy_uint128    uint128_t\n * \n * ctypedef npy_float32    float32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_float64    float64_t\n * #ctypedef npy_float80    float80_t\n */\ntypedef npy_float32 __pyx_t_5numpy_float32_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":791\n * \n * ctypedef npy_float32    float32_t\n * ctypedef npy_float64    float64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_float80    float80_t\n * #ctypedef npy_float128   float128_t\n */\ntypedef npy_float64 __pyx_t_5numpy_float64_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":800\n * # The int types are mapped a bit surprising --\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t\n */\ntypedef npy_long __pyx_t_5numpy_int_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   longlong_t\n * \n */\ntypedef npy_longlong __pyx_t_5numpy_long_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":802\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_ulong  
    uint_t\n */\ntypedef npy_longlong __pyx_t_5numpy_longlong_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":804\n * ctypedef npy_longlong   longlong_t\n * \n * ctypedef npy_ulong      uint_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t\n */\ntypedef npy_ulong __pyx_t_5numpy_uint_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":805\n * \n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulonglong_t\n * \n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulong_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":806\n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_intp       intp_t\n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":808\n * ctypedef npy_ulonglong  ulonglong_t\n * \n * ctypedef npy_intp       intp_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uintp      uintp_t\n * \n */\ntypedef npy_intp __pyx_t_5numpy_intp_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":809\n * \n * ctypedef npy_intp       intp_t\n * ctypedef npy_uintp      uintp_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_double     float_t\n */\ntypedef npy_uintp __pyx_t_5numpy_uintp_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":811\n * ctypedef npy_uintp      uintp_t\n * \n * ctypedef npy_double     float_t             # <<<<<<<<<<<<<<\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t\n */\ntypedef 
npy_double __pyx_t_5numpy_float_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":812\n * \n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longdouble longdouble_t\n * \n */\ntypedef npy_double __pyx_t_5numpy_double_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":813\n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cfloat      cfloat_t\n */\ntypedef npy_longdouble __pyx_t_5numpy_longdouble_t;\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< float > __pyx_t_float_complex;\n  #else\n    typedef float _Complex __pyx_t_float_complex;\n  #endif\n#else\n    typedef struct { float real, imag; } __pyx_t_float_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);\n\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< double > __pyx_t_double_complex;\n  #else\n    typedef double _Complex __pyx_t_double_complex;\n  #endif\n#else\n    typedef struct { double real, imag; } __pyx_t_double_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);\n\n\n/*--- Type declarations ---*/\nstruct __pyx_obj_11pycocotools_5_mask_RLEs;\nstruct __pyx_obj_11pycocotools_5_mask_Masks;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":815\n * ctypedef npy_longdouble longdouble_t\n * \n * ctypedef npy_cfloat      cfloat_t             # <<<<<<<<<<<<<<\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t\n */\ntypedef npy_cfloat __pyx_t_5numpy_cfloat_t;\n\n/* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":816\n * \n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t             # <<<<<<<<<<<<<<\n * ctypedef npy_clongdouble clongdouble_t\n * \n */\ntypedef npy_cdouble __pyx_t_5numpy_cdouble_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":817\n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cdouble     complex_t\n */\ntypedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":819\n * ctypedef npy_clongdouble clongdouble_t\n * \n * ctypedef npy_cdouble     complex_t             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n */\ntypedef npy_cdouble __pyx_t_5numpy_complex_t;\n\n/* \"pycocotools/_mask.pyx\":53\n * # python class to wrap RLE array in C\n * # the class handles the memory allocation and deallocation\n * cdef class RLEs:             # <<<<<<<<<<<<<<\n *     cdef RLE *_R\n *     cdef siz _n\n */\nstruct __pyx_obj_11pycocotools_5_mask_RLEs {\n  PyObject_HEAD\n  RLE *_R;\n  siz _n;\n};\n\n\n/* \"pycocotools/_mask.pyx\":74\n * # python class to wrap Mask array in C\n * # the class handles the memory allocation and deallocation\n * cdef class Masks:             # <<<<<<<<<<<<<<\n *     cdef byte *_mask\n *     cdef siz _h\n */\nstruct __pyx_obj_11pycocotools_5_mask_Masks {\n  PyObject_HEAD\n  byte *_mask;\n  siz _h;\n  siz _w;\n  siz _n;\n};\n\n\n/* --- Runtime support code (head) --- */\n/* Refnanny.proto */\n#ifndef CYTHON_REFNANNY\n  #define CYTHON_REFNANNY 0\n#endif\n#if CYTHON_REFNANNY\n  typedef struct {\n    void (*INCREF)(void*, PyObject*, int);\n    void (*DECREF)(void*, PyObject*, int);\n    void (*GOTREF)(void*, 
PyObject*, int);\n    void (*GIVEREF)(void*, PyObject*, int);\n    void* (*SetupContext)(const char*, int, const char*);\n    void (*FinishContext)(void**);\n  } __Pyx_RefNannyAPIStruct;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);\n  #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;\n#ifdef WITH_THREAD\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          if (acquire_gil) {\\\n              PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n              PyGILState_Release(__pyx_gilstate_save);\\\n          } else {\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n          }\n#else\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)\n#endif\n  #define __Pyx_RefNannyFinishContext()\\\n          __Pyx_RefNanny->FinishContext(&__pyx_refnanny)\n  #define __Pyx_INCREF(r)  __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_DECREF(r)  __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GOTREF(r)  __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_XINCREF(r)  do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)\n  #define __Pyx_XDECREF(r)  do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)\n  #define __Pyx_XGOTREF(r)  do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)\n  #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)\n#else\n  #define __Pyx_RefNannyDeclarations\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\n  #define __Pyx_RefNannyFinishContext()\n  #define __Pyx_INCREF(r) Py_INCREF(r)\n 
 #define __Pyx_DECREF(r) Py_DECREF(r)\n  #define __Pyx_GOTREF(r)\n  #define __Pyx_GIVEREF(r)\n  #define __Pyx_XINCREF(r) Py_XINCREF(r)\n  #define __Pyx_XDECREF(r) Py_XDECREF(r)\n  #define __Pyx_XGOTREF(r)\n  #define __Pyx_XGIVEREF(r)\n#endif\n#define __Pyx_XDECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_XDECREF(tmp);\\\n    } while (0)\n#define __Pyx_DECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_DECREF(tmp);\\\n    } while (0)\n#define __Pyx_CLEAR(r)    do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)\n#define __Pyx_XCLEAR(r)   do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)\n\n/* PyObjectGetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);\n#else\n#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)\n#endif\n\n/* GetBuiltinName.proto */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name);\n\n/* RaiseDoubleKeywords.proto */\nstatic void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);\n\n/* ParseKeywords.proto */\nstatic int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\\\n    PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\\\n    const char* function_name);\n\n/* RaiseArgTupleInvalid.proto */\nstatic void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,\n    Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);\n\n/* IncludeStringH.proto */\n#include <string.h>\n\n/* BytesEquals.proto */\nstatic CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);\n\n/* UnicodeEquals.proto */\nstatic CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);\n\n/* StrEquals.proto */\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals\n#else\n#define 
__Pyx_PyString_Equals __Pyx_PyBytes_Equals\n#endif\n\n/* PyCFunctionFastCall.proto */\n#if CYTHON_FAST_PYCCALL\nstatic CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);\n#else\n#define __Pyx_PyCFunction_FastCall(func, args, nargs)  (assert(0), NULL)\n#endif\n\n/* PyFunctionFastCall.proto */\n#if CYTHON_FAST_PYCALL\n#define __Pyx_PyFunction_FastCall(func, args, nargs)\\\n    __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)\n#if 1 || PY_VERSION_HEX < 0x030600B1\nstatic PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);\n#else\n#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)\n#endif\n#define __Pyx_BUILD_ASSERT_EXPR(cond)\\\n    (sizeof(char [1 - 2*!(cond)]) - 1)\n#ifndef Py_MEMBER_SIZE\n#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)\n#endif\n  static size_t __pyx_pyframe_localsplus_offset = 0;\n  #include \"frameobject.h\"\n  #define __Pxy_PyFrame_Initialize_Offsets()\\\n    ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\\\n     (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))\n  #define __Pyx_PyFrame_GetLocalsplus(frame)\\\n    (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))\n#endif\n\n/* PyObjectCall.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);\n#else\n#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)\n#endif\n\n/* PyObjectCallMethO.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);\n#endif\n\n/* PyObjectCallOneArg.proto */\nstatic 
CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);\n\n/* PyThreadStateGet.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyThreadState_declare  PyThreadState *__pyx_tstate;\n#define __Pyx_PyThreadState_assign  __pyx_tstate = __Pyx_PyThreadState_Current;\n#define __Pyx_PyErr_Occurred()  __pyx_tstate->curexc_type\n#else\n#define __Pyx_PyThreadState_declare\n#define __Pyx_PyThreadState_assign\n#define __Pyx_PyErr_Occurred()  PyErr_Occurred()\n#endif\n\n/* PyErrFetchRestore.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)\n#define __Pyx_ErrRestoreWithState(type, value, tb)  __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)    __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)    __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#if CYTHON_COMPILING_IN_CPYTHON\n#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))\n#else\n#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)\n#endif\n#else\n#define __Pyx_PyErr_Clear() PyErr_Clear()\n#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)\n#define __Pyx_ErrRestoreWithState(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestoreInState(tstate, type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchInState(tstate, type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  PyErr_Restore(type, value, 
tb)\n#define __Pyx_ErrFetch(type, value, tb)  PyErr_Fetch(type, value, tb)\n#endif\n\n/* RaiseException.proto */\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);\n\n/* ExtTypeTest.proto */\nstatic CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);\n\n/* ArgTypeTest.proto */\n#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\\\n    ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\\\n        __Pyx__ArgTypeTest(obj, type, name, exact))\nstatic int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);\n\n/* ListAppend.proto */\n#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS\nstatic CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) {\n    PyListObject* L = (PyListObject*) list;\n    Py_ssize_t len = Py_SIZE(list);\n    if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) {\n        Py_INCREF(x);\n        PyList_SET_ITEM(list, len, x);\n        Py_SIZE(list) = len+1;\n        return 0;\n    }\n    return PyList_Append(list, x);\n}\n#else\n#define __Pyx_PyList_Append(L,x) PyList_Append(L,x)\n#endif\n\n/* PyIntBinop.proto */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check);\n#else\n#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\\\n    (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2))\n#endif\n\n/* DictGetItem.proto */\n#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY\nstatic PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key);\n#define __Pyx_PyObject_Dict_GetItem(obj, name)\\\n    (likely(PyDict_CheckExact(obj)) ?\\\n     __Pyx_PyDict_GetItem(obj, name) : PyObject_GetItem(obj, name))\n#else\n#define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)\n#define __Pyx_PyObject_Dict_GetItem(obj, name)  PyObject_GetItem(obj, name)\n#endif\n\n/* GetItemInt.proto */\n#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\\\n    (is_list ? (PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL) :\\\n               __Pyx_GetItemInt_Generic(o, to_py_func(i))))\n#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"list index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\n#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\\\n    (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\\\n    __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\\\n    (PyErr_SetString(PyExc_IndexError, \"tuple index out of range\"), (PyObject*)NULL))\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,\n                                                              int wraparound, int boundscheck);\nstatic PyObject 
*__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j);\nstatic CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i,\n                                                     int is_list, int wraparound, int boundscheck);\n\n/* IsLittleEndian.proto */\nstatic CYTHON_INLINE int __Pyx_Is_Little_Endian(void);\n\n/* BufferFormatCheck.proto */\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type);\n\n/* BufferGetAndValidate.proto */\n#define __Pyx_GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack)\\\n    ((obj == Py_None || obj == NULL) ?\\\n    (__Pyx_ZeroBuffer(buf), 0) :\\\n    __Pyx__GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack))\nstatic int  __Pyx__GetBufferAndValidate(Py_buffer* buf, PyObject* obj,\n    __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);\nstatic void __Pyx_ZeroBuffer(Py_buffer* buf);\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);\nstatic Py_ssize_t __Pyx_minusones[] = { -1, -1, -1, -1, -1, -1, -1, -1 };\nstatic Py_ssize_t __Pyx_zeros[] = { 0, 0, 0, 0, 0, 0, 0, 0 };\n\n/* PyDictVersioning.proto */\n#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS\n#define __PYX_DICT_VERSION_INIT  ((PY_UINT64_T) -1)\n#define __PYX_GET_DICT_VERSION(dict)  (((PyDictObject*)(dict))->ma_version_tag)\n#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\\\n    (version_var) = __PYX_GET_DICT_VERSION(dict);\\\n    (cache_var) = (value);\n#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\\\n    static PY_UINT64_T __pyx_dict_version = 0;\\\n    static PyObject *__pyx_dict_cached_value = NULL;\\\n    if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\\\n        (VAR) = __pyx_dict_cached_value;\\\n    } else {\\\n        (VAR) = 
__pyx_dict_cached_value = (LOOKUP);\\\n        __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\\\n    }\\\n}\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);\nstatic CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);\n#else\n#define __PYX_GET_DICT_VERSION(dict)  (0)\n#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\n#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP)  (VAR) = (LOOKUP);\n#endif\n\n/* GetModuleGlobalName.proto */\n#if CYTHON_USE_DICT_VERSIONS\n#define __Pyx_GetModuleGlobalName(var, name)  {\\\n    static PY_UINT64_T __pyx_dict_version = 0;\\\n    static PyObject *__pyx_dict_cached_value = NULL;\\\n    (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\\\n        (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\\\n        __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\\\n}\n#define __Pyx_GetModuleGlobalNameUncached(var, name)  {\\\n    PY_UINT64_T __pyx_dict_version;\\\n    PyObject *__pyx_dict_cached_value;\\\n    (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\\\n}\nstatic PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);\n#else\n#define __Pyx_GetModuleGlobalName(var, name)  (var) = __Pyx__GetModuleGlobalName(name)\n#define __Pyx_GetModuleGlobalNameUncached(var, name)  (var) = __Pyx__GetModuleGlobalName(name)\nstatic CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);\n#endif\n\n/* PyObjectCall2Args.proto */\nstatic CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);\n\n/* PyIntCompare.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, 
PyObject *op2, long intval, long inplace);\n\n/* ListCompAppend.proto */\n#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS\nstatic CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) {\n    PyListObject* L = (PyListObject*) list;\n    Py_ssize_t len = Py_SIZE(list);\n    if (likely(L->allocated > len)) {\n        Py_INCREF(x);\n        PyList_SET_ITEM(list, len, x);\n        Py_SIZE(list) = len+1;\n        return 0;\n    }\n    return PyList_Append(list, x);\n}\n#else\n#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x)\n#endif\n\n/* FetchCommonType.proto */\nstatic PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);\n\n/* CythonFunction.proto */\n#define __Pyx_CyFunction_USED 1\n#define __Pyx_CYFUNCTION_STATICMETHOD  0x01\n#define __Pyx_CYFUNCTION_CLASSMETHOD   0x02\n#define __Pyx_CYFUNCTION_CCLASS        0x04\n#define __Pyx_CyFunction_GetClosure(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_closure)\n#define __Pyx_CyFunction_GetClassObj(f)\\\n    (((__pyx_CyFunctionObject *) (f))->func_classobj)\n#define __Pyx_CyFunction_Defaults(type, f)\\\n    ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))\n#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\\\n    ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)\ntypedef struct {\n    PyCFunctionObject func;\n#if PY_VERSION_HEX < 0x030500A0\n    PyObject *func_weakreflist;\n#endif\n    PyObject *func_dict;\n    PyObject *func_name;\n    PyObject *func_qualname;\n    PyObject *func_doc;\n    PyObject *func_globals;\n    PyObject *func_code;\n    PyObject *func_closure;\n    PyObject *func_classobj;\n    void *defaults;\n    int defaults_pyobjects;\n    int flags;\n    PyObject *defaults_tuple;\n    PyObject *defaults_kwdict;\n    PyObject *(*defaults_getter)(PyObject *);\n    PyObject *func_annotations;\n} __pyx_CyFunctionObject;\nstatic PyTypeObject *__pyx_CyFunctionType = 0;\n#define __Pyx_CyFunction_Check(obj)  (__Pyx_TypeCheck(obj, __pyx_CyFunctionType))\n#define 
__Pyx_CyFunction_NewEx(ml, flags, qualname, self, module, globals, code)\\\n    __Pyx_CyFunction_New(__pyx_CyFunctionType, ml, flags, qualname, self, module, globals, code)\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *, PyMethodDef *ml,\n                                      int flags, PyObject* qualname,\n                                      PyObject *self,\n                                      PyObject *module, PyObject *globals,\n                                      PyObject* code);\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,\n                                                         size_t size,\n                                                         int pyobjects);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,\n                                                            PyObject *tuple);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,\n                                                             PyObject *dict);\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,\n                                                              PyObject *dict);\nstatic int __pyx_CyFunction_init(void);\n\n/* BufferFallbackError.proto */\nstatic void __Pyx_RaiseBufferFallbackError(void);\n\n/* None.proto */\nstatic CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t);\n\n/* BufferIndexError.proto */\nstatic void __Pyx_RaiseBufferIndexError(int axis);\n\n#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0)\n/* RaiseTooManyValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);\n\n/* RaiseNeedMoreValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);\n\n/* RaiseNoneIterError.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);\n\n/* GetTopmostException.proto */\n#if CYTHON_USE_EXC_INFO_STACK\nstatic 
_PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);\n#endif\n\n/* SaveResetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSave(type, value, tb)  __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#define __Pyx_ExceptionReset(type, value, tb)  __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\n#else\n#define __Pyx_ExceptionSave(type, value, tb)   PyErr_GetExcInfo(type, value, tb)\n#define __Pyx_ExceptionReset(type, value, tb)  PyErr_SetExcInfo(type, value, tb)\n#endif\n\n/* PyErrExceptionMatches.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);\n#else\n#define __Pyx_PyErr_ExceptionMatches(err)  PyErr_ExceptionMatches(err)\n#endif\n\n/* GetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_GetException(type, value, tb)  __Pyx__GetException(__pyx_tstate, type, value, tb)\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* PyObject_GenericGetAttrNoDict.proto */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name);\n#else\n#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr\n#endif\n\n/* PyObject_GenericGetAttr.proto */\n#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* 
attr_name);\n#else\n#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr\n#endif\n\n/* SetupReduce.proto */\nstatic int __Pyx_setup_reduce(PyObject* type_obj);\n\n/* TypeImport.proto */\n#ifndef __PYX_HAVE_RT_ImportType_proto\n#define __PYX_HAVE_RT_ImportType_proto\nenum __Pyx_ImportType_CheckSize {\n   __Pyx_ImportType_CheckSize_Error = 0,\n   __Pyx_ImportType_CheckSize_Warn = 1,\n   __Pyx_ImportType_CheckSize_Ignore = 2\n};\nstatic PyTypeObject *__Pyx_ImportType(PyObject* module, const char *module_name, const char *class_name, size_t size, enum __Pyx_ImportType_CheckSize check_size);\n#endif\n\n/* Import.proto */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);\n\n/* CLineInTraceback.proto */\n#ifdef CYTHON_CLINE_IN_TRACEBACK\n#define __Pyx_CLineForTraceback(tstate, c_line)  (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)\n#else\nstatic int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);\n#endif\n\n/* CodeObjectCache.proto */\ntypedef struct {\n    PyCodeObject* code_object;\n    int code_line;\n} __Pyx_CodeObjectCacheEntry;\nstruct __Pyx_CodeObjectCache {\n    int count;\n    int max_count;\n    __Pyx_CodeObjectCacheEntry* entries;\n};\nstatic struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);\nstatic PyCodeObject *__pyx_find_code_object(int code_line);\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);\n\n/* AddTraceback.proto */\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename);\n\n/* BufferStructDeclare.proto */\ntypedef struct {\n  Py_ssize_t shape, strides, suboffsets;\n} __Pyx_Buf_DimInfo;\ntypedef struct {\n  size_t refcount;\n  Py_buffer pybuffer;\n} __Pyx_Buffer;\ntypedef struct {\n  __Pyx_Buffer *rcbuffer;\n  char *data;\n  __Pyx_Buf_DimInfo diminfo[8];\n} __Pyx_LocalBuf_ND;\n\n#if 
PY_MAJOR_VERSION < 3\n    static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);\n    static void __Pyx_ReleaseBuffer(Py_buffer *view);\n#else\n    #define __Pyx_GetBuffer PyObject_GetBuffer\n    #define __Pyx_ReleaseBuffer PyBuffer_Release\n#endif\n\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_siz(siz value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_Py_intptr_t(Py_intptr_t value);\n\n/* RealImag.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    #define __Pyx_CREAL(z) ((z).real())\n    #define __Pyx_CIMAG(z) ((z).imag())\n  #else\n    #define __Pyx_CREAL(z) (__real__(z))\n    #define __Pyx_CIMAG(z) (__imag__(z))\n  #endif\n#else\n    #define __Pyx_CREAL(z) ((z).real)\n    #define __Pyx_CIMAG(z) ((z).imag)\n#endif\n#if defined(__cplusplus) && CYTHON_CCOMPLEX\\\n        && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)\n    #define __Pyx_SET_CREAL(z,x) ((z).real(x))\n    #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))\n#else\n    #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)\n    #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_float(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_float(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_float(a, b) ((a)-(b))\n    #define __Pyx_c_prod_float(a, b) ((a)*(b))\n    #define __Pyx_c_quot_float(a, b) ((a)/(b))\n    #define __Pyx_c_neg_float(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_float(z) ((z)==(float)0)\n    #define __Pyx_c_conj_float(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (::std::abs(z))\n        #define __Pyx_c_pow_float(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_float(z) ((z)==0)\n    #define 
__Pyx_c_conj_float(z)    (conjf(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (cabsf(z))\n        #define __Pyx_c_pow_float(a, b)  (cpowf(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    #endif\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_double(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_double(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_double(a, b) ((a)-(b))\n    #define __Pyx_c_prod_double(a, b) ((a)*(b))\n    #define __Pyx_c_quot_double(a, b) ((a)/(b))\n    #define __Pyx_c_neg_double(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_double(z) ((z)==(double)0)\n    #define __Pyx_c_conj_double(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (::std::abs(z))\n        #define __Pyx_c_pow_double(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_double(z) ((z)==0)\n    #define __Pyx_c_conj_double(z)    (conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     
(cabs(z))\n        #define __Pyx_c_pow_double(a, b)  (cpow(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    #endif\n#endif\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE siz __Pyx_PyInt_As_siz(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);\n\n/* FastTypeChecks.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\n#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)\nstatic CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);\nstatic CYTHON_INLINE int 
__Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);\n#else\n#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)\n#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)\n#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))\n#endif\n#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)\n\n/* CheckBinaryVersion.proto */\nstatic int __Pyx_check_binary_version(void);\n\n/* InitStrings.proto */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t);\n\n\n/* Module declarations from 'cpython.buffer' */\n\n/* Module declarations from 'libc.string' */\n\n/* Module declarations from 'libc.stdio' */\n\n/* Module declarations from '__builtin__' */\n\n/* Module declarations from 'cpython.type' */\nstatic PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;\n\n/* Module declarations from 'cpython' */\n\n/* Module declarations from 'cpython.object' */\n\n/* Module declarations from 'cpython.ref' */\n\n/* Module declarations from 'cpython.mem' */\n\n/* Module declarations from 'numpy' */\n\n/* Module declarations from 'numpy' */\nstatic PyTypeObject *__pyx_ptype_5numpy_dtype = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void); /*proto*/\n\n/* Module declarations from 'libc.stdlib' */\n\n/* Module declarations from 'pycocotools._mask' */\nstatic PyTypeObject *__pyx_ptype_11pycocotools_5_mask_RLEs = 0;\nstatic PyTypeObject 
*__pyx_ptype_11pycocotools_5_mask_Masks = 0;\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t = { \"uint8_t\", NULL, sizeof(__pyx_t_5numpy_uint8_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint8_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint8_t), 0 };\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_double_t = { \"double_t\", NULL, sizeof(__pyx_t_5numpy_double_t), { 0 }, 0, 'R', 0, 0 };\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t = { \"uint32_t\", NULL, sizeof(__pyx_t_5numpy_uint32_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint32_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint32_t), 0 };\n#define __Pyx_MODULE_NAME \"pycocotools._mask\"\nextern int __pyx_module_is_main_pycocotools___mask;\nint __pyx_module_is_main_pycocotools___mask = 0;\n\n/* Implementation of 'pycocotools._mask' */\nstatic PyObject *__pyx_builtin_range;\nstatic PyObject *__pyx_builtin_AttributeError;\nstatic PyObject *__pyx_builtin_TypeError;\nstatic PyObject *__pyx_builtin_enumerate;\nstatic PyObject *__pyx_builtin_ValueError;\nstatic PyObject *__pyx_builtin_RuntimeError;\nstatic PyObject *__pyx_builtin_ImportError;\nstatic const char __pyx_k_F[] = \"F\";\nstatic const char __pyx_k_N[] = \"N\";\nstatic const char __pyx_k_R[] = \"R\";\nstatic const char __pyx_k_a[] = \"_a\";\nstatic const char __pyx_k_h[] = \"h\";\nstatic const char __pyx_k_i[] = \"i\";\nstatic const char __pyx_k_j[] = \"j\";\nstatic const char __pyx_k_m[] = \"m\";\nstatic const char __pyx_k_n[] = \"n\";\nstatic const char __pyx_k_p[] = \"p\";\nstatic const char __pyx_k_w[] = \"w\";\nstatic const char __pyx_k_Rs[] = \"Rs\";\nstatic const char __pyx_k_bb[] = \"bb\";\nstatic const char __pyx_k_dt[] = \"dt\";\nstatic const char __pyx_k_gt[] = \"gt\";\nstatic const char __pyx_k_np[] = \"np\";\nstatic const char __pyx_k_a_2[] = \"a\";\nstatic const char __pyx_k_all[] = \"all\";\nstatic const char __pyx_k_iou[] = \"_iou\";\nstatic const char __pyx_k_len[] = \"_len\";\nstatic const char 
__pyx_k_obj[] = \"obj\";\nstatic const char __pyx_k_RLEs[] = \"RLEs\";\nstatic const char __pyx_k_area[] = \"area\";\nstatic const char __pyx_k_bb_2[] = \"_bb\";\nstatic const char __pyx_k_cnts[] = \"cnts\";\nstatic const char __pyx_k_data[] = \"data\";\nstatic const char __pyx_k_main[] = \"__main__\";\nstatic const char __pyx_k_mask[] = \"mask\";\nstatic const char __pyx_k_name[] = \"__name__\";\nstatic const char __pyx_k_objs[] = \"objs\";\nstatic const char __pyx_k_poly[] = \"poly\";\nstatic const char __pyx_k_size[] = \"size\";\nstatic const char __pyx_k_test[] = \"__test__\";\nstatic const char __pyx_k_Masks[] = \"Masks\";\nstatic const char __pyx_k_array[] = \"array\";\nstatic const char __pyx_k_bbIou[] = \"_bbIou\";\nstatic const char __pyx_k_dtype[] = \"dtype\";\nstatic const char __pyx_k_iou_2[] = \"iou\";\nstatic const char __pyx_k_isbox[] = \"isbox\";\nstatic const char __pyx_k_isrle[] = \"isrle\";\nstatic const char __pyx_k_masks[] = \"masks\";\nstatic const char __pyx_k_merge[] = \"merge\";\nstatic const char __pyx_k_numpy[] = \"numpy\";\nstatic const char __pyx_k_order[] = \"order\";\nstatic const char __pyx_k_pyobj[] = \"pyobj\";\nstatic const char __pyx_k_range[] = \"range\";\nstatic const char __pyx_k_shape[] = \"shape\";\nstatic const char __pyx_k_uint8[] = \"uint8\";\nstatic const char __pyx_k_zeros[] = \"zeros\";\nstatic const char __pyx_k_astype[] = \"astype\";\nstatic const char __pyx_k_author[] = \"__author__\";\nstatic const char __pyx_k_counts[] = \"counts\";\nstatic const char __pyx_k_decode[] = \"decode\";\nstatic const char __pyx_k_double[] = \"double\";\nstatic const char __pyx_k_encode[] = \"encode\";\nstatic const char __pyx_k_frBbox[] = \"frBbox\";\nstatic const char __pyx_k_frPoly[] = \"frPoly\";\nstatic const char __pyx_k_import[] = \"__import__\";\nstatic const char __pyx_k_iouFun[] = \"_iouFun\";\nstatic const char __pyx_k_reduce[] = \"__reduce__\";\nstatic const char __pyx_k_rleIou[] = \"_rleIou\";\nstatic const char 
__pyx_k_toBbox[] = \"toBbox\";\nstatic const char __pyx_k_ucRles[] = \"ucRles\";\nstatic const char __pyx_k_uint32[] = \"uint32\";\nstatic const char __pyx_k_iscrowd[] = \"iscrowd\";\nstatic const char __pyx_k_np_poly[] = \"np_poly\";\nstatic const char __pyx_k_preproc[] = \"_preproc\";\nstatic const char __pyx_k_reshape[] = \"reshape\";\nstatic const char __pyx_k_rleObjs[] = \"rleObjs\";\nstatic const char __pyx_k_tsungyi[] = \"tsungyi\";\nstatic const char __pyx_k_c_string[] = \"c_string\";\nstatic const char __pyx_k_frString[] = \"_frString\";\nstatic const char __pyx_k_getstate[] = \"__getstate__\";\nstatic const char __pyx_k_setstate[] = \"__setstate__\";\nstatic const char __pyx_k_toString[] = \"_toString\";\nstatic const char __pyx_k_TypeError[] = \"TypeError\";\nstatic const char __pyx_k_enumerate[] = \"enumerate\";\nstatic const char __pyx_k_intersect[] = \"intersect\";\nstatic const char __pyx_k_py_string[] = \"py_string\";\nstatic const char __pyx_k_pyiscrowd[] = \"pyiscrowd\";\nstatic const char __pyx_k_reduce_ex[] = \"__reduce_ex__\";\nstatic const char __pyx_k_ValueError[] = \"ValueError\";\nstatic const char __pyx_k_ImportError[] = \"ImportError\";\nstatic const char __pyx_k_frPyObjects[] = \"frPyObjects\";\nstatic const char __pyx_k_RuntimeError[] = \"RuntimeError\";\nstatic const char __pyx_k_reduce_cython[] = \"__reduce_cython__\";\nstatic const char __pyx_k_AttributeError[] = \"AttributeError\";\nstatic const char __pyx_k_iou_locals__len[] = \"iou.<locals>._len\";\nstatic const char __pyx_k_setstate_cython[] = \"__setstate_cython__\";\nstatic const char __pyx_k_frUncompressedRLE[] = \"frUncompressedRLE\";\nstatic const char __pyx_k_iou_locals__bbIou[] = \"iou.<locals>._bbIou\";\nstatic const char __pyx_k_pycocotools__mask[] = \"pycocotools._mask\";\nstatic const char __pyx_k_cline_in_traceback[] = \"cline_in_traceback\";\nstatic const char __pyx_k_iou_locals__rleIou[] = \"iou.<locals>._rleIou\";\nstatic const char __pyx_k_iou_locals__preproc[] = 
\"iou.<locals>._preproc\";\nstatic const char __pyx_k_pycocotools__mask_pyx[] = \"pycocotools/_mask.pyx\";\nstatic const char __pyx_k_input_data_type_not_allowed[] = \"input data type not allowed.\";\nstatic const char __pyx_k_input_type_is_not_supported[] = \"input type is not supported.\";\nstatic const char __pyx_k_ndarray_is_not_C_contiguous[] = \"ndarray is not C contiguous\";\nstatic const char __pyx_k_numpy_core_multiarray_failed_to[] = \"numpy.core.multiarray failed to import\";\nstatic const char __pyx_k_numpy_ndarray_input_is_only_for[] = \"numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension\";\nstatic const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = \"unknown dtype code in numpy.pxd (%d)\";\nstatic const char __pyx_k_unrecognized_type_The_following[] = \"unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.\";\nstatic const char __pyx_k_Format_string_allocated_too_shor[] = \"Format string allocated too short, see comment in numpy.pxd\";\nstatic const char __pyx_k_Non_native_byte_order_not_suppor[] = \"Non-native byte order not supported\";\nstatic const char __pyx_k_The_dt_and_gt_should_have_the_sa[] = \"The dt and gt should have the same data type, either RLEs, list or np.ndarray\";\nstatic const char __pyx_k_list_input_can_be_bounding_box_N[] = \"list input can be bounding box (Nx4) or RLEs ([RLE])\";\nstatic const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = \"ndarray is not Fortran contiguous\";\nstatic const char __pyx_k_no_default___reduce___due_to_non[] = \"no default __reduce__ due to non-trivial __cinit__\";\nstatic const char __pyx_k_numpy_core_umath_failed_to_impor[] = \"numpy.core.umath failed to import\";\nstatic const char __pyx_k_Format_string_allocated_too_shor_2[] = \"Format string allocated too short.\";\nstatic PyObject *__pyx_n_s_AttributeError;\nstatic PyObject *__pyx_n_s_F;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor;\nstatic 
PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;\nstatic PyObject *__pyx_n_s_ImportError;\nstatic PyObject *__pyx_n_s_Masks;\nstatic PyObject *__pyx_n_s_N;\nstatic PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;\nstatic PyObject *__pyx_n_s_R;\nstatic PyObject *__pyx_n_s_RLEs;\nstatic PyObject *__pyx_n_s_Rs;\nstatic PyObject *__pyx_n_s_RuntimeError;\nstatic PyObject *__pyx_kp_s_The_dt_and_gt_should_have_the_sa;\nstatic PyObject *__pyx_n_s_TypeError;\nstatic PyObject *__pyx_n_s_ValueError;\nstatic PyObject *__pyx_n_s_a;\nstatic PyObject *__pyx_n_s_a_2;\nstatic PyObject *__pyx_n_s_all;\nstatic PyObject *__pyx_n_s_area;\nstatic PyObject *__pyx_n_s_array;\nstatic PyObject *__pyx_n_s_astype;\nstatic PyObject *__pyx_n_s_author;\nstatic PyObject *__pyx_n_s_bb;\nstatic PyObject *__pyx_n_s_bbIou;\nstatic PyObject *__pyx_n_s_bb_2;\nstatic PyObject *__pyx_n_s_c_string;\nstatic PyObject *__pyx_n_s_cline_in_traceback;\nstatic PyObject *__pyx_n_s_cnts;\nstatic PyObject *__pyx_n_s_counts;\nstatic PyObject *__pyx_n_s_data;\nstatic PyObject *__pyx_n_s_decode;\nstatic PyObject *__pyx_n_s_double;\nstatic PyObject *__pyx_n_s_dt;\nstatic PyObject *__pyx_n_s_dtype;\nstatic PyObject *__pyx_n_s_encode;\nstatic PyObject *__pyx_n_s_enumerate;\nstatic PyObject *__pyx_n_s_frBbox;\nstatic PyObject *__pyx_n_s_frPoly;\nstatic PyObject *__pyx_n_s_frPyObjects;\nstatic PyObject *__pyx_n_s_frString;\nstatic PyObject *__pyx_n_s_frUncompressedRLE;\nstatic PyObject *__pyx_n_s_getstate;\nstatic PyObject *__pyx_n_s_gt;\nstatic PyObject *__pyx_n_s_h;\nstatic PyObject *__pyx_n_s_i;\nstatic PyObject *__pyx_n_s_import;\nstatic PyObject *__pyx_kp_s_input_data_type_not_allowed;\nstatic PyObject *__pyx_kp_s_input_type_is_not_supported;\nstatic PyObject *__pyx_n_s_intersect;\nstatic PyObject *__pyx_n_s_iou;\nstatic PyObject *__pyx_n_s_iouFun;\nstatic PyObject *__pyx_n_s_iou_2;\nstatic PyObject *__pyx_n_s_iou_locals__bbIou;\nstatic PyObject *__pyx_n_s_iou_locals__len;\nstatic PyObject 
*__pyx_n_s_iou_locals__preproc;\nstatic PyObject *__pyx_n_s_iou_locals__rleIou;\nstatic PyObject *__pyx_n_s_isbox;\nstatic PyObject *__pyx_n_s_iscrowd;\nstatic PyObject *__pyx_n_s_isrle;\nstatic PyObject *__pyx_n_s_j;\nstatic PyObject *__pyx_n_s_len;\nstatic PyObject *__pyx_kp_s_list_input_can_be_bounding_box_N;\nstatic PyObject *__pyx_n_s_m;\nstatic PyObject *__pyx_n_s_main;\nstatic PyObject *__pyx_n_s_mask;\nstatic PyObject *__pyx_n_s_masks;\nstatic PyObject *__pyx_n_s_merge;\nstatic PyObject *__pyx_n_s_n;\nstatic PyObject *__pyx_n_s_name;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;\nstatic PyObject *__pyx_kp_s_no_default___reduce___due_to_non;\nstatic PyObject *__pyx_n_s_np;\nstatic PyObject *__pyx_n_s_np_poly;\nstatic PyObject *__pyx_n_s_numpy;\nstatic PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;\nstatic PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor;\nstatic PyObject *__pyx_kp_s_numpy_ndarray_input_is_only_for;\nstatic PyObject *__pyx_n_s_obj;\nstatic PyObject *__pyx_n_s_objs;\nstatic PyObject *__pyx_n_s_order;\nstatic PyObject *__pyx_n_s_p;\nstatic PyObject *__pyx_n_s_poly;\nstatic PyObject *__pyx_n_s_preproc;\nstatic PyObject *__pyx_n_s_py_string;\nstatic PyObject *__pyx_n_s_pycocotools__mask;\nstatic PyObject *__pyx_kp_s_pycocotools__mask_pyx;\nstatic PyObject *__pyx_n_s_pyiscrowd;\nstatic PyObject *__pyx_n_s_pyobj;\nstatic PyObject *__pyx_n_s_range;\nstatic PyObject *__pyx_n_s_reduce;\nstatic PyObject *__pyx_n_s_reduce_cython;\nstatic PyObject *__pyx_n_s_reduce_ex;\nstatic PyObject *__pyx_n_s_reshape;\nstatic PyObject *__pyx_n_s_rleIou;\nstatic PyObject *__pyx_n_s_rleObjs;\nstatic PyObject *__pyx_n_s_setstate;\nstatic PyObject *__pyx_n_s_setstate_cython;\nstatic PyObject *__pyx_n_s_shape;\nstatic PyObject *__pyx_n_s_size;\nstatic PyObject *__pyx_n_s_test;\nstatic PyObject *__pyx_n_s_toBbox;\nstatic PyObject *__pyx_n_s_toString;\nstatic PyObject 
*__pyx_n_s_tsungyi;\nstatic PyObject *__pyx_n_s_ucRles;\nstatic PyObject *__pyx_n_s_uint32;\nstatic PyObject *__pyx_n_s_uint8;\nstatic PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;\nstatic PyObject *__pyx_kp_s_unrecognized_type_The_following;\nstatic PyObject *__pyx_n_s_w;\nstatic PyObject *__pyx_n_s_zeros;\nstatic int __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n); /* proto */\nstatic void __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */\nstatic int __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_2__array__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_4__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_6__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs); /* proto */\nstatic PyObject 
*__pyx_pf_11pycocotools_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_mask); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, int __pyx_v_intersect); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w); /* proto */\nstatic PyObject 
*__pyx_pf_11pycocotools_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w); /* proto */\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, siz __pyx_v_h, PyObject *__pyx_v_w); /* proto */\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_RLEs(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_Masks(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/\nstatic PyObject *__pyx_int_0;\nstatic PyObject *__pyx_int_1;\nstatic PyObject *__pyx_int_4;\nstatic PyObject *__pyx_tuple_;\nstatic PyObject *__pyx_tuple__2;\nstatic PyObject *__pyx_tuple__3;\nstatic PyObject *__pyx_tuple__4;\nstatic PyObject *__pyx_tuple__5;\nstatic PyObject *__pyx_tuple__6;\nstatic PyObject *__pyx_tuple__7;\nstatic PyObject *__pyx_tuple__8;\nstatic PyObject *__pyx_tuple__9;\nstatic PyObject *__pyx_tuple__11;\nstatic PyObject *__pyx_tuple__13;\nstatic PyObject *__pyx_tuple__15;\nstatic PyObject *__pyx_tuple__17;\nstatic PyObject *__pyx_tuple__18;\nstatic PyObject *__pyx_tuple__19;\nstatic PyObject *__pyx_tuple__20;\nstatic PyObject *__pyx_tuple__21;\nstatic PyObject *__pyx_tuple__22;\nstatic PyObject *__pyx_tuple__23;\nstatic PyObject *__pyx_tuple__24;\nstatic PyObject *__pyx_tuple__25;\nstatic PyObject *__pyx_tuple__26;\nstatic PyObject *__pyx_tuple__27;\nstatic PyObject *__pyx_tuple__29;\nstatic PyObject *__pyx_tuple__31;\nstatic PyObject *__pyx_tuple__33;\nstatic 
PyObject *__pyx_tuple__35;\nstatic PyObject *__pyx_tuple__37;\nstatic PyObject *__pyx_tuple__39;\nstatic PyObject *__pyx_tuple__41;\nstatic PyObject *__pyx_tuple__43;\nstatic PyObject *__pyx_tuple__45;\nstatic PyObject *__pyx_tuple__47;\nstatic PyObject *__pyx_tuple__49;\nstatic PyObject *__pyx_codeobj__10;\nstatic PyObject *__pyx_codeobj__12;\nstatic PyObject *__pyx_codeobj__14;\nstatic PyObject *__pyx_codeobj__16;\nstatic PyObject *__pyx_codeobj__28;\nstatic PyObject *__pyx_codeobj__30;\nstatic PyObject *__pyx_codeobj__32;\nstatic PyObject *__pyx_codeobj__34;\nstatic PyObject *__pyx_codeobj__36;\nstatic PyObject *__pyx_codeobj__38;\nstatic PyObject *__pyx_codeobj__40;\nstatic PyObject *__pyx_codeobj__42;\nstatic PyObject *__pyx_codeobj__44;\nstatic PyObject *__pyx_codeobj__46;\nstatic PyObject *__pyx_codeobj__48;\nstatic PyObject *__pyx_codeobj__50;\n/* Late includes */\n\n/* \"pycocotools/_mask.pyx\":57\n *     cdef siz _n\n * \n *     def __cinit__(self, siz n =0):             # <<<<<<<<<<<<<<\n *         rlesInit(&self._R, n)\n *         self._n = n\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  siz __pyx_v_n;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_n,0};\n    PyObject* values[1] = {0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  
0:\n        if (kw_args > 0) {\n          PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n);\n          if (value) { values[0] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__cinit__\") < 0)) __PYX_ERR(0, 57, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    if (values[0]) {\n      __pyx_v_n = __Pyx_PyInt_As_siz(values[0]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 57, __pyx_L3_error)\n    } else {\n      __pyx_v_n = ((siz)0);\n    }\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 57, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self), __pyx_v_n);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_11pycocotools_5_mask_4RLEs___cinit__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, siz __pyx_v_n) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":58\n * \n *     def __cinit__(self, siz n =0):\n *         rlesInit(&self._R, n)             # <<<<<<<<<<<<<<\n *         self._n = n\n * \n */\n  rlesInit((&__pyx_v_self->_R), __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":59\n *     def __cinit__(self, siz n =0):\n 
*         rlesInit(&self._R, n)\n *         self._n = n             # <<<<<<<<<<<<<<\n * \n *     # free the RLE array here\n */\n  __pyx_v_self->_n = __pyx_v_n;\n\n  /* \"pycocotools/_mask.pyx\":57\n *     cdef siz _n\n * \n *     def __cinit__(self, siz n =0):             # <<<<<<<<<<<<<<\n *         rlesInit(&self._R, n)\n *         self._n = n\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":62\n * \n *     # free the RLE array here\n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._R is not NULL:\n *             for i in range(self._n):\n */\n\n/* Python wrapper */\nstatic void __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self); /*proto*/\nstatic void __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(PyObject *__pyx_v_self) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__dealloc__ (wrapper)\", 0);\n  __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_11pycocotools_5_mask_4RLEs_2__dealloc__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self) {\n  siz __pyx_v_i;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  siz __pyx_t_2;\n  siz __pyx_t_3;\n  siz __pyx_t_4;\n  __Pyx_RefNannySetupContext(\"__dealloc__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":63\n *     # free the RLE array here\n *     def __dealloc__(self):\n *         if self._R is not NULL:             # <<<<<<<<<<<<<<\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n */\n  __pyx_t_1 = ((__pyx_v_self->_R != NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"pycocotools/_mask.pyx\":64\n *     def __dealloc__(self):\n *         if self._R is not NULL:\n *             for i in range(self._n):             # <<<<<<<<<<<<<<\n *                 free(self._R[i].cnts)\n *    
         free(self._R)\n */\n    __pyx_t_2 = __pyx_v_self->_n;\n    __pyx_t_3 = __pyx_t_2;\n    for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {\n      __pyx_v_i = __pyx_t_4;\n\n      /* \"pycocotools/_mask.pyx\":65\n *         if self._R is not NULL:\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)             # <<<<<<<<<<<<<<\n *             free(self._R)\n *     def __getattr__(self, key):\n */\n      free((__pyx_v_self->_R[__pyx_v_i]).cnts);\n    }\n\n    /* \"pycocotools/_mask.pyx\":66\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n *             free(self._R)             # <<<<<<<<<<<<<<\n *     def __getattr__(self, key):\n *         if key == 'n':\n */\n    free(__pyx_v_self->_R);\n\n    /* \"pycocotools/_mask.pyx\":63\n *     # free the RLE array here\n *     def __dealloc__(self):\n *         if self._R is not NULL:             # <<<<<<<<<<<<<<\n *             for i in range(self._n):\n *                 free(self._R[i].cnts)\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":62\n * \n *     # free the RLE array here\n *     def __dealloc__(self):             # <<<<<<<<<<<<<<\n *         if self._R is not NULL:\n *             for i in range(self._n):\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"pycocotools/_mask.pyx\":67\n *                 free(self._R[i].cnts)\n *             free(self._R)\n *     def __getattr__(self, key):             # <<<<<<<<<<<<<<\n *         if key == 'n':\n *             return self._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_key) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getattr__ (wrapper)\", 0);\n  __pyx_r = 
__pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self), ((PyObject *)__pyx_v_key));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_4__getattr__(struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, PyObject *__pyx_v_key) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  __Pyx_RefNannySetupContext(\"__getattr__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":68\n *             free(self._R)\n *     def __getattr__(self, key):\n *         if key == 'n':             # <<<<<<<<<<<<<<\n *             return self._n\n *         raise AttributeError(key)\n */\n  __pyx_t_1 = (__Pyx_PyString_Equals(__pyx_v_key, __pyx_n_s_n, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 68, __pyx_L1_error)\n  if (__pyx_t_1) {\n\n    /* \"pycocotools/_mask.pyx\":69\n *     def __getattr__(self, key):\n *         if key == 'n':\n *             return self._n             # <<<<<<<<<<<<<<\n *         raise AttributeError(key)\n * \n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 69, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_r = __pyx_t_2;\n    __pyx_t_2 = 0;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":68\n *             free(self._R)\n *     def __getattr__(self, key):\n *         if key == 'n':             # <<<<<<<<<<<<<<\n *             return self._n\n *         raise AttributeError(key)\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":70\n *         if key == 'n':\n *             return self._n\n *         raise AttributeError(key)             # <<<<<<<<<<<<<<\n * \n * # python class to wrap Mask array in C\n */\n  __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_AttributeError, __pyx_v_key); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 70, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_2);\n  __Pyx_Raise(__pyx_t_2, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __PYX_ERR(0, 70, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":67\n *                 free(self._R[i].cnts)\n *             free(self._R)\n *     def __getattr__(self, key):             # <<<<<<<<<<<<<<\n *         if key == 'n':\n *             return self._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__getattr__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_7__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4RLEs_6__reduce_cython__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4RLEs_6__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # 
<<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 2, __pyx_L1_error)\n\n  /* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_4RLEs_9__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4RLEs_8__setstate_cython__(((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject 
*__pyx_pf_11pycocotools_5_mask_4RLEs_8__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 4, __pyx_L1_error)\n\n  /* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"pycocotools._mask.RLEs.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":80\n *     cdef siz _n\n * \n *     def __cinit__(self, h, w, n):             # <<<<<<<<<<<<<<\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n */\n\n/* Python wrapper */\nstatic int __pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic int __pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  
PyObject *__pyx_v_h = 0;\n  PyObject *__pyx_v_w = 0;\n  PyObject *__pyx_v_n = 0;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__cinit__ (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_h,&__pyx_n_s_w,&__pyx_n_s_n,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, 1); __PYX_ERR(0, 80, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, 2); __PYX_ERR(0, 80, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"__cinit__\") < 0)) __PYX_ERR(0, 80, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = 
PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_h = values[0];\n    __pyx_v_w = values[1];\n    __pyx_v_n = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"__cinit__\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 80, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_11pycocotools_5_mask_5Masks___cinit__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, PyObject *__pyx_v_h, PyObject *__pyx_v_w, PyObject *__pyx_v_n) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  size_t __pyx_t_4;\n  siz __pyx_t_5;\n  __Pyx_RefNannySetupContext(\"__cinit__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":81\n * \n *     def __cinit__(self, h, w, n):\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))             # <<<<<<<<<<<<<<\n *         self._h = h\n *         self._w = w\n */\n  __pyx_t_1 = PyNumber_Multiply(__pyx_v_h, __pyx_v_w); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_FromSize_t((sizeof(byte))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_t_1); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_4 = __Pyx_PyInt_As_size_t(__pyx_t_3); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 81, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_self->_mask = ((byte *)malloc(__pyx_t_4));\n\n  /* \"pycocotools/_mask.pyx\":82\n *     def __cinit__(self, h, w, n):\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h             # <<<<<<<<<<<<<<\n *         self._w = w\n *         self._n = n\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_h); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 82, __pyx_L1_error)\n  __pyx_v_self->_h = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":83\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n *         self._w = w             # <<<<<<<<<<<<<<\n *         self._n = n\n *     # def __dealloc__(self):\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_w); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 83, __pyx_L1_error)\n  __pyx_v_self->_w = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":84\n *         self._h = h\n *         self._w = w\n *         self._n = n             # <<<<<<<<<<<<<<\n *     # def __dealloc__(self):\n *         # the memory management of _mask has been passed to np.ndarray\n */\n  __pyx_t_5 = __Pyx_PyInt_As_siz(__pyx_v_n); if (unlikely((__pyx_t_5 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 84, __pyx_L1_error)\n  __pyx_v_self->_n = __pyx_t_5;\n\n  /* \"pycocotools/_mask.pyx\":80\n *     cdef siz _n\n * \n *     def __cinit__(self, h, w, n):             # <<<<<<<<<<<<<<\n *         self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n *         self._h = h\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  
__Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__cinit__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":90\n * \n *     # called when passing into np.array() and return an np.ndarray in column-major order\n *     def __array__(self):             # <<<<<<<<<<<<<<\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_3__array__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_3__array__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__array__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks_2__array__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_2__array__(struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self) {\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_ndarray = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"__array__\", 0);\n\n  /* \"pycocotools/_mask.pyx\":92\n *     def __array__(self):\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n             # <<<<<<<<<<<<<<\n *         # Create a 1D array, and reshape it to fortran/Matlab column-major array\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, 
self._w, self._n), order='F')\n */\n  (__pyx_v_shape[0]) = ((((npy_intp)__pyx_v_self->_h) * __pyx_v_self->_w) * __pyx_v_self->_n);\n\n  /* \"pycocotools/_mask.pyx\":94\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n *         # Create a 1D array, and reshape it to fortran/Matlab column-major array\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')             # <<<<<<<<<<<<<<\n *         # The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n */\n  __pyx_t_1 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT8, __pyx_v_self->_mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_reshape); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_self->_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_self->_w); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_self->_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_4 = 0;\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, 
__pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 94, __pyx_L1_error)\n  __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_ndarray = __pyx_t_3;\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":96\n *         ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')\n *         # The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *         return ndarray\n * \n */\n  if (!(likely(((__pyx_v_ndarray) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_ndarray, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 96, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_ndarray), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":97\n *         # The _mask allocated by Masks is now handled by ndarray\n *         PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n *         return ndarray             # <<<<<<<<<<<<<<\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_ndarray);\n  __pyx_r = __pyx_v_ndarray;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":90\n * \n *     # called when passing into np.array() and return an np.ndarray in column-major order\n *     def __array__(self):             # <<<<<<<<<<<<<<\n *         cdef np.npy_intp shape[1]\n *         shape[0] = <np.npy_intp> self._h*self._w*self._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  
__Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__array__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_ndarray);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_5__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_5__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__reduce_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks_4__reduce_cython__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_4__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__reduce_cython__\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 2, __pyx_L1_error)\n\n  /* \"(tree fragment)\":1\n * def __reduce_cython__(self):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__reduce_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_7__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5Masks_7__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__setstate_cython__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_5Masks_6__setstate_cython__(((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_5Masks_6__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__setstate_cython__\", 0);\n\n  /* 
\"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_Raise(__pyx_t_1, 0, 0, 0);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __PYX_ERR(1, 4, __pyx_L1_error)\n\n  /* \"(tree fragment)\":3\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):             # <<<<<<<<<<<<<<\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"pycocotools._mask.Masks.__setstate_cython__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_1_toString = {\"_toString\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_1_toString, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_1_toString(PyObject *__pyx_self, PyObject *__pyx_v_Rs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_toString (wrapper)\", 0);\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_Rs), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, 
\"Rs\", 0))) __PYX_ERR(0, 100, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask__toString(__pyx_self, ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_v_Rs));\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask__toString(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs) {\n  siz __pyx_v_n;\n  PyObject *__pyx_v_py_string = 0;\n  char *__pyx_v_c_string;\n  PyObject *__pyx_v_objs = NULL;\n  siz __pyx_v_i;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  siz __pyx_t_2;\n  siz __pyx_t_3;\n  siz __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  __Pyx_RefNannySetupContext(\"_toString\", 0);\n\n  /* \"pycocotools/_mask.pyx\":101\n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):\n *     cdef siz n = Rs.n             # <<<<<<<<<<<<<<\n *     cdef bytes py_string\n *     cdef char* c_string\n */\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 101, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_2 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 101, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_n = __pyx_t_2;\n\n  /* \"pycocotools/_mask.pyx\":104\n *     cdef bytes py_string\n *     cdef char* c_string\n *     objs = []             # <<<<<<<<<<<<<<\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n */\n  __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 104, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v_objs = ((PyObject*)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* 
\"pycocotools/_mask.pyx\":105\n *     cdef char* c_string\n *     objs = []\n *     for i in range(n):             # <<<<<<<<<<<<<<\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string\n */\n  __pyx_t_2 = __pyx_v_n;\n  __pyx_t_3 = __pyx_t_2;\n  for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {\n    __pyx_v_i = __pyx_t_4;\n\n    /* \"pycocotools/_mask.pyx\":106\n *     objs = []\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )             # <<<<<<<<<<<<<<\n *         py_string = c_string\n *         objs.append({\n */\n    __pyx_v_c_string = rleToString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_v_i]))));\n\n    /* \"pycocotools/_mask.pyx\":107\n *     for i in range(n):\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string             # <<<<<<<<<<<<<<\n *         objs.append({\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n */\n    __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_c_string); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 107, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_1));\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":109\n *         py_string = c_string\n *         objs.append({\n *             'size': [Rs._R[i].h, Rs._R[i].w],             # <<<<<<<<<<<<<<\n *             'counts': py_string\n *         })\n */\n    __pyx_t_1 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_PyInt_From_siz((__pyx_v_Rs->_R[__pyx_v_i]).w); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_7 = PyList_New(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    
__Pyx_GIVEREF(__pyx_t_5);\n    PyList_SET_ITEM(__pyx_t_7, 0, __pyx_t_5);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyList_SET_ITEM(__pyx_t_7, 1, __pyx_t_6);\n    __pyx_t_5 = 0;\n    __pyx_t_6 = 0;\n    if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_size, __pyx_t_7) < 0) __PYX_ERR(0, 109, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n\n    /* \"pycocotools/_mask.pyx\":110\n *         objs.append({\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n *             'counts': py_string             # <<<<<<<<<<<<<<\n *         })\n *         free(c_string)\n */\n    if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_counts, __pyx_v_py_string) < 0) __PYX_ERR(0, 109, __pyx_L1_error)\n\n    /* \"pycocotools/_mask.pyx\":108\n *         c_string = rleToString( <RLE*> &Rs._R[i] )\n *         py_string = c_string\n *         objs.append({             # <<<<<<<<<<<<<<\n *             'size': [Rs._R[i].h, Rs._R[i].w],\n *             'counts': py_string\n */\n    __pyx_t_8 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_1); if (unlikely(__pyx_t_8 == ((int)-1))) __PYX_ERR(0, 108, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":112\n *             'counts': py_string\n *         })\n *         free(c_string)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n    free(__pyx_v_c_string);\n  }\n\n  /* \"pycocotools/_mask.pyx\":113\n *         })\n *         free(c_string)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  
__Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_AddTraceback(\"pycocotools._mask._toString\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_py_string);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3_frString = {\"_frString\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3_frString, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3_frString(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_frString (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_2_frString(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_2_frString(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_py_string = 0;\n  char *__pyx_v_c_string;\n  PyObject *__pyx_v_i = NULL;\n  PyObject *__pyx_v_obj = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *(*__pyx_t_4)(PyObject *);\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  char *__pyx_t_7;\n  Py_ssize_t __pyx_t_8;\n  siz __pyx_t_9;\n  siz __pyx_t_10;\n  
__Pyx_RefNannySetupContext(\"_frString\", 0);\n\n  /* \"pycocotools/_mask.pyx\":117\n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):\n *     cdef siz n = len(rleObjs)             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     cdef bytes py_string\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_rleObjs); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 117, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":118\n * def _frString(rleObjs):\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     cdef bytes py_string\n *     cdef char* c_string\n */\n  __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 118, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 118, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":121\n *     cdef bytes py_string\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):             # <<<<<<<<<<<<<<\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n */\n  __Pyx_INCREF(__pyx_int_0);\n  __pyx_t_3 = __pyx_int_0;\n  if (likely(PyList_CheckExact(__pyx_v_rleObjs)) || PyTuple_CheckExact(__pyx_v_rleObjs)) {\n    __pyx_t_2 = __pyx_v_rleObjs; __Pyx_INCREF(__pyx_t_2); __pyx_t_1 = 0;\n    __pyx_t_4 = NULL;\n  } else {\n    __pyx_t_1 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_rleObjs); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 121, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 121, __pyx_L1_error)\n  }\n  for (;;) {\n    if (likely(!__pyx_t_4)) {\n      if 
(likely(PyList_CheckExact(__pyx_t_2))) {\n        if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_2)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      } else {\n        if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 121, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      }\n    } else {\n      __pyx_t_5 = __pyx_t_4(__pyx_t_2);\n      if (unlikely(!__pyx_t_5)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(0, 121, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_5);\n    }\n    __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __Pyx_INCREF(__pyx_t_3);\n    __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_3);\n    __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 121, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_3);\n    __pyx_t_3 = __pyx_t_5;\n    __pyx_t_5 = 0;\n\n    /* \"pycocotools/_mask.pyx\":122\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):\n *         py_string = str(obj['counts'])             # <<<<<<<<<<<<<<\n *        
 c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n */\n    __pyx_t_5 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_counts); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyString_Type)), __pyx_t_5); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    if (!(likely(PyBytes_CheckExact(__pyx_t_6))||((__pyx_t_6) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"bytes\", Py_TYPE(__pyx_t_6)->tp_name), 0))) __PYX_ERR(0, 122, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_py_string, ((PyObject*)__pyx_t_6));\n    __pyx_t_6 = 0;\n\n    /* \"pycocotools/_mask.pyx\":123\n *     for i, obj in enumerate(rleObjs):\n *         py_string = str(obj['counts'])\n *         c_string = py_string             # <<<<<<<<<<<<<<\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n *     return Rs\n */\n    if (unlikely(__pyx_v_py_string == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"expected bytes, NoneType found\");\n      __PYX_ERR(0, 123, __pyx_L1_error)\n    }\n    __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_py_string); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(0, 123, __pyx_L1_error)\n    __pyx_v_c_string = __pyx_t_7;\n\n    /* \"pycocotools/_mask.pyx\":124\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )             # <<<<<<<<<<<<<<\n *     return Rs\n * \n */\n    __pyx_t_8 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_8 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_size); if 
(unlikely(!__pyx_t_6)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_9 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_9 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_5 = __Pyx_PyObject_Dict_GetItem(__pyx_v_obj, __pyx_n_s_size); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_5, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_10 = __Pyx_PyInt_As_siz(__pyx_t_6); if (unlikely((__pyx_t_10 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 124, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    rleFrString(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_8]))), ((char *)__pyx_v_c_string), __pyx_t_9, __pyx_t_10);\n\n    /* \"pycocotools/_mask.pyx\":121\n *     cdef bytes py_string\n *     cdef char* c_string\n *     for i, obj in enumerate(rleObjs):             # <<<<<<<<<<<<<<\n *         py_string = str(obj['counts'])\n *         c_string = py_string\n */\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":125\n *         c_string = py_string\n *         rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n *     return Rs             # <<<<<<<<<<<<<<\n * \n * # encode mask to RLEs objects\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(((PyObject *)__pyx_v_Rs));\n  __pyx_r = ((PyObject *)__pyx_v_Rs);\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE 
format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"pycocotools._mask._frString\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_py_string);\n  __Pyx_XDECREF(__pyx_v_i);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_5encode = {\"encode\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5encode, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_5encode(PyObject *__pyx_self, PyObject *__pyx_v_mask) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"encode (wrapper)\", 0);\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_mask), __pyx_ptype_5numpy_ndarray, 1, \"mask\", 0))) __PYX_ERR(0, 129, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_4encode(__pyx_self, ((PyArrayObject *)__pyx_v_mask));\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_4encode(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject 
*__pyx_v_mask) {\n  npy_intp __pyx_v_h;\n  npy_intp __pyx_v_w;\n  npy_intp __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_mask;\n  __Pyx_Buffer __pyx_pybuffer_mask;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  npy_intp __pyx_t_1;\n  npy_intp __pyx_t_2;\n  npy_intp __pyx_t_3;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  __Pyx_RefNannySetupContext(\"encode\", 0);\n  __pyx_pybuffer_mask.pybuffer.buf = NULL;\n  __pyx_pybuffer_mask.refcount = 0;\n  __pyx_pybuffernd_mask.data = NULL;\n  __pyx_pybuffernd_mask.rcbuffer = &__pyx_pybuffer_mask;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_mask.rcbuffer->pybuffer, (PyObject*)__pyx_v_mask, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_F_CONTIGUOUS, 3, 0, __pyx_stack) == -1)) __PYX_ERR(0, 129, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_mask.diminfo[0].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_mask.diminfo[0].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_mask.diminfo[1].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_mask.diminfo[1].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_mask.diminfo[2].strides = __pyx_pybuffernd_mask.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_mask.diminfo[2].shape = __pyx_pybuffernd_mask.rcbuffer->pybuffer.shape[2];\n\n  /* \"pycocotools/_mask.pyx\":130\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n */\n  __pyx_t_1 = (__pyx_v_mask->dimensions[0]);\n  __pyx_t_2 = 
(__pyx_v_mask->dimensions[1]);\n  __pyx_t_3 = (__pyx_v_mask->dimensions[2]);\n  __pyx_v_h = __pyx_t_1;\n  __pyx_v_w = __pyx_t_2;\n  __pyx_v_n = __pyx_t_3;\n\n  /* \"pycocotools/_mask.pyx\":131\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)\n */\n  __pyx_t_4 = __Pyx_PyInt_From_Py_intptr_t(__pyx_v_n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 131, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":132\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)             # <<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n  rleEncode(__pyx_v_Rs->_R, ((byte *)__pyx_v_mask->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":133\n *     cdef RLEs Rs = RLEs(n)\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_toString); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 133, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_6 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {\n    __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4);\n    if (likely(__pyx_t_6)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n      __Pyx_INCREF(__pyx_t_6);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_4, function);\n    }\n  
}\n  __pyx_t_5 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_6, ((PyObject *)__pyx_v_Rs)) : __Pyx_PyObject_CallOneArg(__pyx_t_4, ((PyObject *)__pyx_v_Rs));\n  __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n  if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 133, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_v_objs = __pyx_t_5;\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":134\n *     rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * # decode mask from compressed list of RLE string or RLEs object\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.encode\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_mask.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE 
string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_7decode = {\"decode\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_7decode, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_7decode(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"decode (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_6decode(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_6decode(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_Masks *__pyx_v_masks = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  siz __pyx_t_4;\n  siz __pyx_t_5;\n  siz __pyx_t_6;\n  PyObject *__pyx_t_7 = NULL;\n  __Pyx_RefNannySetupContext(\"decode\", 0);\n\n  /* \"pycocotools/_mask.pyx\":138\n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 138, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = 
PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_rleObjs) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 138, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 138, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":139\n * def decode(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n             # <<<<<<<<<<<<<<\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n */\n  __pyx_t_4 = (__pyx_v_Rs->_R[0]).h;\n  __pyx_t_5 = (__pyx_v_Rs->_R[0]).w;\n  __pyx_t_6 = __pyx_v_Rs->_n;\n  __pyx_v_h = __pyx_t_4;\n  __pyx_v_w = __pyx_t_5;\n  __pyx_v_n = __pyx_t_6;\n\n  /* \"pycocotools/_mask.pyx\":140\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)             # <<<<<<<<<<<<<<\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n *     return np.array(masks)\n */\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyInt_From_siz(__pyx_v_w); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_7 = PyTuple_New(3); if 
(unlikely(!__pyx_t_7)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_7);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_2);\n  PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_3);\n  __pyx_t_1 = 0;\n  __pyx_t_2 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_Masks), __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 140, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n  __pyx_v_masks = ((struct __pyx_obj_11pycocotools_5_mask_Masks *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":141\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );             # <<<<<<<<<<<<<<\n *     return np.array(masks)\n * \n */\n  rleDecode(((RLE *)__pyx_v_Rs->_R), __pyx_v_masks->_mask, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":142\n *     masks = Masks(h, w, n)\n *     rleDecode( <RLE*>Rs._R, masks._mask, n );\n *     return np.array(masks)             # <<<<<<<<<<<<<<\n * \n * def merge(rleObjs, bint intersect=0):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_7);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n  __pyx_t_7 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_7)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_7);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_3 = (__pyx_t_7) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_7, ((PyObject *)__pyx_v_masks)) : __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_masks));\n  __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n  if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 142, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_r = __pyx_t_3;\n  __pyx_t_3 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_AddTraceback(\"pycocotools._mask.decode\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_masks);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_9merge = {\"merge\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_9merge, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_9merge(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_rleObjs = 0;\n  int __pyx_v_intersect;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"merge (wrapper)\", 0);\n  {\n    static PyObject 
**__pyx_pyargnames[] = {&__pyx_n_s_rleObjs,&__pyx_n_s_intersect,0};\n    PyObject* values[2] = {0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_rleObjs)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (kw_args > 0) {\n          PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_intersect);\n          if (value) { values[1] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"merge\") < 0)) __PYX_ERR(0, 144, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_rleObjs = values[0];\n    if (values[1]) {\n      __pyx_v_intersect = __Pyx_PyObject_IsTrue(values[1]); if (unlikely((__pyx_v_intersect == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L3_error)\n    } else {\n      __pyx_v_intersect = ((int)0);\n    }\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"merge\", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 144, __pyx_L3_error)\n  __pyx_L3_error:;\n  
__Pyx_AddTraceback(\"pycocotools._mask.merge\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_8merge(__pyx_self, __pyx_v_rleObjs, __pyx_v_intersect);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_8merge(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs, int __pyx_v_intersect) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_R = 0;\n  PyObject *__pyx_v_obj = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"merge\", 0);\n\n  /* \"pycocotools/_mask.pyx\":145\n * \n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 145, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_rleObjs) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 145, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 145, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":146\n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)             # <<<<<<<<<<<<<<\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n */\n  __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 146, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v_R = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":147\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)             # <<<<<<<<<<<<<<\n *     obj = _toString(R)[0]\n *     return obj\n */\n  rleMerge(((RLE *)__pyx_v_Rs->_R), ((RLE *)__pyx_v_R->_R), ((siz)__pyx_v_Rs->_n), __pyx_v_intersect);\n\n  /* \"pycocotools/_mask.pyx\":148\n *     cdef RLEs R = RLEs(1)\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]             # <<<<<<<<<<<<<<\n *     return obj\n * \n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    
if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, ((PyObject *)__pyx_v_R)) : __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_R));\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 148, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 148, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_obj = __pyx_t_2;\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":149\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n *     return obj             # <<<<<<<<<<<<<<\n * \n * def area(rleObjs):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_obj);\n  __pyx_r = __pyx_v_obj;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_AddTraceback(\"pycocotools._mask.merge\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_R);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = 
<uint*> malloc(Rs._n* sizeof(uint))\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_11area = {\"area\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_11area, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_11area(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"area (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_10area(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_10area(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  uint *__pyx_v__a;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_a = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"area\", 0);\n\n  /* \"pycocotools/_mask.pyx\":152\n * \n * def area(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n *     rleArea(Rs._R, Rs._n, _a)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 152, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) 
? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_rleObjs) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 152, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 152, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":153\n * def area(rleObjs):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))             # <<<<<<<<<<<<<<\n *     rleArea(Rs._R, Rs._n, _a)\n *     cdef np.npy_intp shape[1]\n */\n  __pyx_v__a = ((uint *)malloc((__pyx_v_Rs->_n * (sizeof(unsigned int)))));\n\n  /* \"pycocotools/_mask.pyx\":154\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n *     rleArea(Rs._R, Rs._n, _a)             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n\n */\n  rleArea(__pyx_v_Rs->_R, __pyx_v_Rs->_n, __pyx_v__a);\n\n  /* \"pycocotools/_mask.pyx\":156\n *     rleArea(Rs._R, Rs._n, _a)\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n             # <<<<<<<<<<<<<<\n *     a = np.array((Rs._n, ), dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n */\n  (__pyx_v_shape[0]) = ((npy_intp)__pyx_v_Rs->_n);\n\n  /* \"pycocotools/_mask.pyx\":157\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> Rs._n\n *     a = np.array((Rs._n, ), dtype=np.uint8)             # <<<<<<<<<<<<<<\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, 
__pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_Rs->_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3);\n  __pyx_t_3 = 0;\n  __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 157, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_a = __pyx_t_5;\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":158\n *     shape[0] = <np.npy_intp> Rs._n\n *     a = np.array((Rs._n, ), dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)             # <<<<<<<<<<<<<<\n *     
PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n *     return a\n */\n  __pyx_t_5 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_UINT32, __pyx_v__a); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 158, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_a, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":159\n *     a = np.array((Rs._n, ), dtype=np.uint8)\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     return a\n * \n */\n  if (!(likely(((__pyx_v_a) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_a, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 159, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_a), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":160\n *     a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n *     PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n *     return a             # <<<<<<<<<<<<<<\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_a);\n  __pyx_r = __pyx_v_a;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.area\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_a);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_13iou = {\"iou\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_13iou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_13iou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_dt = 0;\n  PyObject *__pyx_v_gt = 0;\n  PyObject *__pyx_v_pyiscrowd = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"iou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_pyiscrowd,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, 1); __PYX_ERR(0, 163, __pyx_L3_error)\n        }\n   
     CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyiscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, 2); __PYX_ERR(0, 163, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"iou\") < 0)) __PYX_ERR(0, 163, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_dt = values[0];\n    __pyx_v_gt = values[1];\n    __pyx_v_pyiscrowd = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"iou\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 163, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_12iou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_pyiscrowd);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":164\n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_1_preproc = {\"_preproc\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_1_preproc, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_1_preproc(PyObject *__pyx_self, PyObject *__pyx_v_objs) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_preproc (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_self, ((PyObject *)__pyx_v_objs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou__preproc(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_objs) {\n  PyObject *__pyx_v_isbox = NULL;\n  PyObject *__pyx_v_isrle = NULL;\n  PyObject *__pyx_v_obj = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  int __pyx_t_7;\n  int __pyx_t_8;\n  PyObject *__pyx_t_9 = NULL;\n  PyObject *__pyx_t_10 = NULL;\n  PyObject *(*__pyx_t_11)(PyObject *);\n  PyObject *__pyx_t_12 = NULL;\n  Py_ssize_t __pyx_t_13;\n  PyObject *__pyx_t_14 = NULL;\n  __Pyx_RefNannySetupContext(\"_preproc\", 0);\n  __Pyx_INCREF(__pyx_v_objs);\n\n  /* \"pycocotools/_mask.pyx\":165\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):\n *         if len(objs) == 0:             # <<<<<<<<<<<<<<\n *             return objs\n *         if type(objs) == np.ndarray:\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_objs); if (unlikely(__pyx_t_1 == 
((Py_ssize_t)-1))) __PYX_ERR(0, 165, __pyx_L1_error)\n  __pyx_t_2 = ((__pyx_t_1 == 0) != 0);\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":166\n *     def _preproc(objs):\n *         if len(objs) == 0:\n *             return objs             # <<<<<<<<<<<<<<\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(__pyx_v_objs);\n    __pyx_r = __pyx_v_objs;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":165\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):\n *         if len(objs) == 0:             # <<<<<<<<<<<<<<\n *             return objs\n *         if type(objs) == np.ndarray:\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":167\n *         if len(objs) == 0:\n *             return objs\n *         if type(objs) == np.ndarray:             # <<<<<<<<<<<<<<\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))\n */\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 167, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 167, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":168\n *             return objs\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n */\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 168, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_2 = 
((__pyx_t_1 == 1) != 0);\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":169\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))             # <<<<<<<<<<<<<<\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n */\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_5 = __Pyx_GetItemInt(__pyx_v_objs, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __pyx_t_6 = PyTuple_New(2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GIVEREF(__pyx_t_5);\n      PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_5);\n      __Pyx_INCREF(__pyx_int_1);\n      __Pyx_GIVEREF(__pyx_int_1);\n      PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_int_1);\n      __pyx_t_5 = 0;\n      __pyx_t_5 = NULL;\n      if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {\n        __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4);\n        if (likely(__pyx_t_5)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n          __Pyx_INCREF(__pyx_t_5);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_4, function);\n        }\n      }\n      __pyx_t_3 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_6);\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 169, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);\n      __pyx_t_3 = 0;\n\n      /* \"pycocotools/_mask.pyx\":168\n *             return objs\n *         if type(objs) == np.ndarray:\n *             if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n */\n    }\n\n    /* \"pycocotools/_mask.pyx\":171\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:             # <<<<<<<<<<<<<<\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n */\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_1 = PyObject_Length(__pyx_t_3); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_7 = ((!((__pyx_t_1 == 2) != 0)) != 0);\n    if (!__pyx_t_7) {\n    } else {\n      __pyx_t_2 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_3, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 
0;\n    __pyx_t_3 = __Pyx_PyInt_EqObjC(__pyx_t_4, __pyx_int_4, 4, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_7 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_7 < 0)) __PYX_ERR(0, 171, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_8 = ((!__pyx_t_7) != 0);\n    __pyx_t_2 = __pyx_t_8;\n    __pyx_L7_bool_binop_done:;\n    if (unlikely(__pyx_t_2)) {\n\n      /* \"pycocotools/_mask.pyx\":172\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')             # <<<<<<<<<<<<<<\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 172, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(0, 172, __pyx_L1_error)\n\n      /* \"pycocotools/_mask.pyx\":171\n *                 objs = objs.reshape((objs[0], 1))\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:             # <<<<<<<<<<<<<<\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n */\n    }\n\n    /* \"pycocotools/_mask.pyx\":173\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)             # <<<<<<<<<<<<<<\n *         elif type(objs) == list:\n *             # check if 
list is in box format and convert it to np.ndarray\n */\n    __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_astype); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_double); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = NULL;\n    if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {\n      __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4);\n      if (likely(__pyx_t_6)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n        __Pyx_INCREF(__pyx_t_6);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_4, function);\n      }\n    }\n    __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_5);\n    __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 173, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":167\n *         if len(objs) == 0:\n *             return objs\n *         if type(objs) == np.ndarray:             # <<<<<<<<<<<<<<\n *             if len(objs.shape) == 1:\n *                 objs = objs.reshape((objs[0], 1))\n */\n    goto __pyx_L4;\n  }\n\n  /* \"pycocotools/_mask.pyx\":174\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:             # <<<<<<<<<<<<<<\n *             # check if list is in box format 
and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n */\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_objs)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 174, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 174, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (likely(__pyx_t_2)) {\n\n    /* \"pycocotools/_mask.pyx\":176\n *         elif type(objs) == list:\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))             # <<<<<<<<<<<<<<\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_all); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_array); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = PyList_New(0); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {\n      __pyx_t_10 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_10); __pyx_t_1 = 0;\n      __pyx_t_11 = NULL;\n    } else {\n      
__pyx_t_1 = -1; __pyx_t_10 = PyObject_GetIter(__pyx_v_objs); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      __pyx_t_11 = Py_TYPE(__pyx_t_10)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 176, __pyx_L1_error)\n    }\n    for (;;) {\n      if (likely(!__pyx_t_11)) {\n        if (likely(PyList_CheckExact(__pyx_t_10))) {\n          if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyList_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n        } else {\n          if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 176, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n        }\n      } else {\n        __pyx_t_12 = __pyx_t_11(__pyx_t_10);\n        if (unlikely(!__pyx_t_12)) {\n          PyObject* exc_type = PyErr_Occurred();\n          if (exc_type) {\n            if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n            else __PYX_ERR(0, 176, __pyx_L1_error)\n          }\n          break;\n        }\n        __Pyx_GOTREF(__pyx_t_12);\n      }\n      __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_12);\n      __pyx_t_12 = 0;\n      __pyx_t_13 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_13 == 
((Py_ssize_t)-1))) __PYX_ERR(0, 176, __pyx_L1_error)\n      __pyx_t_2 = (__pyx_t_13 == 4);\n      if (__pyx_t_2) {\n      } else {\n        __pyx_t_14 = __Pyx_PyBool_FromLong(__pyx_t_2); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_14);\n        __pyx_t_12 = __pyx_t_14;\n        __pyx_t_14 = 0;\n        goto __pyx_L11_bool_binop_done;\n      }\n      __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_14); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 176, __pyx_L1_error)\n      if (!__pyx_t_2) {\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      } else {\n        __Pyx_INCREF(__pyx_t_14);\n        __pyx_t_12 = __pyx_t_14;\n        __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n        goto __pyx_L11_bool_binop_done;\n      }\n      __pyx_t_14 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_14); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_INCREF(__pyx_t_14);\n      __pyx_t_12 = __pyx_t_14;\n      __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0;\n      __pyx_L11_bool_binop_done:;\n      if (unlikely(__Pyx_ListComp_Append(__pyx_t_6, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 176, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __pyx_t_10 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_9))) {\n      __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_9);\n      if (likely(__pyx_t_10)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_9);\n        __Pyx_INCREF(__pyx_t_10);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_9, function);\n      }\n    }\n    __pyx_t_4 = (__pyx_t_10) ? 
__Pyx_PyObject_Call2Args(__pyx_t_9, __pyx_t_10, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_9, __pyx_t_6);\n    __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __pyx_t_9 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {\n      __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_5);\n      if (likely(__pyx_t_9)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n        __Pyx_INCREF(__pyx_t_9);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_5, function);\n      }\n    }\n    __pyx_t_3 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_9, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_4);\n    __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 176, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_v_isbox = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":177\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))             # <<<<<<<<<<<<<<\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_all); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_GetModuleGlobalName(__pyx_t_9, __pyx_n_s_np); if 
(unlikely(!__pyx_t_9)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_9, __pyx_n_s_array); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __pyx_t_9 = PyList_New(0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    if (likely(PyList_CheckExact(__pyx_v_objs)) || PyTuple_CheckExact(__pyx_v_objs)) {\n      __pyx_t_10 = __pyx_v_objs; __Pyx_INCREF(__pyx_t_10); __pyx_t_1 = 0;\n      __pyx_t_11 = NULL;\n    } else {\n      __pyx_t_1 = -1; __pyx_t_10 = PyObject_GetIter(__pyx_v_objs); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_10);\n      __pyx_t_11 = Py_TYPE(__pyx_t_10)->tp_iternext; if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 177, __pyx_L1_error)\n    }\n    for (;;) {\n      if (likely(!__pyx_t_11)) {\n        if (likely(PyList_CheckExact(__pyx_t_10))) {\n          if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyList_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 177, __pyx_L1_error)\n          __Pyx_GOTREF(__pyx_t_12);\n          #endif\n        } else {\n          if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_10)) break;\n          #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n          __pyx_t_12 = PyTuple_GET_ITEM(__pyx_t_10, __pyx_t_1); __Pyx_INCREF(__pyx_t_12); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)\n          #else\n          __pyx_t_12 = PySequence_ITEM(__pyx_t_10, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 177, __pyx_L1_error)\n          
__Pyx_GOTREF(__pyx_t_12);\n          #endif\n        }\n      } else {\n        __pyx_t_12 = __pyx_t_11(__pyx_t_10);\n        if (unlikely(!__pyx_t_12)) {\n          PyObject* exc_type = PyErr_Occurred();\n          if (exc_type) {\n            if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n            else __PYX_ERR(0, 177, __pyx_L1_error)\n          }\n          break;\n        }\n        __Pyx_GOTREF(__pyx_t_12);\n      }\n      __Pyx_XDECREF_SET(__pyx_v_obj, __pyx_t_12);\n      __pyx_t_12 = 0;\n      __pyx_t_12 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_12); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 177, __pyx_L1_error)\n      if (unlikely(__Pyx_ListComp_Append(__pyx_t_9, (PyObject*)__pyx_t_12))) __PYX_ERR(0, 177, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __pyx_t_10 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) {\n      __pyx_t_10 = PyMethod_GET_SELF(__pyx_t_6);\n      if (likely(__pyx_t_10)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);\n        __Pyx_INCREF(__pyx_t_10);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_6, function);\n      }\n    }\n    __pyx_t_5 = (__pyx_t_10) ? 
__Pyx_PyObject_Call2Args(__pyx_t_6, __pyx_t_10, __pyx_t_9) : __Pyx_PyObject_CallOneArg(__pyx_t_6, __pyx_t_9);\n    __Pyx_XDECREF(__pyx_t_10); __pyx_t_10 = 0;\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {\n      __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_4);\n      if (likely(__pyx_t_6)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);\n        __Pyx_INCREF(__pyx_t_6);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_4, function);\n      }\n    }\n    __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_5);\n    __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 177, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_v_isrle = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":178\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:             # <<<<<<<<<<<<<<\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:\n */\n    __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isbox); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 178, __pyx_L1_error)\n    if (__pyx_t_2) {\n\n      /* \"pycocotools/_mask.pyx\":179\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)             # <<<<<<<<<<<<<<\n *                 if len(objs.shape) == 1:\n *                     objs = 
objs.reshape((1,objs.shape[0]))\n */\n      __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_array); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_v_objs);\n      __Pyx_GIVEREF(__pyx_v_objs);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_objs);\n      __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_double); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_9);\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_9) < 0) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n      __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_3, __pyx_t_5); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 179, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_9);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_9);\n      __pyx_t_9 = 0;\n\n      /* \"pycocotools/_mask.pyx\":180\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n */\n      __pyx_t_9 
= __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 180, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_9);\n      __pyx_t_1 = PyObject_Length(__pyx_t_9); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 180, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n      __pyx_t_2 = ((__pyx_t_1 == 1) != 0);\n      if (__pyx_t_2) {\n\n        /* \"pycocotools/_mask.pyx\":181\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))             # <<<<<<<<<<<<<<\n *             elif isrle:\n *                 objs = _frString(objs)\n */\n        __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_reshape); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_objs, __pyx_n_s_shape); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_4 = __Pyx_GetItemInt(__pyx_t_3, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_INCREF(__pyx_int_1);\n        __Pyx_GIVEREF(__pyx_int_1);\n        PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_int_1);\n        __Pyx_GIVEREF(__pyx_t_4);\n        PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4);\n        __pyx_t_4 = 0;\n        __pyx_t_4 = NULL;\n        if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) {\n          __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_5);\n          if (likely(__pyx_t_4)) {\n            PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n            __Pyx_INCREF(__pyx_t_4);\n            __Pyx_INCREF(function);\n  
          __Pyx_DECREF_SET(__pyx_t_5, function);\n          }\n        }\n        __pyx_t_9 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_4, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_3);\n        __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 181, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_9);\n        __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n        __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_9);\n        __pyx_t_9 = 0;\n\n        /* \"pycocotools/_mask.pyx\":180\n *             if isbox:\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:             # <<<<<<<<<<<<<<\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n */\n      }\n\n      /* \"pycocotools/_mask.pyx\":178\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n *             isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n *             if isbox:             # <<<<<<<<<<<<<<\n *                 objs = np.array(objs, dtype=np.double)\n *                 if len(objs.shape) == 1:\n */\n      goto __pyx_L16;\n    }\n\n    /* \"pycocotools/_mask.pyx\":182\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:             # <<<<<<<<<<<<<<\n *                 objs = _frString(objs)\n *             else:\n */\n    __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_isrle); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 182, __pyx_L1_error)\n    if (likely(__pyx_t_2)) {\n\n      /* \"pycocotools/_mask.pyx\":183\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:\n *                 objs = _frString(objs)             # <<<<<<<<<<<<<<\n *             else:\n *                 raise Exception('list input can be 
bounding box (Nx4) or RLEs ([RLE])')\n */\n      __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_frString); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 183, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __pyx_t_3 = NULL;\n      if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {\n        __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_5);\n        if (likely(__pyx_t_3)) {\n          PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);\n          __Pyx_INCREF(__pyx_t_3);\n          __Pyx_INCREF(function);\n          __Pyx_DECREF_SET(__pyx_t_5, function);\n        }\n      }\n      __pyx_t_9 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_3, __pyx_v_objs) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_v_objs);\n      __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 183, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_9);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_DECREF_SET(__pyx_v_objs, __pyx_t_9);\n      __pyx_t_9 = 0;\n\n      /* \"pycocotools/_mask.pyx\":182\n *                 if len(objs.shape) == 1:\n *                     objs = objs.reshape((1,objs.shape[0]))\n *             elif isrle:             # <<<<<<<<<<<<<<\n *                 objs = _frString(objs)\n *             else:\n */\n      goto __pyx_L16;\n    }\n\n    /* \"pycocotools/_mask.pyx\":185\n *                 objs = _frString(objs)\n *             else:\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')             # <<<<<<<<<<<<<<\n *         else:\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n */\n    /*else*/ {\n      __pyx_t_9 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 185, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_9);\n      __Pyx_Raise(__pyx_t_9, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n      __PYX_ERR(0, 185, __pyx_L1_error)\n    }\n    __pyx_L16:;\n\n    /* \"pycocotools/_mask.pyx\":174\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:             # <<<<<<<<<<<<<<\n *             # check if list is in box format and convert it to np.ndarray\n *             isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n */\n    goto __pyx_L4;\n  }\n\n  /* \"pycocotools/_mask.pyx\":187\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')             # <<<<<<<<<<<<<<\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n */\n  /*else*/ {\n    __pyx_t_9 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 187, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_Raise(__pyx_t_9, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    __PYX_ERR(0, 187, __pyx_L1_error)\n  }\n  __pyx_L4:;\n\n  /* \"pycocotools/_mask.pyx\":188\n *         else:\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs             # <<<<<<<<<<<<<<\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_9);\n  __Pyx_XDECREF(__pyx_t_10);\n  __Pyx_XDECREF(__pyx_t_12);\n  __Pyx_XDECREF(__pyx_t_14);\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._preproc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_isbox);\n  __Pyx_XDECREF(__pyx_v_isrle);\n  __Pyx_XDECREF(__pyx_v_obj);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_3_rleIou = {\"_rleIou\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_3_rleIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt = 0;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  PyArrayObject *__pyx_v__iou = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_rleIou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};\n    PyObject* values[6] = {0,0,0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n        CYTHON_FALLTHROUGH;\n        case  5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n        CYTHON_FALLTHROUGH;\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        
CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 1); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 2); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n        if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 3); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  4:\n        if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 4); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  5:\n        if (likely((values[5] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, 5); __PYX_ERR(0, 189, __pyx_L3_error)\n        }\n      }\n     
 if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"_rleIou\") < 0)) __PYX_ERR(0, 189, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n      values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n      values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n    }\n    __pyx_v_dt = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)values[0]);\n    __pyx_v_gt = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)values[1]);\n    __pyx_v_iscrowd = ((PyArrayObject *)values[2]);\n    __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 189, __pyx_L3_error)\n    __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 189, __pyx_L3_error)\n    __pyx_v__iou = ((PyArrayObject *)values[5]);\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"_rleIou\", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 189, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._rleIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, \"dt\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_11pycocotools_5_mask_RLEs, 1, \"gt\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, \"iscrowd\", 0))) __PYX_ERR(0, 189, 
__pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, \"_iou\", 0))) __PYX_ERR(0, 189, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_2_rleIou(CYTHON_UNUSED PyObject *__pyx_self, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_dt, struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {\n  __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;\n  __Pyx_Buffer __pyx_pybuffer__iou;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_rleIou\", 0);\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n  __pyx_pybuffer__iou.pybuffer.buf = NULL;\n  __pyx_pybuffer__iou.refcount = 0;\n  __pyx_pybuffernd__iou.data = NULL;\n  __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if 
(unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  }\n  __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = __pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];\n\n  /* \"pycocotools/_mask.pyx\":190\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )             # <<<<<<<<<<<<<<\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n */\n  rleIou(((RLE *)__pyx_v_dt->_R), ((RLE *)__pyx_v_gt->_R), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._rleIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, 
<double*>_iou.data )\n *     def _len(obj):\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_5_bbIou = {\"_bbIou\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_5_bbIou(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyArrayObject *__pyx_v_dt = 0;\n  PyArrayObject *__pyx_v_gt = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  PyArrayObject *__pyx_v__iou = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_bbIou (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_dt,&__pyx_n_s_gt,&__pyx_n_s_iscrowd,&__pyx_n_s_m,&__pyx_n_s_n,&__pyx_n_s_iou,0};\n    PyObject* values[6] = {0,0,0,0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);\n        CYTHON_FALLTHROUGH;\n        case  5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n        CYTHON_FALLTHROUGH;\n        case  4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n        CYTHON_FALLTHROUGH;\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dt)) != 0)) kw_args--;\n        
else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_gt)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 1); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iscrowd)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 2); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  3:\n        if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_m)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 3); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  4:\n        if (likely((values[4] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 4); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  5:\n        if (likely((values[5] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_iou)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, 5); __PYX_ERR(0, 191, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"_bbIou\") < 0)) __PYX_ERR(0, 191, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 6) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n      values[3] = PyTuple_GET_ITEM(__pyx_args, 3);\n      values[4] = PyTuple_GET_ITEM(__pyx_args, 4);\n      values[5] = 
PyTuple_GET_ITEM(__pyx_args, 5);\n    }\n    __pyx_v_dt = ((PyArrayObject *)values[0]);\n    __pyx_v_gt = ((PyArrayObject *)values[1]);\n    __pyx_v_iscrowd = ((PyArrayObject *)values[2]);\n    __pyx_v_m = __Pyx_PyInt_As_siz(values[3]); if (unlikely((__pyx_v_m == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error)\n    __pyx_v_n = __Pyx_PyInt_As_siz(values[4]); if (unlikely((__pyx_v_n == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 191, __pyx_L3_error)\n    __pyx_v__iou = ((PyArrayObject *)values[5]);\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"_bbIou\", 1, 6, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 191, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._bbIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_dt), __pyx_ptype_5numpy_ndarray, 1, \"dt\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_gt), __pyx_ptype_5numpy_ndarray, 1, \"gt\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_iscrowd), __pyx_ptype_5numpy_ndarray, 1, \"iscrowd\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v__iou), __pyx_ptype_5numpy_ndarray, 1, \"_iou\", 0))) __PYX_ERR(0, 191, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(__pyx_self, __pyx_v_dt, __pyx_v_gt, __pyx_v_iscrowd, __pyx_v_m, __pyx_v_n, __pyx_v__iou);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_4_bbIou(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_dt, PyArrayObject *__pyx_v_gt, PyArrayObject *__pyx_v_iscrowd, siz __pyx_v_m, 
siz __pyx_v_n, PyArrayObject *__pyx_v__iou) {\n  __Pyx_LocalBuf_ND __pyx_pybuffernd__iou;\n  __Pyx_Buffer __pyx_pybuffer__iou;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_dt;\n  __Pyx_Buffer __pyx_pybuffer_dt;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_gt;\n  __Pyx_Buffer __pyx_pybuffer_gt;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_bbIou\", 0);\n  __pyx_pybuffer_dt.pybuffer.buf = NULL;\n  __pyx_pybuffer_dt.refcount = 0;\n  __pyx_pybuffernd_dt.data = NULL;\n  __pyx_pybuffernd_dt.rcbuffer = &__pyx_pybuffer_dt;\n  __pyx_pybuffer_gt.pybuffer.buf = NULL;\n  __pyx_pybuffer_gt.refcount = 0;\n  __pyx_pybuffernd_gt.data = NULL;\n  __pyx_pybuffernd_gt.rcbuffer = &__pyx_pybuffer_gt;\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  __pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n  __pyx_pybuffer__iou.pybuffer.buf = NULL;\n  __pyx_pybuffer__iou.refcount = 0;\n  __pyx_pybuffernd__iou.data = NULL;\n  __pyx_pybuffernd__iou.rcbuffer = &__pyx_pybuffer__iou;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_dt.rcbuffer->pybuffer, (PyObject*)__pyx_v_dt, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_dt.diminfo[0].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_dt.diminfo[0].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_dt.diminfo[1].strides = __pyx_pybuffernd_dt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_dt.diminfo[1].shape = __pyx_pybuffernd_dt.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_gt.rcbuffer->pybuffer, (PyObject*)__pyx_v_gt, 
&__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_gt.diminfo[0].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_gt.diminfo[0].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_gt.diminfo[1].strides = __pyx_pybuffernd_gt.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_gt.diminfo[1].shape = __pyx_pybuffernd_gt.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_v_iscrowd, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd__iou.rcbuffer->pybuffer, (PyObject*)__pyx_v__iou, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  }\n  __pyx_pybuffernd__iou.diminfo[0].strides = __pyx_pybuffernd__iou.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd__iou.diminfo[0].shape = __pyx_pybuffernd__iou.rcbuffer->pybuffer.shape[0];\n\n  /* \"pycocotools/_mask.pyx\":192\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )             # <<<<<<<<<<<<<<\n *     def _len(obj):\n *         cdef siz N = 0\n 
*/\n  bbIou(((BB)__pyx_v_dt->data), ((BB)__pyx_v_gt->data), __pyx_v_m, __pyx_v_n, ((byte *)__pyx_v_iscrowd->data), ((double *)__pyx_v__iou->data));\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n\n  /* function exit code */\n  __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._bbIou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd__iou.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_dt.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_gt.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":193\n *     def 
_bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_3iou_7_len = {\"_len\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_3iou_7_len, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_3iou_7_len(PyObject *__pyx_self, PyObject *__pyx_v_obj) {\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"_len (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_self, ((PyObject *)__pyx_v_obj));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_3iou_6_len(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_obj) {\n  siz __pyx_v_N;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  siz __pyx_t_3;\n  Py_ssize_t __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  __Pyx_RefNannySetupContext(\"_len\", 0);\n\n  /* \"pycocotools/_mask.pyx\":194\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n *         cdef siz N = 0             # <<<<<<<<<<<<<<\n *         if type(obj) == RLEs:\n *             N = obj.n\n */\n  __pyx_v_N = 0;\n\n  /* \"pycocotools/_mask.pyx\":195\n *     def _len(obj):\n *         cdef siz N = 0\n *         if type(obj) == RLEs:             # <<<<<<<<<<<<<<\n *             N = obj.n\n *         elif len(obj)==0:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject 
*)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 195, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 195, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":196\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n *             N = obj.n             # <<<<<<<<<<<<<<\n *         elif len(obj)==0:\n *             pass\n */\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 196, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 196, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_v_N = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":195\n *     def _len(obj):\n *         cdef siz N = 0\n *         if type(obj) == RLEs:             # <<<<<<<<<<<<<<\n *             N = obj.n\n *         elif len(obj)==0:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":197\n *         if type(obj) == RLEs:\n *             N = obj.n\n *         elif len(obj)==0:             # <<<<<<<<<<<<<<\n *             pass\n *         elif type(obj) == np.ndarray:\n */\n  __pyx_t_4 = PyObject_Length(__pyx_v_obj); if (unlikely(__pyx_t_4 == ((Py_ssize_t)-1))) __PYX_ERR(0, 197, __pyx_L1_error)\n  __pyx_t_2 = ((__pyx_t_4 == 0) != 0);\n  if (__pyx_t_2) {\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":199\n *         elif len(obj)==0:\n *             pass\n *         elif type(obj) == np.ndarray:             # <<<<<<<<<<<<<<\n *             N = obj.shape[0]\n *         return N\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_obj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 199, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 199, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":200\n *             pass\n *         elif type(obj) == np.ndarray:\n *             N = obj.shape[0]             # <<<<<<<<<<<<<<\n *         return N\n *     # convert iscrowd to numpy array\n */\n    __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_obj, __pyx_n_s_shape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_5 = __Pyx_GetItemInt(__pyx_t_1, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_t_3 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_3 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 200, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_v_N = __pyx_t_3;\n\n    /* \"pycocotools/_mask.pyx\":199\n *         elif len(obj)==0:\n *             pass\n *         elif type(obj) == np.ndarray:             # <<<<<<<<<<<<<<\n *             N = obj.shape[0]\n *         return N\n */\n  }\n  __pyx_L3:;\n\n  /* \"pycocotools/_mask.pyx\":201\n *         elif type(obj) == np.ndarray:\n *             N = obj.shape[0]\n *         return N             # <<<<<<<<<<<<<<\n *     # convert iscrowd to numpy array\n *     cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_N); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 201, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_r = __pyx_t_5;\n  __pyx_t_5 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, 
np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_AddTraceback(\"pycocotools._mask.iou._len\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_12iou(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_dt, PyObject *__pyx_v_gt, PyObject *__pyx_v_pyiscrowd) {\n  PyObject *__pyx_v__preproc = 0;\n  PyObject *__pyx_v__rleIou = 0;\n  PyObject *__pyx_v__bbIou = 0;\n  PyObject *__pyx_v__len = 0;\n  PyArrayObject *__pyx_v_iscrowd = 0;\n  siz __pyx_v_m;\n  siz __pyx_v_n;\n  double *__pyx_v__iou;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v__iouFun = NULL;\n  PyObject *__pyx_v_iou = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_iscrowd;\n  __Pyx_Buffer __pyx_pybuffer_iscrowd;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyArrayObject *__pyx_t_6 = NULL;\n  siz __pyx_t_7;\n  int __pyx_t_8;\n  int __pyx_t_9;\n  int __pyx_t_10;\n  PyObject *__pyx_t_11 = NULL;\n  __Pyx_RefNannySetupContext(\"iou\", 0);\n  __Pyx_INCREF(__pyx_v_dt);\n  __Pyx_INCREF(__pyx_v_gt);\n  __pyx_pybuffer_iscrowd.pybuffer.buf = NULL;\n  __pyx_pybuffer_iscrowd.refcount = 0;\n  __pyx_pybuffernd_iscrowd.data = NULL;\n  
__pyx_pybuffernd_iscrowd.rcbuffer = &__pyx_pybuffer_iscrowd;\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_1_preproc, 0, __pyx_n_s_iou_locals__preproc, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__10)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 164, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__preproc = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_3_rleIou, 0, __pyx_n_s_iou_locals__rleIou, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__12)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 189, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__rleIou = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, 
ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_5_bbIou, 0, __pyx_n_s_iou_locals__bbIou, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__14)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__bbIou = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n  __pyx_t_1 = __Pyx_CyFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3iou_7_len, 0, __pyx_n_s_iou_locals__len, NULL, __pyx_n_s_pycocotools__mask, __pyx_d, ((PyObject *)__pyx_codeobj__16)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 193, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_v__len = __pyx_t_1;\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":203\n *         return N\n *     # convert iscrowd to numpy array\n *     cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)             # <<<<<<<<<<<<<<\n *     # simple type checking\n *     cdef siz m, n\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = 
PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_v_pyiscrowd);\n  __Pyx_GIVEREF(__pyx_v_pyiscrowd);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_pyiscrowd);\n  __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_uint8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 203, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 203, __pyx_L1_error)\n  __pyx_t_6 = ((PyArrayObject *)__pyx_t_5);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n      __pyx_v_iscrowd = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 203, __pyx_L1_error)\n    } else {__pyx_pybuffernd_iscrowd.diminfo[0].strides = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_iscrowd.diminfo[0].shape = __pyx_pybuffernd_iscrowd.rcbuffer->pybuffer.shape[0];\n    }\n  }\n  __pyx_t_6 = 
0;\n  __pyx_v_iscrowd = ((PyArrayObject *)__pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":206\n *     # simple type checking\n *     cdef siz m, n\n *     dt = _preproc(dt)             # <<<<<<<<<<<<<<\n *     gt = _preproc(gt)\n *     m = _len(dt)\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 206, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_dt, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":207\n *     cdef siz m, n\n *     dt = _preproc(dt)\n *     gt = _preproc(gt)             # <<<<<<<<<<<<<<\n *     m = _len(dt)\n *     n = _len(gt)\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou__preproc(__pyx_v__preproc, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 207, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_DECREF_SET(__pyx_v_gt, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"pycocotools/_mask.pyx\":208\n *     dt = _preproc(dt)\n *     gt = _preproc(gt)\n *     m = _len(dt)             # <<<<<<<<<<<<<<\n *     n = _len(gt)\n *     if m == 0 or n == 0:\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_dt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 208, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_m = __pyx_t_7;\n\n  /* \"pycocotools/_mask.pyx\":209\n *     gt = _preproc(gt)\n *     m = _len(dt)\n *     n = _len(gt)             # <<<<<<<<<<<<<<\n *     if m == 0 or n == 0:\n *         return []\n */\n  __pyx_t_5 = __pyx_pf_11pycocotools_5_mask_3iou_6_len(__pyx_v__len, __pyx_v_gt); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 209, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_7 = __Pyx_PyInt_As_siz(__pyx_t_5); if (unlikely((__pyx_t_7 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 209, 
__pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_v_n = __pyx_t_7;\n\n  /* \"pycocotools/_mask.pyx\":210\n *     m = _len(dt)\n *     n = _len(gt)\n *     if m == 0 or n == 0:             # <<<<<<<<<<<<<<\n *         return []\n *     if not type(dt) == type(gt):\n */\n  __pyx_t_9 = ((__pyx_v_m == 0) != 0);\n  if (!__pyx_t_9) {\n  } else {\n    __pyx_t_8 = __pyx_t_9;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_9 = ((__pyx_v_n == 0) != 0);\n  __pyx_t_8 = __pyx_t_9;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_8) {\n\n    /* \"pycocotools/_mask.pyx\":211\n *     n = _len(gt)\n *     if m == 0 or n == 0:\n *         return []             # <<<<<<<<<<<<<<\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_r = __pyx_t_5;\n    __pyx_t_5 = 0;\n    goto __pyx_L0;\n\n    /* \"pycocotools/_mask.pyx\":210\n *     m = _len(dt)\n *     n = _len(gt)\n *     if m == 0 or n == 0:             # <<<<<<<<<<<<<<\n *         return []\n *     if not type(dt) == type(gt):\n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":212\n *     if m == 0 or n == 0:\n *         return []\n *     if not type(dt) == type(gt):             # <<<<<<<<<<<<<<\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n * \n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)Py_TYPE(__pyx_v_gt)), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 212, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 212, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_9 = ((!__pyx_t_8) != 0);\n  if (unlikely(__pyx_t_9)) {\n\n    /* \"pycocotools/_mask.pyx\":213\n 
*         return []\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')             # <<<<<<<<<<<<<<\n * \n *     # define local variables\n */\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 213, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_Raise(__pyx_t_5, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __PYX_ERR(0, 213, __pyx_L1_error)\n\n    /* \"pycocotools/_mask.pyx\":212\n *     if m == 0 or n == 0:\n *         return []\n *     if not type(dt) == type(gt):             # <<<<<<<<<<<<<<\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n * \n */\n  }\n\n  /* \"pycocotools/_mask.pyx\":216\n * \n *     # define local variables\n *     cdef double* _iou = <double*> 0             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n */\n  __pyx_v__iou = ((double *)0);\n\n  /* \"pycocotools/_mask.pyx\":219\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n *     if type(dt) == RLEs:             # <<<<<<<<<<<<<<\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 219, __pyx_L1_error)\n  __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 219, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (__pyx_t_9) {\n\n    /* \"pycocotools/_mask.pyx\":220\n *     # check type and assign iou function\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou             # <<<<<<<<<<<<<<\n *     elif type(dt) == np.ndarray:\n *         _iouFun = _bbIou\n */\n    
__Pyx_INCREF(__pyx_v__rleIou);\n    __pyx_v__iouFun = __pyx_v__rleIou;\n\n    /* \"pycocotools/_mask.pyx\":219\n *     cdef np.npy_intp shape[1]\n *     # check type and assign iou function\n *     if type(dt) == RLEs:             # <<<<<<<<<<<<<<\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n */\n    goto __pyx_L7;\n  }\n\n  /* \"pycocotools/_mask.pyx\":221\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:             # <<<<<<<<<<<<<<\n *         _iouFun = _bbIou\n *     else:\n */\n  __pyx_t_5 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_dt)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 221, __pyx_L1_error)\n  __pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_9 < 0)) __PYX_ERR(0, 221, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (likely(__pyx_t_9)) {\n\n    /* \"pycocotools/_mask.pyx\":222\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:\n *         _iouFun = _bbIou             # <<<<<<<<<<<<<<\n *     else:\n *         raise Exception('input data type not allowed.')\n */\n    __Pyx_INCREF(__pyx_v__bbIou);\n    __pyx_v__iouFun = __pyx_v__bbIou;\n\n    /* \"pycocotools/_mask.pyx\":221\n *     if type(dt) == RLEs:\n *         _iouFun = _rleIou\n *     elif type(dt) == np.ndarray:             # <<<<<<<<<<<<<<\n *         _iouFun = _bbIou\n *     else:\n */\n    goto __pyx_L7;\n  }\n\n  /* \"pycocotools/_mask.pyx\":224\n *         _iouFun = _bbIou\n *     else:\n *         raise Exception('input data type not allowed.')             # <<<<<<<<<<<<<<\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n */\n  /*else*/ {\n    __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 224, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n 
   __Pyx_Raise(__pyx_t_5, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __PYX_ERR(0, 224, __pyx_L1_error)\n  }\n  __pyx_L7:;\n\n  /* \"pycocotools/_mask.pyx\":225\n *     else:\n *         raise Exception('input data type not allowed.')\n *     _iou = <double*> malloc(m*n* sizeof(double))             # <<<<<<<<<<<<<<\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n\n */\n  __pyx_v__iou = ((double *)malloc(((__pyx_v_m * __pyx_v_n) * (sizeof(double)))));\n\n  /* \"pycocotools/_mask.pyx\":226\n *         raise Exception('input data type not allowed.')\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)             # <<<<<<<<<<<<<<\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_zeros); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyInt_From_siz((__pyx_v_m * __pyx_v_n)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 
226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_double); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 226, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_iou = __pyx_t_4;\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":227\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n             # <<<<<<<<<<<<<<\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n */\n  (__pyx_v_shape[0]) = (((npy_intp)__pyx_v_m) * __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":228\n *     iou = np.zeros((m*n, ), dtype=np.double)\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)             # <<<<<<<<<<<<<<\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n */\n  __pyx_t_4 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__iou); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 228, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF_SET(__pyx_v_iou, __pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":229\n *     shape[0] = <np.npy_intp> m*n\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n *     return iou.reshape((m,n), 
order='F')\n */\n  if (!(likely(((__pyx_v_iou) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_iou, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 229, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_iou), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":230\n *     iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)             # <<<<<<<<<<<<<<\n *     return iou.reshape((m,n), order='F')\n * \n */\n  __pyx_t_1 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 230, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_INCREF(__pyx_v__iouFun);\n  __pyx_t_3 = __pyx_v__iouFun; __pyx_t_2 = NULL;\n  __pyx_t_10 = 0;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n    __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);\n    if (likely(__pyx_t_2)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_2);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_3, function);\n      __pyx_t_10 = 1;\n    }\n  }\n  #if CYTHON_FAST_PYCALL\n  if (PyFunction_Check(__pyx_t_3)) {\n    PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};\n    __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  } else\n  #endif\n  #if CYTHON_FAST_PYCCALL\n  if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n    PyObject *__pyx_temp[7] = {__pyx_t_2, __pyx_v_dt, __pyx_v_gt, ((PyObject *)__pyx_v_iscrowd), __pyx_t_1, __pyx_t_5, __pyx_v_iou};\n 
   __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_10, 6+__pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  } else\n  #endif\n  {\n    __pyx_t_11 = PyTuple_New(6+__pyx_t_10); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_11);\n    if (__pyx_t_2) {\n      __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_2); __pyx_t_2 = NULL;\n    }\n    __Pyx_INCREF(__pyx_v_dt);\n    __Pyx_GIVEREF(__pyx_v_dt);\n    PyTuple_SET_ITEM(__pyx_t_11, 0+__pyx_t_10, __pyx_v_dt);\n    __Pyx_INCREF(__pyx_v_gt);\n    __Pyx_GIVEREF(__pyx_v_gt);\n    PyTuple_SET_ITEM(__pyx_t_11, 1+__pyx_t_10, __pyx_v_gt);\n    __Pyx_INCREF(((PyObject *)__pyx_v_iscrowd));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_iscrowd));\n    PyTuple_SET_ITEM(__pyx_t_11, 2+__pyx_t_10, ((PyObject *)__pyx_v_iscrowd));\n    __Pyx_GIVEREF(__pyx_t_1);\n    PyTuple_SET_ITEM(__pyx_t_11, 3+__pyx_t_10, __pyx_t_1);\n    __Pyx_GIVEREF(__pyx_t_5);\n    PyTuple_SET_ITEM(__pyx_t_11, 4+__pyx_t_10, __pyx_t_5);\n    __Pyx_INCREF(__pyx_v_iou);\n    __Pyx_GIVEREF(__pyx_v_iou);\n    PyTuple_SET_ITEM(__pyx_t_11, 5+__pyx_t_10, __pyx_v_iou);\n    __pyx_t_1 = 0;\n    __pyx_t_5 = 0;\n    __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_11, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n  }\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n\n  /* \"pycocotools/_mask.pyx\":231\n *     PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n *     _iouFun(dt, gt, iscrowd, m, n, iou)\n *     return iou.reshape((m,n), order='F')             # <<<<<<<<<<<<<<\n * \n * def toBbox( rleObjs ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_4 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_iou, __pyx_n_s_reshape); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_m); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_11 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_11);\n  __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_11);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_11);\n  __pyx_t_3 = 0;\n  __pyx_t_11 = 0;\n  __pyx_t_11 = PyTuple_New(1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_11);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n  __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 231, __pyx_L1_error)\n  __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_11, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 231, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  __pyx_r = __pyx_t_3;\n  __pyx_t_3 = 0;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_11);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.iou\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_iscrowd.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF(__pyx_v__preproc);\n  __Pyx_XDECREF(__pyx_v__rleIou);\n  __Pyx_XDECREF(__pyx_v__bbIou);\n  __Pyx_XDECREF(__pyx_v__len);\n  __Pyx_XDECREF((PyObject *)__pyx_v_iscrowd);\n  __Pyx_XDECREF(__pyx_v__iouFun);\n  __Pyx_XDECREF(__pyx_v_iou);\n  __Pyx_XDECREF(__pyx_v_dt);\n  __Pyx_XDECREF(__pyx_v_gt);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_15toBbox = {\"toBbox\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_15toBbox, METH_O, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_15toBbox(PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  PyObject *__pyx_r = 0;\n  
__Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"toBbox (wrapper)\", 0);\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_14toBbox(__pyx_self, ((PyObject *)__pyx_v_rleObjs));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_14toBbox(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_rleObjs) {\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = 0;\n  siz __pyx_v_n;\n  BB __pyx_v__bb;\n  npy_intp __pyx_v_shape[1];\n  PyObject *__pyx_v_bb = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  siz __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  __Pyx_RefNannySetupContext(\"toBbox\", 0);\n\n  /* \"pycocotools/_mask.pyx\":234\n * \n * def toBbox( rleObjs ):\n *     cdef RLEs Rs = _frString(rleObjs)             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_frString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 234, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_1 = (__pyx_t_3) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_rleObjs) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_rleObjs);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 234, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_11pycocotools_5_mask_RLEs))))) __PYX_ERR(0, 234, __pyx_L1_error)\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":235\n * def toBbox( rleObjs ):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n             # <<<<<<<<<<<<<<\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n */\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_Rs), __pyx_n_s_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_4 = __Pyx_PyInt_As_siz(__pyx_t_1); if (unlikely((__pyx_t_4 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 235, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_n = __pyx_t_4;\n\n  /* \"pycocotools/_mask.pyx\":236\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))             # <<<<<<<<<<<<<<\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n *     cdef np.npy_intp shape[1]\n */\n  __pyx_v__bb = ((BB)malloc(((4 * __pyx_v_n) * (sizeof(double)))));\n\n  /* \"pycocotools/_mask.pyx\":237\n *     cdef siz n = Rs.n\n *     cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )             # <<<<<<<<<<<<<<\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> 4*n\n */\n  rleToBbox(((RLE const *)__pyx_v_Rs->_R), __pyx_v__bb, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":239\n *     rleToBbox( <const RLE*> Rs._R, _bb, n )\n *     cdef np.npy_intp 
shape[1]\n *     shape[0] = <np.npy_intp> 4*n             # <<<<<<<<<<<<<<\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n */\n  (__pyx_v_shape[0]) = (((npy_intp)4) * __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":240\n *     cdef np.npy_intp shape[1]\n *     shape[0] = <np.npy_intp> 4*n\n *     bb = np.array((1,4*n), dtype=np.double)             # <<<<<<<<<<<<<<\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n */\n  __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_array); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_siz((4 * __pyx_v_n)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_INCREF(__pyx_int_1);\n  __Pyx_GIVEREF(__pyx_int_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_int_1);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);\n  __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3);\n  __pyx_t_3 = 0;\n  __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_double); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 
240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 240, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_v_bb = __pyx_t_6;\n  __pyx_t_6 = 0;\n\n  /* \"pycocotools/_mask.pyx\":241\n *     shape[0] = <np.npy_intp> 4*n\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))             # <<<<<<<<<<<<<<\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n *     return bb\n */\n  __pyx_t_3 = PyArray_SimpleNewFromData(1, __pyx_v_shape, NPY_DOUBLE, __pyx_v__bb); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_reshape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_t_3 = __Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3);\n  __Pyx_INCREF(__pyx_int_4);\n  __Pyx_GIVEREF(__pyx_int_4);\n  PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_int_4);\n  __pyx_t_3 = 0;\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n      __Pyx_INCREF(__pyx_t_3);\n      
__Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_1, function);\n    }\n  }\n  __pyx_t_6 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_t_2) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 241, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_6);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF_SET(__pyx_v_bb, __pyx_t_6);\n  __pyx_t_6 = 0;\n\n  /* \"pycocotools/_mask.pyx\":242\n *     bb = np.array((1,4*n), dtype=np.double)\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)             # <<<<<<<<<<<<<<\n *     return bb\n * \n */\n  if (!(likely(((__pyx_v_bb) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_bb, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 242, __pyx_L1_error)\n  PyArray_ENABLEFLAGS(((PyArrayObject *)__pyx_v_bb), NPY_OWNDATA);\n\n  /* \"pycocotools/_mask.pyx\":243\n *     bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n *     PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n *     return bb             # <<<<<<<<<<<<<<\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_bb);\n  __pyx_r = __pyx_v_bb;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"pycocotools._mask.toBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  
__Pyx_XDECREF(__pyx_v_bb);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_17frBbox = {\"frBbox\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_17frBbox, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_17frBbox(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyArrayObject *__pyx_v_bb = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frBbox (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_bb,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bb)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) 
kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, 1); __PYX_ERR(0, 245, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, 2); __PYX_ERR(0, 245, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frBbox\") < 0)) __PYX_ERR(0, 245, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_bb = ((PyArrayObject *)values[0]);\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 245, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 245, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frBbox\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 245, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_bb), __pyx_ptype_5numpy_ndarray, 1, \"bb\", 0))) __PYX_ERR(0, 245, __pyx_L1_error)\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_16frBbox(__pyx_self, __pyx_v_bb, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return 
__pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_16frBbox(CYTHON_UNUSED PyObject *__pyx_self, PyArrayObject *__pyx_v_bb, siz __pyx_v_h, siz __pyx_v_w) {\n  siz __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_bb;\n  __Pyx_Buffer __pyx_pybuffer_bb;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  __Pyx_RefNannySetupContext(\"frBbox\", 0);\n  __pyx_pybuffer_bb.pybuffer.buf = NULL;\n  __pyx_pybuffer_bb.refcount = 0;\n  __pyx_pybuffernd_bb.data = NULL;\n  __pyx_pybuffernd_bb.rcbuffer = &__pyx_pybuffer_bb;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_bb.rcbuffer->pybuffer, (PyObject*)__pyx_v_bb, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 245, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_bb.diminfo[0].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_bb.diminfo[0].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_bb.diminfo[1].strides = __pyx_pybuffernd_bb.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_bb.diminfo[1].shape = __pyx_pybuffernd_bb.rcbuffer->pybuffer.shape[1];\n\n  /* \"pycocotools/_mask.pyx\":246\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n *     cdef siz n = bb.shape[0]             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n */\n  __pyx_v_n = (__pyx_v_bb->dimensions[0]);\n\n  /* \"pycocotools/_mask.pyx\":247\n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)\n */\n  __pyx_t_1 = 
__Pyx_PyInt_From_siz(__pyx_v_n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 247, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 247, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":248\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )             # <<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n  rleFrBbox(((RLE *)__pyx_v_Rs->_R), ((BB const )__pyx_v_bb->data), __pyx_v_h, __pyx_v_w, __pyx_v_n);\n\n  /* \"pycocotools/_mask.pyx\":249\n *     Rs = RLEs(n)\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_toString); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 249, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {\n    __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);\n    if (likely(__pyx_t_3)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_1, function);\n    }\n  }\n  __pyx_t_2 = (__pyx_t_3) ? 
__Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, ((PyObject *)__pyx_v_Rs)) : __Pyx_PyObject_CallOneArg(__pyx_t_1, ((PyObject *)__pyx_v_Rs));\n  __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 249, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_v_objs = __pyx_t_2;\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":250\n *     rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frPoly( poly, siz h, siz w ):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.frBbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_bb.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n\n/* Python wrapper */\nstatic PyObject 
*__pyx_pw_11pycocotools_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_19frPoly = {\"frPoly\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_19frPoly, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_19frPoly(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_poly = 0;\n  siz __pyx_v_h;\n  siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frPoly (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_poly,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_poly)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, 1); __PYX_ERR(0, 252, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, 2); 
__PYX_ERR(0, 252, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frPoly\") < 0)) __PYX_ERR(0, 252, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_poly = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 252, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 252, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frPoly\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 252, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frPoly\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_18frPoly(__pyx_self, __pyx_v_poly, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_18frPoly(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_poly, siz __pyx_v_h, siz __pyx_v_w) {\n  PyArrayObject *__pyx_v_np_poly = 0;\n  Py_ssize_t __pyx_v_n;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  PyObject *__pyx_v_i = NULL;\n  PyObject *__pyx_v_p = NULL;\n  PyObject *__pyx_v_objs = NULL;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_np_poly;\n  __Pyx_Buffer __pyx_pybuffer_np_poly;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n 
 PyObject *__pyx_t_3 = NULL;\n  PyObject *(*__pyx_t_4)(PyObject *);\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  PyObject *__pyx_t_9 = NULL;\n  PyArrayObject *__pyx_t_10 = NULL;\n  int __pyx_t_11;\n  PyObject *__pyx_t_12 = NULL;\n  PyObject *__pyx_t_13 = NULL;\n  PyObject *__pyx_t_14 = NULL;\n  Py_ssize_t __pyx_t_15;\n  Py_ssize_t __pyx_t_16;\n  __Pyx_RefNannySetupContext(\"frPoly\", 0);\n  __pyx_pybuffer_np_poly.pybuffer.buf = NULL;\n  __pyx_pybuffer_np_poly.refcount = 0;\n  __pyx_pybuffernd_np_poly.data = NULL;\n  __pyx_pybuffernd_np_poly.rcbuffer = &__pyx_pybuffer_np_poly;\n\n  /* \"pycocotools/_mask.pyx\":254\n * def frPoly( poly, siz h, siz w ):\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)             # <<<<<<<<<<<<<<\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_poly); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 254, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":255\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n *     Rs = RLEs(n)             # <<<<<<<<<<<<<<\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')\n */\n  __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 255, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_3 = __Pyx_PyObject_CallOneArg(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 255, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_Rs = ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":256\n *     n = len(poly)\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):             # <<<<<<<<<<<<<<\n *         np_poly = np.array(p, dtype=np.double, order='F')\n 
*         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n */\n  __Pyx_INCREF(__pyx_int_0);\n  __pyx_t_3 = __pyx_int_0;\n  if (likely(PyList_CheckExact(__pyx_v_poly)) || PyTuple_CheckExact(__pyx_v_poly)) {\n    __pyx_t_2 = __pyx_v_poly; __Pyx_INCREF(__pyx_t_2); __pyx_t_1 = 0;\n    __pyx_t_4 = NULL;\n  } else {\n    __pyx_t_1 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_poly); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 256, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 256, __pyx_L1_error)\n  }\n  for (;;) {\n    if (likely(!__pyx_t_4)) {\n      if (likely(PyList_CheckExact(__pyx_t_2))) {\n        if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_2)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 256, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      } else {\n        if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break;\n        #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n        __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(0, 256, __pyx_L1_error)\n        #else\n        __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_5);\n        #endif\n      }\n    } else {\n      __pyx_t_5 = __pyx_t_4(__pyx_t_2);\n      if (unlikely(!__pyx_t_5)) {\n        PyObject* exc_type = PyErr_Occurred();\n        if (exc_type) {\n          if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear();\n          else __PYX_ERR(0, 
256, __pyx_L1_error)\n        }\n        break;\n      }\n      __Pyx_GOTREF(__pyx_t_5);\n    }\n    __Pyx_XDECREF_SET(__pyx_v_p, __pyx_t_5);\n    __pyx_t_5 = 0;\n    __Pyx_INCREF(__pyx_t_3);\n    __Pyx_XDECREF_SET(__pyx_v_i, __pyx_t_3);\n    __pyx_t_5 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_3);\n    __pyx_t_3 = __pyx_t_5;\n    __pyx_t_5 = 0;\n\n    /* \"pycocotools/_mask.pyx\":257\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')             # <<<<<<<<<<<<<<\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_array); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_INCREF(__pyx_v_p);\n    __Pyx_GIVEREF(__pyx_v_p);\n    PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_p);\n    __pyx_t_7 = __Pyx_PyDict_NewPresized(2); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_np); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_double); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_dtype, __pyx_t_9) < 0) __PYX_ERR(0, 257, __pyx_L1_error)\n    
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;\n    if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_order, __pyx_n_s_F) < 0) __PYX_ERR(0, 257, __pyx_L1_error)\n    __pyx_t_9 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_5, __pyx_t_7); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 257, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_9);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (!(likely(((__pyx_t_9) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_9, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 257, __pyx_L1_error)\n    __pyx_t_10 = ((PyArrayObject *)__pyx_t_9);\n    {\n      __Pyx_BufFmt_StackElem __pyx_stack[1];\n      __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n      __pyx_t_11 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_t_10, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n      if (unlikely(__pyx_t_11 < 0)) {\n        PyErr_Fetch(&__pyx_t_12, &__pyx_t_13, &__pyx_t_14);\n        if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer, (PyObject*)__pyx_v_np_poly, &__Pyx_TypeInfo_nn___pyx_t_5numpy_double_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n          Py_XDECREF(__pyx_t_12); Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14);\n          __Pyx_RaiseBufferFallbackError();\n        } else {\n          PyErr_Restore(__pyx_t_12, __pyx_t_13, __pyx_t_14);\n        }\n        __pyx_t_12 = __pyx_t_13 = __pyx_t_14 = 0;\n      }\n      __pyx_pybuffernd_np_poly.diminfo[0].strides = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_np_poly.diminfo[0].shape = __pyx_pybuffernd_np_poly.rcbuffer->pybuffer.shape[0];\n      if (unlikely(__pyx_t_11 < 0)) __PYX_ERR(0, 257, __pyx_L1_error)\n    }\n    __pyx_t_10 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_np_poly, ((PyArrayObject *)__pyx_t_9));\n    __pyx_t_9 = 0;\n\n    /* 
\"pycocotools/_mask.pyx\":258\n *     for i, p in enumerate(poly):\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )             # <<<<<<<<<<<<<<\n *     objs = _toString(Rs)\n *     return objs\n */\n    __pyx_t_15 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_15 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 258, __pyx_L1_error)\n    __pyx_t_16 = PyObject_Length(((PyObject *)__pyx_v_np_poly)); if (unlikely(__pyx_t_16 == ((Py_ssize_t)-1))) __PYX_ERR(0, 258, __pyx_L1_error)\n    rleFrPoly(((RLE *)(&(__pyx_v_Rs->_R[__pyx_t_15]))), ((double const *)__pyx_v_np_poly->data), __Pyx_div_Py_ssize_t(__pyx_t_16, 2), __pyx_v_h, __pyx_v_w);\n\n    /* \"pycocotools/_mask.pyx\":256\n *     n = len(poly)\n *     Rs = RLEs(n)\n *     for i, p in enumerate(poly):             # <<<<<<<<<<<<<<\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n */\n  }\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":259\n *         np_poly = np.array(p, dtype=np.double, order='F')\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n  __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_toString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_t_9 = NULL;\n  if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {\n    __pyx_t_9 = PyMethod_GET_SELF(__pyx_t_2);\n    if (likely(__pyx_t_9)) {\n      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);\n      __Pyx_INCREF(__pyx_t_9);\n      __Pyx_INCREF(function);\n      __Pyx_DECREF_SET(__pyx_t_2, function);\n    }\n  }\n  __pyx_t_3 = (__pyx_t_9) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_9, ((PyObject *)__pyx_v_Rs)) : __Pyx_PyObject_CallOneArg(__pyx_t_2, ((PyObject *)__pyx_v_Rs));\n  __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;\n  if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_v_objs = __pyx_t_3;\n  __pyx_t_3 = 0;\n\n  /* \"pycocotools/_mask.pyx\":260\n *         rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n *     objs = _toString(Rs)\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_XDECREF(__pyx_t_9);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"pycocotools._mask.frPoly\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_np_poly.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_np_poly);\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XDECREF(__pyx_v_i);\n  __Pyx_XDECREF(__pyx_v_p);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return 
__pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_21frUncompressedRLE = {\"frUncompressedRLE\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_21frUncompressedRLE(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_ucRles = 0;\n  CYTHON_UNUSED siz __pyx_v_h;\n  CYTHON_UNUSED siz __pyx_v_w;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frUncompressedRLE (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_ucRles,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_ucRles)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) 
kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, 1); __PYX_ERR(0, 262, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, 2); __PYX_ERR(0, 262, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frUncompressedRLE\") < 0)) __PYX_ERR(0, 262, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_ucRles = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 262, __pyx_L3_error)\n    __pyx_v_w = __Pyx_PyInt_As_siz(values[2]); if (unlikely((__pyx_v_w == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 262, __pyx_L3_error)\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frUncompressedRLE\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 262, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frUncompressedRLE\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(__pyx_self, __pyx_v_ucRles, __pyx_v_h, __pyx_v_w);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_20frUncompressedRLE(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_ucRles, CYTHON_UNUSED 
siz __pyx_v_h, CYTHON_UNUSED siz __pyx_v_w) {\n  PyArrayObject *__pyx_v_cnts = 0;\n  RLE __pyx_v_R;\n  uint *__pyx_v_data;\n  Py_ssize_t __pyx_v_n;\n  PyObject *__pyx_v_objs = NULL;\n  Py_ssize_t __pyx_v_i;\n  struct __pyx_obj_11pycocotools_5_mask_RLEs *__pyx_v_Rs = NULL;\n  Py_ssize_t __pyx_v_j;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_cnts;\n  __Pyx_Buffer __pyx_pybuffer_cnts;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  Py_ssize_t __pyx_t_1;\n  PyObject *__pyx_t_2 = NULL;\n  Py_ssize_t __pyx_t_3;\n  Py_ssize_t __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  PyArrayObject *__pyx_t_9 = NULL;\n  int __pyx_t_10;\n  PyObject *__pyx_t_11 = NULL;\n  PyObject *__pyx_t_12 = NULL;\n  PyObject *__pyx_t_13 = NULL;\n  Py_ssize_t __pyx_t_14;\n  Py_ssize_t __pyx_t_15;\n  Py_ssize_t __pyx_t_16;\n  Py_ssize_t __pyx_t_17;\n  RLE __pyx_t_18;\n  siz __pyx_t_19;\n  int __pyx_t_20;\n  __Pyx_RefNannySetupContext(\"frUncompressedRLE\", 0);\n  __pyx_pybuffer_cnts.pybuffer.buf = NULL;\n  __pyx_pybuffer_cnts.refcount = 0;\n  __pyx_pybuffernd_cnts.data = NULL;\n  __pyx_pybuffernd_cnts.rcbuffer = &__pyx_pybuffer_cnts;\n\n  /* \"pycocotools/_mask.pyx\":266\n *     cdef RLE R\n *     cdef uint *data\n *     n = len(ucRles)             # <<<<<<<<<<<<<<\n *     objs = []\n *     for i in range(n):\n */\n  __pyx_t_1 = PyObject_Length(__pyx_v_ucRles); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(0, 266, __pyx_L1_error)\n  __pyx_v_n = __pyx_t_1;\n\n  /* \"pycocotools/_mask.pyx\":267\n *     cdef uint *data\n *     n = len(ucRles)\n *     objs = []             # <<<<<<<<<<<<<<\n *     for i in range(n):\n *         Rs = RLEs(1)\n */\n  __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 267, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_v_objs = ((PyObject*)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"pycocotools/_mask.pyx\":268\n *     n = len(ucRles)\n *     objs = 
[]\n *     for i in range(n):             # <<<<<<<<<<<<<<\n *         Rs = RLEs(1)\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n */\n  __pyx_t_1 = __pyx_v_n;\n  __pyx_t_3 = __pyx_t_1;\n  for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) {\n    __pyx_v_i = __pyx_t_4;\n\n    /* \"pycocotools/_mask.pyx\":269\n *     objs = []\n *     for i in range(n):\n *         Rs = RLEs(1)             # <<<<<<<<<<<<<<\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n *         # time for malloc can be saved here but it's fine\n */\n    __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_ptype_11pycocotools_5_mask_RLEs), __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 269, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_XDECREF_SET(__pyx_v_Rs, ((struct __pyx_obj_11pycocotools_5_mask_RLEs *)__pyx_t_2));\n    __pyx_t_2 = 0;\n\n    /* \"pycocotools/_mask.pyx\":270\n *     for i in range(n):\n *         Rs = RLEs(1)\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)             # <<<<<<<<<<<<<<\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_array); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_2, __pyx_n_s_counts); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __pyx_t_2 = PyTuple_New(1); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_2);\n    __Pyx_GIVEREF(__pyx_t_6);\n    PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_6);\n    __pyx_t_6 = 0;\n    __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_uint32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_8) < 0) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 270, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    if (!(likely(((__pyx_t_8) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_8, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 270, __pyx_L1_error)\n    __pyx_t_9 = ((PyArrayObject *)__pyx_t_8);\n    {\n      __Pyx_BufFmt_StackElem __pyx_stack[1];\n      __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n      __pyx_t_10 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);\n      if (unlikely(__pyx_t_10 < 0)) {\n        PyErr_Fetch(&__pyx_t_11, &__pyx_t_12, &__pyx_t_13);\n        if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer, (PyObject*)__pyx_v_cnts, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {\n          Py_XDECREF(__pyx_t_11); 
Py_XDECREF(__pyx_t_12); Py_XDECREF(__pyx_t_13);\n          __Pyx_RaiseBufferFallbackError();\n        } else {\n          PyErr_Restore(__pyx_t_11, __pyx_t_12, __pyx_t_13);\n        }\n        __pyx_t_11 = __pyx_t_12 = __pyx_t_13 = 0;\n      }\n      __pyx_pybuffernd_cnts.diminfo[0].strides = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_cnts.diminfo[0].shape = __pyx_pybuffernd_cnts.rcbuffer->pybuffer.shape[0];\n      if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 270, __pyx_L1_error)\n    }\n    __pyx_t_9 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_cnts, ((PyArrayObject *)__pyx_t_8));\n    __pyx_t_8 = 0;\n\n    /* \"pycocotools/_mask.pyx\":272\n *         cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))             # <<<<<<<<<<<<<<\n *         for j in range(len(cnts)):\n *             data[j] = <uint> cnts[j]\n */\n    __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 272, __pyx_L1_error)\n    __pyx_v_data = ((uint *)malloc((__pyx_t_14 * (sizeof(unsigned int)))));\n\n    /* \"pycocotools/_mask.pyx\":273\n *         # time for malloc can be saved here but it's fine\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n *         for j in range(len(cnts)):             # <<<<<<<<<<<<<<\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n */\n    __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 273, __pyx_L1_error)\n    __pyx_t_15 = __pyx_t_14;\n    for (__pyx_t_16 = 0; __pyx_t_16 < __pyx_t_15; __pyx_t_16+=1) {\n      __pyx_v_j = __pyx_t_16;\n\n      /* \"pycocotools/_mask.pyx\":274\n *         data = <uint*> malloc(len(cnts)* sizeof(uint))\n *         for j in range(len(cnts)):\n *             data[j] = <uint> 
cnts[j]             # <<<<<<<<<<<<<<\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R\n */\n      __pyx_t_17 = __pyx_v_j;\n      __pyx_t_10 = -1;\n      if (__pyx_t_17 < 0) {\n        __pyx_t_17 += __pyx_pybuffernd_cnts.diminfo[0].shape;\n        if (unlikely(__pyx_t_17 < 0)) __pyx_t_10 = 0;\n      } else if (unlikely(__pyx_t_17 >= __pyx_pybuffernd_cnts.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 274, __pyx_L1_error)\n      }\n      (__pyx_v_data[__pyx_v_j]) = ((uint)(*__Pyx_BufPtrStrided1d(__pyx_t_5numpy_uint32_t *, __pyx_pybuffernd_cnts.rcbuffer->pybuffer.buf, __pyx_t_17, __pyx_pybuffernd_cnts.diminfo[0].strides)));\n    }\n\n    /* \"pycocotools/_mask.pyx\":275\n *         for j in range(len(cnts)):\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)             # <<<<<<<<<<<<<<\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])\n */\n    __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_8, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_6, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_19 = __Pyx_PyInt_As_siz(__pyx_t_8); if (unlikely((__pyx_t_19 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_18.h = __pyx_t_19;\n    __pyx_t_8 = 
__Pyx_GetItemInt(__pyx_v_ucRles, __pyx_v_i, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __pyx_t_6 = __Pyx_PyObject_Dict_GetItem(__pyx_t_8, __pyx_n_s_size); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_8 = __Pyx_GetItemInt(__pyx_t_6, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_19 = __Pyx_PyInt_As_siz(__pyx_t_8); if (unlikely((__pyx_t_19 == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 275, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_18.w = __pyx_t_19;\n    __pyx_t_14 = PyObject_Length(((PyObject *)__pyx_v_cnts)); if (unlikely(__pyx_t_14 == ((Py_ssize_t)-1))) __PYX_ERR(0, 275, __pyx_L1_error)\n    __pyx_t_18.m = __pyx_t_14;\n    __pyx_t_18.cnts = ((uint *)__pyx_v_data);\n    __pyx_v_R = __pyx_t_18;\n\n    /* \"pycocotools/_mask.pyx\":276\n *             data[j] = <uint> cnts[j]\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R             # <<<<<<<<<<<<<<\n *         objs.append(_toString(Rs)[0])\n *     return objs\n */\n    (__pyx_v_Rs->_R[0]) = __pyx_v_R;\n\n    /* \"pycocotools/_mask.pyx\":277\n *         R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])             # <<<<<<<<<<<<<<\n *     return objs\n * \n */\n    __Pyx_GetModuleGlobalName(__pyx_t_6, __pyx_n_s_toString); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __pyx_t_2 = NULL;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_6))) {\n      __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_6);\n      if (likely(__pyx_t_2)) {\n  
      PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6);\n        __Pyx_INCREF(__pyx_t_2);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_6, function);\n      }\n    }\n    __pyx_t_8 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_6, __pyx_t_2, ((PyObject *)__pyx_v_Rs)) : __Pyx_PyObject_CallOneArg(__pyx_t_6, ((PyObject *)__pyx_v_Rs));\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_8);\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n    __pyx_t_6 = __Pyx_GetItemInt(__pyx_t_8, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_6);\n    __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n    __pyx_t_20 = __Pyx_PyList_Append(__pyx_v_objs, __pyx_t_6); if (unlikely(__pyx_t_20 == ((int)-1))) __PYX_ERR(0, 277, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n  }\n\n  /* \"pycocotools/_mask.pyx\":278\n *         Rs._R[0] = R\n *         objs.append(_toString(Rs)[0])\n *     return objs             # <<<<<<<<<<<<<<\n * \n * def frPyObjects(pyobj, siz h, w):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  
__Pyx_AddTraceback(\"pycocotools._mask.frUncompressedRLE\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_cnts.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_cnts);\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XDECREF((PyObject *)__pyx_v_Rs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_11pycocotools_5_mask_23frPyObjects = {\"frPyObjects\", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_11pycocotools_5_mask_23frPyObjects, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_11pycocotools_5_mask_23frPyObjects(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_pyobj = 0;\n  siz __pyx_v_h;\n  PyObject *__pyx_v_w = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"frPyObjects (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyobj,&__pyx_n_s_h,&__pyx_n_s_w,0};\n    PyObject* values[3] = {0,0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        CYTHON_FALLTHROUGH;\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        CYTHON_FALLTHROUGH;\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        CYTHON_FALLTHROUGH;\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n     
 kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyobj)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        CYTHON_FALLTHROUGH;\n        case  1:\n        if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_h)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, 1); __PYX_ERR(0, 280, __pyx_L3_error)\n        }\n        CYTHON_FALLTHROUGH;\n        case  2:\n        if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_w)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, 2); __PYX_ERR(0, 280, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"frPyObjects\") < 0)) __PYX_ERR(0, 280, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n      values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n    }\n    __pyx_v_pyobj = values[0];\n    __pyx_v_h = __Pyx_PyInt_As_siz(values[1]); if (unlikely((__pyx_v_h == ((siz)-1)) && PyErr_Occurred())) __PYX_ERR(0, 280, __pyx_L3_error)\n    __pyx_v_w = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"frPyObjects\", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 280, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"pycocotools._mask.frPyObjects\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_11pycocotools_5_mask_22frPyObjects(__pyx_self, __pyx_v_pyobj, __pyx_v_h, __pyx_v_w);\n\n  /* function 
exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_11pycocotools_5_mask_22frPyObjects(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_pyobj, siz __pyx_v_h, PyObject *__pyx_v_w) {\n  PyObject *__pyx_v_objs = NULL;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  int __pyx_t_6;\n  PyObject *__pyx_t_7 = NULL;\n  int __pyx_t_8;\n  Py_ssize_t __pyx_t_9;\n  __Pyx_RefNannySetupContext(\"frPyObjects\", 0);\n\n  /* \"pycocotools/_mask.pyx\":281\n * \n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)__pyx_ptype_5numpy_ndarray), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 281, __pyx_L1_error)\n  __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 281, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":282\n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 282, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 282, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3);\n      if 
(likely(__pyx_t_5)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_5);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_7 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      if (__pyx_t_5) {\n        __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_4);\n      PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_6, __pyx_t_4);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_7, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_4 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 282, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":281\n * \n * def frPyObjects(pyobj, siz h, w):\n *     if type(pyobj) == np.ndarray:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":283\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L4_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_9 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_9 == ((Py_ssize_t)-1))) __PYX_ERR(0, 283, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = ((__pyx_t_9 == 4) != 0);\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L4_bool_binop_done:;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":284\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_3, 
__pyx_n_s_frBbox); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 284, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_7 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 284, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_7);\n    __pyx_t_4 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);\n      if (likely(__pyx_t_4)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_4);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_t_7, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_4, __pyx_v_pyobj, __pyx_t_7, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_5 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      if (__pyx_t_4) {\n        __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); __pyx_t_4 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_6, 
__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_7);\n      PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_6, __pyx_t_7);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_5, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_7 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 284, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":283\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:             # <<<<<<<<<<<<<<\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":285\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:             # <<<<<<<<<<<<<<\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L6_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 285, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_9 = PyObject_Length(__pyx_t_1); if (unlikely(__pyx_t_9 == ((Py_ssize_t)-1))) __PYX_ERR(0, 285, __pyx_L1_error)\n  
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = ((__pyx_t_9 > 4) != 0);\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L6_bool_binop_done:;\n  if (__pyx_t_2) {\n\n    /* \"pycocotools/_mask.pyx\":286\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )             # <<<<<<<<<<<<<<\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n *         objs = frUncompressedRLE(pyobj, h, w)\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_frPoly); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 286, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __pyx_t_5 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 286, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_5);\n    __pyx_t_7 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {\n      __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_3);\n      if (likely(__pyx_t_7)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);\n        __Pyx_INCREF(__pyx_t_7);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_3, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_7, __pyx_v_pyobj, __pyx_t_5, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_7, __pyx_v_pyobj, __pyx_t_5, __pyx_v_w};\n      __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 
= 0;\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_4 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      if (__pyx_t_7) {\n        __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_7); __pyx_t_7 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_4, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_5);\n      PyTuple_SET_ITEM(__pyx_t_4, 1+__pyx_t_6, __pyx_t_5);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_4, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_5 = 0;\n      __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 286, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_1);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_v_objs = __pyx_t_1;\n    __pyx_t_1 = 0;\n\n    /* \"pycocotools/_mask.pyx\":285\n *     elif type(pyobj) == list and len(pyobj[0]) == 4:\n *         objs = frBbox(pyobj, h, w )\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:             # <<<<<<<<<<<<<<\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":287\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:             # <<<<<<<<<<<<<<\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n */\n  __pyx_t_1 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_v_pyobj)), ((PyObject *)(&PyList_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_8 < 0)) 
__PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (__pyx_t_8) {\n  } else {\n    __pyx_t_2 = __pyx_t_8;\n    goto __pyx_L8_bool_binop_done;\n  }\n  __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_pyobj, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = PyObject_RichCompare(((PyObject *)Py_TYPE(__pyx_t_1)), ((PyObject *)(&PyDict_Type)), Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_8 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 287, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __pyx_t_2 = __pyx_t_8;\n  __pyx_L8_bool_binop_done:;\n  if (likely(__pyx_t_2)) {\n\n    /* \"pycocotools/_mask.pyx\":288\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:\n *         objs = frUncompressedRLE(pyobj, h, w)             # <<<<<<<<<<<<<<\n *     else:\n *         raise Exception('input type is not supported.')\n */\n    __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_frUncompressedRLE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 288, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_4 = __Pyx_PyInt_From_siz(__pyx_v_h); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 288, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_5 = NULL;\n    __pyx_t_6 = 0;\n    if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {\n      __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1);\n      if (likely(__pyx_t_5)) {\n        PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);\n        __Pyx_INCREF(__pyx_t_5);\n        __Pyx_INCREF(function);\n        __Pyx_DECREF_SET(__pyx_t_1, function);\n        __pyx_t_6 = 1;\n      }\n    }\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, 
__pyx_v_w};\n      __pyx_t_3 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    #if CYTHON_FAST_PYCCALL\n    if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {\n      PyObject *__pyx_temp[4] = {__pyx_t_5, __pyx_v_pyobj, __pyx_t_4, __pyx_v_w};\n      __pyx_t_3 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_6, 3+__pyx_t_6); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    } else\n    #endif\n    {\n      __pyx_t_7 = PyTuple_New(3+__pyx_t_6); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_7);\n      if (__pyx_t_5) {\n        __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5); __pyx_t_5 = NULL;\n      }\n      __Pyx_INCREF(__pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_v_pyobj);\n      PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_6, __pyx_v_pyobj);\n      __Pyx_GIVEREF(__pyx_t_4);\n      PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_6, __pyx_t_4);\n      __Pyx_INCREF(__pyx_v_w);\n      __Pyx_GIVEREF(__pyx_v_w);\n      PyTuple_SET_ITEM(__pyx_t_7, 2+__pyx_t_6, __pyx_v_w);\n      __pyx_t_4 = 0;\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_7, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 288, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;\n    }\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __pyx_v_objs = __pyx_t_3;\n    __pyx_t_3 = 0;\n\n    /* \"pycocotools/_mask.pyx\":287\n *     elif type(pyobj) == list and len(pyobj[0]) > 4:\n *         objs = frPoly(pyobj, h, w )\n *     elif type(pyobj) == list and type(pyobj[0]) == dict:             # <<<<<<<<<<<<<<\n *         objs = 
frUncompressedRLE(pyobj, h, w)\n *     else:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"pycocotools/_mask.pyx\":290\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n *         raise Exception('input type is not supported.')             # <<<<<<<<<<<<<<\n *     return objs\n */\n  /*else*/ {\n    __pyx_t_3 = __Pyx_PyObject_Call(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])), __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 290, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(0, 290, __pyx_L1_error)\n  }\n  __pyx_L3:;\n\n  /* \"pycocotools/_mask.pyx\":291\n *     else:\n *         raise Exception('input type is not supported.')\n *     return objs             # <<<<<<<<<<<<<<\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(__pyx_v_objs);\n  __pyx_r = __pyx_v_objs;\n  goto __pyx_L0;\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_AddTraceback(\"pycocotools._mask.frPyObjects\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF(__pyx_v_objs);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":258\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards 
Cython\n *             # requirements, and does not yet fulfill the PEP.\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getbuffer__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_v_i;\n  int __pyx_v_ndim;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  int __pyx_v_t;\n  char *__pyx_v_f;\n  PyArray_Descr *__pyx_v_descr = 0;\n  int __pyx_v_offset;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  PyArray_Descr *__pyx_t_7;\n  PyObject *__pyx_t_8 = NULL;\n  char *__pyx_t_9;\n  if (__pyx_v_info == NULL) {\n    PyErr_SetString(PyExc_BufferError, \"PyObject_GetBuffer: view==NULL argument is obsolete\");\n    return -1;\n  }\n  __Pyx_RefNannySetupContext(\"__getbuffer__\", 0);\n  __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);\n  __Pyx_GIVEREF(__pyx_v_info->obj);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":265\n * \n *             cdef int i, ndim\n *             cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n */\n  __pyx_v_endian_detector = 1;\n\n  /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":266\n *             cdef int i, ndim\n *             cdef int endian_detector = 1\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n * \n *             ndim = PyArray_NDIM(self)\n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":268\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n *             ndim = PyArray_NDIM(self)             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n */\n  __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":270\n *             ndim = PyArray_NDIM(self)\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L4_bool_binop_done;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":271\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_ARRAY_C_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L4_bool_binop_done:;\n\n  /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":270\n *             ndim = PyArray_NDIM(self)\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  if (unlikely(__pyx_t_1)) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":272\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 272, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(2, 272, __pyx_L1_error)\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":270\n *             ndim = PyArray_NDIM(self)\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not 
PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L7_bool_binop_done;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":275\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_ARRAY_F_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L7_bool_binop_done:;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  if (unlikely(__pyx_t_1)) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":276\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 276, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 
0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(2, 276, __pyx_L1_error)\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":278\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n *             info.buf = PyArray_DATA(self)             # <<<<<<<<<<<<<<\n *             info.ndim = ndim\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":279\n * \n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim             # <<<<<<<<<<<<<<\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 # Allocate new buffer for strides and shape info.\n */\n  __pyx_v_info->ndim = __pyx_v_ndim;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":280\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":283\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>PyObject_Malloc(sizeof(Py_ssize_t) * 2 * <size_t>ndim)             # <<<<<<<<<<<<<<\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n */\n    __pyx_v_info->strides = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * 2) * ((size_t)__pyx_v_ndim))));\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":284\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>PyObject_Malloc(sizeof(Py_ssize_t) * 2 * <size_t>ndim)\n *                 info.shape = info.strides + ndim             # <<<<<<<<<<<<<<\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n */\n    __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":285\n *                 info.strides = <Py_ssize_t*>PyObject_Malloc(sizeof(Py_ssize_t) * 2 * <size_t>ndim)\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):             # <<<<<<<<<<<<<<\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n */\n    __pyx_t_4 = __pyx_v_ndim;\n    __pyx_t_5 = __pyx_t_4;\n    for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) {\n      __pyx_v_i = __pyx_t_6;\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":286\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n *         
            info.strides[i] = PyArray_STRIDES(self)[i]             # <<<<<<<<<<<<<<\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n */\n      (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":287\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]             # <<<<<<<<<<<<<<\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n */\n      (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":280\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n    goto __pyx_L9;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":289\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)             # <<<<<<<<<<<<<<\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n */\n  /*else*/ {\n    __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":290\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)             
# <<<<<<<<<<<<<<\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n */\n    __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));\n  }\n  __pyx_L9:;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":291\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL             # <<<<<<<<<<<<<<\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n */\n  __pyx_v_info->suboffsets = NULL;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":292\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)             # <<<<<<<<<<<<<<\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n * \n */\n  __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)             # <<<<<<<<<<<<<<\n * \n *             cdef int t\n */\n  __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":296\n * \n *             cdef int t\n *             cdef char* f = NULL             # <<<<<<<<<<<<<<\n *             cdef dtype descr = <dtype>PyArray_DESCR(self)\n *             cdef int offset\n */\n  __pyx_v_f = NULL;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":297\n *             cdef int t\n * 
            cdef char* f = NULL\n *             cdef dtype descr = <dtype>PyArray_DESCR(self)             # <<<<<<<<<<<<<<\n *             cdef int offset\n * \n */\n  __pyx_t_7 = PyArray_DESCR(__pyx_v_self);\n  __pyx_t_3 = ((PyObject *)__pyx_t_7);\n  __Pyx_INCREF(__pyx_t_3);\n  __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":300\n *             cdef int offset\n * \n *             info.obj = self             # <<<<<<<<<<<<<<\n * \n *             if not PyDataType_HASFIELDS(descr):\n */\n  __Pyx_INCREF(((PyObject *)__pyx_v_self));\n  __Pyx_GIVEREF(((PyObject *)__pyx_v_self));\n  __Pyx_GOTREF(__pyx_v_info->obj);\n  __Pyx_DECREF(__pyx_v_info->obj);\n  __pyx_v_info->obj = ((PyObject *)__pyx_v_self);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":302\n *             info.obj = self\n * \n *             if not PyDataType_HASFIELDS(descr):             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  __pyx_t_1 = ((!(PyDataType_HASFIELDS(__pyx_v_descr) != 0)) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":303\n * \n *             if not PyDataType_HASFIELDS(descr):\n *                 t = descr.type_num             # <<<<<<<<<<<<<<\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n */\n    __pyx_t_4 = __pyx_v_descr->type_num;\n    __pyx_v_t = __pyx_t_4;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":304\n *             if not PyDataType_HASFIELDS(descr):\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and 
little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);\n    if (!__pyx_t_2) {\n      goto __pyx_L15_next_or;\n    } else {\n    }\n    __pyx_t_2 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L14_bool_binop_done;\n    }\n    __pyx_L15_next_or:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":305\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);\n    if (__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L14_bool_binop_done;\n    }\n    __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_1 = __pyx_t_2;\n    __pyx_L14_bool_binop_done:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":304\n *             if not PyDataType_HASFIELDS(descr):\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (unlikely(__pyx_t_1)) {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":306\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                
     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 306, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(2, 306, __pyx_L1_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":304\n *             if not PyDataType_HASFIELDS(descr):\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":307\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n */\n    switch (__pyx_v_t) {\n      case NPY_BYTE:\n      __pyx_v_f = ((char *)\"b\");\n      break;\n      case NPY_UBYTE:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":308\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"             
# <<<<<<<<<<<<<<\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n */\n      __pyx_v_f = ((char *)\"B\");\n      break;\n      case NPY_SHORT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":309\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n */\n      __pyx_v_f = ((char *)\"h\");\n      break;\n      case NPY_USHORT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":310\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n */\n      __pyx_v_f = ((char *)\"H\");\n      break;\n      case NPY_INT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":311\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n */\n      __pyx_v_f = ((char *)\"i\");\n      break;\n      case NPY_UINT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":312\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == 
NPY_UINT:        f = \"I\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n */\n      __pyx_v_f = ((char *)\"I\");\n      break;\n      case NPY_LONG:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":313\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n */\n      __pyx_v_f = ((char *)\"l\");\n      break;\n      case NPY_ULONG:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":314\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n */\n      __pyx_v_f = ((char *)\"L\");\n      break;\n      case NPY_LONGLONG:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":315\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n */\n      __pyx_v_f = ((char *)\"q\");\n      break;\n      case NPY_ULONGLONG:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":316\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == 
NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n */\n      __pyx_v_f = ((char *)\"Q\");\n      break;\n      case NPY_FLOAT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":317\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n */\n      __pyx_v_f = ((char *)\"f\");\n      break;\n      case NPY_DOUBLE:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":318\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n */\n      __pyx_v_f = ((char *)\"d\");\n      break;\n      case NPY_LONGDOUBLE:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":319\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n */\n      __pyx_v_f = ((char *)\"g\");\n      break;\n      case NPY_CFLOAT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":320\n *                 elif t == 
NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n */\n      __pyx_v_f = ((char *)\"Zf\");\n      break;\n      case NPY_CDOUBLE:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":321\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"\n */\n      __pyx_v_f = ((char *)\"Zd\");\n      break;\n      case NPY_CLONGDOUBLE:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":322\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n */\n      __pyx_v_f = ((char *)\"Zg\");\n      break;\n      case NPY_OBJECT:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":323\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"             # <<<<<<<<<<<<<<\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      __pyx_v_f = ((char *)\"O\");\n      break;\n      default:\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":325\n *  
               elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *                 info.format = f\n *                 return\n */\n      __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 325, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_8 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_8)) __PYX_ERR(2, 325, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_8); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 325, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(2, 325, __pyx_L1_error)\n      break;\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":326\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f             # <<<<<<<<<<<<<<\n *                 return\n *             else:\n */\n    __pyx_v_info->format = __pyx_v_f;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":327\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f\n *                 return             # <<<<<<<<<<<<<<\n *             else:\n *                 info.format = <char*>PyObject_Malloc(_buffer_format_string_len)\n */\n    __pyx_r = 0;\n    goto __pyx_L0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":302\n *             info.obj = self\n * \n *  
           if not PyDataType_HASFIELDS(descr):             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":329\n *                 return\n *             else:\n *                 info.format = <char*>PyObject_Malloc(_buffer_format_string_len)             # <<<<<<<<<<<<<<\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n */\n  /*else*/ {\n    __pyx_v_info->format = ((char *)PyObject_Malloc(0xFF));\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":330\n *             else:\n *                 info.format = <char*>PyObject_Malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment             # <<<<<<<<<<<<<<\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,\n */\n    (__pyx_v_info->format[0]) = '^';\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":331\n *                 info.format = <char*>PyObject_Malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0             # <<<<<<<<<<<<<<\n *                 f = _util_dtypestring(descr, info.format + 1,\n *                                       info.format + _buffer_format_string_len,\n */\n    __pyx_v_offset = 0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":332\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,             # <<<<<<<<<<<<<<\n *                      
                 info.format + _buffer_format_string_len,\n *                                       &offset)\n */\n    __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_9 == ((char *)NULL))) __PYX_ERR(2, 332, __pyx_L1_error)\n    __pyx_v_f = __pyx_t_9;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":335\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n *                 f[0] = c'\\0' # Terminate format string             # <<<<<<<<<<<<<<\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n */\n    (__pyx_v_f[0]) = '\\x00';\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":258\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fulfill the PEP.\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.ndarray.__getbuffer__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  if (__pyx_v_info->obj != NULL) {\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;\n  }\n  goto __pyx_L2;\n  __pyx_L0:;\n  if (__pyx_v_info->obj == Py_None) {\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0;\n  }\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_descr);\n  __Pyx_RefNannyFinishContext();\n  return 
__pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":337\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 PyObject_Free(info.format)\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__releasebuffer__ (wrapper)\", 0);\n  __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__releasebuffer__\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":338\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 PyObject_Free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":339\n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):\n *                 PyObject_Free(info.format)             # <<<<<<<<<<<<<<\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 
PyObject_Free(info.strides)\n */\n    PyObject_Free(__pyx_v_info->format);\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":338\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 PyObject_Free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":340\n *             if PyArray_HASFIELDS(self):\n *                 PyObject_Free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 PyObject_Free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":341\n *                 PyObject_Free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 PyObject_Free(info.strides)             # <<<<<<<<<<<<<<\n *                 # info.shape was stored after info.strides in the same block\n * \n */\n    PyObject_Free(__pyx_v_info->strides);\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":340\n *             if PyArray_HASFIELDS(self):\n *                 PyObject_Free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 PyObject_Free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":337\n *                 f[0] = c'\\0' # Terminate format 
string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 PyObject_Free(info.format)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":821\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew1\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":822\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n *     return PyArray_MultiIterNew(1, <void*>a)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 822, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":821\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew1\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":824\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew2\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":825\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 825, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":824\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew2\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":827\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c): 
            # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew3\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":828\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 828, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":827\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew3\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":830\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, 
<void*> d)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew4\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":831\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 831, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":830\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew4\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":833\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, 
<void*>c, <void*> d, <void*> e)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew5\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":834\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)             # <<<<<<<<<<<<<<\n * \n * cdef inline tuple PyDataType_SHAPE(dtype d):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 834, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":833\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew5\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":836\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline tuple PyDataType_SHAPE(dtype d):          
   # <<<<<<<<<<<<<<\n *     if PyDataType_HASSUBARRAY(d):\n *         return <tuple>d.subarray.shape\n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyDataType_SHAPE(PyArray_Descr *__pyx_v_d) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"PyDataType_SHAPE\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":837\n * \n * cdef inline tuple PyDataType_SHAPE(dtype d):\n *     if PyDataType_HASSUBARRAY(d):             # <<<<<<<<<<<<<<\n *         return <tuple>d.subarray.shape\n *     else:\n */\n  __pyx_t_1 = (PyDataType_HASSUBARRAY(__pyx_v_d) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":838\n * cdef inline tuple PyDataType_SHAPE(dtype d):\n *     if PyDataType_HASSUBARRAY(d):\n *         return <tuple>d.subarray.shape             # <<<<<<<<<<<<<<\n *     else:\n *         return ()\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(((PyObject*)__pyx_v_d->subarray->shape));\n    __pyx_r = ((PyObject*)__pyx_v_d->subarray->shape);\n    goto __pyx_L0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":837\n * \n * cdef inline tuple PyDataType_SHAPE(dtype d):\n *     if PyDataType_HASSUBARRAY(d):             # <<<<<<<<<<<<<<\n *         return <tuple>d.subarray.shape\n *     else:\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":840\n *         return <tuple>d.subarray.shape\n *     else:\n *         return ()             # <<<<<<<<<<<<<<\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:\n */\n  /*else*/ {\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(__pyx_empty_tuple);\n    __pyx_r = __pyx_empty_tuple;\n    goto __pyx_L0;\n  }\n\n  /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":836\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline tuple PyDataType_SHAPE(dtype d):             # <<<<<<<<<<<<<<\n *     if PyDataType_HASSUBARRAY(d):\n *         return <tuple>d.subarray.shape\n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":842\n *         return ()\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) {\n  PyArray_Descr *__pyx_v_child = 0;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  PyObject *__pyx_v_fields = 0;\n  PyObject *__pyx_v_childname = NULL;\n  PyObject *__pyx_v_new_offset = NULL;\n  PyObject *__pyx_v_t = NULL;\n  char *__pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  Py_ssize_t __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  int __pyx_t_7;\n  long __pyx_t_8;\n  char *__pyx_t_9;\n  __Pyx_RefNannySetupContext(\"_util_dtypestring\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":847\n * \n *     cdef dtype child\n *     cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n *     cdef tuple fields\n */\n  __pyx_v_endian_detector = 1;\n\n  /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":848\n *     cdef dtype child\n *     cdef int endian_detector = 1\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n *     cdef tuple fields\n * \n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":851\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  if (unlikely(__pyx_v_descr->names == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n    __PYX_ERR(2, 851, __pyx_L1_error)\n  }\n  __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0;\n  for (;;) {\n    if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(2, 851, __pyx_L1_error)\n    #else\n    __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 851, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    #endif\n    __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":852\n * \n *     for childname in descr.names:\n *         fields = descr.fields[childname]             # <<<<<<<<<<<<<<\n *         child, new_offset = fields\n * \n */\n    if (unlikely(__pyx_v_descr->fields == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(2, 852, __pyx_L1_error)\n    }\n    __pyx_t_3 = 
__Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 852, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"tuple\", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(2, 852, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3));\n    __pyx_t_3 = 0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":853\n *     for childname in descr.names:\n *         fields = descr.fields[childname]\n *         child, new_offset = fields             # <<<<<<<<<<<<<<\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n */\n    if (likely(__pyx_v_fields != Py_None)) {\n      PyObject* sequence = __pyx_v_fields;\n      Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);\n      if (unlikely(size != 2)) {\n        if (size > 2) __Pyx_RaiseTooManyValuesError(2);\n        else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);\n        __PYX_ERR(2, 853, __pyx_L1_error)\n      }\n      #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n      __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); \n      __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); \n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_4);\n      #else\n      __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 853, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 853, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      #endif\n    } else {\n      __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(2, 853, __pyx_L1_error)\n    }\n    if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(2, 853, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr 
*)__pyx_t_3));\n    __pyx_t_3 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4);\n    __pyx_t_4 = 0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":855\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 855, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 855, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(2, 855, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0);\n    if (unlikely(__pyx_t_6)) {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":856\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 856, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(2, 856, __pyx_L1_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":855\n *         child, new_offset = fields\n * \n *         if 
(end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":858\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0);\n    if (!__pyx_t_7) {\n      goto __pyx_L8_next_or;\n    } else {\n    }\n    __pyx_t_7 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_L8_next_or:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":859\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *             raise ValueError(u\"Non-native byte order not supported\")\n *             # One could encode it in the format string and have Cython\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0);\n    if (__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_6 = __pyx_t_7;\n    __pyx_L7_bool_binop_done:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":858\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # 
<<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (unlikely(__pyx_t_6)) {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":860\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 860, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(2, 860, __pyx_L1_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":858\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":870\n * \n *         # Output padding bytes\n *         while offset[0] < new_offset:             # <<<<<<<<<<<<<<\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n */\n    while (1) {\n      __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 870, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = 
PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 870, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 870, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (!__pyx_t_6) break;\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":871\n *         # Output padding bytes\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte             # <<<<<<<<<<<<<<\n *             f += 1\n *             offset[0] += 1\n */\n      (__pyx_v_f[0]) = 0x78;\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":872\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1             # <<<<<<<<<<<<<<\n *             offset[0] += 1\n * \n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":873\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n *             offset[0] += 1             # <<<<<<<<<<<<<<\n * \n *         offset[0] += child.itemsize\n */\n      __pyx_t_8 = 0;\n      (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1);\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":875\n *             offset[0] += 1\n * \n *         offset[0] += child.itemsize             # <<<<<<<<<<<<<<\n * \n *         if not PyDataType_HASFIELDS(child):\n */\n    __pyx_t_8 = 0;\n    (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize);\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":877\n *         
offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n    __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":878\n * \n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num             # <<<<<<<<<<<<<<\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n */\n      __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 878, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4);\n      __pyx_t_4 = 0;\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":879\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0);\n      if (unlikely(__pyx_t_6)) {\n\n        /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":880\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n        __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 880, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __PYX_ERR(2, 880, 
__pyx_L1_error)\n\n        /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":879\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":883\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 883, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 883, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 883, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 98;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":884\n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 884, 
__pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 884, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 884, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":885\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 885, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 885, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 885, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x68;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":886\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:   
     f[0] =  73 #\"I\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 886, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 886, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 886, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 72;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":887\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 887, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 887, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 887, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x69;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":888\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 
#\"I\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 888, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 888, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 888, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 73;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":889\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 889, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 889, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 889, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x6C;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":890\n *             elif t == NPY_UINT:       
 f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 890, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 890, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 890, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 76;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":891\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 891, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 891, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 891, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x71;\n        goto __pyx_L15;\n      }\n\n      /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":892\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 892, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 892, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 892, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 81;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":893\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 893, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 893, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 893, __pyx_L1_error)\n      
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":894\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 894, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 894, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 894, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x64;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":895\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 895, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) 
__PYX_ERR(2, 895, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 895, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x67;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":896\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf             # <<<<<<<<<<<<<<\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 896, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 896, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 896, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x66;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":897\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd             # <<<<<<<<<<<<<<\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == 
NPY_OBJECT:      f[0] = 79 #\"O\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 897, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 897, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 897, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x64;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":898\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg             # <<<<<<<<<<<<<<\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 898, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 898, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 898, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x67;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":899\n *    
         elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"             # <<<<<<<<<<<<<<\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 899, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 899, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 899, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (likely(__pyx_t_6)) {\n        (__pyx_v_f[0]) = 79;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":901\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *             f += 1\n *         else:\n */\n      /*else*/ {\n        __pyx_t_3 = __Pyx_PyUnicode_FormatSafe(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 901, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 901, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __PYX_ERR(2, 901, __pyx_L1_error)\n      }\n      __pyx_L15:;\n\n      /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":902\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *             f += 1             # <<<<<<<<<<<<<<\n *         else:\n *             # Cython ignores struct boundary information (\"T{...}\"),\n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":877\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n      goto __pyx_L13;\n    }\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":906\n *             # Cython ignores struct boundary information (\"T{...}\"),\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)             # <<<<<<<<<<<<<<\n *     return f\n * \n */\n    /*else*/ {\n      __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == ((char *)NULL))) __PYX_ERR(2, 906, __pyx_L1_error)\n      __pyx_v_f = __pyx_t_9;\n    }\n    __pyx_L13:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":851\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":907\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)\n *     return f             # <<<<<<<<<<<<<<\n * \n * \n */\n  __pyx_r = __pyx_v_f;\n  goto 
__pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":842\n *         return ()\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"numpy._util_dtypestring\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_child);\n  __Pyx_XDECREF(__pyx_v_fields);\n  __Pyx_XDECREF(__pyx_v_childname);\n  __Pyx_XDECREF(__pyx_v_new_offset);\n  __Pyx_XDECREF(__pyx_v_t);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1022\n *     int _import_umath() except -1\n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *     Py_INCREF(base) # important to do this before stealing the reference below!\n *     PyArray_SetBaseObject(arr, base)\n */\n\nstatic CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"set_array_base\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1023\n * \n * cdef inline void set_array_base(ndarray arr, object base):\n *     Py_INCREF(base) # important to do this before stealing the reference below!             
# <<<<<<<<<<<<<<\n *     PyArray_SetBaseObject(arr, base)\n * \n */\n  Py_INCREF(__pyx_v_base);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1024\n * cdef inline void set_array_base(ndarray arr, object base):\n *     Py_INCREF(base) # important to do this before stealing the reference below!\n *     PyArray_SetBaseObject(arr, base)             # <<<<<<<<<<<<<<\n * \n * cdef inline object get_array_base(ndarray arr):\n */\n  (void)(PyArray_SetBaseObject(__pyx_v_arr, __pyx_v_base));\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1022\n *     int _import_umath() except -1\n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *     Py_INCREF(base) # important to do this before stealing the reference below!\n *     PyArray_SetBaseObject(arr, base)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1026\n *     PyArray_SetBaseObject(arr, base)\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     base = PyArray_BASE(arr)\n *     if base is NULL:\n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) {\n  PyObject *__pyx_v_base;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"get_array_base\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1027\n * \n * cdef inline object get_array_base(ndarray arr):\n *     base = PyArray_BASE(arr)             # <<<<<<<<<<<<<<\n *     if base is NULL:\n *         return None\n */\n  __pyx_v_base = PyArray_BASE(__pyx_v_arr);\n\n  /* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1028\n * cdef inline object get_array_base(ndarray arr):\n *     base = PyArray_BASE(arr)\n *     if base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     return <object>base\n */\n  __pyx_t_1 = ((__pyx_v_base == NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1029\n *     base = PyArray_BASE(arr)\n *     if base is NULL:\n *         return None             # <<<<<<<<<<<<<<\n *     return <object>base\n * \n */\n    __Pyx_XDECREF(__pyx_r);\n    __pyx_r = Py_None; __Pyx_INCREF(Py_None);\n    goto __pyx_L0;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1028\n * cdef inline object get_array_base(ndarray arr):\n *     base = PyArray_BASE(arr)\n *     if base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     return <object>base\n */\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1030\n *     if base is NULL:\n *         return None\n *     return <object>base             # <<<<<<<<<<<<<<\n * \n * # Versions of the import_* functions which are more suitable for\n */\n  __Pyx_XDECREF(__pyx_r);\n  __Pyx_INCREF(((PyObject *)__pyx_v_base));\n  __pyx_r = ((PyObject *)__pyx_v_base);\n  goto __pyx_L0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1026\n *     PyArray_SetBaseObject(arr, base)\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     base = PyArray_BASE(arr)\n *     if base is NULL:\n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* 
\"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1034\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_array\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1035\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1036\n * cdef inline int import_array() except -1:\n *     try:\n *         _import_array()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n */\n      __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(2, 1036, __pyx_L3_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1035\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    }\n    
__Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L8_try_end;\n    __pyx_L3_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1037\n *     try:\n *         _import_array()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(2, 1037, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1038\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__25, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(2, 1038, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(2, 1038, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1035\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    
__Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L8_try_end:;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1034\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1040\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_umath\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1041\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    
__Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1042\n * cdef inline int import_umath() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(2, 1042, __pyx_L3_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1041\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L8_try_end;\n    __pyx_L3_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1043\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(2, 1043, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1044\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int 
import_ufunc() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__26, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(2, 1044, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(2, 1044, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1041\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L8_try_end:;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1040\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1046\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {\n  int __pyx_r;\n  
__Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_ufunc\", 0);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1047\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1048\n * cdef inline int import_ufunc() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(2, 1048, __pyx_L3_error)\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1047\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L8_try_end;\n    __pyx_L3_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1049\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath 
failed to import\")\n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(2, 1049, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1050\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__26, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(2, 1050, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(2, 1050, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1047\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L8_try_end:;\n  }\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1046\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n 
 __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_RLEs(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_11pycocotools_5_mask_4RLEs_1__cinit__(o, a, k) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_11pycocotools_5_mask_RLEs(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  {\n    PyObject *etype, *eval, *etb;\n    PyErr_Fetch(&etype, &eval, &etb);\n    ++Py_REFCNT(o);\n    __pyx_pw_11pycocotools_5_mask_4RLEs_3__dealloc__(o);\n    --Py_REFCNT(o);\n    PyErr_Restore(etype, eval, etb);\n  }\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyObject *__pyx_tp_getattro_11pycocotools_5_mask_RLEs(PyObject *o, PyObject *n) {\n  PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n);\n  if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) {\n    PyErr_Clear();\n    v = __pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__(o, n);\n  }\n  return v;\n}\n\nstatic PyMethodDef __pyx_methods_11pycocotools_5_mask_RLEs[] = {\n  {\"__getattr__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_4RLEs_5__getattr__, METH_O|METH_COEXIST, 0},\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_4RLEs_7__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", 
(PyCFunction)__pyx_pw_11pycocotools_5_mask_4RLEs_9__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_11pycocotools_5_mask_RLEs = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"pycocotools._mask.RLEs\", /*tp_name*/\n  sizeof(struct __pyx_obj_11pycocotools_5_mask_RLEs), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_11pycocotools_5_mask_RLEs, /*tp_dealloc*/\n  0, /*tp_print*/\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  0, /*tp_str*/\n  __pyx_tp_getattro_11pycocotools_5_mask_RLEs, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  __pyx_methods_11pycocotools_5_mask_RLEs, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  0, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_11pycocotools_5_mask_RLEs, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n};\n\nstatic PyObject *__pyx_tp_new_11pycocotools_5_mask_Masks(PyTypeObject *t, PyObject *a, PyObject *k) {\n  PyObject *o;\n  if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) {\n    o = (*t->tp_alloc)(t, 0);\n  } else {\n    o = (PyObject *) 
PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0);\n  }\n  if (unlikely(!o)) return 0;\n  if (unlikely(__pyx_pw_11pycocotools_5_mask_5Masks_1__cinit__(o, a, k) < 0)) goto bad;\n  return o;\n  bad:\n  Py_DECREF(o); o = 0;\n  return NULL;\n}\n\nstatic void __pyx_tp_dealloc_11pycocotools_5_mask_Masks(PyObject *o) {\n  #if CYTHON_USE_TP_FINALIZE\n  if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {\n    if (PyObject_CallFinalizerFromDealloc(o)) return;\n  }\n  #endif\n  (*Py_TYPE(o)->tp_free)(o);\n}\n\nstatic PyMethodDef __pyx_methods_11pycocotools_5_mask_Masks[] = {\n  {\"__array__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5Masks_3__array__, METH_NOARGS, 0},\n  {\"__reduce_cython__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5Masks_5__reduce_cython__, METH_NOARGS, 0},\n  {\"__setstate_cython__\", (PyCFunction)__pyx_pw_11pycocotools_5_mask_5Masks_7__setstate_cython__, METH_O, 0},\n  {0, 0, 0, 0}\n};\n\nstatic PyTypeObject __pyx_type_11pycocotools_5_mask_Masks = {\n  PyVarObject_HEAD_INIT(0, 0)\n  \"pycocotools._mask.Masks\", /*tp_name*/\n  sizeof(struct __pyx_obj_11pycocotools_5_mask_Masks), /*tp_basicsize*/\n  0, /*tp_itemsize*/\n  __pyx_tp_dealloc_11pycocotools_5_mask_Masks, /*tp_dealloc*/\n  0, /*tp_print*/\n  0, /*tp_getattr*/\n  0, /*tp_setattr*/\n  #if PY_MAJOR_VERSION < 3\n  0, /*tp_compare*/\n  #endif\n  #if PY_MAJOR_VERSION >= 3\n  0, /*tp_as_async*/\n  #endif\n  0, /*tp_repr*/\n  0, /*tp_as_number*/\n  0, /*tp_as_sequence*/\n  0, /*tp_as_mapping*/\n  0, /*tp_hash*/\n  0, /*tp_call*/\n  0, /*tp_str*/\n  0, /*tp_getattro*/\n  0, /*tp_setattro*/\n  0, /*tp_as_buffer*/\n  Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/\n  0, /*tp_doc*/\n  0, /*tp_traverse*/\n  0, /*tp_clear*/\n  0, /*tp_richcompare*/\n  0, /*tp_weaklistoffset*/\n  0, /*tp_iter*/\n  0, /*tp_iternext*/\n  
__pyx_methods_11pycocotools_5_mask_Masks, /*tp_methods*/\n  0, /*tp_members*/\n  0, /*tp_getset*/\n  0, /*tp_base*/\n  0, /*tp_dict*/\n  0, /*tp_descr_get*/\n  0, /*tp_descr_set*/\n  0, /*tp_dictoffset*/\n  0, /*tp_init*/\n  0, /*tp_alloc*/\n  __pyx_tp_new_11pycocotools_5_mask_Masks, /*tp_new*/\n  0, /*tp_free*/\n  0, /*tp_is_gc*/\n  0, /*tp_bases*/\n  0, /*tp_mro*/\n  0, /*tp_cache*/\n  0, /*tp_subclasses*/\n  0, /*tp_weaklist*/\n  0, /*tp_del*/\n  0, /*tp_version_tag*/\n  #if PY_VERSION_HEX >= 0x030400a1\n  0, /*tp_finalize*/\n  #endif\n  #if PY_VERSION_HEX >= 0x030800b1\n  0, /*tp_vectorcall*/\n  #endif\n};\n\nstatic PyMethodDef __pyx_methods[] = {\n  {0, 0, 0, 0}\n};\n\n#if PY_MAJOR_VERSION >= 3\n#if CYTHON_PEP489_MULTI_PHASE_INIT\nstatic PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/\nstatic int __pyx_pymod_exec__mask(PyObject* module); /*proto*/\nstatic PyModuleDef_Slot __pyx_moduledef_slots[] = {\n  {Py_mod_create, (void*)__pyx_pymod_create},\n  {Py_mod_exec, (void*)__pyx_pymod_exec__mask},\n  {0, NULL}\n};\n#endif\n\nstatic struct PyModuleDef __pyx_moduledef = {\n    PyModuleDef_HEAD_INIT,\n    \"_mask\",\n    0, /* m_doc */\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n    0, /* m_size */\n  #else\n    -1, /* m_size */\n  #endif\n    __pyx_methods /* m_methods */,\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n    __pyx_moduledef_slots, /* m_slots */\n  #else\n    NULL, /* m_reload */\n  #endif\n    NULL, /* m_traverse */\n    NULL, /* m_clear */\n    NULL /* m_free */\n};\n#endif\n#ifndef CYTHON_SMALL_CODE\n#if defined(__clang__)\n    #define CYTHON_SMALL_CODE\n#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))\n    #define CYTHON_SMALL_CODE __attribute__((cold))\n#else\n    #define CYTHON_SMALL_CODE\n#endif\n#endif\n\nstatic __Pyx_StringTabEntry __pyx_string_tab[] = {\n  {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1},\n  {&__pyx_n_s_F, __pyx_k_F, 
sizeof(__pyx_k_F), 0, 0, 1, 1},\n  {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0},\n  {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0},\n  {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1},\n  {&__pyx_n_s_Masks, __pyx_k_Masks, sizeof(__pyx_k_Masks), 0, 0, 1, 1},\n  {&__pyx_n_s_N, __pyx_k_N, sizeof(__pyx_k_N), 0, 0, 1, 1},\n  {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0},\n  {&__pyx_n_s_R, __pyx_k_R, sizeof(__pyx_k_R), 0, 0, 1, 1},\n  {&__pyx_n_s_RLEs, __pyx_k_RLEs, sizeof(__pyx_k_RLEs), 0, 0, 1, 1},\n  {&__pyx_n_s_Rs, __pyx_k_Rs, sizeof(__pyx_k_Rs), 0, 0, 1, 1},\n  {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1},\n  {&__pyx_kp_s_The_dt_and_gt_should_have_the_sa, __pyx_k_The_dt_and_gt_should_have_the_sa, sizeof(__pyx_k_The_dt_and_gt_should_have_the_sa), 0, 0, 1, 0},\n  {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1},\n  {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},\n  {&__pyx_n_s_a, __pyx_k_a, sizeof(__pyx_k_a), 0, 0, 1, 1},\n  {&__pyx_n_s_a_2, __pyx_k_a_2, sizeof(__pyx_k_a_2), 0, 0, 1, 1},\n  {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1},\n  {&__pyx_n_s_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 0, 1, 1},\n  {&__pyx_n_s_array, __pyx_k_array, sizeof(__pyx_k_array), 0, 0, 1, 1},\n  {&__pyx_n_s_astype, __pyx_k_astype, sizeof(__pyx_k_astype), 0, 0, 1, 1},\n  {&__pyx_n_s_author, __pyx_k_author, sizeof(__pyx_k_author), 0, 0, 1, 1},\n  {&__pyx_n_s_bb, __pyx_k_bb, sizeof(__pyx_k_bb), 0, 0, 1, 1},\n  {&__pyx_n_s_bbIou, __pyx_k_bbIou, sizeof(__pyx_k_bbIou), 0, 0, 1, 1},\n  {&__pyx_n_s_bb_2, 
__pyx_k_bb_2, sizeof(__pyx_k_bb_2), 0, 0, 1, 1},\n  {&__pyx_n_s_c_string, __pyx_k_c_string, sizeof(__pyx_k_c_string), 0, 0, 1, 1},\n  {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1},\n  {&__pyx_n_s_cnts, __pyx_k_cnts, sizeof(__pyx_k_cnts), 0, 0, 1, 1},\n  {&__pyx_n_s_counts, __pyx_k_counts, sizeof(__pyx_k_counts), 0, 0, 1, 1},\n  {&__pyx_n_s_data, __pyx_k_data, sizeof(__pyx_k_data), 0, 0, 1, 1},\n  {&__pyx_n_s_decode, __pyx_k_decode, sizeof(__pyx_k_decode), 0, 0, 1, 1},\n  {&__pyx_n_s_double, __pyx_k_double, sizeof(__pyx_k_double), 0, 0, 1, 1},\n  {&__pyx_n_s_dt, __pyx_k_dt, sizeof(__pyx_k_dt), 0, 0, 1, 1},\n  {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1},\n  {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1},\n  {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1},\n  {&__pyx_n_s_frBbox, __pyx_k_frBbox, sizeof(__pyx_k_frBbox), 0, 0, 1, 1},\n  {&__pyx_n_s_frPoly, __pyx_k_frPoly, sizeof(__pyx_k_frPoly), 0, 0, 1, 1},\n  {&__pyx_n_s_frPyObjects, __pyx_k_frPyObjects, sizeof(__pyx_k_frPyObjects), 0, 0, 1, 1},\n  {&__pyx_n_s_frString, __pyx_k_frString, sizeof(__pyx_k_frString), 0, 0, 1, 1},\n  {&__pyx_n_s_frUncompressedRLE, __pyx_k_frUncompressedRLE, sizeof(__pyx_k_frUncompressedRLE), 0, 0, 1, 1},\n  {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1},\n  {&__pyx_n_s_gt, __pyx_k_gt, sizeof(__pyx_k_gt), 0, 0, 1, 1},\n  {&__pyx_n_s_h, __pyx_k_h, sizeof(__pyx_k_h), 0, 0, 1, 1},\n  {&__pyx_n_s_i, __pyx_k_i, sizeof(__pyx_k_i), 0, 0, 1, 1},\n  {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},\n  {&__pyx_kp_s_input_data_type_not_allowed, __pyx_k_input_data_type_not_allowed, sizeof(__pyx_k_input_data_type_not_allowed), 0, 0, 1, 0},\n  {&__pyx_kp_s_input_type_is_not_supported, __pyx_k_input_type_is_not_supported, sizeof(__pyx_k_input_type_is_not_supported), 0, 0, 1, 0},\n  {&__pyx_n_s_intersect, 
__pyx_k_intersect, sizeof(__pyx_k_intersect), 0, 0, 1, 1},\n  {&__pyx_n_s_iou, __pyx_k_iou, sizeof(__pyx_k_iou), 0, 0, 1, 1},\n  {&__pyx_n_s_iouFun, __pyx_k_iouFun, sizeof(__pyx_k_iouFun), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_2, __pyx_k_iou_2, sizeof(__pyx_k_iou_2), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__bbIou, __pyx_k_iou_locals__bbIou, sizeof(__pyx_k_iou_locals__bbIou), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__len, __pyx_k_iou_locals__len, sizeof(__pyx_k_iou_locals__len), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__preproc, __pyx_k_iou_locals__preproc, sizeof(__pyx_k_iou_locals__preproc), 0, 0, 1, 1},\n  {&__pyx_n_s_iou_locals__rleIou, __pyx_k_iou_locals__rleIou, sizeof(__pyx_k_iou_locals__rleIou), 0, 0, 1, 1},\n  {&__pyx_n_s_isbox, __pyx_k_isbox, sizeof(__pyx_k_isbox), 0, 0, 1, 1},\n  {&__pyx_n_s_iscrowd, __pyx_k_iscrowd, sizeof(__pyx_k_iscrowd), 0, 0, 1, 1},\n  {&__pyx_n_s_isrle, __pyx_k_isrle, sizeof(__pyx_k_isrle), 0, 0, 1, 1},\n  {&__pyx_n_s_j, __pyx_k_j, sizeof(__pyx_k_j), 0, 0, 1, 1},\n  {&__pyx_n_s_len, __pyx_k_len, sizeof(__pyx_k_len), 0, 0, 1, 1},\n  {&__pyx_kp_s_list_input_can_be_bounding_box_N, __pyx_k_list_input_can_be_bounding_box_N, sizeof(__pyx_k_list_input_can_be_bounding_box_N), 0, 0, 1, 0},\n  {&__pyx_n_s_m, __pyx_k_m, sizeof(__pyx_k_m), 0, 0, 1, 1},\n  {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},\n  {&__pyx_n_s_mask, __pyx_k_mask, sizeof(__pyx_k_mask), 0, 0, 1, 1},\n  {&__pyx_n_s_masks, __pyx_k_masks, sizeof(__pyx_k_masks), 0, 0, 1, 1},\n  {&__pyx_n_s_merge, __pyx_k_merge, sizeof(__pyx_k_merge), 0, 0, 1, 1},\n  {&__pyx_n_s_n, __pyx_k_n, sizeof(__pyx_k_n), 0, 0, 1, 1},\n  {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1},\n  {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0},\n  {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0},\n  
{&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0},\n  {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1},\n  {&__pyx_n_s_np_poly, __pyx_k_np_poly, sizeof(__pyx_k_np_poly), 0, 0, 1, 1},\n  {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1},\n  {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0},\n  {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0},\n  {&__pyx_kp_s_numpy_ndarray_input_is_only_for, __pyx_k_numpy_ndarray_input_is_only_for, sizeof(__pyx_k_numpy_ndarray_input_is_only_for), 0, 0, 1, 0},\n  {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1},\n  {&__pyx_n_s_objs, __pyx_k_objs, sizeof(__pyx_k_objs), 0, 0, 1, 1},\n  {&__pyx_n_s_order, __pyx_k_order, sizeof(__pyx_k_order), 0, 0, 1, 1},\n  {&__pyx_n_s_p, __pyx_k_p, sizeof(__pyx_k_p), 0, 0, 1, 1},\n  {&__pyx_n_s_poly, __pyx_k_poly, sizeof(__pyx_k_poly), 0, 0, 1, 1},\n  {&__pyx_n_s_preproc, __pyx_k_preproc, sizeof(__pyx_k_preproc), 0, 0, 1, 1},\n  {&__pyx_n_s_py_string, __pyx_k_py_string, sizeof(__pyx_k_py_string), 0, 0, 1, 1},\n  {&__pyx_n_s_pycocotools__mask, __pyx_k_pycocotools__mask, sizeof(__pyx_k_pycocotools__mask), 0, 0, 1, 1},\n  {&__pyx_kp_s_pycocotools__mask_pyx, __pyx_k_pycocotools__mask_pyx, sizeof(__pyx_k_pycocotools__mask_pyx), 0, 0, 1, 0},\n  {&__pyx_n_s_pyiscrowd, __pyx_k_pyiscrowd, sizeof(__pyx_k_pyiscrowd), 0, 0, 1, 1},\n  {&__pyx_n_s_pyobj, __pyx_k_pyobj, sizeof(__pyx_k_pyobj), 0, 0, 1, 1},\n  {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1},\n  {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1},\n  {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1},\n  {&__pyx_n_s_reduce_ex, 
__pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1},\n  {&__pyx_n_s_reshape, __pyx_k_reshape, sizeof(__pyx_k_reshape), 0, 0, 1, 1},\n  {&__pyx_n_s_rleIou, __pyx_k_rleIou, sizeof(__pyx_k_rleIou), 0, 0, 1, 1},\n  {&__pyx_n_s_rleObjs, __pyx_k_rleObjs, sizeof(__pyx_k_rleObjs), 0, 0, 1, 1},\n  {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1},\n  {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1},\n  {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1},\n  {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1},\n  {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},\n  {&__pyx_n_s_toBbox, __pyx_k_toBbox, sizeof(__pyx_k_toBbox), 0, 0, 1, 1},\n  {&__pyx_n_s_toString, __pyx_k_toString, sizeof(__pyx_k_toString), 0, 0, 1, 1},\n  {&__pyx_n_s_tsungyi, __pyx_k_tsungyi, sizeof(__pyx_k_tsungyi), 0, 0, 1, 1},\n  {&__pyx_n_s_ucRles, __pyx_k_ucRles, sizeof(__pyx_k_ucRles), 0, 0, 1, 1},\n  {&__pyx_n_s_uint32, __pyx_k_uint32, sizeof(__pyx_k_uint32), 0, 0, 1, 1},\n  {&__pyx_n_s_uint8, __pyx_k_uint8, sizeof(__pyx_k_uint8), 0, 0, 1, 1},\n  {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0},\n  {&__pyx_kp_s_unrecognized_type_The_following, __pyx_k_unrecognized_type_The_following, sizeof(__pyx_k_unrecognized_type_The_following), 0, 0, 1, 0},\n  {&__pyx_n_s_w, __pyx_k_w, sizeof(__pyx_k_w), 0, 0, 1, 1},\n  {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1},\n  {0, 0, 0, 0, 0, 0, 0}\n};\nstatic CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {\n  __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 64, __pyx_L1_error)\n  __pyx_builtin_AttributeError = __Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 70, __pyx_L1_error)\n  __pyx_builtin_TypeError = 
__Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error)\n  __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(0, 121, __pyx_L1_error)\n  __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(2, 272, __pyx_L1_error)\n  __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(2, 856, __pyx_L1_error)\n  __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(2, 1038, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_InitCachedConstants\", 0);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple_);\n  __Pyx_GIVEREF(__pyx_tuple_);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__2);\n  __Pyx_GIVEREF(__pyx_tuple__2);\n\n  /* \"(tree fragment)\":2\n * def __reduce_cython__(self):\n *     raise TypeError(\"no default __reduce__ due to 
non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n */\n  __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 2, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__3);\n  __Pyx_GIVEREF(__pyx_tuple__3);\n\n  /* \"(tree fragment)\":4\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")\n * def __setstate_cython__(self, __pyx_state):\n *     raise TypeError(\"no default __reduce__ due to non-trivial __cinit__\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 4, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__4);\n  __Pyx_GIVEREF(__pyx_tuple__4);\n\n  /* \"pycocotools/_mask.pyx\":146\n * def merge(rleObjs, bint intersect=0):\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)             # <<<<<<<<<<<<<<\n *     rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n *     obj = _toString(R)[0]\n */\n  __pyx_tuple__5 = PyTuple_Pack(1, __pyx_int_1); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(0, 146, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__5);\n  __Pyx_GIVEREF(__pyx_tuple__5);\n\n  /* \"pycocotools/_mask.pyx\":172\n *             # check if it's Nx4 bbox\n *             if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n *                 raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')             # <<<<<<<<<<<<<<\n *             objs = objs.astype(np.double)\n *         elif type(objs) == list:\n */\n  __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_numpy_ndarray_input_is_only_for); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(0, 172, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__6);\n  __Pyx_GIVEREF(__pyx_tuple__6);\n\n  /* \"pycocotools/_mask.pyx\":185\n *             
    objs = _frString(objs)\n *             else:\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')             # <<<<<<<<<<<<<<\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n */\n  __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_list_input_can_be_bounding_box_N); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(0, 185, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__7);\n  __Pyx_GIVEREF(__pyx_tuple__7);\n\n  /* \"pycocotools/_mask.pyx\":187\n *                 raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n *         else:\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')             # <<<<<<<<<<<<<<\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n */\n  __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_unrecognized_type_The_following); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 187, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__8);\n  __Pyx_GIVEREF(__pyx_tuple__8);\n\n  /* \"pycocotools/_mask.pyx\":164\n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):\n *     def _preproc(objs):             # <<<<<<<<<<<<<<\n *         if len(objs) == 0:\n *             return objs\n */\n  __pyx_tuple__9 = PyTuple_Pack(4, __pyx_n_s_objs, __pyx_n_s_isbox, __pyx_n_s_isrle, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(0, 164, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__9);\n  __Pyx_GIVEREF(__pyx_tuple__9);\n  __pyx_codeobj__10 = (PyObject*)__Pyx_PyCode_New(1, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__9, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_preproc, 164, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__10)) __PYX_ERR(0, 164, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":189\n *             raise Exception('unrecognized type.  The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n *         return objs\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n */\n  __pyx_tuple__11 = PyTuple_Pack(6, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(0, 189, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__11);\n  __Pyx_GIVEREF(__pyx_tuple__11);\n  __pyx_codeobj__12 = (PyObject*)__Pyx_PyCode_New(6, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__11, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_rleIou, 189, __pyx_empty_bytes); if 
(unlikely(!__pyx_codeobj__12)) __PYX_ERR(0, 189, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":191\n *     def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n *         rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):             # <<<<<<<<<<<<<<\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):\n */\n  __pyx_tuple__13 = PyTuple_Pack(6, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(0, 191, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__13);\n  __Pyx_GIVEREF(__pyx_tuple__13);\n  __pyx_codeobj__14 = (PyObject*)__Pyx_PyCode_New(6, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__13, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_bbIou, 191, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__14)) __PYX_ERR(0, 191, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":193\n *     def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n *         bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n *     def _len(obj):             # <<<<<<<<<<<<<<\n *         cdef siz N = 0\n *         if type(obj) == RLEs:\n */\n  __pyx_tuple__15 = PyTuple_Pack(2, __pyx_n_s_obj, __pyx_n_s_N); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(0, 193, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__15);\n  __Pyx_GIVEREF(__pyx_tuple__15);\n  __pyx_codeobj__16 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, 
CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__15, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_len, 193, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__16)) __PYX_ERR(0, 193, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":213\n *         return []\n *     if not type(dt) == type(gt):\n *         raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')             # <<<<<<<<<<<<<<\n * \n *     # define local variables\n */\n  __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_The_dt_and_gt_should_have_the_sa); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(0, 213, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__17);\n  __Pyx_GIVEREF(__pyx_tuple__17);\n\n  /* \"pycocotools/_mask.pyx\":224\n *         _iouFun = _bbIou\n *     else:\n *         raise Exception('input data type not allowed.')             # <<<<<<<<<<<<<<\n *     _iou = <double*> malloc(m*n* sizeof(double))\n *     iou = np.zeros((m*n, ), dtype=np.double)\n */\n  __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_input_data_type_not_allowed); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(0, 224, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__18);\n  __Pyx_GIVEREF(__pyx_tuple__18);\n\n  /* \"pycocotools/_mask.pyx\":290\n *         objs = frUncompressedRLE(pyobj, h, w)\n *     else:\n *         raise Exception('input type is not supported.')             # <<<<<<<<<<<<<<\n *     return objs\n */\n  __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_input_type_is_not_supported); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(0, 290, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__19);\n  __Pyx_GIVEREF(__pyx_tuple__19);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":272\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_C_CONTIGUOUS)):\n *                 raise 
ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n  __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(2, 272, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__20);\n  __Pyx_GIVEREF(__pyx_tuple__20);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":276\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_ARRAY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n  __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(2, 276, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__21);\n  __Pyx_GIVEREF(__pyx_tuple__21);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":306\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n  __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(2, 306, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__22);\n  __Pyx_GIVEREF(__pyx_tuple__22);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":856\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, 
see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n  __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(2, 856, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__23);\n  __Pyx_GIVEREF(__pyx_tuple__23);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":880\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n  __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(2, 880, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__24);\n  __Pyx_GIVEREF(__pyx_tuple__24);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1038\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n  __pyx_tuple__25 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(2, 1038, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__25);\n  __Pyx_GIVEREF(__pyx_tuple__25);\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1044\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n  __pyx_tuple__26 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(2, 1044, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__26);\n  
__Pyx_GIVEREF(__pyx_tuple__26);\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n  __pyx_tuple__27 = PyTuple_Pack(6, __pyx_n_s_Rs, __pyx_n_s_n, __pyx_n_s_py_string, __pyx_n_s_c_string, __pyx_n_s_objs, __pyx_n_s_i); if (unlikely(!__pyx_tuple__27)) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__27);\n  __Pyx_GIVEREF(__pyx_tuple__27);\n  __pyx_codeobj__28 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__27, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_toString, 100, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__28)) __PYX_ERR(0, 100, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n  __pyx_tuple__29 = PyTuple_Pack(7, __pyx_n_s_rleObjs, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_py_string, __pyx_n_s_c_string, __pyx_n_s_i, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__29)) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__29);\n  __Pyx_GIVEREF(__pyx_tuple__29);\n  __pyx_codeobj__30 = (PyObject*)__Pyx_PyCode_New(1, 0, 7, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__29, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_frString, 116, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__30)) __PYX_ERR(0, 116, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *   
  h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n  __pyx_tuple__31 = PyTuple_Pack(6, __pyx_n_s_mask, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__31)) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__31);\n  __Pyx_GIVEREF(__pyx_tuple__31);\n  __pyx_codeobj__32 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__31, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_encode, 129, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__32)) __PYX_ERR(0, 129, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n  __pyx_tuple__33 = PyTuple_Pack(6, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_masks); if (unlikely(!__pyx_tuple__33)) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__33);\n  __Pyx_GIVEREF(__pyx_tuple__33);\n  __pyx_codeobj__34 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__33, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_decode, 137, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__34)) __PYX_ERR(0, 137, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n  __pyx_tuple__35 = PyTuple_Pack(5, __pyx_n_s_rleObjs, __pyx_n_s_intersect, __pyx_n_s_Rs, __pyx_n_s_R, __pyx_n_s_obj); if (unlikely(!__pyx_tuple__35)) __PYX_ERR(0, 144, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__35);\n  __Pyx_GIVEREF(__pyx_tuple__35);\n  __pyx_codeobj__36 = (PyObject*)__Pyx_PyCode_New(2, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__35, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_merge, 144, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__36)) __PYX_ERR(0, 144, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n */\n  __pyx_tuple__37 = PyTuple_Pack(5, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_a, __pyx_n_s_shape, __pyx_n_s_a_2); if (unlikely(!__pyx_tuple__37)) __PYX_ERR(0, 151, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__37);\n  __Pyx_GIVEREF(__pyx_tuple__37);\n  __pyx_codeobj__38 = (PyObject*)__Pyx_PyCode_New(1, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__37, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_area, 151, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__38)) __PYX_ERR(0, 151, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n  __pyx_tuple__39 = PyTuple_Pack(18, __pyx_n_s_dt, __pyx_n_s_gt, __pyx_n_s_pyiscrowd, __pyx_n_s_preproc, __pyx_n_s_preproc, __pyx_n_s_rleIou, __pyx_n_s_rleIou, __pyx_n_s_bbIou, __pyx_n_s_bbIou, __pyx_n_s_len, __pyx_n_s_len, __pyx_n_s_iscrowd, __pyx_n_s_m, __pyx_n_s_n, __pyx_n_s_iou, __pyx_n_s_shape, __pyx_n_s_iouFun, __pyx_n_s_iou_2); if (unlikely(!__pyx_tuple__39)) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__39);\n  __Pyx_GIVEREF(__pyx_tuple__39);\n  __pyx_codeobj__40 = (PyObject*)__Pyx_PyCode_New(3, 0, 18, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__39, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_iou_2, 163, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__40)) __PYX_ERR(0, 163, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n  __pyx_tuple__41 = PyTuple_Pack(6, __pyx_n_s_rleObjs, __pyx_n_s_Rs, __pyx_n_s_n, __pyx_n_s_bb_2, __pyx_n_s_shape, __pyx_n_s_bb); if (unlikely(!__pyx_tuple__41)) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__41);\n  __Pyx_GIVEREF(__pyx_tuple__41);\n  __pyx_codeobj__42 = (PyObject*)__Pyx_PyCode_New(1, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__41, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_toBbox, 233, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__42)) __PYX_ERR(0, 233, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef 
siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n  __pyx_tuple__43 = PyTuple_Pack(6, __pyx_n_s_bb, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__43)) __PYX_ERR(0, 245, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__43);\n  __Pyx_GIVEREF(__pyx_tuple__43);\n  __pyx_codeobj__44 = (PyObject*)__Pyx_PyCode_New(3, 0, 6, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__43, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_frBbox, 245, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__44)) __PYX_ERR(0, 245, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n  __pyx_tuple__45 = PyTuple_Pack(9, __pyx_n_s_poly, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_np_poly, __pyx_n_s_n, __pyx_n_s_Rs, __pyx_n_s_i, __pyx_n_s_p, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__45)) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__45);\n  __Pyx_GIVEREF(__pyx_tuple__45);\n  __pyx_codeobj__46 = (PyObject*)__Pyx_PyCode_New(3, 0, 9, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__45, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_frPoly, 252, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__46)) __PYX_ERR(0, 252, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n  __pyx_tuple__47 = PyTuple_Pack(11, __pyx_n_s_ucRles, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_cnts, __pyx_n_s_R, __pyx_n_s_data, __pyx_n_s_n, __pyx_n_s_objs, __pyx_n_s_i, __pyx_n_s_Rs, __pyx_n_s_j); if (unlikely(!__pyx_tuple__47)) __PYX_ERR(0, 262, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__47);\n  __Pyx_GIVEREF(__pyx_tuple__47);\n  __pyx_codeobj__48 = (PyObject*)__Pyx_PyCode_New(3, 0, 11, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__47, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_frUncompressedRLE, 262, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__48)) __PYX_ERR(0, 262, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n  __pyx_tuple__49 = PyTuple_Pack(4, __pyx_n_s_pyobj, __pyx_n_s_h, __pyx_n_s_w, __pyx_n_s_objs); if (unlikely(!__pyx_tuple__49)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__49);\n  __Pyx_GIVEREF(__pyx_tuple__49);\n  __pyx_codeobj__50 = (PyObject*)__Pyx_PyCode_New(3, 0, 4, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__49, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_pycocotools__mask_pyx, __pyx_n_s_frPyObjects, 280, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__50)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {\n  if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_4 = PyInt_FromLong(4); if (unlikely(!__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/\nstatic 
CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/\nstatic CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/\n\nstatic int __Pyx_modinit_global_init_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_global_init_code\", 0);\n  /*--- Global init code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_variable_export_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_variable_export_code\", 0);\n  /*--- Variable export code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_function_export_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_function_export_code\", 0);\n  /*--- Function export code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_type_init_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_type_init_code\", 0);\n  /*--- Type init code ---*/\n  if (PyType_Ready(&__pyx_type_11pycocotools_5_mask_RLEs) < 0) __PYX_ERR(0, 53, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  __pyx_type_11pycocotools_5_mask_RLEs.tp_print = 0;\n  #endif\n  if (PyObject_SetAttr(__pyx_m, __pyx_n_s_RLEs, (PyObject *)&__pyx_type_11pycocotools_5_mask_RLEs) < 0) __PYX_ERR(0, 53, __pyx_L1_error)\n  if (__Pyx_setup_reduce((PyObject*)&__pyx_type_11pycocotools_5_mask_RLEs) < 0) __PYX_ERR(0, 53, __pyx_L1_error)\n  __pyx_ptype_11pycocotools_5_mask_RLEs = &__pyx_type_11pycocotools_5_mask_RLEs;\n  if (PyType_Ready(&__pyx_type_11pycocotools_5_mask_Masks) < 0) __PYX_ERR(0, 74, __pyx_L1_error)\n  #if PY_VERSION_HEX < 0x030800B1\n  
__pyx_type_11pycocotools_5_mask_Masks.tp_print = 0;\n  #endif\n  if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type_11pycocotools_5_mask_Masks.tp_dictoffset && __pyx_type_11pycocotools_5_mask_Masks.tp_getattro == PyObject_GenericGetAttr)) {\n    __pyx_type_11pycocotools_5_mask_Masks.tp_getattro = __Pyx_PyObject_GenericGetAttr;\n  }\n  if (PyObject_SetAttr(__pyx_m, __pyx_n_s_Masks, (PyObject *)&__pyx_type_11pycocotools_5_mask_Masks) < 0) __PYX_ERR(0, 74, __pyx_L1_error)\n  if (__Pyx_setup_reduce((PyObject*)&__pyx_type_11pycocotools_5_mask_Masks) < 0) __PYX_ERR(0, 74, __pyx_L1_error)\n  __pyx_ptype_11pycocotools_5_mask_Masks = &__pyx_type_11pycocotools_5_mask_Masks;\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_modinit_type_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_type_import_code\", 0);\n  /*--- Type import code ---*/\n  __pyx_t_1 = PyImport_ImportModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_t_1)) __PYX_ERR(3, 9, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__pyx_t_1, __Pyx_BUILTIN_MODULE_NAME, \"type\", \n  #if defined(PYPY_VERSION_NUM) && PYPY_VERSION_NUM < 0x050B0000\n  sizeof(PyTypeObject),\n  #else\n  sizeof(PyHeapTypeObject),\n  #endif\n  __Pyx_ImportType_CheckSize_Warn);\n   if (!__pyx_ptype_7cpython_4type_type) __PYX_ERR(3, 9, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = PyImport_ImportModule(\"numpy\"); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 206, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_ptype_5numpy_dtype = __Pyx_ImportType(__pyx_t_1, \"numpy\", \"dtype\", sizeof(PyArray_Descr), __Pyx_ImportType_CheckSize_Ignore);\n   if (!__pyx_ptype_5numpy_dtype) __PYX_ERR(2, 206, __pyx_L1_error)\n  __pyx_ptype_5numpy_flatiter = __Pyx_ImportType(__pyx_t_1, \"numpy\", 
\"flatiter\", sizeof(PyArrayIterObject), __Pyx_ImportType_CheckSize_Warn);\n   if (!__pyx_ptype_5numpy_flatiter) __PYX_ERR(2, 229, __pyx_L1_error)\n  __pyx_ptype_5numpy_broadcast = __Pyx_ImportType(__pyx_t_1, \"numpy\", \"broadcast\", sizeof(PyArrayMultiIterObject), __Pyx_ImportType_CheckSize_Warn);\n   if (!__pyx_ptype_5numpy_broadcast) __PYX_ERR(2, 233, __pyx_L1_error)\n  __pyx_ptype_5numpy_ndarray = __Pyx_ImportType(__pyx_t_1, \"numpy\", \"ndarray\", sizeof(PyArrayObject), __Pyx_ImportType_CheckSize_Ignore);\n   if (!__pyx_ptype_5numpy_ndarray) __PYX_ERR(2, 242, __pyx_L1_error)\n  __pyx_ptype_5numpy_ufunc = __Pyx_ImportType(__pyx_t_1, \"numpy\", \"ufunc\", sizeof(PyUFuncObject), __Pyx_ImportType_CheckSize_Warn);\n   if (!__pyx_ptype_5numpy_ufunc) __PYX_ERR(2, 918, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_modinit_variable_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_variable_import_code\", 0);\n  /*--- Variable import code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\nstatic int __Pyx_modinit_function_import_code(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_modinit_function_import_code\", 0);\n  /*--- Function import code ---*/\n  __Pyx_RefNannyFinishContext();\n  return 0;\n}\n\n\n#if PY_MAJOR_VERSION < 3\n#ifdef CYTHON_NO_PYINIT_EXPORT\n#define __Pyx_PyMODINIT_FUNC void\n#else\n#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC\n#endif\n#else\n#ifdef CYTHON_NO_PYINIT_EXPORT\n#define __Pyx_PyMODINIT_FUNC PyObject *\n#else\n#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC\n#endif\n#endif\n\n\n#if PY_MAJOR_VERSION < 3\n__Pyx_PyMODINIT_FUNC init_mask(void) CYTHON_SMALL_CODE; /*proto*/\n__Pyx_PyMODINIT_FUNC init_mask(void)\n#else\n__Pyx_PyMODINIT_FUNC PyInit__mask(void) CYTHON_SMALL_CODE; 
/*proto*/\n__Pyx_PyMODINIT_FUNC PyInit__mask(void)\n#if CYTHON_PEP489_MULTI_PHASE_INIT\n{\n  return PyModuleDef_Init(&__pyx_moduledef);\n}\nstatic CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {\n    #if PY_VERSION_HEX >= 0x030700A1\n    static PY_INT64_T main_interpreter_id = -1;\n    PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);\n    if (main_interpreter_id == -1) {\n        main_interpreter_id = current_id;\n        return (unlikely(current_id == -1)) ? -1 : 0;\n    } else if (unlikely(main_interpreter_id != current_id))\n    #else\n    static PyInterpreterState *main_interpreter = NULL;\n    PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;\n    if (!main_interpreter) {\n        main_interpreter = current_interpreter;\n    } else if (unlikely(main_interpreter != current_interpreter))\n    #endif\n    {\n        PyErr_SetString(\n            PyExc_ImportError,\n            \"Interpreter change detected - this module can only be loaded into one interpreter per process.\");\n        return -1;\n    }\n    return 0;\n}\nstatic CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {\n    PyObject *value = PyObject_GetAttrString(spec, from_name);\n    int result = 0;\n    if (likely(value)) {\n        if (allow_none || value != Py_None) {\n            result = PyDict_SetItemString(moddict, to_name, value);\n        }\n        Py_DECREF(value);\n    } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {\n        PyErr_Clear();\n    } else {\n        result = -1;\n    }\n    return result;\n}\nstatic CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {\n    PyObject *module = NULL, *moddict, *modname;\n    if (__Pyx_check_single_interpreter())\n        return NULL;\n    if (__pyx_m)\n        return __Pyx_NewRef(__pyx_m);\n    modname = PyObject_GetAttrString(spec, 
\"name\");\n    if (unlikely(!modname)) goto bad;\n    module = PyModule_NewObject(modname);\n    Py_DECREF(modname);\n    if (unlikely(!module)) goto bad;\n    moddict = PyModule_GetDict(module);\n    if (unlikely(!moddict)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"loader\", \"__loader__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"origin\", \"__file__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"parent\", \"__package__\", 1) < 0)) goto bad;\n    if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, \"submodule_search_locations\", \"__path__\", 0) < 0)) goto bad;\n    return module;\nbad:\n    Py_XDECREF(module);\n    return NULL;\n}\n\n\nstatic CYTHON_SMALL_CODE int __pyx_pymod_exec__mask(PyObject *__pyx_pyinit_module)\n#endif\n#endif\n{\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  __Pyx_RefNannyDeclarations\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  if (__pyx_m) {\n    if (__pyx_m == __pyx_pyinit_module) return 0;\n    PyErr_SetString(PyExc_RuntimeError, \"Module '_mask' has already been imported. 
Re-initialisation is not supported.\");\n    return -1;\n  }\n  #elif PY_MAJOR_VERSION >= 3\n  if (__pyx_m) return __Pyx_NewRef(__pyx_m);\n  #endif\n  #if CYTHON_REFNANNY\n__Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"refnanny\");\nif (!__Pyx_RefNanny) {\n  PyErr_Clear();\n  __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");\n  if (!__Pyx_RefNanny)\n      Py_FatalError(\"failed to import 'refnanny' module\");\n}\n#endif\n  __Pyx_RefNannySetupContext(\"__Pyx_PyMODINIT_FUNC PyInit__mask(void)\", 0);\n  if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pxy_PyFrame_Initialize_Offsets\n  __Pxy_PyFrame_Initialize_Offsets();\n  #endif\n  __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_bytes = PyBytes_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_unicode = PyUnicode_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pyx_CyFunction_USED\n  if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_FusedFunction_USED\n  if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Coroutine_USED\n  if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Generator_USED\n  if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_AsyncGen_USED\n  if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_StopAsyncIteration_USED\n  if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  /*--- Library function declarations ---*/\n  /*--- Threads initialization code ---*/\n  #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS\n  #ifdef WITH_THREAD /* Python build with threading support? 
*/\n  PyEval_InitThreads();\n  #endif\n  #endif\n  /*--- Module creation code ---*/\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  __pyx_m = __pyx_pyinit_module;\n  Py_INCREF(__pyx_m);\n  #else\n  #if PY_MAJOR_VERSION < 3\n  __pyx_m = Py_InitModule4(\"_mask\", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);\n  #else\n  __pyx_m = PyModule_Create(&__pyx_moduledef);\n  #endif\n  if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_d);\n  __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_b);\n  __pyx_cython_runtime = PyImport_AddModule((char *) \"cython_runtime\"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_cython_runtime);\n  if (PyObject_SetAttrString(__pyx_m, \"__builtins__\", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  /*--- Initialize various global constants etc. 
---*/\n  if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n  if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  if (__pyx_module_is_main_pycocotools___mask) {\n    if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  }\n  #if PY_MAJOR_VERSION >= 3\n  {\n    PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)\n    if (!PyDict_GetItemString(modules, \"pycocotools._mask\")) {\n      if (unlikely(PyDict_SetItemString(modules, \"pycocotools._mask\", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)\n    }\n  }\n  #endif\n  /*--- Builtin init code ---*/\n  if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;\n  /*--- Constants init code ---*/\n  if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;\n  /*--- Global type/function init code ---*/\n  (void)__Pyx_modinit_global_init_code();\n  (void)__Pyx_modinit_variable_export_code();\n  (void)__Pyx_modinit_function_export_code();\n  if (unlikely(__Pyx_modinit_type_init_code() != 0)) goto __pyx_L1_error;\n  if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;\n  (void)__Pyx_modinit_variable_import_code();\n  (void)__Pyx_modinit_function_import_code();\n  /*--- Execution code ---*/\n  #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)\n  if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n\n  /* \"pycocotools/_mask.pyx\":11\n * #**************************************************************************\n * \n * __author__ = 'tsungyi'             # <<<<<<<<<<<<<<\n * \n * # import both Python-level and C-level symbols of Numpy\n */\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_author, __pyx_n_s_tsungyi) < 0) __PYX_ERR(0, 11, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":15\n * # import both 
Python-level and C-level symbols of Numpy\n * # the API uses Numpy to interface C and Python\n * import numpy as np             # <<<<<<<<<<<<<<\n * cimport numpy as np\n * from libc.stdlib cimport malloc, free\n */\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":20\n * \n * # intialized Numpy. must do.\n * np.import_array()             # <<<<<<<<<<<<<<\n * \n * # import numpy C function\n */\n  __pyx_t_2 = __pyx_f_5numpy_import_array(); if (unlikely(__pyx_t_2 == ((int)-1))) __PYX_ERR(0, 20, __pyx_L1_error)\n\n  /* \"pycocotools/_mask.pyx\":100\n * \n * # internal conversion from Python RLEs object to compressed RLE format\n * def _toString(RLEs Rs):             # <<<<<<<<<<<<<<\n *     cdef siz n = Rs.n\n *     cdef bytes py_string\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_1_toString, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_toString, __pyx_t_1) < 0) __PYX_ERR(0, 100, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":116\n * \n * # internal conversion from compressed RLE format to Python RLEs object\n * def _frString(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef siz n = len(rleObjs)\n *     Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_3_frString, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frString, __pyx_t_1) < 0) __PYX_ERR(0, 116, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":129\n * # encode mask to 
RLEs objects\n * # list of RLE string can be generated by RLEs member function\n * def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):             # <<<<<<<<<<<<<<\n *     h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n *     cdef RLEs Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_5encode, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_encode, __pyx_t_1) < 0) __PYX_ERR(0, 129, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":137\n * \n * # decode mask from compressed list of RLE string or RLEs object\n * def decode(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_7decode, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_decode, __pyx_t_1) < 0) __PYX_ERR(0, 137, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":144\n *     return np.array(masks)\n * \n * def merge(rleObjs, bint intersect=0):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef RLEs R = RLEs(1)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_9merge, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 144, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_merge, __pyx_t_1) < 0) __PYX_ERR(0, 144, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":151\n *     return obj\n * \n * def area(rleObjs):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n 
*/\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_11area, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_area, __pyx_t_1) < 0) __PYX_ERR(0, 151, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":163\n * \n * # iou computation. support function overload (RLEs-RLEs and bbox-bbox).\n * def iou( dt, gt, pyiscrowd ):             # <<<<<<<<<<<<<<\n *     def _preproc(objs):\n *         if len(objs) == 0:\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_13iou, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_iou_2, __pyx_t_1) < 0) __PYX_ERR(0, 163, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":233\n *     return iou.reshape((m,n), order='F')\n * \n * def toBbox( rleObjs ):             # <<<<<<<<<<<<<<\n *     cdef RLEs Rs = _frString(rleObjs)\n *     cdef siz n = Rs.n\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_15toBbox, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_toBbox, __pyx_t_1) < 0) __PYX_ERR(0, 233, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":245\n *     return bb\n * \n * def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef siz n = bb.shape[0]\n *     Rs = RLEs(n)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_17frBbox, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frBbox, __pyx_t_1) < 0) __PYX_ERR(0, 245, 
__pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":252\n *     return objs\n * \n * def frPoly( poly, siz h, siz w ):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.double_t, ndim=1] np_poly\n *     n = len(poly)\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_19frPoly, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frPoly, __pyx_t_1) < 0) __PYX_ERR(0, 252, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":262\n *     return objs\n * \n * def frUncompressedRLE(ucRles, siz h, siz w):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[np.uint32_t, ndim=1] cnts\n *     cdef RLE R\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_21frUncompressedRLE, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 262, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frUncompressedRLE, __pyx_t_1) < 0) __PYX_ERR(0, 262, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":280\n *     return objs\n * \n * def frPyObjects(pyobj, siz h, w):             # <<<<<<<<<<<<<<\n *     if type(pyobj) == np.ndarray:\n *         objs = frBbox(pyobj, h, w )\n */\n  __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_11pycocotools_5_mask_23frPyObjects, NULL, __pyx_n_s_pycocotools__mask); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_frPyObjects, __pyx_t_1) < 0) __PYX_ERR(0, 280, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"pycocotools/_mask.pyx\":1\n * # distutils: language = c             # <<<<<<<<<<<<<<\n * # distutils: sources = ../MatlabAPI/private/maskApi.c\n * \n */\n  __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) 
__PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"../../../anaconda3/envs/pytorch0.3/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1046\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /*--- Wrapped vars code ---*/\n\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  if (__pyx_m) {\n    if (__pyx_d) {\n      __Pyx_AddTraceback(\"init pycocotools._mask\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n    }\n    Py_CLEAR(__pyx_m);\n  } else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_ImportError, \"init pycocotools._mask\");\n  }\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  #if CYTHON_PEP489_MULTI_PHASE_INIT\n  return (__pyx_m != NULL) ? 
0 : -1;\n  #elif PY_MAJOR_VERSION >= 3\n  return __pyx_m;\n  #else\n  return;\n  #endif\n}\n\n/* --- Runtime support code --- */\n/* Refnanny */\n#if CYTHON_REFNANNY\nstatic __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {\n    PyObject *m = NULL, *p = NULL;\n    void *r = NULL;\n    m = PyImport_ImportModule(modname);\n    if (!m) goto end;\n    p = PyObject_GetAttrString(m, \"RefNannyAPI\");\n    if (!p) goto end;\n    r = PyLong_AsVoidPtr(p);\nend:\n    Py_XDECREF(p);\n    Py_XDECREF(m);\n    return (__Pyx_RefNannyAPIStruct *)r;\n}\n#endif\n\n/* PyObjectGetAttrStr */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_getattro))\n        return tp->tp_getattro(obj, attr_name);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_getattr))\n        return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));\n#endif\n    return PyObject_GetAttr(obj, attr_name);\n}\n#endif\n\n/* GetBuiltinName */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name) {\n    PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);\n    if (unlikely(!result)) {\n        PyErr_Format(PyExc_NameError,\n#if PY_MAJOR_VERSION >= 3\n            \"name '%U' is not defined\", name);\n#else\n            \"name '%.200s' is not defined\", PyString_AS_STRING(name));\n#endif\n    }\n    return result;\n}\n\n/* RaiseDoubleKeywords */\nstatic void __Pyx_RaiseDoubleKeywordsError(\n    const char* func_name,\n    PyObject* kw_name)\n{\n    PyErr_Format(PyExc_TypeError,\n        #if PY_MAJOR_VERSION >= 3\n        \"%s() got multiple values for keyword argument '%U'\", func_name, kw_name);\n        #else\n        \"%s() got multiple values for keyword argument '%s'\", func_name,\n        PyString_AsString(kw_name));\n        #endif\n}\n\n/* ParseKeywords */\nstatic int __Pyx_ParseOptionalKeywords(\n    PyObject *kwds,\n    PyObject **argnames[],\n    
PyObject *kwds2,\n    PyObject *values[],\n    Py_ssize_t num_pos_args,\n    const char* function_name)\n{\n    PyObject *key = 0, *value = 0;\n    Py_ssize_t pos = 0;\n    PyObject*** name;\n    PyObject*** first_kw_arg = argnames + num_pos_args;\n    while (PyDict_Next(kwds, &pos, &key, &value)) {\n        name = first_kw_arg;\n        while (*name && (**name != key)) name++;\n        if (*name) {\n            values[name-argnames] = value;\n            continue;\n        }\n        name = first_kw_arg;\n        #if PY_MAJOR_VERSION < 3\n        if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) {\n            while (*name) {\n                if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))\n                        && _PyString_Eq(**name, key)) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    if ((**argname == key) || (\n                            (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))\n                             && _PyString_Eq(**argname, key))) {\n                        goto arg_passed_twice;\n                    }\n                    argname++;\n                }\n            }\n        } else\n        #endif\n        if (likely(PyUnicode_Check(key))) {\n            while (*name) {\n                int cmp = (**name == key) ? 0 :\n                #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 :\n                #endif\n                    PyUnicode_Compare(**name, key);\n                if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                if (cmp == 0) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    int cmp = (**argname == key) ? 0 :\n                    #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                        (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\n                    #endif\n                        PyUnicode_Compare(**argname, key);\n                    if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                    if (cmp == 0) goto arg_passed_twice;\n                    argname++;\n                }\n            }\n        } else\n            goto invalid_keyword_type;\n        if (kwds2) {\n            if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;\n        } else {\n            goto invalid_keyword;\n        }\n    }\n    return 0;\narg_passed_twice:\n    __Pyx_RaiseDoubleKeywordsError(function_name, key);\n    goto bad;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    goto bad;\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\nbad:\n    return -1;\n}\n\n/* RaiseArgTupleInvalid */\nstatic void __Pyx_RaiseArgtupleInvalid(\n    const char* func_name,\n    int exact,\n    Py_ssize_t num_min,\n    Py_ssize_t num_max,\n    Py_ssize_t num_found)\n{\n    Py_ssize_t 
num_expected;\n    const char *more_or_less;\n    if (num_found < num_min) {\n        num_expected = num_min;\n        more_or_less = \"at least\";\n    } else {\n        num_expected = num_max;\n        more_or_less = \"at most\";\n    }\n    if (exact) {\n        more_or_less = \"exactly\";\n    }\n    PyErr_Format(PyExc_TypeError,\n                 \"%.200s() takes %.8s %\" CYTHON_FORMAT_SSIZE_T \"d positional argument%.1s (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                 func_name, more_or_less, num_expected,\n                 (num_expected == 1) ? \"\" : \"s\", num_found);\n}\n\n/* BytesEquals */\nstatic CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) {\n#if CYTHON_COMPILING_IN_PYPY\n    return PyObject_RichCompareBool(s1, s2, equals);\n#else\n    if (s1 == s2) {\n        return (equals == Py_EQ);\n    } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) {\n        const char *ps1, *ps2;\n        Py_ssize_t length = PyBytes_GET_SIZE(s1);\n        if (length != PyBytes_GET_SIZE(s2))\n            return (equals == Py_NE);\n        ps1 = PyBytes_AS_STRING(s1);\n        ps2 = PyBytes_AS_STRING(s2);\n        if (ps1[0] != ps2[0]) {\n            return (equals == Py_NE);\n        } else if (length == 1) {\n            return (equals == Py_EQ);\n        } else {\n            int result;\n#if CYTHON_USE_UNICODE_INTERNALS\n            Py_hash_t hash1, hash2;\n            hash1 = ((PyBytesObject*)s1)->ob_shash;\n            hash2 = ((PyBytesObject*)s2)->ob_shash;\n            if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {\n                return (equals == Py_NE);\n            }\n#endif\n            result = memcmp(ps1, ps2, (size_t)length);\n            return (equals == Py_EQ) ? 
(result == 0) : (result != 0);
        }
    } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) {
        return (equals == Py_NE);
    } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) {
        return (equals == Py_NE);
    } else {
        int result;
        PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
        if (!py_result)
            return -1;
        result = __Pyx_PyObject_IsTrue(py_result);
        Py_DECREF(py_result);
        return result;
    }
#endif
}

/* UnicodeEquals */
static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) {
#if CYTHON_COMPILING_IN_PYPY
    return PyObject_RichCompareBool(s1, s2, equals);
#else
#if PY_MAJOR_VERSION < 3
    PyObject* owned_ref = NULL;
#endif
    int s1_is_unicode, s2_is_unicode;
    if (s1 == s2) {
        goto return_eq;
    }
    s1_is_unicode = PyUnicode_CheckExact(s1);
    s2_is_unicode = PyUnicode_CheckExact(s2);
#if PY_MAJOR_VERSION < 3
    if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) {
        owned_ref = PyUnicode_FromObject(s2);
        if (unlikely(!owned_ref))
            return -1;
        s2 = owned_ref;
        s2_is_unicode = 1;
    } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) {
        owned_ref = PyUnicode_FromObject(s1);
        if (unlikely(!owned_ref))
            return -1;
        s1 = owned_ref;
        s1_is_unicode = 1;
    } else if (((!s2_is_unicode) & (!s1_is_unicode))) {
        return __Pyx_PyBytes_Equals(s1, s2, equals);
    }
#endif
    if (s1_is_unicode & s2_is_unicode) {
        Py_ssize_t length;
        int kind;
        void *data1, *data2;
        if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0))
            return -1;
        length = __Pyx_PyUnicode_GET_LENGTH(s1);
        if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) {
            goto return_ne;
        }
#if CYTHON_USE_UNICODE_INTERNALS
        {
            Py_hash_t hash1, hash2;
        #if CYTHON_PEP393_ENABLED
            hash1 = ((PyASCIIObject*)s1)->hash;
            hash2 = ((PyASCIIObject*)s2)->hash;
        #else
            hash1 = ((PyUnicodeObject*)s1)->hash;
            hash2 = ((PyUnicodeObject*)s2)->hash;
        #endif
            if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
                goto return_ne;
            }
        }
#endif
        kind = __Pyx_PyUnicode_KIND(s1);
        if (kind != __Pyx_PyUnicode_KIND(s2)) {
            goto return_ne;
        }
        data1 = __Pyx_PyUnicode_DATA(s1);
        data2 = __Pyx_PyUnicode_DATA(s2);
        if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) {
            goto return_ne;
        } else if (length == 1) {
            goto return_eq;
        } else {
            int result = memcmp(data1, data2, (size_t)(length * kind));
            #if PY_MAJOR_VERSION < 3
            Py_XDECREF(owned_ref);
            #endif
            return (equals == Py_EQ) ?
(result == 0) : (result != 0);
        }
    } else if ((s1 == Py_None) & s2_is_unicode) {
        goto return_ne;
    } else if ((s2 == Py_None) & s1_is_unicode) {
        goto return_ne;
    } else {
        int result;
        PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
        #if PY_MAJOR_VERSION < 3
        Py_XDECREF(owned_ref);
        #endif
        if (!py_result)
            return -1;
        result = __Pyx_PyObject_IsTrue(py_result);
        Py_DECREF(py_result);
        return result;
    }
return_eq:
    #if PY_MAJOR_VERSION < 3
    Py_XDECREF(owned_ref);
    #endif
    return (equals == Py_EQ);
return_ne:
    #if PY_MAJOR_VERSION < 3
    Py_XDECREF(owned_ref);
    #endif
    return (equals == Py_NE);
#endif
}

/* PyCFunctionFastCall */
#if CYTHON_FAST_PYCCALL
static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) {
    PyCFunctionObject *func = (PyCFunctionObject*)func_obj;
    PyCFunction meth = PyCFunction_GET_FUNCTION(func);
    PyObject *self = PyCFunction_GET_SELF(func);
    int flags = PyCFunction_GET_FLAGS(func);
    assert(PyCFunction_Check(func));
    assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)));
    assert(nargs >= 0);
    assert(nargs == 0 || args != NULL);
    /* _PyCFunction_FastCallDict() must not be called with an exception set,
       because it may clear it (directly or indirectly) and so the
       caller loses its exception */
    assert(!PyErr_Occurred());
    if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) {
        return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL);
    } else {
        return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs);
    }
}
#endif

/* PyFunctionFastCall */
#if CYTHON_FAST_PYCALL
static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co,
PyObject **args, Py_ssize_t na,
                                               PyObject *globals) {
    PyFrameObject *f;
    PyThreadState *tstate = __Pyx_PyThreadState_Current;
    PyObject **fastlocals;
    Py_ssize_t i;
    PyObject *result;
    assert(globals != NULL);
    /* XXX Perhaps we should create a specialized
       PyFrame_New() that doesn't take locals, but does
       take builtins without sanity checking them.
       */
    assert(tstate != NULL);
    f = PyFrame_New(tstate, co, globals, NULL);
    if (f == NULL) {
        return NULL;
    }
    fastlocals = __Pyx_PyFrame_GetLocalsplus(f);
    for (i = 0; i < na; i++) {
        Py_INCREF(*args);
        fastlocals[i] = *args++;
    }
    result = PyEval_EvalFrameEx(f,0);
    ++tstate->recursion_depth;
    Py_DECREF(f);
    --tstate->recursion_depth;
    return result;
}
#if 1 || PY_VERSION_HEX < 0x030600B1
static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) {
    PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);
    PyObject *globals = PyFunction_GET_GLOBALS(func);
    PyObject *argdefs = PyFunction_GET_DEFAULTS(func);
    PyObject *closure;
#if PY_MAJOR_VERSION >= 3
    PyObject *kwdefs;
#endif
    PyObject *kwtuple, **k;
    PyObject **d;
    Py_ssize_t nd;
    Py_ssize_t nk;
    PyObject *result;
    assert(kwargs == NULL || PyDict_Check(kwargs));
    nk = kwargs ?
PyDict_Size(kwargs) : 0;
    if (Py_EnterRecursiveCall((char*)" while calling a Python object")) {
        return NULL;
    }
    if (
#if PY_MAJOR_VERSION >= 3
            co->co_kwonlyargcount == 0 &&
#endif
            likely(kwargs == NULL || nk == 0) &&
            co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) {
        if (argdefs == NULL && co->co_argcount == nargs) {
            result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals);
            goto done;
        }
        else if (nargs == 0 && argdefs != NULL
                 && co->co_argcount == Py_SIZE(argdefs)) {
            /* function called with no arguments, but all parameters have
               a default value: use default values as arguments. */
            args = &PyTuple_GET_ITEM(argdefs, 0);
            result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals);
            goto done;
        }
    }
    if (kwargs != NULL) {
        Py_ssize_t pos, i;
        kwtuple = PyTuple_New(2 * nk);
        if (kwtuple == NULL) {
            result = NULL;
            goto done;
        }
        k = &PyTuple_GET_ITEM(kwtuple, 0);
        pos = i = 0;
        while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) {
            Py_INCREF(k[i]);
            Py_INCREF(k[i+1]);
            i += 2;
        }
        nk = i / 2;
    }
    else {
        kwtuple = NULL;
        k = NULL;
    }
    closure = PyFunction_GET_CLOSURE(func);
#if PY_MAJOR_VERSION >= 3
    kwdefs = PyFunction_GET_KW_DEFAULTS(func);
#endif
    if (argdefs != NULL) {
        d = &PyTuple_GET_ITEM(argdefs, 0);
        nd = Py_SIZE(argdefs);
    }
    else {
        d = NULL;
        nd = 0;
    }
#if PY_MAJOR_VERSION >= 3
    result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,
                               args, (int)nargs,
                               k, (int)nk,
                               d, (int)nd, kwdefs,
closure);
#else
    result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,
                               args, (int)nargs,
                               k, (int)nk,
                               d, (int)nd, closure);
#endif
    Py_XDECREF(kwtuple);
done:
    Py_LeaveRecursiveCall();
    return result;
}
#endif
#endif

/* PyObjectCall */
#if CYTHON_COMPILING_IN_CPYTHON
static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {
    PyObject *result;
    ternaryfunc call = func->ob_type->tp_call;
    if (unlikely(!call))
        return PyObject_Call(func, arg, kw);
    if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
        return NULL;
    result = (*call)(func, arg, kw);
    Py_LeaveRecursiveCall();
    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
        PyErr_SetString(
            PyExc_SystemError,
            "NULL result without error in PyObject_Call");
    }
    return result;
}
#endif

/* PyObjectCallMethO */
#if CYTHON_COMPILING_IN_CPYTHON
static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) {
    PyObject *self, *result;
    PyCFunction cfunc;
    cfunc = PyCFunction_GET_FUNCTION(func);
    self = PyCFunction_GET_SELF(func);
    if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
        return NULL;
    result = cfunc(self, arg);
    Py_LeaveRecursiveCall();
    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
        PyErr_SetString(
            PyExc_SystemError,
            "NULL result without error in PyObject_Call");
    }
    return result;
}
#endif

/* PyObjectCallOneArg */
#if CYTHON_COMPILING_IN_CPYTHON
static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) {
    PyObject *result;
    PyObject *args = PyTuple_New(1);
    if (unlikely(!args)) return NULL;
    Py_INCREF(arg);
    PyTuple_SET_ITEM(args, 0,
arg);
    result = __Pyx_PyObject_Call(func, args, NULL);
    Py_DECREF(args);
    return result;
}
static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
#if CYTHON_FAST_PYCALL
    if (PyFunction_Check(func)) {
        return __Pyx_PyFunction_FastCall(func, &arg, 1);
    }
#endif
    if (likely(PyCFunction_Check(func))) {
        if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) {
            return __Pyx_PyObject_CallMethO(func, arg);
#if CYTHON_FAST_PYCCALL
        } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) {
            return __Pyx_PyCFunction_FastCall(func, &arg, 1);
#endif
        }
    }
    return __Pyx__PyObject_CallOneArg(func, arg);
}
#else
static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
    PyObject *result;
    PyObject *args = PyTuple_Pack(1, arg);
    if (unlikely(!args)) return NULL;
    result = __Pyx_PyObject_Call(func, args, NULL);
    Py_DECREF(args);
    return result;
}
#endif

/* PyErrFetchRestore */
#if CYTHON_FAST_THREAD_STATE
static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
    PyObject *tmp_type, *tmp_value, *tmp_tb;
    tmp_type = tstate->curexc_type;
    tmp_value = tstate->curexc_value;
    tmp_tb = tstate->curexc_traceback;
    tstate->curexc_type = type;
    tstate->curexc_value = value;
    tstate->curexc_traceback = tb;
    Py_XDECREF(tmp_type);
    Py_XDECREF(tmp_value);
    Py_XDECREF(tmp_tb);
}
static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
    *type = tstate->curexc_type;
    *value = tstate->curexc_value;
    *tb = tstate->curexc_traceback;
    tstate->curexc_type = 0;
    tstate->curexc_value = 0;
    tstate->curexc_traceback = 0;
}
#endif

/* RaiseException */
#if PY_MAJOR_VERSION < 3
static void __Pyx_Raise(PyObject *type,
PyObject *value, PyObject *tb,
                        CYTHON_UNUSED PyObject *cause) {
    __Pyx_PyThreadState_declare
    Py_XINCREF(type);
    if (!value || value == Py_None)
        value = NULL;
    else
        Py_INCREF(value);
    if (!tb || tb == Py_None)
        tb = NULL;
    else {
        Py_INCREF(tb);
        if (!PyTraceBack_Check(tb)) {
            PyErr_SetString(PyExc_TypeError,
                "raise: arg 3 must be a traceback or None");
            goto raise_error;
        }
    }
    if (PyType_Check(type)) {
#if CYTHON_COMPILING_IN_PYPY
        if (!value) {
            Py_INCREF(Py_None);
            value = Py_None;
        }
#endif
        PyErr_NormalizeException(&type, &value, &tb);
    } else {
        if (value) {
            PyErr_SetString(PyExc_TypeError,
                "instance exception may not have a separate value");
            goto raise_error;
        }
        value = type;
        type = (PyObject*) Py_TYPE(type);
        Py_INCREF(type);
        if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {
            PyErr_SetString(PyExc_TypeError,
                "raise: exception class must be a subclass of BaseException");
            goto raise_error;
        }
    }
    __Pyx_PyThreadState_assign
    __Pyx_ErrRestore(type, value, tb);
    return;
raise_error:
    Py_XDECREF(value);
    Py_XDECREF(type);
    Py_XDECREF(tb);
    return;
}
#else
static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {
    PyObject* owned_instance = NULL;
    if (tb == Py_None) {
        tb = 0;
    } else if (tb && !PyTraceBack_Check(tb)) {
        PyErr_SetString(PyExc_TypeError,
            "raise: arg 3 must be a traceback or None");
        goto bad;
    }
    if (value == Py_None)
        value = 0;
    if (PyExceptionInstance_Check(type)) {
        if (value) {
            PyErr_SetString(PyExc_TypeError,
                "instance exception may not have a separate value");
            goto bad;
        }
        value = type;
        type = (PyObject*) Py_TYPE(value);
    } else if (PyExceptionClass_Check(type)) {
        PyObject *instance_class = NULL;
        if (value && PyExceptionInstance_Check(value)) {
            instance_class = (PyObject*) Py_TYPE(value);
            if (instance_class != type) {
                int is_subclass = PyObject_IsSubclass(instance_class, type);
                if (!is_subclass) {
                    instance_class = NULL;
                } else if (unlikely(is_subclass == -1)) {
                    goto bad;
                } else {
                    type = instance_class;
                }
            }
        }
        if (!instance_class) {
            PyObject *args;
            if (!value)
                args = PyTuple_New(0);
            else if (PyTuple_Check(value)) {
                Py_INCREF(value);
                args = value;
            } else
                args = PyTuple_Pack(1, value);
            if (!args)
                goto bad;
            owned_instance = PyObject_Call(type, args, NULL);
            Py_DECREF(args);
            if (!owned_instance)
                goto bad;
            value = owned_instance;
            if (!PyExceptionInstance_Check(value)) {
                PyErr_Format(PyExc_TypeError,
                             "calling %R should have returned an instance of "
                             "BaseException, not %R",
                             type, Py_TYPE(value));
                goto bad;
            }
        }
    } else {
        PyErr_SetString(PyExc_TypeError,
            "raise: exception class must be a subclass of BaseException");
        goto bad;
    }
    if (cause) {
        PyObject *fixed_cause;
        if (cause == Py_None) {
            fixed_cause = NULL;
        }
        else if (PyExceptionClass_Check(cause)) {
            fixed_cause = PyObject_CallObject(cause, NULL);
            if (fixed_cause == NULL)
                goto bad;
        } else if (PyExceptionInstance_Check(cause)) {
            fixed_cause = cause;
            Py_INCREF(fixed_cause);
        } else {
            PyErr_SetString(PyExc_TypeError,
                            "exception causes must derive from "
                            "BaseException");
            goto bad;
        }
        PyException_SetCause(value, fixed_cause);
    }
    PyErr_SetObject(type, value);
    if (tb) {
#if CYTHON_COMPILING_IN_PYPY
        PyObject *tmp_type, *tmp_value, *tmp_tb;
        PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);
        Py_INCREF(tb);
        PyErr_Restore(tmp_type, tmp_value, tb);
        Py_XDECREF(tmp_tb);
#else
        PyThreadState *tstate = __Pyx_PyThreadState_Current;
        PyObject* tmp_tb = tstate->curexc_traceback;
        if (tb != tmp_tb) {
            Py_INCREF(tb);
            tstate->curexc_traceback = tb;
            Py_XDECREF(tmp_tb);
        }
#endif
    }
bad:
    Py_XDECREF(owned_instance);
    return;
}
#endif

/* ExtTypeTest */
static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {
    if (unlikely(!type)) {
        PyErr_SetString(PyExc_SystemError, "Missing type object");
        return 0;
    }
    if (likely(__Pyx_TypeCheck(obj, type)))
        return 1;
    PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s",
                 Py_TYPE(obj)->tp_name, type->tp_name);
    return 0;
}

/* ArgTypeTest */
static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact)
{
    if (unlikely(!type)) {
        PyErr_SetString(PyExc_SystemError, "Missing type object");
        return 0;
    }
    else if (exact) {
        #if PY_MAJOR_VERSION == 2
        if ((type == &PyBaseString_Type) &&
likely(__Pyx_PyBaseString_CheckExact(obj))) return 1;
        #endif
    }
    else {
        if (likely(__Pyx_TypeCheck(obj, type))) return 1;
    }
    PyErr_Format(PyExc_TypeError,
        "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)",
        name, type->tp_name, Py_TYPE(obj)->tp_name);
    return 0;
}

/* PyIntBinop */
#if !CYTHON_COMPILING_IN_PYPY
static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) {
    (void)inplace;
    (void)zerodivision_check;
    #if PY_MAJOR_VERSION < 3
    if (likely(PyInt_CheckExact(op1))) {
        const long b = intval;
        long x;
        long a = PyInt_AS_LONG(op1);
            x = (long)((unsigned long)a + b);
            if (likely((x^a) >= 0 || (x^b) >= 0))
                return PyInt_FromLong(x);
            return PyLong_Type.tp_as_number->nb_add(op1, op2);
    }
    #endif
    #if CYTHON_USE_PYLONG_INTERNALS
    if (likely(PyLong_CheckExact(op1))) {
        const long b = intval;
        long a, x;
#ifdef HAVE_LONG_LONG
        const PY_LONG_LONG llb = intval;
        PY_LONG_LONG lla, llx;
#endif
        const digit* digits = ((PyLongObject*)op1)->ob_digit;
        const Py_ssize_t size = Py_SIZE(op1);
        if (likely(__Pyx_sst_abs(size) <= 1)) {
            a = likely(size) ?
digits[0] : 0;
            if (size == -1) a = -a;
        } else {
            switch (size) {
                case -2:
                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
                        a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {
                        lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                case 2:
                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
                        a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) {
                        lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                case -3:
                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
                        a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {
                        lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                case 3:
                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
                        a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) {
                        lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                case -4:
                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
                        a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {
                        lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                case 4:
                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
                        a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));
                        break;
#ifdef HAVE_LONG_LONG
                    } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) {
                        lla = (PY_LONG_LONG) (((((((((unsigned
PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0]));
                        goto long_long;
#endif
                    }
                    CYTHON_FALLTHROUGH;
                default: return PyLong_Type.tp_as_number->nb_add(op1, op2);
            }
        }
                x = a + b;
            return PyLong_FromLong(x);
#ifdef HAVE_LONG_LONG
        long_long:
                llx = lla + llb;
            return PyLong_FromLongLong(llx);
#endif
    }
    #endif
    if (PyFloat_CheckExact(op1)) {
        const long b = intval;
        double a = PyFloat_AS_DOUBLE(op1);
            double result;
            PyFPE_START_PROTECT("add", return NULL)
            result = ((double)a) + (double)b;
            PyFPE_END_PROTECT(result)
            return PyFloat_FromDouble(result);
    }
    return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2);
}
#endif

/* DictGetItem */
#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {
    PyObject *value;
    value = PyDict_GetItemWithError(d, key);
    if (unlikely(!value)) {
        if (!PyErr_Occurred()) {
            if (unlikely(PyTuple_Check(key))) {
                PyObject* args = PyTuple_Pack(1, key);
                if (likely(args)) {
                    PyErr_SetObject(PyExc_KeyError, args);
                    Py_DECREF(args);
                }
            } else {
                PyErr_SetObject(PyExc_KeyError, key);
            }
        }
        return NULL;
    }
    Py_INCREF(value);
    return value;
}
#endif

/* GetItemInt */
static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) {
    PyObject *r;
    if (!j) return NULL;
    r = PyObject_GetItem(o, j);
    Py_DECREF(j);
    return r;
}
static
CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i,
                                                              CYTHON_NCP_UNUSED int wraparound,
                                                              CYTHON_NCP_UNUSED int boundscheck) {
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
    Py_ssize_t wrapped_i = i;
    if (wraparound & unlikely(i < 0)) {
        wrapped_i += PyList_GET_SIZE(o);
    }
    if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) {
        PyObject *r = PyList_GET_ITEM(o, wrapped_i);
        Py_INCREF(r);
        return r;
    }
    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
#else
    return PySequence_GetItem(o, i);
#endif
}
static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i,
                                                              CYTHON_NCP_UNUSED int wraparound,
                                                              CYTHON_NCP_UNUSED int boundscheck) {
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
    Py_ssize_t wrapped_i = i;
    if (wraparound & unlikely(i < 0)) {
        wrapped_i += PyTuple_GET_SIZE(o);
    }
    if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) {
        PyObject *r = PyTuple_GET_ITEM(o, wrapped_i);
        Py_INCREF(r);
        return r;
    }
    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
#else
    return PySequence_GetItem(o, i);
#endif
}
static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list,
                                                     CYTHON_NCP_UNUSED int wraparound,
                                                     CYTHON_NCP_UNUSED int boundscheck) {
#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS
    if (is_list || PyList_CheckExact(o)) {
        Py_ssize_t n = ((!wraparound) |
likely(i >= 0)) ? i : i + PyList_GET_SIZE(o);
        if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) {
            PyObject *r = PyList_GET_ITEM(o, n);
            Py_INCREF(r);
            return r;
        }
    }
    else if (PyTuple_CheckExact(o)) {
        Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o);
        if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) {
            PyObject *r = PyTuple_GET_ITEM(o, n);
            Py_INCREF(r);
            return r;
        }
    } else {
        PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence;
        if (likely(m && m->sq_item)) {
            if (wraparound && unlikely(i < 0) && likely(m->sq_length)) {
                Py_ssize_t l = m->sq_length(o);
                if (likely(l >= 0)) {
                    i += l;
                } else {
                    if (!PyErr_ExceptionMatches(PyExc_OverflowError))
                        return NULL;
                    PyErr_Clear();
                }
            }
            return m->sq_item(o, i);
        }
    }
#else
    if (is_list || PySequence_Check(o)) {
        return PySequence_GetItem(o, i);
    }
#endif
    return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i));
}

/* IsLittleEndian */
static CYTHON_INLINE int __Pyx_Is_Little_Endian(void)
{
  union {
    uint32_t u32;
    uint8_t u8[4];
  } S;
  S.u32 = 0x01020304;
  return S.u8[0] == 4;
}

/* BufferFormatCheck */
static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
                              __Pyx_BufFmt_StackElem* stack,
                              __Pyx_TypeInfo* type) {
  stack[0].field = &ctx->root;
  stack[0].parent_offset = 0;
  ctx->root.type = type;
  ctx->root.name = "buffer dtype";
  ctx->root.offset = 0;
  ctx->head = stack;
  ctx->head->field = &ctx->root;
  ctx->fmt_offset = 0;
  ctx->head->parent_offset = 0;
  ctx->new_packmode =
'@';
  ctx->enc_packmode = '@';
  ctx->new_count = 1;
  ctx->enc_count = 0;
  ctx->enc_type = 0;
  ctx->is_complex = 0;
  ctx->is_valid_array = 0;
  ctx->struct_alignment = 0;
  while (type->typegroup == 'S') {
    ++ctx->head;
    ctx->head->field = type->fields;
    ctx->head->parent_offset = 0;
    type = type->fields->type;
  }
}
static int __Pyx_BufFmt_ParseNumber(const char** ts) {
    int count;
    const char* t = *ts;
    if (*t < '0' || *t > '9') {
      return -1;
    } else {
        count = *t++ - '0';
        while (*t >= '0' && *t <= '9') {
            count *= 10;
            count += *t++ - '0';
        }
    }
    *ts = t;
    return count;
}
static int __Pyx_BufFmt_ExpectNumber(const char **ts) {
    int number = __Pyx_BufFmt_ParseNumber(ts);
    if (number == -1)
        PyErr_Format(PyExc_ValueError,\
                     "Does not understand character buffer dtype format string ('%c')", **ts);
    return number;
}
static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {
  PyErr_Format(PyExc_ValueError,
               "Unexpected format string character: '%c'", ch);
}
static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {
  switch (ch) {
    case 'c': return "'char'";
    case 'b': return "'signed char'";
    case 'B': return "'unsigned char'";
    case 'h': return "'short'";
    case 'H': return "'unsigned short'";
    case 'i': return "'int'";
    case 'I': return "'unsigned int'";
    case 'l': return "'long'";
    case 'L': return "'unsigned long'";
    case 'q': return "'long long'";
    case 'Q': return "'unsigned long long'";
    case 'f': return (is_complex ? "'complex float'" : "'float'");
    case 'd': return (is_complex ? "'complex double'" : "'double'");
    case 'g': return (is_complex ?
"'complex long double'" : "'long double'");
    case 'T': return "a struct";
    case 'O': return "Python object";
    case 'P': return "a pointer";
    case 's': case 'p': return "a string";
    case 0: return "end";
    default: return "unparseable format string";
  }
}
static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) {
  switch (ch) {
    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
    case 'h': case 'H': return 2;
    case 'i': case 'I': case 'l': case 'L': return 4;
    case 'q': case 'Q': return 8;
    case 'f': return (is_complex ? 8 : 4);
    case 'd': return (is_complex ? 16 : 8);
    case 'g': {
      PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g')..");
      return 0;
    }
    case 'O': case 'P': return sizeof(void*);
    default:
      __Pyx_BufFmt_RaiseUnexpectedChar(ch);
      return 0;
    }
}
static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {
  switch (ch) {
    case 'c': case 'b': case 'B': case 's': case 'p': return 1;
    case 'h': case 'H': return sizeof(short);
    case 'i': case 'I': return sizeof(int);
    case 'l': case 'L': return sizeof(long);
    #ifdef HAVE_LONG_LONG
    case 'q': case 'Q': return sizeof(PY_LONG_LONG);
    #endif
    case 'f': return sizeof(float) * (is_complex ? 2 : 1);
    case 'd': return sizeof(double) * (is_complex ? 2 : 1);
    case 'g': return sizeof(long double) * (is_complex ?
2 : 1);
    case 'O': case 'P': return sizeof(void*);
    default: {
      __Pyx_BufFmt_RaiseUnexpectedChar(ch);
      return 0;
    }
  }
}
typedef struct { char c; short x; } __Pyx_st_short;
typedef struct { char c; int x; } __Pyx_st_int;
typedef struct { char c; long x; } __Pyx_st_long;
typedef struct { char c; float x; } __Pyx_st_float;
typedef struct { char c; double x; } __Pyx_st_double;
typedef struct { char c; long double x; } __Pyx_st_longdouble;
typedef struct { char c; void *x; } __Pyx_st_void_p;
#ifdef HAVE_LONG_LONG
typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;
#endif
static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {
  switch (ch) {
    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;
    case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);
    case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);
    case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);
#ifdef HAVE_LONG_LONG
    case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG);
#endif
    case 'f': return sizeof(__Pyx_st_float) - sizeof(float);
    case 'd': return sizeof(__Pyx_st_double) - sizeof(double);
    case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);
    case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);
    default:
      __Pyx_BufFmt_RaiseUnexpectedChar(ch);
      return 0;
    }
}
/* These are for computing the padding at the end of the struct to align
   on the first member of the struct.
This will probably be the same as above,\n   but we don't have any guarantees.\n */\ntypedef struct { short x; char c; } __Pyx_pad_short;\ntypedef struct { int x; char c; } __Pyx_pad_int;\ntypedef struct { long x; char c; } __Pyx_pad_long;\ntypedef struct { float x; char c; } __Pyx_pad_float;\ntypedef struct { double x; char c; } __Pyx_pad_double;\ntypedef struct { long double x; char c; } __Pyx_pad_longdouble;\ntypedef struct { void *x; char c; } __Pyx_pad_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {\n  switch (ch) {\n    case 'c':\n        return 'H';\n    case 'b': case 'h': case 'i':\n    case 'l': case 'q': case 's': case 'p':\n        return 'I';\n    case 'B': case 'H': case 'I': case 'L': case 'Q':\n        return 'U';\n    case 'f': case 'd': case 'g':\n        return (is_complex ? 
'C' : 'R');\n    case 'O':\n        return 'O';\n    case 'P':\n        return 'P';\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\nstatic void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {\n  if (ctx->head == NULL || ctx->head->field == &ctx->root) {\n    const char* expected;\n    const char* quote;\n    if (ctx->head == NULL) {\n      expected = \"end\";\n      quote = \"\";\n    } else {\n      expected = ctx->head->field->type->name;\n      quote = \"'\";\n    }\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected %s%s%s but got %s\",\n                 quote, expected, quote,\n                 __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex));\n  } else {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_StructField* parent = (ctx->head - 1)->field;\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'\",\n                 field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex),\n                 parent->type->name, field->name);\n  }\n}\nstatic int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {\n  char group;\n  size_t size, offset, arraysize = 1;\n  if (ctx->enc_type == 0) return 0;\n  if (ctx->head->field->type->arraysize[0]) {\n    int i, ndim = 0;\n    if (ctx->enc_type == 's' || ctx->enc_type == 'p') {\n        ctx->is_valid_array = ctx->head->field->type->ndim == 1;\n        ndim = 1;\n        if (ctx->enc_count != ctx->head->field->type->arraysize[0]) {\n            PyErr_Format(PyExc_ValueError,\n                         \"Expected a dimension of size %zu, got %zu\",\n                         ctx->head->field->type->arraysize[0], ctx->enc_count);\n            return -1;\n        }\n    }\n    if (!ctx->is_valid_array) {\n      PyErr_Format(PyExc_ValueError, \"Expected %d dimensions, got %d\",\n                   ctx->head->field->type->ndim, 
ndim);\n      return -1;\n    }\n    for (i = 0; i < ctx->head->field->type->ndim; i++) {\n      arraysize *= ctx->head->field->type->arraysize[i];\n    }\n    ctx->is_valid_array = 0;\n    ctx->enc_count = 1;\n  }\n  group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex);\n  do {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_TypeInfo* type = field->type;\n    if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') {\n      size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex);\n    } else {\n      size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex);\n    }\n    if (ctx->enc_packmode == '@') {\n      size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex);\n      size_t align_mod_offset;\n      if (align_at == 0) return -1;\n      align_mod_offset = ctx->fmt_offset % align_at;\n      if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset;\n      if (ctx->struct_alignment == 0)\n          ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type,\n                                                                 ctx->is_complex);\n    }\n    if (type->size != size || type->typegroup != group) {\n      if (type->typegroup == 'C' && type->fields != NULL) {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        ++ctx->head;\n        ctx->head->field = type->fields;\n        ctx->head->parent_offset = parent_offset;\n        continue;\n      }\n      if ((type->typegroup == 'H' || group == 'H') && type->size == size) {\n      } else {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n      }\n    }\n    offset = ctx->head->parent_offset + field->offset;\n    if (ctx->fmt_offset != offset) {\n      PyErr_Format(PyExc_ValueError,\n                   \"Buffer dtype mismatch; next field is at offset %\" CYTHON_FORMAT_SSIZE_T \"d but %\" CYTHON_FORMAT_SSIZE_T \"d expected\",\n                   
(Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset);\n      return -1;\n    }\n    ctx->fmt_offset += size;\n    if (arraysize)\n      ctx->fmt_offset += (arraysize - 1) * size;\n    --ctx->enc_count;\n    while (1) {\n      if (field == &ctx->root) {\n        ctx->head = NULL;\n        if (ctx->enc_count != 0) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n        }\n        break;\n      }\n      ctx->head->field = ++field;\n      if (field->type == NULL) {\n        --ctx->head;\n        field = ctx->head->field;\n        continue;\n      } else if (field->type->typegroup == 'S') {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        if (field->type->fields->type == NULL) continue;\n        field = field->type->fields;\n        ++ctx->head;\n        ctx->head->field = field;\n        ctx->head->parent_offset = parent_offset;\n        break;\n      } else {\n        break;\n      }\n    }\n  } while (ctx->enc_count);\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  return 0;\n}\nstatic PyObject *\n__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)\n{\n    const char *ts = *tsp;\n    int i = 0, number;\n    int ndim = ctx->head->field->type->ndim;\n;\n    ++ts;\n    if (ctx->new_count != 1) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Cannot handle repeated arrays in format string\");\n        return NULL;\n    }\n    if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n    while (*ts && *ts != ')') {\n        switch (*ts) {\n            case ' ': case '\\f': case '\\r': case '\\n': case '\\t': case '\\v':  continue;\n            default:  break;\n        }\n        number = __Pyx_BufFmt_ExpectNumber(&ts);\n        if (number == -1) return NULL;\n        if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])\n            return PyErr_Format(PyExc_ValueError,\n                        \"Expected a dimension of size %zu, got %d\",\n                   
     ctx->head->field->type->arraysize[i], number);\n        if (*ts != ',' && *ts != ')')\n            return PyErr_Format(PyExc_ValueError,\n                                \"Expected a comma in format string, got '%c'\", *ts);\n        if (*ts == ',') ts++;\n        i++;\n    }\n    if (i != ndim)\n        return PyErr_Format(PyExc_ValueError, \"Expected %d dimension(s), got %d\",\n                            ctx->head->field->type->ndim, i);\n    if (!*ts) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Unexpected end of format string, expected ')'\");\n        return NULL;\n    }\n    ctx->is_valid_array = 1;\n    ctx->new_count = 1;\n    *tsp = ++ts;\n    return Py_None;\n}\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {\n  int got_Z = 0;\n  while (1) {\n    switch(*ts) {\n      case 0:\n        if (ctx->enc_type != 0 && ctx->head == NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        if (ctx->head != NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        return ts;\n      case ' ':\n      case '\\r':\n      case '\\n':\n        ++ts;\n        break;\n      case '<':\n        if (!__Pyx_Is_Little_Endian()) {\n          PyErr_SetString(PyExc_ValueError, \"Little-endian buffer not supported on big-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '>':\n      case '!':\n        if (__Pyx_Is_Little_Endian()) {\n          PyErr_SetString(PyExc_ValueError, \"Big-endian buffer not supported on little-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '=':\n      case '@':\n      case '^':\n        ctx->new_packmode = *ts++;\n        break;\n      case 'T':\n        {\n          const char* 
ts_after_sub;\n          size_t i, struct_count = ctx->new_count;\n          size_t struct_alignment = ctx->struct_alignment;\n          ctx->new_count = 1;\n          ++ts;\n          if (*ts != '{') {\n            PyErr_SetString(PyExc_ValueError, \"Buffer acquisition: Expected '{' after 'T'\");\n            return NULL;\n          }\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          ctx->enc_count = 0;\n          ctx->struct_alignment = 0;\n          ++ts;\n          ts_after_sub = ts;\n          for (i = 0; i != struct_count; ++i) {\n            ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);\n            if (!ts_after_sub) return NULL;\n          }\n          ts = ts_after_sub;\n          if (struct_alignment) ctx->struct_alignment = struct_alignment;\n        }\n        break;\n      case '}':\n        {\n          size_t alignment = ctx->struct_alignment;\n          ++ts;\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          if (alignment && ctx->fmt_offset % alignment) {\n            ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment);\n          }\n        }\n        return ts;\n      case 'x':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->fmt_offset += ctx->new_count;\n        ctx->new_count = 1;\n        ctx->enc_count = 0;\n        ctx->enc_type = 0;\n        ctx->enc_packmode = ctx->new_packmode;\n        ++ts;\n        break;\n      case 'Z':\n        got_Z = 1;\n        ++ts;\n        if (*ts != 'f' && *ts != 'd' && *ts != 'g') {\n          __Pyx_BufFmt_RaiseUnexpectedChar('Z');\n          return NULL;\n        }\n        CYTHON_FALLTHROUGH;\n      case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':\n      case 'l': case 'L': case 'q': case 'Q':\n      case 'f': case 'd': case 'g':\n      case 'O': case 'p':\n        if (ctx->enc_type == *ts && got_Z == ctx->is_complex 
&&\n            ctx->enc_packmode == ctx->new_packmode) {\n          ctx->enc_count += ctx->new_count;\n          ctx->new_count = 1;\n          got_Z = 0;\n          ++ts;\n          break;\n        }\n        CYTHON_FALLTHROUGH;\n      case 's':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->enc_count = ctx->new_count;\n        ctx->enc_packmode = ctx->new_packmode;\n        ctx->enc_type = *ts;\n        ctx->is_complex = got_Z;\n        ++ts;\n        ctx->new_count = 1;\n        got_Z = 0;\n        break;\n      case ':':\n        ++ts;\n        while(*ts != ':') ++ts;\n        ++ts;\n        break;\n      case '(':\n        if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL;\n        break;\n      default:\n        {\n          int number = __Pyx_BufFmt_ExpectNumber(&ts);\n          if (number == -1) return NULL;\n          ctx->new_count = (size_t)number;\n        }\n    }\n  }\n}\n\n/* BufferGetAndValidate */\n  static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {\n  if (unlikely(info->buf == NULL)) return;\n  if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;\n  __Pyx_ReleaseBuffer(info);\n}\nstatic void __Pyx_ZeroBuffer(Py_buffer* buf) {\n  buf->buf = NULL;\n  buf->obj = NULL;\n  buf->strides = __Pyx_zeros;\n  buf->shape = __Pyx_zeros;\n  buf->suboffsets = __Pyx_minusones;\n}\nstatic int __Pyx__GetBufferAndValidate(\n        Py_buffer* buf, PyObject* obj,  __Pyx_TypeInfo* dtype, int flags,\n        int nd, int cast, __Pyx_BufFmt_StackElem* stack)\n{\n  buf->buf = NULL;\n  if (unlikely(__Pyx_GetBuffer(obj, buf, flags) == -1)) {\n    __Pyx_ZeroBuffer(buf);\n    return -1;\n  }\n  if (unlikely(buf->ndim != nd)) {\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer has wrong number of dimensions (expected %d, got %d)\",\n                 nd, buf->ndim);\n    goto fail;\n  }\n  if (!cast) {\n    __Pyx_BufFmt_Context ctx;\n    __Pyx_BufFmt_Init(&ctx, stack, dtype);\n    if 
(!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;\n  }\n  if (unlikely((size_t)buf->itemsize != dtype->size)) {\n    PyErr_Format(PyExc_ValueError,\n      \"Item size of buffer (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s) does not match size of '%s' (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s)\",\n      buf->itemsize, (buf->itemsize > 1) ? \"s\" : \"\",\n      dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? \"s\" : \"\");\n    goto fail;\n  }\n  if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;\n  return 0;\nfail:;\n  __Pyx_SafeReleaseBuffer(buf);\n  return -1;\n}\n\n/* PyDictVersioning */\n  #if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {\n    PyObject *dict = Py_TYPE(obj)->tp_dict;\n    return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;\n}\nstatic CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) {\n    PyObject **dictptr = NULL;\n    Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset;\n    if (offset) {\n#if CYTHON_COMPILING_IN_CPYTHON\n        dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj);\n#else\n        dictptr = _PyObject_GetDictPtr(obj);\n#endif\n    }\n    return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0;\n}\nstatic CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) {\n    PyObject *dict = Py_TYPE(obj)->tp_dict;\n    if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))\n        return 0;\n    return obj_dict_version == __Pyx_get_object_dict_version(obj);\n}\n#endif\n\n/* GetModuleGlobalName */\n  #if CYTHON_USE_DICT_VERSIONS\nstatic PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value)\n#else\nstatic CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)\n#endif\n{\n    PyObject *result;\n#if !CYTHON_AVOID_BORROWED_REFS\n#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1\n    result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    } else if (unlikely(PyErr_Occurred())) {\n        return NULL;\n    }\n#else\n    result = PyDict_GetItem(__pyx_d, name);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    }\n#endif\n#else\n    result = PyObject_GetItem(__pyx_d, name);\n    __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)\n    if (likely(result)) {\n        return __Pyx_NewRef(result);\n    }\n    PyErr_Clear();\n#endif\n    return __Pyx_GetBuiltinName(name);\n}\n\n/* PyObjectCall2Args */\n  static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) {\n    PyObject *args, *result = NULL;\n    #if CYTHON_FAST_PYCALL\n    if (PyFunction_Check(function)) {\n        PyObject *args[2] = {arg1, arg2};\n        return __Pyx_PyFunction_FastCall(function, args, 2);\n    }\n    #endif\n    #if CYTHON_FAST_PYCCALL\n  
  if (__Pyx_PyFastCFunction_Check(function)) {\n        PyObject *args[2] = {arg1, arg2};\n        return __Pyx_PyCFunction_FastCall(function, args, 2);\n    }\n    #endif\n    args = PyTuple_New(2);\n    if (unlikely(!args)) goto done;\n    Py_INCREF(arg1);\n    PyTuple_SET_ITEM(args, 0, arg1);\n    Py_INCREF(arg2);\n    PyTuple_SET_ITEM(args, 1, arg2);\n    Py_INCREF(function);\n    result = __Pyx_PyObject_Call(function, args, NULL);\n    Py_DECREF(args);\n    Py_DECREF(function);\ndone:\n    return result;\n}\n\n/* PyIntCompare */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED long inplace) {\n    if (op1 == op2) {\n        Py_RETURN_TRUE;\n    }\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_CheckExact(op1))) {\n        const long b = intval;\n        long a = PyInt_AS_LONG(op1);\n        if (a == b) Py_RETURN_TRUE; else Py_RETURN_FALSE;\n    }\n    #endif\n    #if CYTHON_USE_PYLONG_INTERNALS\n    if (likely(PyLong_CheckExact(op1))) {\n        int unequal;\n        unsigned long uintval;\n        Py_ssize_t size = Py_SIZE(op1);\n        const digit* digits = ((PyLongObject*)op1)->ob_digit;\n        if (intval == 0) {\n            if (size == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE;\n        } else if (intval < 0) {\n            if (size >= 0)\n                Py_RETURN_FALSE;\n            intval = -intval;\n            size = -size;\n        } else {\n            if (size <= 0)\n                Py_RETURN_FALSE;\n        }\n        uintval = (unsigned long) intval;\n#if PyLong_SHIFT * 4 < SIZEOF_LONG*8\n        if (uintval >> (PyLong_SHIFT * 4)) {\n            unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK))\n                 | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) 
PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK));\n        } else\n#endif\n#if PyLong_SHIFT * 3 < SIZEOF_LONG*8\n        if (uintval >> (PyLong_SHIFT * 3)) {\n            unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK))\n                 | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK));\n        } else\n#endif\n#if PyLong_SHIFT * 2 < SIZEOF_LONG*8\n        if (uintval >> (PyLong_SHIFT * 2)) {\n            unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK))\n                 | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK));\n        } else\n#endif\n#if PyLong_SHIFT * 1 < SIZEOF_LONG*8\n        if (uintval >> (PyLong_SHIFT * 1)) {\n            unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK))\n                 | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK));\n        } else\n#endif\n            unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK));\n        if (unequal == 0) Py_RETURN_TRUE; else Py_RETURN_FALSE;\n    }\n    #endif\n    if (PyFloat_CheckExact(op1)) {\n        const long b = intval;\n        double a = PyFloat_AS_DOUBLE(op1);\n        if ((double)a == (double)b) Py_RETURN_TRUE; else Py_RETURN_FALSE;\n    }\n    return (\n        PyObject_RichCompare(op1, op2, Py_EQ));\n}\n\n/* FetchCommonType */\n  static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) {\n    PyObject* fake_module;\n    PyTypeObject* cached_type = NULL;\n    fake_module = PyImport_AddModule((char*) \"_cython_\" CYTHON_ABI);\n    if (!fake_module) return NULL;\n    
Py_INCREF(fake_module);\n    cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name);\n    if (cached_type) {\n        if (!PyType_Check((PyObject*)cached_type)) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s is not a type object\",\n                type->tp_name);\n            goto bad;\n        }\n        if (cached_type->tp_basicsize != type->tp_basicsize) {\n            PyErr_Format(PyExc_TypeError,\n                \"Shared Cython type %.200s has the wrong size, try recompiling\",\n                type->tp_name);\n            goto bad;\n        }\n    } else {\n        if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad;\n        PyErr_Clear();\n        if (PyType_Ready(type) < 0) goto bad;\n        if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0)\n            goto bad;\n        Py_INCREF(type);\n        cached_type = type;\n    }\ndone:\n    Py_DECREF(fake_module);\n    return cached_type;\nbad:\n    Py_XDECREF(cached_type);\n    cached_type = NULL;\n    goto done;\n}\n\n/* CythonFunction */\n  #include <structmember.h>\nstatic PyObject *\n__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)\n{\n    if (unlikely(op->func_doc == NULL)) {\n        if (op->func.m_ml->ml_doc) {\n#if PY_MAJOR_VERSION >= 3\n            op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc);\n#else\n            op->func_doc = PyString_FromString(op->func.m_ml->ml_doc);\n#endif\n            if (unlikely(op->func_doc == NULL))\n                return NULL;\n        } else {\n            Py_INCREF(Py_None);\n            return Py_None;\n        }\n    }\n    Py_INCREF(op->func_doc);\n    return op->func_doc;\n}\nstatic int\n__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp = op->func_doc;\n    if (value == NULL) {\n        value = Py_None;\n    }\n    Py_INCREF(value);\n    
op->func_doc = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    if (unlikely(op->func_name == NULL)) {\n#if PY_MAJOR_VERSION >= 3\n        op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name);\n#else\n        op->func_name = PyString_InternFromString(op->func.m_ml->ml_name);\n#endif\n        if (unlikely(op->func_name == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_name);\n    return op->func_name;\n}\nstatic int\n__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value)))\n#else\n    if (unlikely(value == NULL || !PyString_Check(value)))\n#endif\n    {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__name__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_name;\n    Py_INCREF(value);\n    op->func_name = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(op->func_qualname);\n    return op->func_qualname;\n}\nstatic int\n__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n#if PY_MAJOR_VERSION >= 3\n    if (unlikely(value == NULL || !PyUnicode_Check(value)))\n#else\n    if (unlikely(value == NULL || !PyString_Check(value)))\n#endif\n    {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__qualname__ must be set to a string object\");\n        return -1;\n    }\n    tmp = op->func_qualname;\n    Py_INCREF(value);\n    op->func_qualname = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure)\n{\n    PyObject *self;\n    
self = m->func_closure;\n    if (self == NULL)\n        self = Py_None;\n    Py_INCREF(self);\n    return self;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    if (unlikely(op->func_dict == NULL)) {\n        op->func_dict = PyDict_New();\n        if (unlikely(op->func_dict == NULL))\n            return NULL;\n    }\n    Py_INCREF(op->func_dict);\n    return op->func_dict;\n}\nstatic int\n__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)\n{\n    PyObject *tmp;\n    if (unlikely(value == NULL)) {\n        PyErr_SetString(PyExc_TypeError,\n               \"function's dictionary may not be deleted\");\n        return -1;\n    }\n    if (unlikely(!PyDict_Check(value))) {\n        PyErr_SetString(PyExc_TypeError,\n               \"setting function's dictionary to a non-dict\");\n        return -1;\n    }\n    tmp = op->func_dict;\n    Py_INCREF(value);\n    op->func_dict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(op->func_globals);\n    return op->func_globals;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    Py_INCREF(Py_None);\n    return Py_None;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)\n{\n    PyObject* result = (op->func_code) ? 
op->func_code : Py_None;\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) {\n    int result = 0;\n    PyObject *res = op->defaults_getter((PyObject *) op);\n    if (unlikely(!res))\n        return -1;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    op->defaults_tuple = PyTuple_GET_ITEM(res, 0);\n    Py_INCREF(op->defaults_tuple);\n    op->defaults_kwdict = PyTuple_GET_ITEM(res, 1);\n    Py_INCREF(op->defaults_kwdict);\n    #else\n    op->defaults_tuple = PySequence_ITEM(res, 0);\n    if (unlikely(!op->defaults_tuple)) result = -1;\n    else {\n        op->defaults_kwdict = PySequence_ITEM(res, 1);\n        if (unlikely(!op->defaults_kwdict)) result = -1;\n    }\n    #endif\n    Py_DECREF(res);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None && !PyTuple_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__defaults__ must be set to a tuple object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_tuple;\n    op->defaults_tuple = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->defaults_tuple;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_tuple;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value) {\n        value = Py_None;\n    } else if (value != Py_None 
&& !PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__kwdefaults__ must be set to a dict object\");\n        return -1;\n    }\n    Py_INCREF(value);\n    tmp = op->defaults_kwdict;\n    op->defaults_kwdict = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->defaults_kwdict;\n    if (unlikely(!result)) {\n        if (op->defaults_getter) {\n            if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;\n            result = op->defaults_kwdict;\n        } else {\n            result = Py_None;\n        }\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic int\n__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {\n    PyObject* tmp;\n    if (!value || value == Py_None) {\n        value = NULL;\n    } else if (!PyDict_Check(value)) {\n        PyErr_SetString(PyExc_TypeError,\n                        \"__annotations__ must be set to a dict object\");\n        return -1;\n    }\n    Py_XINCREF(value);\n    tmp = op->func_annotations;\n    op->func_annotations = value;\n    Py_XDECREF(tmp);\n    return 0;\n}\nstatic PyObject *\n__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {\n    PyObject* result = op->func_annotations;\n    if (unlikely(!result)) {\n        result = PyDict_New();\n        if (unlikely(!result)) return NULL;\n        op->func_annotations = result;\n    }\n    Py_INCREF(result);\n    return result;\n}\nstatic PyGetSetDef __pyx_CyFunction_getsets[] = {\n    {(char *) \"func_doc\", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) \"__doc__\",  (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},\n    {(char *) \"func_name\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    
{(char *) \"__name__\", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},\n    {(char *) \"__qualname__\", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0},\n    {(char *) \"__self__\", (getter)__Pyx_CyFunction_get_self, 0, 0, 0},\n    {(char *) \"func_dict\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"__dict__\", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},\n    {(char *) \"func_globals\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"__globals__\", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},\n    {(char *) \"func_closure\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"__closure__\", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},\n    {(char *) \"func_code\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"__code__\", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},\n    {(char *) \"func_defaults\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__defaults__\", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},\n    {(char *) \"__kwdefaults__\", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0},\n    {(char *) \"__annotations__\", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0},\n    {0, 0, 0, 0, 0}\n};\nstatic PyMemberDef __pyx_CyFunction_members[] = {\n    {(char *) \"__module__\", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0},\n    {0, 0, 0,  0, 0}\n};\nstatic PyObject *\n__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromString(m->func.m_ml->ml_name);\n#else\n    return PyString_FromString(m->func.m_ml->ml_name);\n#endif\n}\nstatic PyMethodDef __pyx_CyFunction_methods[] = {\n    
{\"__reduce__\", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0},\n    {0, 0, 0, 0}\n};\n#if PY_VERSION_HEX < 0x030500A0\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist)\n#else\n#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist)\n#endif\nstatic PyObject *__Pyx_CyFunction_New(PyTypeObject *type, PyMethodDef *ml, int flags, PyObject* qualname,\n                                      PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {\n    __pyx_CyFunctionObject *op = PyObject_GC_New(__pyx_CyFunctionObject, type);\n    if (op == NULL)\n        return NULL;\n    op->flags = flags;\n    __Pyx_CyFunction_weakreflist(op) = NULL;\n    op->func.m_ml = ml;\n    op->func.m_self = (PyObject *) op;\n    Py_XINCREF(closure);\n    op->func_closure = closure;\n    Py_XINCREF(module);\n    op->func.m_module = module;\n    op->func_dict = NULL;\n    op->func_name = NULL;\n    Py_INCREF(qualname);\n    op->func_qualname = qualname;\n    op->func_doc = NULL;\n    op->func_classobj = NULL;\n    op->func_globals = globals;\n    Py_INCREF(op->func_globals);\n    Py_XINCREF(code);\n    op->func_code = code;\n    op->defaults_pyobjects = 0;\n    op->defaults = NULL;\n    op->defaults_tuple = NULL;\n    op->defaults_kwdict = NULL;\n    op->defaults_getter = NULL;\n    op->func_annotations = NULL;\n    PyObject_GC_Track(op);\n    return (PyObject *) op;\n}\nstatic int\n__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m)\n{\n    Py_CLEAR(m->func_closure);\n    Py_CLEAR(m->func.m_module);\n    Py_CLEAR(m->func_dict);\n    Py_CLEAR(m->func_name);\n    Py_CLEAR(m->func_qualname);\n    Py_CLEAR(m->func_doc);\n    Py_CLEAR(m->func_globals);\n    Py_CLEAR(m->func_code);\n    Py_CLEAR(m->func_classobj);\n    Py_CLEAR(m->defaults_tuple);\n    Py_CLEAR(m->defaults_kwdict);\n    Py_CLEAR(m->func_annotations);\n    if (m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n    
    for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_XDECREF(pydefaults[i]);\n        PyObject_Free(m->defaults);\n        m->defaults = NULL;\n    }\n    return 0;\n}\nstatic void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)\n{\n    if (__Pyx_CyFunction_weakreflist(m) != NULL)\n        PyObject_ClearWeakRefs((PyObject *) m);\n    __Pyx_CyFunction_clear(m);\n    PyObject_GC_Del(m);\n}\nstatic void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)\n{\n    PyObject_GC_UnTrack(m);\n    __Pyx__CyFunction_dealloc(m);\n}\nstatic int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)\n{\n    Py_VISIT(m->func_closure);\n    Py_VISIT(m->func.m_module);\n    Py_VISIT(m->func_dict);\n    Py_VISIT(m->func_name);\n    Py_VISIT(m->func_qualname);\n    Py_VISIT(m->func_doc);\n    Py_VISIT(m->func_globals);\n    Py_VISIT(m->func_code);\n    Py_VISIT(m->func_classobj);\n    Py_VISIT(m->defaults_tuple);\n    Py_VISIT(m->defaults_kwdict);\n    if (m->defaults) {\n        PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);\n        int i;\n        for (i = 0; i < m->defaults_pyobjects; i++)\n            Py_VISIT(pydefaults[i]);\n    }\n    return 0;\n}\nstatic PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type)\n{\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) {\n        Py_INCREF(func);\n        return func;\n    }\n    if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) {\n        if (type == NULL)\n            type = (PyObject *)(Py_TYPE(obj));\n        return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type)));\n    }\n    if (obj == Py_None)\n        obj = NULL;\n    return __Pyx_PyMethod_New(func, obj, type);\n}\nstatic PyObject*\n__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)\n{\n#if PY_MAJOR_VERSION >= 3\n    return PyUnicode_FromFormat(\"<cyfunction %U at %p>\",\n                                
op->func_qualname, (void *)op);\n#else\n    return PyString_FromFormat(\"<cyfunction %s at %p>\",\n                               PyString_AsString(op->func_qualname), (void *)op);\n#endif\n}\nstatic PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {\n    PyCFunctionObject* f = (PyCFunctionObject*)func;\n    PyCFunction meth = f->m_ml->ml_meth;\n    Py_ssize_t size;\n    switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {\n    case METH_VARARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0))\n            return (*meth)(self, arg);\n        break;\n    case METH_VARARGS | METH_KEYWORDS:\n        return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);\n    case METH_NOARGS:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 0))\n                return (*meth)(self, NULL);\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes no arguments (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    case METH_O:\n        if (likely(kw == NULL || PyDict_Size(kw) == 0)) {\n            size = PyTuple_GET_SIZE(arg);\n            if (likely(size == 1)) {\n                PyObject *result, *arg0;\n                #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n                arg0 = PyTuple_GET_ITEM(arg, 0);\n                #else\n                arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;\n                #endif\n                result = (*meth)(self, arg0);\n                #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)\n                Py_DECREF(arg0);\n                #endif\n                return result;\n            }\n            PyErr_Format(PyExc_TypeError,\n                \"%.200s() takes exactly one argument (%\" 
CYTHON_FORMAT_SSIZE_T \"d given)\",\n                f->m_ml->ml_name, size);\n            return NULL;\n        }\n        break;\n    default:\n        PyErr_SetString(PyExc_SystemError, \"Bad call flags in \"\n                        \"__Pyx_CyFunction_Call. METH_OLDARGS is no \"\n                        \"longer supported!\");\n        return NULL;\n    }\n    PyErr_Format(PyExc_TypeError, \"%.200s() takes no keyword arguments\",\n                 f->m_ml->ml_name);\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);\n}\nstatic PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) {\n    PyObject *result;\n    __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func;\n    if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) {\n        Py_ssize_t argc;\n        PyObject *new_args;\n        PyObject *self;\n        argc = PyTuple_GET_SIZE(args);\n        new_args = PyTuple_GetSlice(args, 1, argc);\n        if (unlikely(!new_args))\n            return NULL;\n        self = PyTuple_GetItem(args, 0);\n        if (unlikely(!self)) {\n            Py_DECREF(new_args);\n            return NULL;\n        }\n        result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw);\n        Py_DECREF(new_args);\n    } else {\n        result = __Pyx_CyFunction_Call(func, args, kw);\n    }\n    return result;\n}\nstatic PyTypeObject __pyx_CyFunctionType_type = {\n    PyVarObject_HEAD_INIT(0, 0)\n    \"cython_function_or_method\",\n    sizeof(__pyx_CyFunctionObject),\n    0,\n    (destructor) __Pyx_CyFunction_dealloc,\n    0,\n    0,\n    0,\n#if PY_MAJOR_VERSION < 3\n    0,\n#else\n    0,\n#endif\n    (reprfunc) __Pyx_CyFunction_repr,\n    0,\n    0,\n    0,\n    0,\n    __Pyx_CyFunction_CallAsMethod,\n    0,\n    0,\n    0,\n    
0,\n    Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,\n    0,\n    (traverseproc) __Pyx_CyFunction_traverse,\n    (inquiry) __Pyx_CyFunction_clear,\n    0,\n#if PY_VERSION_HEX < 0x030500A0\n    offsetof(__pyx_CyFunctionObject, func_weakreflist),\n#else\n    offsetof(PyCFunctionObject, m_weakreflist),\n#endif\n    0,\n    0,\n    __pyx_CyFunction_methods,\n    __pyx_CyFunction_members,\n    __pyx_CyFunction_getsets,\n    0,\n    0,\n    __Pyx_CyFunction_descr_get,\n    0,\n    offsetof(__pyx_CyFunctionObject, func_dict),\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n    0,\n#if PY_VERSION_HEX >= 0x030400a1\n    0,\n#endif\n#if PY_VERSION_HEX >= 0x030800b1\n    0,\n#endif\n};\nstatic int __pyx_CyFunction_init(void) {\n    __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);\n    if (unlikely(__pyx_CyFunctionType == NULL)) {\n        return -1;\n    }\n    return 0;\n}\nstatic CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults = PyObject_Malloc(size);\n    if (unlikely(!m->defaults))\n        return PyErr_NoMemory();\n    memset(m->defaults, 0, size);\n    m->defaults_pyobjects = pyobjects;\n    return m->defaults;\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_tuple = tuple;\n    Py_INCREF(tuple);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->defaults_kwdict = dict;\n    Py_INCREF(dict);\n}\nstatic CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) {\n    __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;\n    m->func_annotations = dict;\n    Py_INCREF(dict);\n}\n\n/* 
BufferFallbackError */\n  static void __Pyx_RaiseBufferFallbackError(void) {\n  PyErr_SetString(PyExc_ValueError,\n     \"Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!\");\n}\n\n/* None */\n  static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) {\n    Py_ssize_t q = a / b;\n    Py_ssize_t r = a - q*b;\n    q -= ((r != 0) & ((r ^ b) < 0));\n    return q;\n}\n\n/* BufferIndexError */\n  static void __Pyx_RaiseBufferIndexError(int axis) {\n  PyErr_Format(PyExc_IndexError,\n     \"Out of bounds on buffer access (axis %d)\", axis);\n}\n\n/* RaiseTooManyValuesToUnpack */\n  static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {\n    PyErr_Format(PyExc_ValueError,\n                 \"too many values to unpack (expected %\" CYTHON_FORMAT_SSIZE_T \"d)\", expected);\n}\n\n/* RaiseNeedMoreValuesToUnpack */\n  static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {\n    PyErr_Format(PyExc_ValueError,\n                 \"need more than %\" CYTHON_FORMAT_SSIZE_T \"d value%.1s to unpack\",\n                 index, (index == 1) ? 
\"\" : \"s\");\n}\n\n/* RaiseNoneIterError */\n  static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n}\n\n/* GetTopmostException */\n  #if CYTHON_USE_EXC_INFO_STACK\nstatic _PyErr_StackItem *\n__Pyx_PyErr_GetTopmostException(PyThreadState *tstate)\n{\n    _PyErr_StackItem *exc_info = tstate->exc_info;\n    while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&\n           exc_info->previous_item != NULL)\n    {\n        exc_info = exc_info->previous_item;\n    }\n    return exc_info;\n}\n#endif\n\n/* SaveResetException */\n  #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    #if CYTHON_USE_EXC_INFO_STACK\n    _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);\n    *type = exc_info->exc_type;\n    *value = exc_info->exc_value;\n    *tb = exc_info->exc_traceback;\n    #else\n    *type = tstate->exc_type;\n    *value = tstate->exc_value;\n    *tb = tstate->exc_traceback;\n    #endif\n    Py_XINCREF(*type);\n    Py_XINCREF(*value);\n    Py_XINCREF(*tb);\n}\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    #if CYTHON_USE_EXC_INFO_STACK\n    _PyErr_StackItem *exc_info = tstate->exc_info;\n    tmp_type = exc_info->exc_type;\n    tmp_value = exc_info->exc_value;\n    tmp_tb = exc_info->exc_traceback;\n    exc_info->exc_type = type;\n    exc_info->exc_value = value;\n    exc_info->exc_traceback = tb;\n    #else\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = type;\n    tstate->exc_value = value;\n    tstate->exc_traceback = tb;\n    #endif\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\n#endif\n\n/* 
PyErrExceptionMatches */\n  #if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {\n    Py_ssize_t i, n;\n    n = PyTuple_GET_SIZE(tuple);\n#if PY_MAJOR_VERSION >= 3\n    for (i=0; i<n; i++) {\n        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;\n    }\n#endif\n    for (i=0; i<n; i++) {\n        if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;\n    }\n    return 0;\n}\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {\n    PyObject *exc_type = tstate->curexc_type;\n    if (exc_type == err) return 1;\n    if (unlikely(!exc_type)) return 0;\n    if (unlikely(PyTuple_Check(err)))\n        return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);\n    return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);\n}\n#endif\n\n/* GetException */\n  #if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb)\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)\n#endif\n{\n    PyObject *local_type, *local_value, *local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    local_type = tstate->curexc_type;\n    local_value = tstate->curexc_value;\n    local_tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n#else\n    PyErr_Fetch(&local_type, &local_value, &local_tb);\n#endif\n    PyErr_NormalizeException(&local_type, &local_value, &local_tb);\n#if CYTHON_FAST_THREAD_STATE\n    if (unlikely(tstate->curexc_type))\n#else\n    if (unlikely(PyErr_Occurred()))\n#endif\n        goto bad;\n    #if PY_MAJOR_VERSION >= 3\n    if (local_tb) {\n        if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))\n            goto bad;\n    }\n    #endif\n    Py_XINCREF(local_tb);\n    Py_XINCREF(local_type);\n    
Py_XINCREF(local_value);\n    *type = local_type;\n    *value = local_value;\n    *tb = local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    #if CYTHON_USE_EXC_INFO_STACK\n    {\n        _PyErr_StackItem *exc_info = tstate->exc_info;\n        tmp_type = exc_info->exc_type;\n        tmp_value = exc_info->exc_value;\n        tmp_tb = exc_info->exc_traceback;\n        exc_info->exc_type = local_type;\n        exc_info->exc_value = local_value;\n        exc_info->exc_traceback = local_tb;\n    }\n    #else\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = local_type;\n    tstate->exc_value = local_value;\n    tstate->exc_traceback = local_tb;\n    #endif\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n#else\n    PyErr_SetExcInfo(local_type, local_value, local_tb);\n#endif\n    return 0;\nbad:\n    *type = 0;\n    *value = 0;\n    *tb = 0;\n    Py_XDECREF(local_type);\n    Py_XDECREF(local_value);\n    Py_XDECREF(local_tb);\n    return -1;\n}\n\n/* PyObject_GenericGetAttrNoDict */\n  #if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) {\n    PyErr_Format(PyExc_AttributeError,\n#if PY_MAJOR_VERSION >= 3\n                 \"'%.50s' object has no attribute '%U'\",\n                 tp->tp_name, attr_name);\n#else\n                 \"'%.50s' object has no attribute '%.400s'\",\n                 tp->tp_name, PyString_AS_STRING(attr_name));\n#endif\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) {\n    PyObject *descr;\n    PyTypeObject *tp = Py_TYPE(obj);\n    if (unlikely(!PyString_Check(attr_name))) {\n        return PyObject_GenericGetAttr(obj, attr_name);\n    }\n    assert(!tp->tp_dictoffset);\n    descr = _PyType_Lookup(tp, attr_name);\n    if (unlikely(!descr)) {\n 
       return __Pyx_RaiseGenericGetAttributeError(tp, attr_name);\n    }\n    Py_INCREF(descr);\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS)))\n    #endif\n    {\n        descrgetfunc f = Py_TYPE(descr)->tp_descr_get;\n        if (unlikely(f)) {\n            PyObject *res = f(descr, obj, (PyObject *)tp);\n            Py_DECREF(descr);\n            return res;\n        }\n    }\n    return descr;\n}\n#endif\n\n/* PyObject_GenericGetAttr */\n  #if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000\nstatic PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) {\n    if (unlikely(Py_TYPE(obj)->tp_dictoffset)) {\n        return PyObject_GenericGetAttr(obj, attr_name);\n    }\n    return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name);\n}\n#endif\n\n/* SetupReduce */\n  static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) {\n  int ret;\n  PyObject *name_attr;\n  name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name);\n  if (likely(name_attr)) {\n      ret = PyObject_RichCompareBool(name_attr, name, Py_EQ);\n  } else {\n      ret = -1;\n  }\n  if (unlikely(ret < 0)) {\n      PyErr_Clear();\n      ret = 0;\n  }\n  Py_XDECREF(name_attr);\n  return ret;\n}\nstatic int __Pyx_setup_reduce(PyObject* type_obj) {\n    int ret = 0;\n    PyObject *object_reduce = NULL;\n    PyObject *object_reduce_ex = NULL;\n    PyObject *reduce = NULL;\n    PyObject *reduce_ex = NULL;\n    PyObject *reduce_cython = NULL;\n    PyObject *setstate = NULL;\n    PyObject *setstate_cython = NULL;\n#if CYTHON_USE_PYTYPE_LOOKUP\n    if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto GOOD;\n#else\n    if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto GOOD;\n#endif\n#if CYTHON_USE_PYTYPE_LOOKUP\n    object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD;\n#else\n    object_reduce_ex = 
__Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto BAD;\n#endif\n    reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto BAD;\n    if (reduce_ex == object_reduce_ex) {\n#if CYTHON_USE_PYTYPE_LOOKUP\n        object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD;\n#else\n        object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto BAD;\n#endif\n        reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto BAD;\n        if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) {\n            reduce_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_cython); if (unlikely(!reduce_cython)) goto BAD;\n            ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto BAD;\n            ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto BAD;\n            setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate);\n            if (!setstate) PyErr_Clear();\n            if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) {\n                setstate_cython = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate_cython); if (unlikely(!setstate_cython)) goto BAD;\n                ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto BAD;\n                ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto BAD;\n            }\n            PyType_Modified((PyTypeObject*)type_obj);\n        }\n    }\n    goto GOOD;\nBAD:\n    if (!PyErr_Occurred())\n        PyErr_Format(PyExc_RuntimeError, \"Unable to 
initialize pickling for %s\", ((PyTypeObject*)type_obj)->tp_name);\n    ret = -1;\nGOOD:\n#if !CYTHON_USE_PYTYPE_LOOKUP\n    Py_XDECREF(object_reduce);\n    Py_XDECREF(object_reduce_ex);\n#endif\n    Py_XDECREF(reduce);\n    Py_XDECREF(reduce_ex);\n    Py_XDECREF(reduce_cython);\n    Py_XDECREF(setstate);\n    Py_XDECREF(setstate_cython);\n    return ret;\n}\n\n/* TypeImport */\n  #ifndef __PYX_HAVE_RT_ImportType\n#define __PYX_HAVE_RT_ImportType\nstatic PyTypeObject *__Pyx_ImportType(PyObject *module, const char *module_name, const char *class_name,\n    size_t size, enum __Pyx_ImportType_CheckSize check_size)\n{\n    PyObject *result = 0;\n    char warning[200];\n    Py_ssize_t basicsize;\n#ifdef Py_LIMITED_API\n    PyObject *py_basicsize;\n#endif\n    result = PyObject_GetAttrString(module, class_name);\n    if (!result)\n        goto bad;\n    if (!PyType_Check(result)) {\n        PyErr_Format(PyExc_TypeError,\n            \"%.200s.%.200s is not a type object\",\n            module_name, class_name);\n        goto bad;\n    }\n#ifndef Py_LIMITED_API\n    basicsize = ((PyTypeObject *)result)->tp_basicsize;\n#else\n    py_basicsize = PyObject_GetAttrString(result, \"__basicsize__\");\n    if (!py_basicsize)\n        goto bad;\n    basicsize = PyLong_AsSsize_t(py_basicsize);\n    Py_DECREF(py_basicsize);\n    py_basicsize = 0;\n    if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred())\n        goto bad;\n#endif\n    if ((size_t)basicsize < size) {\n        PyErr_Format(PyExc_ValueError,\n            \"%.200s.%.200s size changed, may indicate binary incompatibility. \"\n            \"Expected %zd from C header, got %zd from PyObject\",\n            module_name, class_name, size, basicsize);\n        goto bad;\n    }\n    if (check_size == __Pyx_ImportType_CheckSize_Error && (size_t)basicsize != size) {\n        PyErr_Format(PyExc_ValueError,\n            \"%.200s.%.200s size changed, may indicate binary incompatibility. 
\"\n            \"Expected %zd from C header, got %zd from PyObject\",\n            module_name, class_name, size, basicsize);\n        goto bad;\n    }\n    else if (check_size == __Pyx_ImportType_CheckSize_Warn && (size_t)basicsize > size) {\n        PyOS_snprintf(warning, sizeof(warning),\n            \"%s.%s size changed, may indicate binary incompatibility. \"\n            \"Expected %zd from C header, got %zd from PyObject\",\n            module_name, class_name, size, basicsize);\n        if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;\n    }\n    return (PyTypeObject *)result;\nbad:\n    Py_XDECREF(result);\n    return NULL;\n}\n#endif\n\n/* Import */\n  static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {\n    PyObject *empty_list = 0;\n    PyObject *module = 0;\n    PyObject *global_dict = 0;\n    PyObject *empty_dict = 0;\n    PyObject *list;\n    #if PY_MAJOR_VERSION < 3\n    PyObject *py_import;\n    py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);\n    if (!py_import)\n        goto bad;\n    #endif\n    if (from_list)\n        list = from_list;\n    else {\n        empty_list = PyList_New(0);\n        if (!empty_list)\n            goto bad;\n        list = empty_list;\n    }\n    global_dict = PyModule_GetDict(__pyx_m);\n    if (!global_dict)\n        goto bad;\n    empty_dict = PyDict_New();\n    if (!empty_dict)\n        goto bad;\n    {\n        #if PY_MAJOR_VERSION >= 3\n        if (level == -1) {\n            if (strchr(__Pyx_MODULE_NAME, '.')) {\n                module = PyImport_ImportModuleLevelObject(\n                    name, global_dict, empty_dict, list, 1);\n                if (!module) {\n                    if (!PyErr_ExceptionMatches(PyExc_ImportError))\n                        goto bad;\n                    PyErr_Clear();\n                }\n            }\n            level = 0;\n        }\n        #endif\n        if (!module) {\n            #if PY_MAJOR_VERSION < 3\n            PyObject 
*py_level = PyInt_FromLong(level);\n            if (!py_level)\n                goto bad;\n            module = PyObject_CallFunctionObjArgs(py_import,\n                name, global_dict, empty_dict, list, py_level, (PyObject *)NULL);\n            Py_DECREF(py_level);\n            #else\n            module = PyImport_ImportModuleLevelObject(\n                name, global_dict, empty_dict, list, level);\n            #endif\n        }\n    }\nbad:\n    #if PY_MAJOR_VERSION < 3\n    Py_XDECREF(py_import);\n    #endif\n    Py_XDECREF(empty_list);\n    Py_XDECREF(empty_dict);\n    return module;\n}\n\n/* CLineInTraceback */\n  #ifndef CYTHON_CLINE_IN_TRACEBACK\nstatic int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) {\n    PyObject *use_cline;\n    PyObject *ptype, *pvalue, *ptraceback;\n#if CYTHON_COMPILING_IN_CPYTHON\n    PyObject **cython_runtime_dict;\n#endif\n    if (unlikely(!__pyx_cython_runtime)) {\n        return c_line;\n    }\n    __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);\n#if CYTHON_COMPILING_IN_CPYTHON\n    cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime);\n    if (likely(cython_runtime_dict)) {\n        __PYX_PY_DICT_LOOKUP_IF_MODIFIED(\n            use_cline, *cython_runtime_dict,\n            __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback))\n    } else\n#endif\n    {\n      PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback);\n      if (use_cline_obj) {\n        use_cline = PyObject_Not(use_cline_obj) ? 
Py_False : Py_True;\n        Py_DECREF(use_cline_obj);\n      } else {\n        PyErr_Clear();\n        use_cline = NULL;\n      }\n    }\n    if (!use_cline) {\n        c_line = 0;\n        PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False);\n    }\n    else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) {\n        c_line = 0;\n    }\n    __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);\n    return c_line;\n}\n#endif\n\n/* CodeObjectCache */\n  static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {\n    int start = 0, mid = 0, end = count - 1;\n    if (end >= 0 && code_line > entries[end].code_line) {\n        return count;\n    }\n    while (start < end) {\n        mid = start + (end - start) / 2;\n        if (code_line < entries[mid].code_line) {\n            end = mid;\n        } else if (code_line > entries[mid].code_line) {\n             start = mid + 1;\n        } else {\n            return mid;\n        }\n    }\n    if (code_line <= entries[mid].code_line) {\n        return mid;\n    } else {\n        return mid + 1;\n    }\n}\nstatic PyCodeObject *__pyx_find_code_object(int code_line) {\n    PyCodeObject* code_object;\n    int pos;\n    if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {\n        return NULL;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {\n        return NULL;\n    }\n    code_object = __pyx_code_cache.entries[pos].code_object;\n    Py_INCREF(code_object);\n    return code_object;\n}\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {\n    int pos, i;\n    __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;\n    if (unlikely(!code_line)) {\n        return;\n    }\n    if 
(unlikely(!entries)) {\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (likely(entries)) {\n            __pyx_code_cache.entries = entries;\n            __pyx_code_cache.max_count = 64;\n            __pyx_code_cache.count = 1;\n            entries[0].code_line = code_line;\n            entries[0].code_object = code_object;\n            Py_INCREF(code_object);\n        }\n        return;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {\n        PyCodeObject* tmp = entries[pos].code_object;\n        entries[pos].code_object = code_object;\n        Py_DECREF(tmp);\n        return;\n    }\n    if (__pyx_code_cache.count == __pyx_code_cache.max_count) {\n        int new_max = __pyx_code_cache.max_count + 64;\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(\n            __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (unlikely(!entries)) {\n            return;\n        }\n        __pyx_code_cache.entries = entries;\n        __pyx_code_cache.max_count = new_max;\n    }\n    for (i=__pyx_code_cache.count; i>pos; i--) {\n        entries[i] = entries[i-1];\n    }\n    entries[pos].code_line = code_line;\n    entries[pos].code_object = code_object;\n    __pyx_code_cache.count++;\n    Py_INCREF(code_object);\n}\n\n/* AddTraceback */\n  #include \"compile.h\"\n#include \"frameobject.h\"\n#include \"traceback.h\"\nstatic PyCodeObject* __Pyx_CreateCodeObjectForTraceback(\n            const char *funcname, int c_line,\n            int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyObject *py_srcfile = 0;\n    PyObject *py_funcname = 0;\n    #if PY_MAJOR_VERSION < 3\n    py_srcfile = PyString_FromString(filename);\n    #else\n    py_srcfile = 
PyUnicode_FromString(filename);\n    #endif\n    if (!py_srcfile) goto bad;\n    if (c_line) {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #else\n        py_funcname = PyUnicode_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #endif\n    }\n    else {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromString(funcname);\n        #else\n        py_funcname = PyUnicode_FromString(funcname);\n        #endif\n    }\n    if (!py_funcname) goto bad;\n    py_code = __Pyx_PyCode_New(\n        0,\n        0,\n        0,\n        0,\n        0,\n        __pyx_empty_bytes, /*PyObject *code,*/\n        __pyx_empty_tuple, /*PyObject *consts,*/\n        __pyx_empty_tuple, /*PyObject *names,*/\n        __pyx_empty_tuple, /*PyObject *varnames,*/\n        __pyx_empty_tuple, /*PyObject *freevars,*/\n        __pyx_empty_tuple, /*PyObject *cellvars,*/\n        py_srcfile,   /*PyObject *filename,*/\n        py_funcname,  /*PyObject *name,*/\n        py_line,\n        __pyx_empty_bytes  /*PyObject *lnotab*/\n    );\n    Py_DECREF(py_srcfile);\n    Py_DECREF(py_funcname);\n    return py_code;\nbad:\n    Py_XDECREF(py_srcfile);\n    Py_XDECREF(py_funcname);\n    return NULL;\n}\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyFrameObject *py_frame = 0;\n    PyThreadState *tstate = __Pyx_PyThreadState_Current;\n    if (c_line) {\n        c_line = __Pyx_CLineForTraceback(tstate, c_line);\n    }\n    py_code = __pyx_find_code_object(c_line ? -c_line : py_line);\n    if (!py_code) {\n        py_code = __Pyx_CreateCodeObjectForTraceback(\n            funcname, c_line, py_line, filename);\n        if (!py_code) goto bad;\n        __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code);\n    }\n    py_frame = PyFrame_New(\n        tstate,            /*PyThreadState *tstate,*/\n        py_code,           /*PyCodeObject *code,*/\n        __pyx_d,    /*PyObject *globals,*/\n        0                  /*PyObject *locals*/\n    );\n    if (!py_frame) goto bad;\n    __Pyx_PyFrame_SetLineNumber(py_frame, py_line);\n    PyTraceBack_Here(py_frame);\nbad:\n    Py_XDECREF(py_code);\n    Py_XDECREF(py_frame);\n}\n\n#if PY_MAJOR_VERSION < 3\nstatic int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {\n    if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);\n        if (__Pyx_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags);\n    PyErr_Format(PyExc_TypeError, \"'%.200s' does not have the buffer interface\", Py_TYPE(obj)->tp_name);\n    return -1;\n}\nstatic void __Pyx_ReleaseBuffer(Py_buffer *view) {\n    PyObject *obj = view->obj;\n    if (!obj) return;\n    if (PyObject_CheckBuffer(obj)) {\n        PyBuffer_Release(view);\n        return;\n    }\n    if ((0)) {}\n        else if (__Pyx_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view);\n    view->obj = NULL;\n    Py_DECREF(obj);\n}\n#endif\n\n\n  /* CIntFromPyVerify */\n  #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)\n#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)\n#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\\\n    {\\\n        func_type value = func_value;\\\n        if (sizeof(target_type) < sizeof(func_type)) {\\\n            if (unlikely(value != (func_type) (target_type) value)) {\\\n                func_type zero = 0;\\\n                if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\\\n               
     return (target_type) -1;\\\n                if (is_unsigned && unlikely(value < zero))\\\n                    goto raise_neg_overflow;\\\n                else\\\n                    goto raise_overflow;\\\n            }\\\n        }\\\n        return (target_type) value;\\\n    }\n\n/* CIntToPy */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_From_siz(siz value) {\n    const siz neg_one = (siz) ((siz) 0 - (siz) 1), const_zero = (siz) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(siz) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(siz) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(siz) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(siz) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(siz) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(siz),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {\n    const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(long) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(long) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= 
sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(long) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(long),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_From_Py_intptr_t(Py_intptr_t value) {\n    const Py_intptr_t neg_one = (Py_intptr_t) ((Py_intptr_t) 0 - (Py_intptr_t) 1), const_zero = (Py_intptr_t) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(Py_intptr_t) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(Py_intptr_t) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(Py_intptr_t) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(Py_intptr_t) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(Py_intptr_t) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(Py_intptr_t),\n                                     little, !is_unsigned);\n    }\n}\n\n/* Declarations */\n  #if 
CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return ::std::complex< float >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return x + y*(__pyx_t_float_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      __pyx_t_float_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n  #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabsf(b.real) >= fabsf(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_float_complex_from_parts(a.real / b.real, 
a.imag / b.imag);\n            } else {\n                float r = b.imag / b.real;\n                float s = (float)(1.0) / (b.real + b.imag * r);\n                return __pyx_t_float_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            float r = b.real / b.imag;\n            float s = (float)(1.0) / (b.imag + b.real * r);\n            return __pyx_t_float_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            float denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_float_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrtf(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypotf(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_float_complex 
__Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n            __pyx_t_float_complex z;\n            float r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    float denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = powf(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2f(0.0, -1.0);\n                }\n            } else {\n                r = __Pyx_c_abs_float(a);\n                theta = atan2f(a.imag, a.real);\n            }\n            lnr = logf(r);\n            z_r = expf(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cosf(z_theta);\n            z.imag = z_r 
* sinf(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* Declarations */\n  #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return ::std::complex< double >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return x + y*(__pyx_t_double_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      __pyx_t_double_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n  #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabs(b.real) >= 
fabs(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                double r = b.imag / b.real;\n                double s = (double)(1.0) / (b.real + b.imag * r);\n                return __pyx_t_double_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            double r = b.real / b.imag;\n            double s = (double)(1.0) / (b.imag + b.real * r);\n            return __pyx_t_double_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            double denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_double_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrt(z.real*z.real + 
z.imag*z.imag);\n          #else\n            return hypot(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n            __pyx_t_double_complex z;\n            double r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    double denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = pow(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2(0.0, -1.0);\n                }\n            } else {\n                r = __Pyx_c_abs_double(a);\n                theta = atan2(a.imag, a.real);\n            }\n            lnr = log(r);\n            
z_r = exp(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cos(z_theta);\n            z.imag = z_r * sin(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* CIntToPy */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {\n    const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n  static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) {\n    const enum NPY_TYPES neg_one = (enum NPY_TYPES) ((enum NPY_TYPES) 0 - (enum NPY_TYPES) 1), const_zero = (enum NPY_TYPES) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(enum NPY_TYPES) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n    
    } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(enum NPY_TYPES) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n  static CYTHON_INLINE siz __Pyx_PyInt_As_siz(PyObject *x) {\n    const siz neg_one = (siz) ((siz) 0 - (siz) 1), const_zero = (siz) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(siz) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(siz, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (siz) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (siz) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(siz, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(siz) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                     
   } else if (8 * sizeof(siz) >= 2 * PyLong_SHIFT) {\n                            return (siz) (((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(siz) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) >= 3 * PyLong_SHIFT) {\n                            return (siz) (((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(siz) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) >= 4 * PyLong_SHIFT) {\n                            return (siz) (((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (siz) -1;\n                if (unlikely(result == 1))\n  
                  goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(siz) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(siz) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (siz) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(siz, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(siz,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(siz) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(siz) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                            return (siz) ((((((siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                  
  if (8 * sizeof(siz) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(siz) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                            return (siz) ((((((((siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(siz) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 4 * PyLong_SHIFT) {\n                            return (siz) (((siz)-1)*(((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n               
     }\n                    break;\n                case 4:\n                    if (8 * sizeof(siz) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(siz, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(siz) - 1 > 4 * PyLong_SHIFT) {\n                            return (siz) ((((((((((siz)digits[3]) << PyLong_SHIFT) | (siz)digits[2]) << PyLong_SHIFT) | (siz)digits[1]) << PyLong_SHIFT) | (siz)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(siz) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(siz) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(siz, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            siz val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                            
  is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (siz) -1;\n        }\n    } else {\n        siz val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (siz) -1;\n        val = __Pyx_PyInt_As_siz(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to siz\");\n    return (siz) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to siz\");\n    return (siz) -1;\n}\n\n/* CIntFromPy */\n  static CYTHON_INLINE size_t __Pyx_PyInt_As_size_t(PyObject *x) {\n    const size_t neg_one = (size_t) ((size_t) 0 - (size_t) 1), const_zero = (size_t) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(size_t) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(size_t, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (size_t) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(size_t, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * 
sizeof(size_t) >= 2 * PyLong_SHIFT) {\n                            return (size_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 3 * PyLong_SHIFT) {\n                            return (size_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) >= 4 * PyLong_SHIFT) {\n                            return (size_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (size_t) -1;\n    
            if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (size_t) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(size_t, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(size_t,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(size_t) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(size_t) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                            return (size_t) ((((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                    
    }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(size_t) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) (((size_t)-1)*(((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(size_t) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                            return (size_t) ((((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(size_t) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            return (size_t) 
(((size_t)-1)*(((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(size_t) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(size_t, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(size_t) - 1 > 4 * PyLong_SHIFT) {\n                            return (size_t) ((((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(size_t) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(size_t) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(size_t, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            size_t val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char 
*)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (size_t) -1;\n        }\n    } else {\n        size_t val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (size_t) -1;\n        val = __Pyx_PyInt_As_size_t(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to size_t\");\n    return (size_t) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to size_t\");\n    return (size_t) -1;\n}\n\n/* CIntFromPy */\n  static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {\n    const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * 
sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {\n                            return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {\n                            return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {\n                            return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n  
          {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n       
                     return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n           
                 return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                
unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (int) -1;\n        }\n    } else {\n        int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (int) -1;\n        val = __Pyx_PyInt_As_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to int\");\n    return (int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to int\");\n    return (int) -1;\n}\n\n/* CIntFromPy */\n  static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {\n    const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(long) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (long) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n 
                           __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {\n                            return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {\n                            return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {\n                            return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n             
   int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (long) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(long) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(long,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n               
             return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) 
{\n                            return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(long) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            long val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char 
*)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (long) -1;\n        }\n    } else {\n        long val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (long) -1;\n        val = __Pyx_PyInt_As_long(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to long\");\n    return (long) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to long\");\n    return (long) -1;\n}\n\n/* FastTypeChecks */\n  #if CYTHON_COMPILING_IN_CPYTHON\nstatic int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {\n    while (a) {\n        a = a->tp_base;\n        if (a == b)\n            return 1;\n    }\n    return b == &PyBaseObject_Type;\n}\nstatic CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {\n    PyObject *mro;\n    if (a == b) return 1;\n    mro = a->tp_mro;\n    if (likely(mro)) {\n        Py_ssize_t i, n;\n        n = PyTuple_GET_SIZE(mro);\n        for (i = 0; i < n; i++) {\n            if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)\n                return 1;\n        }\n        return 0;\n    }\n    return __Pyx_InBases(a, b);\n}\n#if PY_MAJOR_VERSION == 2\nstatic int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {\n    PyObject *exception, *value, *tb;\n    int res;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&exception, &value, &tb);\n    res = exc_type1 ? 
PyObject_IsSubclass(err, exc_type1) : 0;\n    if (unlikely(res == -1)) {\n        PyErr_WriteUnraisable(err);\n        res = 0;\n    }\n    if (!res) {\n        res = PyObject_IsSubclass(err, exc_type2);\n        if (unlikely(res == -1)) {\n            PyErr_WriteUnraisable(err);\n            res = 0;\n        }\n    }\n    __Pyx_ErrRestore(exception, value, tb);\n    return res;\n}\n#else\nstatic CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {\n    int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;\n    if (!res) {\n        res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);\n    }\n    return res;\n}\n#endif\nstatic int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {\n    Py_ssize_t i, n;\n    assert(PyExceptionClass_Check(exc_type));\n    n = PyTuple_GET_SIZE(tuple);\n#if PY_MAJOR_VERSION >= 3\n    for (i=0; i<n; i++) {\n        if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;\n    }\n#endif\n    for (i=0; i<n; i++) {\n        PyObject *t = PyTuple_GET_ITEM(tuple, i);\n        #if PY_MAJOR_VERSION < 3\n        if (likely(exc_type == t)) return 1;\n        #endif\n        if (likely(PyExceptionClass_Check(t))) {\n            if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;\n        } else {\n        }\n    }\n    return 0;\n}\nstatic CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {\n    if (likely(err == exc_type)) return 1;\n    if (likely(PyExceptionClass_Check(err))) {\n        if (likely(PyExceptionClass_Check(exc_type))) {\n            return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);\n        } else if (likely(PyTuple_Check(exc_type))) {\n            return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);\n        } else {\n        }\n    }\n    return PyErr_GivenExceptionMatches(err, exc_type);\n}\nstatic 
CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {\n    assert(PyExceptionClass_Check(exc_type1));\n    assert(PyExceptionClass_Check(exc_type2));\n    if (likely(err == exc_type1 || err == exc_type2)) return 1;\n    if (likely(PyExceptionClass_Check(err))) {\n        return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);\n    }\n    return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));\n}\n#endif\n\n/* CheckBinaryVersion */\n  static int __Pyx_check_binary_version(void) {\n    char ctversion[4], rtversion[4];\n    PyOS_snprintf(ctversion, 4, \"%d.%d\", PY_MAJOR_VERSION, PY_MINOR_VERSION);\n    PyOS_snprintf(rtversion, 4, \"%s\", Py_GetVersion());\n    if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {\n        char message[200];\n        PyOS_snprintf(message, sizeof(message),\n                      \"compiletime version %s of module '%.100s' \"\n                      \"does not match runtime version %s\",\n                      ctversion, __Pyx_MODULE_NAME, rtversion);\n        return PyErr_WarnEx(NULL, message, 1);\n    }\n    return 0;\n}\n\n/* InitStrings */\n  static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {\n    while (t->p) {\n        #if PY_MAJOR_VERSION < 3\n        if (t->is_unicode) {\n            *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);\n        } else if (t->intern) {\n            *t->p = PyString_InternFromString(t->s);\n        } else {\n            *t->p = PyString_FromStringAndSize(t->s, t->n - 1);\n        }\n        #else\n        if (t->is_unicode | t->is_str) {\n            if (t->intern) {\n                *t->p = PyUnicode_InternFromString(t->s);\n            } else if (t->encoding) {\n                *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);\n            } else {\n                *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);\n            }\n        } 
else {\n            *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);\n        }\n        #endif\n        if (!*t->p)\n            return -1;\n        if (PyObject_Hash(*t->p) == -1)\n            return -1;\n        ++t;\n    }\n    return 0;\n}\n\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {\n    return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));\n}\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {\n    Py_ssize_t ignore;\n    return __Pyx_PyObject_AsStringAndSize(o, &ignore);\n}\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\n#if !CYTHON_PEP393_ENABLED\nstatic const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n    char* defenc_c;\n    PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);\n    if (!defenc) return NULL;\n    defenc_c = PyBytes_AS_STRING(defenc);\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n    {\n        char* end = defenc_c + PyBytes_GET_SIZE(defenc);\n        char* c;\n        for (c = defenc_c; c < end; c++) {\n            if ((unsigned char) (*c) >= 128) {\n                PyUnicode_AsASCIIString(o);\n                return NULL;\n            }\n        }\n    }\n#endif\n    *length = PyBytes_GET_SIZE(defenc);\n    return defenc_c;\n}\n#else\nstatic CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n    if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n    if (likely(PyUnicode_IS_ASCII(o))) {\n        *length = PyUnicode_GET_LENGTH(o);\n        return PyUnicode_AsUTF8(o);\n    } else {\n        PyUnicode_AsASCIIString(o);\n        return NULL;\n    }\n#else\n    return PyUnicode_AsUTF8AndSize(o, length);\n#endif\n}\n#endif\n#endif\nstatic CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || 
__PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\n    if (\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n            __Pyx_sys_getdefaultencoding_not_ascii &&\n#endif\n            PyUnicode_Check(o)) {\n        return __Pyx_PyUnicode_AsStringAndSize(o, length);\n    } else\n#endif\n#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))\n    if (PyByteArray_Check(o)) {\n        *length = PyByteArray_GET_SIZE(o);\n        return PyByteArray_AS_STRING(o);\n    } else\n#endif\n    {\n        char* result;\n        int r = PyBytes_AsStringAndSize(o, &result, length);\n        if (unlikely(r < 0)) {\n            return NULL;\n        } else {\n            return result;\n        }\n    }\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {\n   int is_true = x == Py_True;\n   if (is_true | (x == Py_False) | (x == Py_None)) return is_true;\n   else return PyObject_IsTrue(x);\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {\n    int retval;\n    if (unlikely(!x)) return -1;\n    retval = __Pyx_PyObject_IsTrue(x);\n    Py_DECREF(x);\n    return retval;\n}\nstatic PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {\n#if PY_MAJOR_VERSION >= 3\n    if (PyLong_Check(result)) {\n        if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,\n                \"__int__ returned non-int (type %.200s).  
\"\n                \"The ability to return an instance of a strict subclass of int \"\n                \"is deprecated, and may be removed in a future version of Python.\",\n                Py_TYPE(result)->tp_name)) {\n            Py_DECREF(result);\n            return NULL;\n        }\n        return result;\n    }\n#endif\n    PyErr_Format(PyExc_TypeError,\n                 \"__%.4s__ returned non-%.4s (type %.200s)\",\n                 type_name, type_name, Py_TYPE(result)->tp_name);\n    Py_DECREF(result);\n    return NULL;\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {\n#if CYTHON_USE_TYPE_SLOTS\n  PyNumberMethods *m;\n#endif\n  const char *name = NULL;\n  PyObject *res = NULL;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_Check(x) || PyLong_Check(x)))\n#else\n  if (likely(PyLong_Check(x)))\n#endif\n    return __Pyx_NewRef(x);\n#if CYTHON_USE_TYPE_SLOTS\n  m = Py_TYPE(x)->tp_as_number;\n  #if PY_MAJOR_VERSION < 3\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = m->nb_int(x);\n  }\n  else if (m && m->nb_long) {\n    name = \"long\";\n    res = m->nb_long(x);\n  }\n  #else\n  if (likely(m && m->nb_int)) {\n    name = \"int\";\n    res = m->nb_int(x);\n  }\n  #endif\n#else\n  if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {\n    res = PyNumber_Int(x);\n  }\n#endif\n  if (likely(res)) {\n#if PY_MAJOR_VERSION < 3\n    if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {\n#else\n    if (unlikely(!PyLong_CheckExact(res))) {\n#endif\n        return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);\n    }\n  }\n  else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_TypeError,\n                    \"an integer is required\");\n  }\n  return res;\n}\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {\n  Py_ssize_t ival;\n  PyObject *x;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_CheckExact(b))) {\n    if (sizeof(Py_ssize_t) >= sizeof(long))\n        return PyInt_AS_LONG(b);\n    else\n        
return PyInt_AsSsize_t(b);\n  }\n#endif\n  if (likely(PyLong_CheckExact(b))) {\n    #if CYTHON_USE_PYLONG_INTERNALS\n    const digit* digits = ((PyLongObject*)b)->ob_digit;\n    const Py_ssize_t size = Py_SIZE(b);\n    if (likely(__Pyx_sst_abs(size) <= 1)) {\n        ival = likely(size) ? digits[0] : 0;\n        if (size == -1) ival = -ival;\n        return ival;\n    } else {\n      switch (size) {\n         case 2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n      }\n    }\n    #endif\n    return PyLong_AsSsize_t(b);\n  }\n  x = PyNumber_Index(b);\n  
if (!x) return -1;\n  ival = PyInt_AsSsize_t(x);\n  Py_DECREF(x);\n  return ival;\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {\n  return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {\n    return PyInt_FromSize_t(ival);\n}\n\n\n#endif /* Py_PYTHON_H */\n"
  },
  {
    "path": "lib/pycocotools/_mask.pyx",
    "content": "# distutils: language = c\n# distutils: sources = ../MatlabAPI/private/maskApi.c\n\n#**************************************************************************\n# Microsoft COCO Toolbox.      version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n# Licensed under the Simplified BSD License [see coco/license.txt]\n#**************************************************************************\n\n__author__ = 'tsungyi'\n\n# import both Python-level and C-level symbols of Numpy\n# the API uses Numpy to interface C and Python\nimport numpy as np\ncimport numpy as np\nfrom libc.stdlib cimport malloc, free\n\n# intialized Numpy. must do.\nnp.import_array()\n\n# import numpy C function\n# we use PyArray_ENABLEFLAGS to make Numpy ndarray responsible to memoery management\ncdef extern from \"numpy/arrayobject.h\":\n    void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)\n\n# Declare the prototype of the C functions in MaskApi.h\ncdef extern from \"maskApi.h\":\n    ctypedef unsigned int uint\n    ctypedef unsigned long siz\n    ctypedef unsigned char byte\n    ctypedef double* BB\n    ctypedef struct RLE:\n        siz h,\n        siz w,\n        siz m,\n        uint* cnts,\n    void rlesInit( RLE **R, siz n )\n    void rleEncode( RLE *R, const byte *M, siz h, siz w, siz n )\n    void rleDecode( const RLE *R, byte *mask, siz n )\n    void rleMerge( const RLE *R, RLE *M, siz n, bint intersect )\n    void rleArea( const RLE *R, siz n, uint *a )\n    void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o )\n    void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o )\n    void rleToBbox( const RLE *R, BB bb, siz n )\n    void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n )\n    void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w )\n    char* rleToString( const RLE *R )\n    void rleFrString( RLE *R, char *s, siz h, siz w )\n\n# python class to wrap 
RLE array in C\n# the class handles the memory allocation and deallocation\ncdef class RLEs:\n    cdef RLE *_R\n    cdef siz _n\n\n    def __cinit__(self, siz n =0):\n        rlesInit(&self._R, n)\n        self._n = n\n\n    # free the RLE array here\n    def __dealloc__(self):\n        if self._R is not NULL:\n            for i in range(self._n):\n                free(self._R[i].cnts)\n            free(self._R)\n    def __getattr__(self, key):\n        if key == 'n':\n            return self._n\n        raise AttributeError(key)\n\n# python class to wrap Mask array in C\n# the class handles the memory allocation and deallocation\ncdef class Masks:\n    cdef byte *_mask\n    cdef siz _h\n    cdef siz _w\n    cdef siz _n\n\n    def __cinit__(self, h, w, n):\n        self._mask = <byte*> malloc(h*w*n* sizeof(byte))\n        self._h = h\n        self._w = w\n        self._n = n\n    # def __dealloc__(self):\n        # the memory management of _mask has been passed to np.ndarray\n        # it doesn't need to be freed here\n\n    # called when passing into np.array() and return an np.ndarray in column-major order\n    def __array__(self):\n        cdef np.npy_intp shape[1]\n        shape[0] = <np.npy_intp> self._h*self._w*self._n\n        # Create a 1D array, and reshape it to fortran/Matlab column-major array\n        ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')\n        # The _mask allocated by Masks is now handled by ndarray\n        PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)\n        return ndarray\n\n# internal conversion from Python RLEs object to compressed RLE format\ndef _toString(RLEs Rs):\n    cdef siz n = Rs.n\n    cdef bytes py_string\n    cdef char* c_string\n    objs = []\n    for i in range(n):\n        c_string = rleToString( <RLE*> &Rs._R[i] )\n        py_string = c_string\n        objs.append({\n            'size': [Rs._R[i].h, Rs._R[i].w],\n            'counts': 
py_string\n        })\n        free(c_string)\n    return objs\n\n# internal conversion from compressed RLE format to Python RLEs object\ndef _frString(rleObjs):\n    cdef siz n = len(rleObjs)\n    Rs = RLEs(n)\n    cdef bytes py_string\n    cdef char* c_string\n    for i, obj in enumerate(rleObjs):\n        py_string = str(obj['counts'])\n        c_string = py_string\n        rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )\n    return Rs\n\n# encode mask to RLEs objects\n# list of RLE string can be generated by RLEs member function\ndef encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):\n    h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]\n    cdef RLEs Rs = RLEs(n)\n    rleEncode(Rs._R,<byte*>mask.data,h,w,n)\n    objs = _toString(Rs)\n    return objs\n\n# decode mask from compressed list of RLE string or RLEs object\ndef decode(rleObjs):\n    cdef RLEs Rs = _frString(rleObjs)\n    h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n\n    masks = Masks(h, w, n)\n    rleDecode( <RLE*>Rs._R, masks._mask, n );\n    return np.array(masks)\n\ndef merge(rleObjs, bint intersect=0):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef RLEs R = RLEs(1)\n    rleMerge(<RLE*>Rs._R, <RLE*> R._R, <siz> Rs._n, intersect)\n    obj = _toString(R)[0]\n    return obj\n\ndef area(rleObjs):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))\n    rleArea(Rs._R, Rs._n, _a)\n    cdef np.npy_intp shape[1]\n    shape[0] = <np.npy_intp> Rs._n\n    a = np.array((Rs._n, ), dtype=np.uint8)\n    a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)\n    PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)\n    return a\n\n# iou computation. 
support function overload (RLEs-RLEs and bbox-bbox).\ndef iou( dt, gt, pyiscrowd ):\n    def _preproc(objs):\n        if len(objs) == 0:\n            return objs\n        if type(objs) == np.ndarray:\n            if len(objs.shape) == 1:\n                # a flat array of 4 coordinates is a single bbox; reshape to 1x4\n                # (mirrors the list branch below; reshape((objs[0], 1)) was a bug)\n                objs = objs.reshape((1, objs.shape[0]))\n            # check if it's Nx4 bbox\n            if not len(objs.shape) == 2 or not objs.shape[1] == 4:\n                raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')\n            objs = objs.astype(np.double)\n        elif type(objs) == list:\n            # check if list is in box format and convert it to np.ndarray\n            isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))\n            isrle = np.all(np.array([type(obj) == dict for obj in objs]))\n            if isbox:\n                objs = np.array(objs, dtype=np.double)\n                if len(objs.shape) == 1:\n                    objs = objs.reshape((1,objs.shape[0]))\n            elif isrle:\n                objs = _frString(objs)\n            else:\n                raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')\n        else:\n            raise Exception('unrecognized type.  
The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')\n        return objs\n    def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t,  ndim=1] _iou):\n        rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )\n    def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):\n        bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*>_iou.data )\n    def _len(obj):\n        cdef siz N = 0\n        if type(obj) == RLEs:\n            N = obj.n\n        elif len(obj)==0:\n            pass\n        elif type(obj) == np.ndarray:\n            N = obj.shape[0]\n        return N\n    # convert iscrowd to numpy array\n    cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)\n    # simple type checking\n    cdef siz m, n\n    dt = _preproc(dt)\n    gt = _preproc(gt)\n    m = _len(dt)\n    n = _len(gt)\n    if m == 0 or n == 0:\n        return []\n    if not type(dt) == type(gt):\n        raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')\n\n    # define local variables\n    cdef double* _iou = <double*> 0\n    cdef np.npy_intp shape[1]\n    # check type and assign iou function\n    if type(dt) == RLEs:\n        _iouFun = _rleIou\n    elif type(dt) == np.ndarray:\n        _iouFun = _bbIou\n    else:\n        raise Exception('input data type not allowed.')\n    _iou = <double*> malloc(m*n* sizeof(double))\n    iou = np.zeros((m*n, ), dtype=np.double)\n    shape[0] = <np.npy_intp> m*n\n    iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)\n    PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)\n    _iouFun(dt, gt, iscrowd, m, n, iou)\n    return iou.reshape((m,n), order='F')\n\ndef toBbox( rleObjs ):\n    cdef RLEs Rs = _frString(rleObjs)\n    cdef 
siz n = Rs.n\n    cdef BB _bb = <BB> malloc(4*n* sizeof(double))\n    rleToBbox( <const RLE*> Rs._R, _bb, n )\n    cdef np.npy_intp shape[1]\n    shape[0] = <np.npy_intp> 4*n\n    bb = np.array((1,4*n), dtype=np.double)\n    bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))\n    PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)\n    return bb\n\ndef frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):\n    cdef siz n = bb.shape[0]\n    Rs = RLEs(n)\n    rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )\n    objs = _toString(Rs)\n    return objs\n\ndef frPoly( poly, siz h, siz w ):\n    cdef np.ndarray[np.double_t, ndim=1] np_poly\n    n = len(poly)\n    Rs = RLEs(n)\n    for i, p in enumerate(poly):\n        np_poly = np.array(p, dtype=np.double, order='F')\n        rleFrPoly( <RLE*>&Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )\n    objs = _toString(Rs)\n    return objs\n\ndef frUncompressedRLE(ucRles, siz h, siz w):\n    cdef np.ndarray[np.uint32_t, ndim=1] cnts\n    cdef RLE R\n    cdef uint *data\n    n = len(ucRles)\n    objs = []\n    for i in range(n):\n        Rs = RLEs(1)\n        cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)\n        # time for malloc can be saved here but it's fine\n        data = <uint*> malloc(len(cnts)* sizeof(uint))\n        for j in range(len(cnts)):\n            data[j] = <uint> cnts[j]\n        R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), <uint*> data)\n        Rs._R[0] = R\n        objs.append(_toString(Rs)[0])\n    return objs\n\ndef frPyObjects(pyobj, siz h, w):\n    if type(pyobj) == np.ndarray:\n        objs = frBbox(pyobj, h, w )\n    elif type(pyobj) == list and len(pyobj[0]) == 4:\n        objs = frBbox(pyobj, h, w )\n    elif type(pyobj) == list and len(pyobj[0]) > 4:\n        objs = frPoly(pyobj, h, w )\n    elif type(pyobj) == list and type(pyobj[0]) == dict:\n        objs = frUncompressedRLE(pyobj, h, w)\n    else:\n        raise Exception('input 
type is not supported.')\n    return objs\n"
  },
  {
    "path": "lib/pycocotools/coco.py",
    "content": "from __future__ import print_function\nfrom __future__ import absolute_import\n\n__author__ = 'tylin'\n__version__ = '1.0.1'\n# Interface for accessing the Microsoft COCO dataset.\n\n# Microsoft COCO is a large image dataset designed for object detection,\n# segmentation, and caption generation. pycocotools is a Python API that\n# assists in loading, parsing and visualizing the annotations in COCO.\n# Please visit http://mscoco.org/ for more information on COCO, including\n# for the data, paper, and tutorials. The exact format of the annotations\n# is also described on the COCO website. For example usage of the pycocotools\n# please see pycocotools_demo.ipynb. In addition to this API, please download both\n# the COCO images and annotations in order to run the demo.\n\n# An alternative to using the API is to load the annotations directly\n# into Python dictionary\n# Using the API provides additional utility functions. Note that this API\n# supports both *instance* and *caption* annotations. In the case of\n# captions not all functions are defined (e.g. 
categories are undefined).\n\n# The following API functions are defined:\n#  COCO       - COCO api class that loads COCO annotation file and prepare data structures.\n#  decodeMask - Decode binary mask M encoded via run-length encoding.\n#  encodeMask - Encode binary mask M using run-length encoding.\n#  getAnnIds  - Get ann ids that satisfy given filter conditions.\n#  getCatIds  - Get cat ids that satisfy given filter conditions.\n#  getImgIds  - Get img ids that satisfy given filter conditions.\n#  loadAnns   - Load anns with the specified ids.\n#  loadCats   - Load cats with the specified ids.\n#  loadImgs   - Load imgs with the specified ids.\n#  segToMask  - Convert polygon segmentation to binary mask.\n#  showAnns   - Display the specified annotations.\n#  loadRes    - Load algorithm results and create API for accessing them.\n#  download   - Download COCO images from mscoco.org server.\n# Throughout the API \"ann\"=annotation, \"cat\"=category, and \"img\"=image.\n# Help on each functions can be accessed by: \"help COCO>function\".\n\n# See also COCO>decodeMask,\n# COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds,\n# COCO>getImgIds, COCO>loadAnns, COCO>loadCats,\n# COCO>loadImgs, COCO>segToMask, COCO>showAnns\n\n# Microsoft COCO Toolbox.      version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2014.\n# Licensed under the Simplified BSD License [see bsd.txt]\n\nimport json\nimport datetime\nimport time\nimport matplotlib.pyplot as plt\nfrom matplotlib.collections import PatchCollection\nfrom matplotlib.patches import Polygon\nimport numpy as np\n# from skimage.draw import polygon\nimport urllib\nimport copy\nimport itertools\nfrom . 
import mask\nimport os\n\ntry:\n    unicode  # Python 2\nexcept NameError:\n    unicode = str  # Python 3\n\n\nclass COCO:\n    def __init__(self, annotation_file=None):\n        \"\"\"\n        Constructor of Microsoft COCO helper class for reading and visualizing annotations.\n        :param annotation_file (str): location of annotation file\n        :param image_folder (str): location to the folder that hosts images.\n        :return:\n        \"\"\"\n        # load dataset\n        self.dataset = {}\n        self.anns = []\n        self.imgToAnns = {}\n        self.catToImgs = {}\n        self.imgs = {}\n        self.cats = {}\n        if not annotation_file == None:\n            print('loading annotations into memory...')\n            tic = time.time()\n            dataset = json.load(open(annotation_file, 'r'))\n            print('Done (t=%0.2fs)' % (time.time() - tic))\n            self.dataset = dataset\n            self.createIndex()\n\n    def createIndex(self):\n        # create index\n        print('creating index...')\n        anns = {}\n        imgToAnns = {}\n        catToImgs = {}\n        cats = {}\n        imgs = {}\n        if 'annotations' in self.dataset:\n            imgToAnns = {ann['image_id']: [] for ann in self.dataset['annotations']}\n            anns = {ann['id']: [] for ann in self.dataset['annotations']}\n            for ann in self.dataset['annotations']:\n                imgToAnns[ann['image_id']] += [ann]\n                anns[ann['id']] = ann\n\n        if 'images' in self.dataset:\n            imgs = {im['id']: {} for im in self.dataset['images']}\n            for img in self.dataset['images']:\n                imgs[img['id']] = img\n\n        if 'categories' in self.dataset:\n            cats = {cat['id']: [] for cat in self.dataset['categories']}\n            for cat in self.dataset['categories']:\n                cats[cat['id']] = cat\n            catToImgs = {cat['id']: [] for cat in self.dataset['categories']}\n            if 
'annotations' in self.dataset:\n                for ann in self.dataset['annotations']:\n                    catToImgs[ann['category_id']] += [ann['image_id']]\n\n        print('index created!')\n\n        # create class members\n        self.anns = anns\n        self.imgToAnns = imgToAnns\n        self.catToImgs = catToImgs\n        self.imgs = imgs\n        self.cats = cats\n\n    def info(self):\n        \"\"\"\n        Print information about the annotation file.\n        :return:\n        \"\"\"\n        for key, value in self.dataset['info'].items():\n            print('%s: %s' % (key, value))\n\n    def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):\n        \"\"\"\n        Get ann ids that satisfy given filter conditions. default skips that filter\n        :param imgIds  (int array)     : get anns for given imgs\n               catIds  (int array)     : get anns for given cats\n               areaRng (float array)   : get anns for given area range (e.g. [0 inf])\n               iscrowd (boolean)       : get anns for given crowd label (False or True)\n        :return: ids (int array)       : integer array of ann ids\n        \"\"\"\n        imgIds = imgIds if type(imgIds) == list else [imgIds]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(imgIds) == len(catIds) == len(areaRng) == 0:\n            anns = self.dataset['annotations']\n        else:\n            if not len(imgIds) == 0:\n                # this can be changed by defaultdict\n                lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns]\n                anns = list(itertools.chain.from_iterable(lists))\n            else:\n                anns = self.dataset['annotations']\n            anns = anns if len(catIds) == 0 else [ann for ann in anns if ann['category_id'] in catIds]\n            anns = anns if len(areaRng) == 0 else [ann for ann in anns if\n                                                   ann['area'] > 
areaRng[0] and ann['area'] < areaRng[1]]\n        if not iscrowd == None:\n            ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd]\n        else:\n            ids = [ann['id'] for ann in anns]\n        return ids\n\n    def getCatIds(self, catNms=[], supNms=[], catIds=[]):\n        \"\"\"\n        filtering parameters. default skips that filter.\n        :param catNms (str array)  : get cats for given cat names\n        :param supNms (str array)  : get cats for given supercategory names\n        :param catIds (int array)  : get cats for given cat ids\n        :return: ids (int array)   : integer array of cat ids\n        \"\"\"\n        catNms = catNms if type(catNms) == list else [catNms]\n        supNms = supNms if type(supNms) == list else [supNms]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(catNms) == len(supNms) == len(catIds) == 0:\n            cats = self.dataset['categories']\n        else:\n            cats = self.dataset['categories']\n            cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name'] in catNms]\n            cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms]\n            cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id'] in catIds]\n        ids = [cat['id'] for cat in cats]\n        return ids\n\n    def getImgIds(self, imgIds=[], catIds=[]):\n        '''\n        Get img ids that satisfy given filter conditions.\n        :param imgIds (int array) : get imgs for given ids\n        :param catIds (int array) : get imgs with all given cats\n        :return: ids (int array)  : integer array of img ids\n        '''\n        imgIds = imgIds if type(imgIds) == list else [imgIds]\n        catIds = catIds if type(catIds) == list else [catIds]\n\n        if len(imgIds) == len(catIds) == 0:\n            ids = self.imgs.keys()\n        else:\n            ids = set(imgIds)\n            for i, catId in 
enumerate(catIds):\n                if i == 0 and len(ids) == 0:\n                    ids = set(self.catToImgs[catId])\n                else:\n                    ids &= set(self.catToImgs[catId])\n        return list(ids)\n\n    def loadAnns(self, ids=[]):\n        \"\"\"\n        Load anns with the specified ids.\n        :param ids (int array)       : integer ids specifying anns\n        :return: anns (object array) : loaded ann objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.anns[id] for id in ids]\n        elif type(ids) == int:\n            return [self.anns[ids]]\n\n    def loadCats(self, ids=[]):\n        \"\"\"\n        Load cats with the specified ids.\n        :param ids (int array)       : integer ids specifying cats\n        :return: cats (object array) : loaded cat objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.cats[id] for id in ids]\n        elif type(ids) == int:\n            return [self.cats[ids]]\n\n    def loadImgs(self, ids=[]):\n        \"\"\"\n        Load anns with the specified ids.\n        :param ids (int array)       : integer ids specifying img\n        :return: imgs (object array) : loaded img objects\n        \"\"\"\n        if type(ids) == list:\n            return [self.imgs[id] for id in ids]\n        elif type(ids) == int:\n            return [self.imgs[ids]]\n\n    def showAnns(self, anns):\n        \"\"\"\n        Display the specified annotations.\n        :param anns (array of object): annotations to display\n        :return: None\n        \"\"\"\n        if len(anns) == 0:\n            return 0\n        if 'segmentation' in anns[0]:\n            datasetType = 'instances'\n        elif 'caption' in anns[0]:\n            datasetType = 'captions'\n        if datasetType == 'instances':\n            ax = plt.gca()\n            polygons = []\n            color = []\n            for ann in anns:\n                c = np.random.random((1, 3)).tolist()[0]\n            
    if type(ann['segmentation']) == list:\n                    # polygon\n                    for seg in ann['segmentation']:\n                        poly = np.array(seg).reshape((len(seg) / 2, 2))\n                        polygons.append(Polygon(poly, True, alpha=0.4))\n                        color.append(c)\n                else:\n                    # mask\n                    t = self.imgs[ann['image_id']]\n                    if type(ann['segmentation']['counts']) == list:\n                        rle = mask.frPyObjects([ann['segmentation']], t['height'], t['width'])\n                    else:\n                        rle = [ann['segmentation']]\n                    m = mask.decode(rle)\n                    img = np.ones((m.shape[0], m.shape[1], 3))\n                    if ann['iscrowd'] == 1:\n                        color_mask = np.array([2.0, 166.0, 101.0]) / 255\n                    if ann['iscrowd'] == 0:\n                        color_mask = np.random.random((1, 3)).tolist()[0]\n                    for i in range(3):\n                        img[:, :, i] = color_mask[i]\n                    ax.imshow(np.dstack((img, m * 0.5)))\n            p = PatchCollection(polygons, facecolors=color, edgecolors=(0, 0, 0, 1), linewidths=3, alpha=0.4)\n            ax.add_collection(p)\n        elif datasetType == 'captions':\n            for ann in anns:\n                print(ann['caption'])\n\n    def loadRes(self, resFile):\n        \"\"\"\n        Load result file and return a result api object.\n        :param   resFile (str)     : file name of result file\n        :return: res (obj)         : result api object\n        \"\"\"\n        res = COCO()\n        res.dataset['images'] = [img for img in self.dataset['images']]\n        # res.dataset['info'] = copy.deepcopy(self.dataset['info'])\n        # res.dataset['licenses'] = copy.deepcopy(self.dataset['licenses'])\n\n        print('Loading and preparing results...     
')\n        tic = time.time()\n        anns = json.load(open(resFile))\n        assert type(anns) == list, 'results is not an array of objects'\n        annsImgIds = [ann['image_id'] for ann in anns]\n        assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \\\n            'Results do not correspond to current coco set'\n        if 'caption' in anns[0]:\n            imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns])\n            res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds]\n            for id, ann in enumerate(anns):\n                ann['id'] = id + 1\n        elif 'bbox' in anns[0] and not anns[0]['bbox'] == []:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                bb = ann['bbox']\n                x1, x2, y1, y2 = [bb[0], bb[0] + bb[2], bb[1], bb[1] + bb[3]]\n                if not 'segmentation' in ann:\n                    ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]]\n                ann['area'] = bb[2] * bb[3]\n                ann['id'] = id + 1\n                ann['iscrowd'] = 0\n        elif 'segmentation' in anns[0]:\n            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])\n            for id, ann in enumerate(anns):\n                # now only support compressed RLE format as segmentation results\n                ann['area'] = mask.area([ann['segmentation']])[0]\n                if not 'bbox' in ann:\n                    ann['bbox'] = mask.toBbox([ann['segmentation']])[0]\n                ann['id'] = id + 1\n                ann['iscrowd'] = 0\n        print('DONE (t=%0.2fs)' % (time.time() - tic))\n\n        res.dataset['annotations'] = anns\n        res.createIndex()\n        return res\n\n    def download(self, tarDir=None, imgIds=[]):\n        '''\n        Download COCO images from mscoco.org server.\n        :param 
tarDir (str): COCO results directory name\n               imgIds (list): images to be downloaded\n        :return:\n        '''\n        if tarDir is None:\n            print('Please specify target directory')\n            return -1\n        if len(imgIds) == 0:\n            imgs = self.imgs.values()\n        else:\n            imgs = self.loadImgs(imgIds)\n        N = len(imgs)\n        if not os.path.exists(tarDir):\n            os.makedirs(tarDir)\n        for i, img in enumerate(imgs):\n            tic = time.time()\n            fname = os.path.join(tarDir, img['file_name'])\n            if not os.path.exists(fname):\n                urllib.urlretrieve(img['coco_url'], fname)\n            print('downloaded %d/%d images (t=%.1fs)' % (i, N, time.time() - tic))"
  },
  {
    "path": "lib/pycocotools/cocoeval.py",
    "content": "import numpy as np\nimport datetime\nimport time\nfrom collections import defaultdict\nfrom . import mask as maskUtils\nimport copy\n\nclass COCOeval:\n    # Interface for evaluating detection on the Microsoft COCO dataset.\n    #\n    # The usage for CocoEval is as follows:\n    #  cocoGt=..., cocoDt=...       # load dataset and results\n    #  E = CocoEval(cocoGt,cocoDt); # initialize CocoEval object\n    #  E.params.recThrs = ...;      # set parameters as desired\n    #  E.evaluate();                # run per image evaluation\n    #  E.accumulate();              # accumulate per image results\n    #  E.summarize();               # display summary metrics of results\n    # For example usage see evalDemo.m and http://mscoco.org/.\n    #\n    # The evaluation parameters are as follows (defaults in brackets):\n    #  imgIds     - [all] N img ids to use for evaluation\n    #  catIds     - [all] K cat ids to use for evaluation\n    #  iouThrs    - [.5:.05:.95] T=10 IoU thresholds for evaluation\n    #  recThrs    - [0:.01:1] R=101 recall thresholds for evaluation\n    #  areaRng    - [...] 
A=4 object area ranges for evaluation\n    #  maxDets    - [1 10 100] M=3 thresholds on max detections per image\n    #  iouType    - ['segm'] set iouType to 'segm', 'bbox' or 'keypoints'\n    #  iouType replaced the now DEPRECATED useSegm parameter.\n    #  useCats    - [1] if true use category labels for evaluation\n    # Note: if useCats=0 category labels are ignored as in proposal scoring.\n    # Note: multiple areaRngs [Ax2] and maxDets [Mx1] can be specified.\n    #\n    # evaluate(): evaluates detections on every image and every category and\n    # concats the results into the \"evalImgs\" with fields:\n    #  dtIds      - [1xD] id for each of the D detections (dt)\n    #  gtIds      - [1xG] id for each of the G ground truths (gt)\n    #  dtMatches  - [TxD] matching gt id at each IoU or 0\n    #  gtMatches  - [TxG] matching dt id at each IoU or 0\n    #  dtScores   - [1xD] confidence of each dt\n    #  gtIgnore   - [1xG] ignore flag for each gt\n    #  dtIgnore   - [TxD] ignore flag for each dt at each IoU\n    #\n    # accumulate(): accumulates the per-image, per-category evaluation\n    # results in \"evalImgs\" into the dictionary \"eval\" with fields:\n    #  params     - parameters used for evaluation\n    #  date       - date evaluation was performed\n    #  counts     - [T,R,K,A,M] parameter dimensions (see above)\n    #  precision  - [TxRxKxAxM] precision for every evaluation setting\n    #  recall     - [TxKxAxM] max recall for every evaluation setting\n    # Note: precision and recall==-1 for settings with no gt objects.\n    #\n    # See also coco, mask, pycocoDemo, pycocoEvalDemo\n    #\n    # Microsoft COCO Toolbox.      
version 2.0\n    # Data, paper, and tutorials available at:  http://mscoco.org/\n    # Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n    # Licensed under the Simplified BSD License [see coco/license.txt]\n    def __init__(self, cocoGt=None, cocoDt=None, iouType='segm'):\n        '''\n        Initialize CocoEval using coco APIs for gt and dt\n        :param cocoGt: coco object with ground truth annotations\n        :param cocoDt: coco object with detection results\n        :return: None\n        '''\n        if not iouType:\n            print('iouType not specified. use default iouType segm')\n        self.cocoGt   = cocoGt              # ground truth COCO API\n        self.cocoDt   = cocoDt              # detections COCO API\n        self.params   = {}                  # evaluation parameters\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results [KxAxI] elements\n        self.eval     = {}                  # accumulated evaluation results\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        self.params = Params(iouType=iouType) # parameters\n        self._paramsEval = {}               # parameters for evaluation\n        self.stats = []                     # result summarization\n        self.ious = {}                      # ious between all gts and dts\n        if not cocoGt is None:\n            self.params.imgIds = sorted(cocoGt.getImgIds())\n            self.params.catIds = sorted(cocoGt.getCatIds())\n\n\n    def _prepare(self):\n        '''\n        Prepare ._gts and ._dts for evaluation based on params\n        :return: None\n        '''\n        def _toMask(anns, coco):\n            # modify ann['segmentation'] by reference\n            for ann in anns:\n                rle = coco.annToRLE(ann)\n                ann['segmentation'] = rle\n        p = self.params\n        if p.useCats:\n            
gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds, catIds=p.catIds))\n        else:\n            gts=self.cocoGt.loadAnns(self.cocoGt.getAnnIds(imgIds=p.imgIds))\n            dts=self.cocoDt.loadAnns(self.cocoDt.getAnnIds(imgIds=p.imgIds))\n\n        # convert ground truth to mask if iouType == 'segm'\n        if p.iouType == 'segm':\n            _toMask(gts, self.cocoGt)\n            _toMask(dts, self.cocoDt)\n        # set ignore flag\n        for gt in gts:\n            gt['ignore'] = gt['ignore'] if 'ignore' in gt else 0\n            gt['ignore'] = 'iscrowd' in gt and gt['iscrowd']\n            if p.iouType == 'keypoints':\n                gt['ignore'] = (gt['num_keypoints'] == 0) or gt['ignore']\n        self._gts = defaultdict(list)       # gt for evaluation\n        self._dts = defaultdict(list)       # dt for evaluation\n        for gt in gts:\n            self._gts[gt['image_id'], gt['category_id']].append(gt)\n        for dt in dts:\n            self._dts[dt['image_id'], dt['category_id']].append(dt)\n        self.evalImgs = defaultdict(list)   # per-image per-category evaluation results\n        self.eval     = {}                  # accumulated evaluation results\n\n    def evaluate(self):\n        '''\n        Run per image evaluation on given images and store results (a list of dict) in self.evalImgs\n        :return: None\n        '''\n        tic = time.time()\n        print('Running per image evaluation...')\n        p = self.params\n        # add backward compatibility if useSegm is specified in params\n        if not p.useSegm is None:\n            p.iouType = 'segm' if p.useSegm == 1 else 'bbox'\n            print('useSegm (deprecated) is not None. 
Running {} evaluation'.format(p.iouType))\n        print('Evaluate annotation type *{}*'.format(p.iouType))\n        p.imgIds = list(np.unique(p.imgIds))\n        if p.useCats:\n            p.catIds = list(np.unique(p.catIds))\n        p.maxDets = sorted(p.maxDets)\n        self.params=p\n\n        self._prepare()\n        # loop through images, area range, max detection number\n        catIds = p.catIds if p.useCats else [-1]\n\n        if p.iouType == 'segm' or p.iouType == 'bbox':\n            computeIoU = self.computeIoU\n        elif p.iouType == 'keypoints':\n            computeIoU = self.computeOks\n        self.ious = {(imgId, catId): computeIoU(imgId, catId) \\\n                        for imgId in p.imgIds\n                        for catId in catIds}\n\n        evaluateImg = self.evaluateImg\n        maxDet = p.maxDets[-1]\n        self.evalImgs = [evaluateImg(imgId, catId, areaRng, maxDet)\n                 for catId in catIds\n                 for areaRng in p.areaRng\n                 for imgId in p.imgIds\n             ]\n        self._paramsEval = copy.deepcopy(self.params)\n        toc = time.time()\n        print('DONE (t={:0.2f}s).'.format(toc-tic))\n\n    def computeIoU(self, imgId, catId):\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return []\n        inds = np.argsort([-d['score'] for d in dt], kind='mergesort')\n        dt = [dt[i] for i in inds]\n        if len(dt) > p.maxDets[-1]:\n            dt=dt[0:p.maxDets[-1]]\n\n        if p.iouType == 'segm':\n            g = [g['segmentation'] for g in gt]\n            d = [d['segmentation'] for d in dt]\n        elif p.iouType == 'bbox':\n            g = [g['bbox'] for g in gt]\n            d = 
[d['bbox'] for d in dt]\n        else:\n            raise Exception('unknown iouType for iou computation')\n\n        # compute iou between each dt and gt region\n        iscrowd = [int(o['iscrowd']) for o in gt]\n        ious = maskUtils.iou(d,g,iscrowd)\n        return ious\n\n    def computeOks(self, imgId, catId):\n        p = self.params\n        # dimension here should be Nxm\n        gts = self._gts[imgId, catId]\n        dts = self._dts[imgId, catId]\n        inds = np.argsort([-d['score'] for d in dts], kind='mergesort')\n        dts = [dts[i] for i in inds]\n        if len(dts) > p.maxDets[-1]:\n            dts = dts[0:p.maxDets[-1]]\n        # if len(gts) == 0 and len(dts) == 0:\n        if len(gts) == 0 or len(dts) == 0:\n            return []\n        ious = np.zeros((len(dts), len(gts)))\n        sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62, 1.07, 1.07, .87, .87, .89, .89])/10.0\n        vars = (sigmas * 2)**2\n        k = len(sigmas)\n        # compute oks between each detection and ground truth object\n        for j, gt in enumerate(gts):\n            # create bounds for ignore regions (double the gt bbox)\n            g = np.array(gt['keypoints'])\n            xg = g[0::3]; yg = g[1::3]; vg = g[2::3]\n            k1 = np.count_nonzero(vg > 0)\n            bb = gt['bbox']\n            x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2\n            y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2\n            for i, dt in enumerate(dts):\n                d = np.array(dt['keypoints'])\n                xd = d[0::3]; yd = d[1::3]\n                if k1>0:\n                    # measure the per-keypoint distance if keypoints visible\n                    dx = xd - xg\n                    dy = yd - yg\n                else:\n                    # measure minimum distance to keypoints in (x0,y0) & (x1,y1)\n                    z = np.zeros((k))\n                    dx = np.max((z, x0-xd),axis=0)+np.max((z, xd-x1),axis=0)\n                    dy = 
np.max((z, y0-yd),axis=0)+np.max((z, yd-y1),axis=0)\n                e = (dx**2 + dy**2) / vars / (gt['area']+np.spacing(1)) / 2\n                if k1 > 0:\n                    e=e[vg > 0]\n                ious[i, j] = np.sum(np.exp(-e)) / e.shape[0]\n        return ious\n\n    def evaluateImg(self, imgId, catId, aRng, maxDet):\n        '''\n        perform evaluation for single category and image\n        :return: dict (single image results)\n        '''\n        p = self.params\n        if p.useCats:\n            gt = self._gts[imgId,catId]\n            dt = self._dts[imgId,catId]\n        else:\n            gt = [_ for cId in p.catIds for _ in self._gts[imgId,cId]]\n            dt = [_ for cId in p.catIds for _ in self._dts[imgId,cId]]\n        if len(gt) == 0 and len(dt) ==0:\n            return None\n\n        for g in gt:\n            if g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]):\n                g['_ignore'] = 1\n            else:\n                g['_ignore'] = 0\n\n        # sort dt highest score first, sort gt ignore last\n        gtind = np.argsort([g['_ignore'] for g in gt], kind='mergesort')\n        gt = [gt[i] for i in gtind]\n        dtind = np.argsort([-d['score'] for d in dt], kind='mergesort')\n        dt = [dt[i] for i in dtind[0:maxDet]]\n        iscrowd = [int(o['iscrowd']) for o in gt]\n        # load computed ious\n        ious = self.ious[imgId, catId][:, gtind] if len(self.ious[imgId, catId]) > 0 else self.ious[imgId, catId]\n\n        T = len(p.iouThrs)\n        G = len(gt)\n        D = len(dt)\n        gtm  = np.zeros((T,G))\n        dtm  = np.zeros((T,D))\n        gtIg = np.array([g['_ignore'] for g in gt])\n        dtIg = np.zeros((T,D))\n        if not len(ious)==0:\n            for tind, t in enumerate(p.iouThrs):\n                for dind, d in enumerate(dt):\n                    # information about best match so far (m=-1 -> unmatched)\n                    iou = min([t,1-1e-10])\n                    m   = -1\n        
            for gind, g in enumerate(gt):\n                        # if this gt already matched, and not a crowd, continue\n                        if gtm[tind,gind]>0 and not iscrowd[gind]:\n                            continue\n                        # if dt matched to reg gt, and on ignore gt, stop\n                        if m>-1 and gtIg[m]==0 and gtIg[gind]==1:\n                            break\n                        # continue to next gt unless better match made\n                        if ious[dind,gind] < iou:\n                            continue\n                        # if match successful and best so far, store appropriately\n                        iou=ious[dind,gind]\n                        m=gind\n                    # if match made store id of match for both dt and gt\n                    if m ==-1:\n                        continue\n                    dtIg[tind,dind] = gtIg[m]\n                    dtm[tind,dind]  = gt[m]['id']\n                    gtm[tind,m]     = d['id']\n        # set unmatched detections outside of area range to ignore\n        a = np.array([d['area']<aRng[0] or d['area']>aRng[1] for d in dt]).reshape((1, len(dt)))\n        dtIg = np.logical_or(dtIg, np.logical_and(dtm==0, np.repeat(a,T,0)))\n        # store results for given image and category\n        return {\n                'image_id':     imgId,\n                'category_id':  catId,\n                'aRng':         aRng,\n                'maxDet':       maxDet,\n                'dtIds':        [d['id'] for d in dt],\n                'gtIds':        [g['id'] for g in gt],\n                'dtMatches':    dtm,\n                'gtMatches':    gtm,\n                'dtScores':     [d['score'] for d in dt],\n                'gtIgnore':     gtIg,\n                'dtIgnore':     dtIg,\n            }\n\n    def accumulate(self, p = None):\n        '''\n        Accumulate per image evaluation results and store the result in self.eval\n        :param p: input params for 
evaluation\n        :return: None\n        '''\n        print('Accumulating evaluation results...')\n        tic = time.time()\n        if not self.evalImgs:\n            print('Please run evaluate() first')\n        # allows input customized parameters\n        if p is None:\n            p = self.params\n        p.catIds = p.catIds if p.useCats == 1 else [-1]\n        T           = len(p.iouThrs)\n        R           = len(p.recThrs)\n        K           = len(p.catIds) if p.useCats else 1\n        A           = len(p.areaRng)\n        M           = len(p.maxDets)\n        precision   = -np.ones((T,R,K,A,M)) # -1 for the precision of absent categories\n        recall      = -np.ones((T,K,A,M))\n\n        # create dictionary for future indexing\n        _pe = self._paramsEval\n        catIds = _pe.catIds if _pe.useCats else [-1]\n        setK = set(catIds)\n        setA = set(map(tuple, _pe.areaRng))\n        setM = set(_pe.maxDets)\n        setI = set(_pe.imgIds)\n        # get inds to evaluate\n        k_list = [n for n, k in enumerate(p.catIds)  if k in setK]\n        m_list = [m for n, m in enumerate(p.maxDets) if m in setM]\n        a_list = [n for n, a in enumerate(map(lambda x: tuple(x), p.areaRng)) if a in setA]\n        i_list = [n for n, i in enumerate(p.imgIds)  if i in setI]\n        I0 = len(_pe.imgIds)\n        A0 = len(_pe.areaRng)\n        # retrieve E at each category, area range, and max number of detections\n        for k, k0 in enumerate(k_list):\n            Nk = k0*A0*I0\n            for a, a0 in enumerate(a_list):\n                Na = a0*I0\n                for m, maxDet in enumerate(m_list):\n                    E = [self.evalImgs[Nk + Na + i] for i in i_list]\n                    E = [e for e in E if not e is None]\n                    if len(E) == 0:\n                        continue\n                    dtScores = np.concatenate([e['dtScores'][0:maxDet] for e in E])\n\n                    # different sorting method generates slightly 
different results.\n                    # mergesort is used to be consistent as Matlab implementation.\n                    inds = np.argsort(-dtScores, kind='mergesort')\n\n                    dtm  = np.concatenate([e['dtMatches'][:,0:maxDet] for e in E], axis=1)[:,inds]\n                    dtIg = np.concatenate([e['dtIgnore'][:,0:maxDet]  for e in E], axis=1)[:,inds]\n                    gtIg = np.concatenate([e['gtIgnore'] for e in E])\n                    npig = np.count_nonzero(gtIg==0 )\n                    if npig == 0:\n                        continue\n                    tps = np.logical_and(               dtm,  np.logical_not(dtIg) )\n                    fps = np.logical_and(np.logical_not(dtm), np.logical_not(dtIg) )\n\n                    tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)\n                    fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float)\n                    for t, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):\n                        tp = np.array(tp)\n                        fp = np.array(fp)\n                        nd = len(tp)\n                        rc = tp / npig\n                        pr = tp / (fp+tp+np.spacing(1))\n                        q  = np.zeros((R,))\n\n                        if nd:\n                            recall[t,k,a,m] = rc[-1]\n                        else:\n                            recall[t,k,a,m] = 0\n\n                        # numpy is slow without cython optimization for accessing elements\n                        # use python array gets significant speed improvement\n                        pr = pr.tolist(); q = q.tolist()\n\n                        for i in range(nd-1, 0, -1):\n                            if pr[i] > pr[i-1]:\n                                pr[i-1] = pr[i]\n\n                        inds = np.searchsorted(rc, p.recThrs, side='left')\n                        try:\n                            for ri, pi in enumerate(inds):\n                                q[ri] = 
pr[pi]\n                        except:\n                            pass\n                        precision[t,:,k,a,m] = np.array(q)\n        self.eval = {\n            'params': p,\n            'counts': [T, R, K, A, M],\n            'date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),\n            'precision': precision,\n            'recall':   recall,\n        }\n        toc = time.time()\n        print('DONE (t={:0.2f}s).'.format( toc-tic))\n\n    def summarize(self):\n        '''\n        Compute and display summary metrics for evaluation results.\n        Note this function can *only* be applied on the default parameter setting\n        '''\n        def _summarize( ap=1, iouThr=None, areaRng='all', maxDets=100 ):\n            p = self.params\n            iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'\n            titleStr = 'Average Precision' if ap == 1 else 'Average Recall'\n            typeStr = '(AP)' if ap==1 else '(AR)'\n            iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \\\n                if iouThr is None else '{:0.2f}'.format(iouThr)\n\n            aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]\n            mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]\n            if ap == 1:\n                # dimension of precision: [TxRxKxAxM]\n                s = self.eval['precision']\n                # IoU\n                if iouThr is not None:\n                    t = np.where(iouThr == p.iouThrs)[0]\n                    s = s[t]\n                s = s[:,:,:,aind,mind]\n            else:\n                # dimension of recall: [TxKxAxM]\n                s = self.eval['recall']\n                if iouThr is not None:\n                    t = np.where(iouThr == p.iouThrs)[0]\n                    s = s[t]\n                s = s[:,:,aind,mind]\n            if len(s[s>-1])==0:\n                mean_s = -1\n            else:\n                mean_s = 
np.mean(s[s>-1])\n            print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))\n            return mean_s\n        def _summarizeDets():\n            stats = np.zeros((12,))\n            stats[0] = _summarize(1)\n            stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])\n            stats[2] = _summarize(1, iouThr=.75, maxDets=self.params.maxDets[2])\n            stats[3] = _summarize(1, areaRng='small', maxDets=self.params.maxDets[2])\n            stats[4] = _summarize(1, areaRng='medium', maxDets=self.params.maxDets[2])\n            stats[5] = _summarize(1, areaRng='large', maxDets=self.params.maxDets[2])\n            stats[6] = _summarize(0, maxDets=self.params.maxDets[0])\n            stats[7] = _summarize(0, maxDets=self.params.maxDets[1])\n            stats[8] = _summarize(0, maxDets=self.params.maxDets[2])\n            stats[9] = _summarize(0, areaRng='small', maxDets=self.params.maxDets[2])\n            stats[10] = _summarize(0, areaRng='medium', maxDets=self.params.maxDets[2])\n            stats[11] = _summarize(0, areaRng='large', maxDets=self.params.maxDets[2])\n            return stats\n        def _summarizeKps():\n            stats = np.zeros((10,))\n            stats[0] = _summarize(1, maxDets=20)\n            stats[1] = _summarize(1, maxDets=20, iouThr=.5)\n            stats[2] = _summarize(1, maxDets=20, iouThr=.75)\n            stats[3] = _summarize(1, maxDets=20, areaRng='medium')\n            stats[4] = _summarize(1, maxDets=20, areaRng='large')\n            stats[5] = _summarize(0, maxDets=20)\n            stats[6] = _summarize(0, maxDets=20, iouThr=.5)\n            stats[7] = _summarize(0, maxDets=20, iouThr=.75)\n            stats[8] = _summarize(0, maxDets=20, areaRng='medium')\n            stats[9] = _summarize(0, maxDets=20, areaRng='large')\n            return stats\n        if not self.eval:\n            raise Exception('Please run accumulate() first')\n        iouType = self.params.iouType\n  
      if iouType == 'segm' or iouType == 'bbox':\n            summarize = _summarizeDets\n        elif iouType == 'keypoints':\n            summarize = _summarizeKps\n        self.stats = summarize()\n\n    def __str__(self):\n        self.summarize()\n\nclass Params:\n    '''\n    Params for coco evaluation api\n    '''\n    def setDetParams(self):\n        self.imgIds = []\n        self.catIds = []\n        # np.arange causes trouble.  the data point on arange is slightly larger than the true value\n        self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)\n        self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True)\n        self.maxDets = [1, 10, 100]\n        self.areaRng = [[0 ** 2, 1e5 ** 2], [0 ** 2, 32 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]\n        self.areaRngLbl = ['all', 'small', 'medium', 'large']\n        self.useCats = 1\n\n    def setKpParams(self):\n        self.imgIds = []\n        self.catIds = []\n        # np.arange causes trouble.  the data point on arange is slightly larger than the true value\n        self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True)\n        self.recThrs = np.linspace(.0, 1.00, np.round((1.00 - .0) / .01) + 1, endpoint=True)\n        self.maxDets = [20]\n        self.areaRng = [[0 ** 2, 1e5 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e5 ** 2]]\n        self.areaRngLbl = ['all', 'medium', 'large']\n        self.useCats = 1\n\n    def __init__(self, iouType='segm'):\n        if iouType == 'segm' or iouType == 'bbox':\n            self.setDetParams()\n        elif iouType == 'keypoints':\n            self.setKpParams()\n        else:\n            raise Exception('iouType not supported')\n        self.iouType = iouType\n        # useSegm is deprecated\n        self.useSegm = None"
  },
  {
    "path": "lib/pycocotools/license.txt",
    "content": "Copyright (c) 2014, Piotr Dollar and Tsung-Yi Lin\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met: \n\n1. Redistributions of source code must retain the above copyright notice, this\n   list of conditions and the following disclaimer. \n2. Redistributions in binary form must reproduce the above copyright notice,\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution. \n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nThe views and conclusions contained in the software and documentation are those\nof the authors and should not be interpreted as representing official policies, \neither expressed or implied, of the FreeBSD Project.\n"
  },
  {
    "path": "lib/pycocotools/mask.py",
    "content": "__author__ = 'tsungyi'\n\nfrom . import _mask\n\n# Interface for manipulating masks stored in RLE format.\n#\n# RLE is a simple yet efficient format for storing binary masks. RLE\n# first divides a vector (or vectorized image) into a series of piecewise\n# constant regions and then for each piece simply stores the length of\n# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would\n# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1]\n# (note that the odd counts are always the numbers of zeros). Instead of\n# storing the counts directly, additional compression is achieved with a\n# variable bitrate representation based on a common scheme called LEB128.\n#\n# Compression is greatest given large piecewise constant regions.\n# Specifically, the size of the RLE is proportional to the number of\n# *boundaries* in M (or for an image the number of boundaries in the y\n# direction). Assuming fairly simple shapes, the RLE representation is\n# O(sqrt(n)) where n is number of pixels in the object. Hence space usage\n# is substantially lower, especially for large simple objects (large n).\n#\n# Many common operations on masks can be computed directly using the RLE\n# (without need for decoding). This includes computations such as area,\n# union, intersection, etc. All of these operations are linear in the\n# size of the RLE, in other words they are O(sqrt(n)) where n is the area\n# of the object. 
Computing these operations on the original mask is O(n).\n# Thus, using the RLE can result in substantial computational savings.\n#\n# The following API functions are defined:\n#  encode         - Encode binary masks using RLE.\n#  decode         - Decode binary masks encoded via RLE.\n#  merge          - Compute union or intersection of encoded masks.\n#  iou            - Compute intersection over union between masks.\n#  area           - Compute area of encoded masks.\n#  toBbox         - Get bounding boxes surrounding encoded masks.\n#  frPyObjects    - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.\n#\n# Usage:\n#  Rs     = encode( masks )\n#  masks  = decode( Rs )\n#  R      = merge( Rs, intersect=false )\n#  o      = iou( dt, gt, iscrowd )\n#  a      = area( Rs )\n#  bbs    = toBbox( Rs )\n#  Rs     = frPyObjects( [pyObjects], h, w )\n#\n# In the API the following formats are used:\n#  Rs      - [dict] Run-length encoding of binary masks\n#  R       - dict Run-length encoding of binary mask\n#  masks   - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order)\n#  iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore\n#  bbs     - [nx4] Bounding box(es) stored as [x y w h]\n#  poly    - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list)\n#  dt,gt   - May be either bounding boxes or encoded masks\n# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel).\n#\n# Finally, a note about the intersection over union (iou) computation.\n# The standard iou of a ground truth (gt) and detected (dt) object is\n#  iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt))\n# For \"crowd\" regions, we use a modified criteria. If a gt object is\n# marked as \"iscrowd\", we allow a dt to match any subregion of the gt.\n# Choosing gt' in the crowd gt that best matches the dt can be done using\n# gt'=intersect(dt,gt). 
Since by definition union(gt',dt)=dt, computing\n#  iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt)\n# For crowd gt regions we use this modified criteria above for the iou.\n#\n# To compile run \"python setup.py build_ext --inplace\"\n# Please do not contact us for help with compiling.\n#\n# Microsoft COCO Toolbox.      version 2.0\n# Data, paper, and tutorials available at:  http://mscoco.org/\n# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n# Licensed under the Simplified BSD License [see coco/license.txt]\n\nencode      = _mask.encode\ndecode      = _mask.decode\niou         = _mask.iou\nmerge       = _mask.merge\narea        = _mask.area\ntoBbox      = _mask.toBbox\nfrPyObjects = _mask.frPyObjects"
  },
  {
    "path": "lib/pycocotools/maskApi.c",
    "content": "/**************************************************************************\n* Microsoft COCO Toolbox.      version 2.0\n* Data, paper, and tutorials available at:  http://mscoco.org/\n* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n* Licensed under the Simplified BSD License [see coco/license.txt]\n**************************************************************************/\n#include \"maskApi.h\"\n#include <math.h>\n#include <stdlib.h>\n\nuint umin( uint a, uint b ) { return (a<b) ? a : b; }\nuint umax( uint a, uint b ) { return (a>b) ? a : b; }\n\nvoid rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ) {\n  R->h=h; R->w=w; R->m=m; R->cnts=(m==0)?0:malloc(sizeof(uint)*m);\n  if(cnts) for(siz j=0; j<m; j++) R->cnts[j]=cnts[j];\n}\n\nvoid rleFree( RLE *R ) {\n  free(R->cnts); R->cnts=0;\n}\n\nvoid rlesInit( RLE **R, siz n ) {\n  *R = (RLE*) malloc(sizeof(RLE)*n);\n  for(siz i=0; i<n; i++) rleInit((*R)+i,0,0,0,0);\n}\n\nvoid rlesFree( RLE **R, siz n ) {\n  for(siz i=0; i<n; i++) rleFree((*R)+i); free(*R); *R=0;\n}\n\nvoid rleEncode( RLE *R, const byte *M, siz h, siz w, siz n ) {\n  siz i, j, k, a=w*h; uint c, *cnts; byte p;\n  cnts = malloc(sizeof(uint)*(a+1));\n  for(i=0; i<n; i++) {\n    const byte *T=M+a*i; k=0; p=0; c=0;\n    for(j=0; j<a; j++) { if(T[j]!=p) { cnts[k++]=c; c=0; p=T[j]; } c++; }\n    cnts[k++]=c; rleInit(R+i,h,w,k,cnts);\n  }\n  free(cnts);\n}\n\nvoid rleDecode( const RLE *R, byte *M, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    byte v=0; for( siz j=0; j<R[i].m; j++ ) {\n      for( siz k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; v=!v; }}\n}\n\nvoid rleMerge( const RLE *R, RLE *M, siz n, bool intersect ) {\n  uint *cnts, c, ca, cb, cc, ct; bool v, va, vb, vp;\n  siz i, a, b, h=R[0].h, w=R[0].w, m=R[0].m; RLE A, B;\n  if(n==0) { rleInit(M,0,0,0,0); return; }\n  if(n==1) { rleInit(M,h,w,m,R[0].cnts); return; }\n  cnts = malloc(sizeof(uint)*(h*w+1));\n  for( a=0; a<m; a++ ) cnts[a]=R[0].cnts[a];\n  for( i=1; i<n; i++ ) {\n    B=R[i]; 
if(B.h!=h||B.w!=w) { h=w=m=0; break; }\n    rleInit(&A,h,w,m,cnts); ca=A.cnts[0]; cb=B.cnts[0];\n    v=va=vb=0; m=0; a=b=1; cc=0; ct=1;\n    while( ct>0 ) {\n      c=umin(ca,cb); cc+=c; ct=0;\n      ca-=c; if(!ca && a<A.m) { ca=A.cnts[a++]; va=!va; } ct+=ca;\n      cb-=c; if(!cb && b<B.m) { cb=B.cnts[b++]; vb=!vb; } ct+=cb;\n      vp=v; if(intersect) v=va&&vb; else v=va||vb;\n      if( v!=vp||ct==0 ) { cnts[m++]=cc; cc=0; }\n    }\n    rleFree(&A);\n  }\n  rleInit(M,h,w,m,cnts); free(cnts);\n}\n\nvoid rleArea( const RLE *R, siz n, uint *a ) {\n  for( siz i=0; i<n; i++ ) {\n    a[i]=0; for( siz j=1; j<R[i].m; j+=2 ) a[i]+=R[i].cnts[j]; }\n}\n\nvoid rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o ) {\n  siz g, d; BB db, gb; bool crowd;\n  db=malloc(sizeof(double)*m*4); rleToBbox(dt,db,m);\n  gb=malloc(sizeof(double)*n*4); rleToBbox(gt,gb,n);\n  bbIou(db,gb,m,n,iscrowd,o); free(db); free(gb);\n  for( g=0; g<n; g++ ) for( d=0; d<m; d++ ) if(o[g*m+d]>0) {\n    crowd=iscrowd!=NULL && iscrowd[g];\n    if(dt[d].h!=gt[g].h || dt[d].w!=gt[g].w) { o[g*m+d]=-1; continue; }\n    siz ka, kb, a, b; uint c, ca, cb, ct, i, u; bool va, vb;\n    ca=dt[d].cnts[0]; ka=dt[d].m; va=vb=0;\n    cb=gt[g].cnts[0]; kb=gt[g].m; a=b=1; i=u=0; ct=1;\n    while( ct>0 ) {\n      c=umin(ca,cb); if(va||vb) { u+=c; if(va&&vb) i+=c; } ct=0;\n      ca-=c; if(!ca && a<ka) { ca=dt[d].cnts[a++]; va=!va; } ct+=ca;\n      cb-=c; if(!cb && b<kb) { cb=gt[g].cnts[b++]; vb=!vb; } ct+=cb;\n    }\n    if(i==0) u=1; else if(crowd) rleArea(dt+d,1,&u);\n    o[g*m+d] = (double)i/(double)u;\n  }\n}\n\nvoid bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) {\n  double h, w, i, u, ga, da; siz g, d; bool crowd;\n  for( g=0; g<n; g++ ) {\n    BB G=gt+g*4; ga=G[2]*G[3]; crowd=iscrowd!=NULL && iscrowd[g];\n    for( d=0; d<m; d++ ) {\n      BB D=dt+d*4; da=D[2]*D[3]; o[g*m+d]=0;\n      w=fmin(D[2]+D[0],G[2]+G[0])-fmax(D[0],G[0]); if(w<=0) continue;\n      
h=fmin(D[3]+D[1],G[3]+G[1])-fmax(D[1],G[1]); if(h<=0) continue;\n      i=w*h; u = crowd ? da : da+ga-i; o[g*m+d]=i/u;\n    }\n  }\n}\n\nvoid rleToBbox( const RLE *R, BB bb, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    uint h, w, x, y, xs, ys, xe, ye, cc, t; siz j, m;\n    h=(uint)R[i].h; w=(uint)R[i].w; m=R[i].m;\n    m=((siz)(m/2))*2; xs=w; ys=h; xe=ye=0; cc=0;\n    if(m==0) { bb[4*i+0]=bb[4*i+1]=bb[4*i+2]=bb[4*i+3]=0; continue; }\n    for( j=0; j<m; j++ ) {\n      cc+=R[i].cnts[j]; t=cc-j%2; y=t%h; x=(t-y)/h;\n      xs=umin(xs,x); xe=umax(xe,x); ys=umin(ys,y); ye=umax(ye,y);\n    }\n    bb[4*i+0]=xs; bb[4*i+2]=xe-xs+1;\n    bb[4*i+1]=ys; bb[4*i+3]=ye-ys+1;\n  }\n}\n\nvoid rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n ) {\n  for( siz i=0; i<n; i++ ) {\n    double xs=bb[4*i+0], xe=xs+bb[4*i+2];\n    double ys=bb[4*i+1], ye=ys+bb[4*i+3];\n    double xy[8] = {xs,ys,xs,ye,xe,ye,xe,ys};\n    rleFrPoly( R+i, xy, 4, h, w );\n  }\n}\n\nint uintCompare(const void *a, const void *b) {\n  uint c=*((uint*)a), d=*((uint*)b); return c>d?1:c<d?-1:0;\n}\n\nvoid rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w ) {\n  // upsample and get discrete points densely along entire boundary\n  siz j, m=0; double scale=5; int *x, *y, *u, *v; uint *a, *b;\n  x=malloc(sizeof(int)*(k+1)); y=malloc(sizeof(int)*(k+1));\n  for(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); x[k]=x[0];\n  for(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); y[k]=y[0];\n  for(j=0; j<k; j++) m+=umax(abs(x[j]-x[j+1]),abs(y[j]-y[j+1]))+1;\n  u=malloc(sizeof(int)*m); v=malloc(sizeof(int)*m); m=0;\n  for( j=0; j<k; j++ ) {\n    int xs=x[j], xe=x[j+1], ys=y[j], ye=y[j+1], dx, dy, t;\n    bool flip; double s; dx=abs(xe-xs); dy=abs(ys-ye);\n    flip = (dx>=dy && xs>xe) || (dx<dy && ys>ye);\n    if(flip) { t=xs; xs=xe; xe=t; t=ys; ys=ye; ye=t; }\n    s = dx>=dy ? 
(double)(ye-ys)/dx : (double)(xe-xs)/dy;\n    if(dx>=dy) for( int d=0; d<=dx; d++ ) {\n      t=flip?dx-d:d; u[m]=t+xs; v[m]=(int)(ys+s*t+.5); m++;\n    } else for( int d=0; d<=dy; d++ ) {\n      t=flip?dy-d:d; v[m]=t+ys; u[m]=(int)(xs+s*t+.5); m++;\n    }\n  }\n  // get points along y-boundary and downsample\n  free(x); free(y); k=m; m=0; double xd, yd;\n  x=malloc(sizeof(int)*k); y=malloc(sizeof(int)*k);\n  for( j=1; j<k; j++ ) if(u[j]!=u[j-1]) {\n    xd=(double)(u[j]<u[j-1]?u[j]:u[j]-1); xd=(xd+.5)/scale-.5;\n    if( floor(xd)!=xd || xd<0 || xd>w-1 ) continue;\n    yd=(double)(v[j]<v[j-1]?v[j]:v[j-1]); yd=(yd+.5)/scale-.5;\n    if(yd<0) yd=0; else if(yd>h) yd=h; yd=ceil(yd);\n    x[m]=(int) xd; y[m]=(int) yd; m++;\n  }\n  // compute rle encoding given y-boundary points\n  k=m; a=malloc(sizeof(uint)*(k+1));\n  for( j=0; j<k; j++ ) a[j]=(uint)(x[j]*(int)(h)+y[j]);\n  a[k++]=(uint)(h*w); free(u); free(v); free(x); free(y);\n  qsort(a,k,sizeof(uint),uintCompare); uint p=0;\n  for( j=0; j<k; j++ ) { uint t=a[j]; a[j]-=p; p=t; }\n  b=malloc(sizeof(uint)*k); j=m=0; b[m++]=a[j++];\n  while(j<k) if(a[j]>0) b[m++]=a[j++]; else {\n    j++; if(j<k) b[m-1]+=a[j++]; }\n  rleInit(R,h,w,m,b); free(a); free(b);\n}\n\nchar* rleToString( const RLE *R ) {\n  // Similar to LEB128 but using 6 bits/char and ascii chars 48-111.\n  siz i, m=R->m, p=0; long x; bool more;\n  char *s=malloc(sizeof(char)*m*6);\n  for( i=0; i<m; i++ ) {\n    x=(long) R->cnts[i]; if(i>2) x-=(long) R->cnts[i-2]; more=1;\n    while( more ) {\n      char c=x & 0x1f; x >>= 5; more=(c & 0x10) ? 
x!=-1 : x!=0;\n      if(more) c |= 0x20; c+=48; s[p++]=c;\n    }\n  }\n  s[p]=0; return s;\n}\n\nvoid rleFrString( RLE *R, char *s, siz h, siz w ) {\n  siz m=0, p=0, k; long x; bool more; uint *cnts;\n  while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0;\n  while( s[p] ) {\n    x=0; k=0; more=1;\n    while( more ) {\n      char c=s[p]-48; x |= (c & 0x1f) << 5*k;\n      more = c & 0x20; p++; k++;\n      if(!more && (c & 0x10)) x |= -1 << 5*k;\n    }\n    if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x;\n  }\n  rleInit(R,h,w,m,cnts); free(cnts);\n}\n"
  },
  {
    "path": "lib/pycocotools/maskApi.h",
    "content": "/**************************************************************************\n* Microsoft COCO Toolbox.      version 2.0\n* Data, paper, and tutorials available at:  http://mscoco.org/\n* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.\n* Licensed under the Simplified BSD License [see coco/license.txt]\n**************************************************************************/\n#pragma once\n#include <stdbool.h>\n\ntypedef unsigned int uint;\ntypedef unsigned long siz;\ntypedef unsigned char byte;\ntypedef double* BB;\ntypedef struct { siz h, w, m; uint *cnts; } RLE;\n\n// Initialize/destroy RLE.\nvoid rleInit( RLE *R, siz h, siz w, siz m, uint *cnts );\nvoid rleFree( RLE *R );\n\n// Initialize/destroy RLE array.\nvoid rlesInit( RLE **R, siz n );\nvoid rlesFree( RLE **R, siz n );\n\n// Encode binary masks using RLE.\nvoid rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n );\n\n// Decode binary masks encoded via RLE.\nvoid rleDecode( const RLE *R, byte *mask, siz n );\n\n// Compute union or intersection of encoded masks.\nvoid rleMerge( const RLE *R, RLE *M, siz n, bool intersect );\n\n// Compute area of encoded masks.\nvoid rleArea( const RLE *R, siz n, uint *a );\n\n// Compute intersection over union between masks.\nvoid rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o );\n\n// Compute intersection over union between bounding boxes.\nvoid bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o );\n\n// Get bounding boxes surrounding encoded masks.\nvoid rleToBbox( const RLE *R, BB bb, siz n );\n\n// Convert bounding boxes to encoded masks.\nvoid rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n );\n\n// Convert polygon to encoded mask.\nvoid rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w );\n\n// Get compressed string representation of encoded mask.\nchar* rleToString( const RLE *R );\n\n// Convert from compressed string representation of encoded mask.\nvoid rleFrString( RLE *R, char *s, siz h, siz w );\n"
  },
  {
    "path": "lib/roi_data_layer/__init__.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n"
  },
  {
    "path": "lib/roi_data_layer/minibatch.py",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Xinlei Chen\n# --------------------------------------------------------\n\n\"\"\"Compute minibatch blobs for training a Fast R-CNN network.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n# import numpy.random as npr\nfrom scipy.misc import imread\nfrom model.utils.config import cfg\nfrom model.utils.blob import prep_im_for_blob, im_list_to_blob\nimport pdb\ndef get_minibatch(roidb, num_classes, random_scale_inds):\n  \"\"\"Given a roidb, construct a minibatch sampled from it.\"\"\"\n  num_images = len(roidb)\n  # Sample random scales to use for each image in this batch\n  # random_scale_inds = npr.randint(0, high=len(cfg.TRAIN.SCALES),\n  #                 size=num_images)\n  assert(cfg.TRAIN.BATCH_SIZE % num_images == 0), \\\n    'num_images ({}) must divide BATCH_SIZE ({})'. 
\\\n    format(num_images, cfg.TRAIN.BATCH_SIZE)\n\n  # Get the input image blob, formatted for caffe\n  im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)\n\n  blobs = {'data': im_blob}\n\n  assert len(im_scales) == 1, \"Single batch only\"\n  assert len(roidb) == 1, \"Single batch only\"\n  \n  # gt boxes: (x1, y1, x2, y2, cls)\n  if cfg.TRAIN.USE_ALL_GT:\n    # Include all ground truth boxes\n    gt_inds = np.where(roidb[0]['gt_classes'] != 0)[0]\n  else:\n    # For the COCO ground truth boxes, exclude the ones that are ''iscrowd'' \n    gt_inds = np.where(roidb[0]['gt_classes'] != 0 & np.all(roidb[0]['gt_overlaps'].toarray() > -1.0, axis=1))[0]\n  gt_boxes = np.empty((len(gt_inds), 5), dtype=np.float32)\n  gt_boxes[:, 0:4] = roidb[0]['boxes'][gt_inds, :] * im_scales[0]\n  gt_boxes[:, 4] = roidb[0]['gt_classes'][gt_inds]\n  blobs['gt_boxes'] = gt_boxes\n  blobs['im_info'] = np.array(\n    [[im_blob.shape[1], im_blob.shape[2], im_scales[0]]],\n    dtype=np.float32)\n\n  blobs['img_id'] = roidb[0]['img_id']\n\n  return blobs\n\ndef _get_image_blob(roidb, scale_inds):\n  \"\"\"Builds an input blob from the images in the roidb at the specified\n  scales.\n  \"\"\"\n  num_images = len(roidb)\n\n  processed_ims = []\n  im_scales = []\n  for i in range(num_images):\n    #im = cv2.imread(roidb[i]['image'])\n    im = imread(roidb[i]['image'])\n\n    if len(im.shape) == 2:\n      im = im[:,:,np.newaxis]\n      im = np.concatenate((im,im,im), axis=2)\n    # flip the channel, since the original one using cv2\n    # rgb -> bgr\n    im = im[:,:,::-1]\n\n    if roidb[i]['flipped']:\n      im = im[:, ::-1, :]\n    target_size = cfg.TRAIN.SCALES[scale_inds[i]]\n    im, im_scale = prep_im_for_blob(im, cfg.PIXEL_MEANS, target_size,\n                    cfg.TRAIN.MAX_SIZE)\n    im_scales.append(im_scale)\n    processed_ims.append(im)\n\n  # Create a blob to hold the input images\n  blob = im_list_to_blob(processed_ims)\n\n  return blob, im_scales\n"
  },
  {
    "path": "lib/roi_data_layer/roibatchLoader.py",
    "content": "\n\"\"\"The data layer used during training to train a Fast R-CNN network.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport torch.utils.data as data\nfrom PIL import Image\nimport torch\n\nfrom model.utils.config import cfg\nfrom roi_data_layer.minibatch import get_minibatch, get_minibatch\nfrom model.rpn.bbox_transform import bbox_transform_inv, clip_boxes\n\nimport numpy as np\nimport numpy.random as npr\nimport random\nimport time\nimport pdb\n\nclass roibatchLoader(data.Dataset):\n  def __init__(self, roidb, ratio_list, ratio_index, batch_size, num_classes, training=True, normalize=None):\n    self._roidb = roidb\n    self._num_classes = num_classes\n    # we make the height of image consistent to trim_height, trim_width\n    self.trim_height = cfg.TRAIN.TRIM_HEIGHT\n    self.trim_width = cfg.TRAIN.TRIM_WIDTH\n    self.max_num_box = cfg.MAX_NUM_GT_BOXES\n    self.training = training\n    self.normalize = normalize\n    self.ratio_list = ratio_list\n    self.ratio_index = ratio_index\n    self.batch_size = batch_size\n    self.data_size = len(self.ratio_list)\n\n    # given the ratio_list, we want to make the ratio same for each batch.\n    self.ratio_list_batch = torch.Tensor(self.data_size).zero_()\n    num_batch = int(np.ceil(len(ratio_index) / batch_size))\n    for i in range(num_batch):\n        left_idx = i*batch_size\n        right_idx = min((i+1)*batch_size-1, self.data_size-1)\n\n        if ratio_list[right_idx] < 1:\n            # for ratio < 1, we preserve the leftmost in each batch.\n            target_ratio = ratio_list[left_idx]\n        elif ratio_list[left_idx] > 1:\n            # for ratio > 1, we preserve the rightmost in each batch.\n            target_ratio = ratio_list[right_idx]\n        else:\n            # for ratio cross 1, we make it to be 1.\n            target_ratio = 1\n\n        self.ratio_list_batch[left_idx:(right_idx+1)] = 
target_ratio\n\n\n  def __getitem__(self, index):\n    if self.training:\n        index_ratio = int(self.ratio_index[index])\n    else:\n        index_ratio = index\n\n    # get the anchor index for current sample index\n    # here we set the anchor index to the last one\n    # sample in this group\n    minibatch_db = [self._roidb[index_ratio]]\n    if not index % self.batch_size:\n        # Sample random scales to use for each image in this batch\n        self.random_scale_inds = npr.randint(0, high=len(cfg.TRAIN.SCALES),\n                                             size=len(minibatch_db))\n    blobs = get_minibatch(minibatch_db, self._num_classes, self.random_scale_inds)\n    img_id = blobs['img_id']\n    data = torch.from_numpy(blobs['data'])\n    im_info = torch.from_numpy(blobs['im_info'])\n    # we need to random shuffle the bounding box.\n    data_height, data_width = data.size(1), data.size(2)\n\n    if self.training:\n        np.random.shuffle(blobs['gt_boxes'])\n        gt_boxes = torch.from_numpy(blobs['gt_boxes'])\n\n        ########################################################\n        # padding the input image to fixed size for each group #\n        ########################################################\n\n        # NOTE1: need to cope with the case where a group cover both conditions. (done)\n        # NOTE2: need to consider the situation for the tail samples. (no worry)\n        # NOTE3: need to implement a parallel data loader. 
(no worry)\n        # get the index range\n\n        # if the image need to crop, crop to the target size.\n        ratio = self.ratio_list_batch[index]\n\n        if self._roidb[index_ratio]['need_crop']:\n            if ratio < 1:\n                # this means that data_width << data_height, we need to crop the\n                # data_height\n                min_y = int(torch.min(gt_boxes[:,1]))\n                max_y = int(torch.max(gt_boxes[:,3]))\n                trim_size = int(np.floor(data_width / ratio))\n                box_region = max_y - min_y + 1\n                if min_y == 0:\n                    y_s = 0\n                else:\n                    if (box_region-trim_size) < 0:\n                        y_s_min = max(max_y-trim_size, 0)\n                        y_s_max = min(min_y, data_height-trim_size)\n                        if y_s_min == y_s_max:\n                            y_s = y_s_min\n                        else:\n                            y_s = np.random.choice(range(y_s_min, y_s_max))\n                    else:\n                        y_s_add = int((box_region-trim_size)/2)\n                        if y_s_add == 0:\n                            y_s = min_y\n                        else:\n                            y_s = np.random.choice(range(min_y, min_y+y_s_add))\n                # crop the image\n                data = data[:, y_s:(y_s + trim_size), :, :]\n\n                # shift y coordiante of gt_boxes\n                gt_boxes[:, 1] = gt_boxes[:, 1] - float(y_s)\n                gt_boxes[:, 3] = gt_boxes[:, 3] - float(y_s)\n\n                # update gt bounding box according the trip\n                gt_boxes[:, 1].clamp_(0, trim_size - 1)\n                gt_boxes[:, 3].clamp_(0, trim_size - 1)\n\n            else:\n                # this means that data_width >> data_height, we need to crop the\n                # data_width\n                min_x = int(torch.min(gt_boxes[:,0]))\n                max_x = 
int(torch.max(gt_boxes[:,2]))\n                trim_size = int(np.ceil(data_height * ratio))\n                box_region = max_x - min_x + 1\n                if min_x == 0:\n                    x_s = 0\n                else:\n                    if (box_region-trim_size) < 0:\n                        x_s_min = max(max_x-trim_size, 0)\n                        x_s_max = min(min_x, data_width-trim_size)\n                        if x_s_min == x_s_max:\n                            x_s = x_s_min\n                        else:\n                            x_s = np.random.choice(range(x_s_min, x_s_max))\n                    else:\n                        x_s_add = int((box_region-trim_size)/2)\n                        if x_s_add == 0:\n                            x_s = min_x\n                        else:\n                            x_s = np.random.choice(range(min_x, min_x+x_s_add))\n                # crop the image\n                data = data[:, :, x_s:(x_s + trim_size), :]\n\n                # shift x coordiante of gt_boxes\n                gt_boxes[:, 0] = gt_boxes[:, 0] - float(x_s)\n                gt_boxes[:, 2] = gt_boxes[:, 2] - float(x_s)\n                # update gt bounding box according the trip\n                gt_boxes[:, 0].clamp_(0, trim_size - 1)\n                gt_boxes[:, 2].clamp_(0, trim_size - 1)\n\n        # based on the ratio, padding the image.\n        if ratio < 1:\n            # this means that data_width < data_height\n            trim_size = int(np.floor(data_width / ratio))\n\n            padding_data = torch.FloatTensor(int(np.ceil(data_width / ratio)), \\\n                                             data_width, 3).zero_()\n\n            padding_data[:data_height, :, :] = data[0]\n            # update im_info\n            im_info[0, 0] = padding_data.size(0)\n            # print(\"height %d %d \\n\" %(index, anchor_idx))\n        elif ratio > 1:\n            # this means that data_width > data_height\n            # if the image need to 
crop.\n            padding_data = torch.FloatTensor(data_height, \\\n                                             int(np.ceil(data_height * ratio)), 3).zero_()\n            padding_data[:, :data_width, :] = data[0]\n            im_info[0, 1] = padding_data.size(1)\n        else:\n            trim_size = min(data_height, data_width)\n            padding_data = torch.FloatTensor(trim_size, trim_size, 3).zero_()\n            padding_data = data[0][:trim_size, :trim_size, :]\n            gt_boxes[:, :4].clamp_(0, trim_size)\n            im_info[0, 0] = trim_size\n            im_info[0, 1] = trim_size\n\n\n        # check the bounding box:\n        not_keep = (gt_boxes[:,0] == gt_boxes[:,2]) | (gt_boxes[:,1] == gt_boxes[:,3])\n        keep = torch.nonzero(not_keep == 0).view(-1)\n\n        gt_boxes_padding = torch.FloatTensor(self.max_num_box, gt_boxes.size(1)).zero_()\n        if keep.numel() != 0:\n            gt_boxes = gt_boxes[keep]\n            num_boxes = min(gt_boxes.size(0), self.max_num_box)\n            gt_boxes_padding[:num_boxes,:] = gt_boxes[:num_boxes]\n        else:\n            num_boxes = 0\n\n            # permute trim_data to adapt to downstream processing\n        padding_data = padding_data.permute(2, 0, 1).contiguous()\n        im_info = im_info.view(3)\n\n        return padding_data, im_info, gt_boxes_padding, num_boxes\n    else:\n        data = data.permute(0, 3, 1, 2).contiguous().view(3, data_height, data_width)\n        im_info = im_info.view(3)\n\n        gt_boxes = torch.FloatTensor([1,1,1,1,1]).expand(1,1,-1)\n        num_boxes = 0\n\n        return data, im_info, gt_boxes, num_boxes, img_id\n\n  def __len__(self):\n    return len(self._roidb)\n"
  },
  {
    "path": "lib/roi_data_layer/roidb.py",
    "content": "\"\"\"Transform a roidb into a trainable roidb by adding a bunch of metadata.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport datasets\nimport numpy as np\nfrom model.utils.config import cfg\nfrom datasets.factory import get_imdb\nimport PIL\nimport pdb\nimport collections\n\ndef prepare_roidb(imdb):\n  \"\"\"Enrich the imdb's roidb by adding some derived quantities that\n  are useful for training. This function precomputes the maximum\n  overlap, taken over ground-truth boxes, between each ROI and\n  each ground-truth box. The class with maximum overlap is also\n  recorded.\n  \"\"\"\n\n  roidb = imdb.roidb\n  if not (imdb.name.startswith('coco') or imdb.name.startswith('vg')):\n    sizes = [PIL.Image.open(imdb.image_path_at(i)).size\n         for i in range(imdb.num_images)]\n         \n  for i in range(len(imdb.image_index)):\n    roidb[i]['img_id'] = imdb.image_id_at(i)\n    roidb[i]['image'] = imdb.image_path_at(i)\n    if not (imdb.name.startswith('coco') or imdb.name.startswith('vg')):\n      roidb[i]['width'] = sizes[i][0]\n      roidb[i]['height'] = sizes[i][1]\n    # need gt_overlaps as a dense array for argmax\n    gt_overlaps = roidb[i]['gt_overlaps'].toarray()\n    # max overlap with gt over classes (columns)\n    max_overlaps = gt_overlaps.max(axis=1)\n    # gt class that had the max overlap\n    max_classes = gt_overlaps.argmax(axis=1)\n    roidb[i]['max_classes'] = max_classes\n    roidb[i]['max_overlaps'] = max_overlaps\n    # sanity checks\n    # max overlap of 0 => class should be zero (background)\n    zero_inds = np.where(max_overlaps == 0)[0]\n    assert all(max_classes[zero_inds] == 0)\n    # max overlap > 0 => class should not be zero (must be a fg class)\n    nonzero_inds = np.where(max_overlaps > 0)[0]\n    assert all(max_classes[nonzero_inds] != 0)\n\ndef update_keyvalue(rdb, idx):\n    ## update the roidb keyvaule\n    r = rdb.copy()\n    keys 
= ['gt_classes','boxes']\n    for k in keys:\n        if isinstance(r[k], list):\n            r[k] = [rdb[k][idx]]\n        elif isinstance(r[k], np.ndarray):\n            # index the key first, then the entry (rdb[k[idx]] was a bug)\n            r[k] = np.array(rdb[k][idx], dtype=r[k].dtype)\n    return r\n\n\ndef filter_class_roidb(roidb, shot, imdb):\n  class_count = collections.defaultdict(int)\n  for cls in range(1, len(imdb.classes)):\n      class_count[cls] = 0\n  new_roidb = []\n  length = len(roidb) // 2 # consider the flipped\n  for idx, rdb in enumerate(roidb[:length]):\n    boxes = []\n    gt_classes = []\n    gt_overlaps = []\n    max_classes = []\n    max_overlaps = []\n\n    boxes_flipped = []\n    gt_classes_flipped = []\n    gt_overlaps_flipped = []\n    max_classes_flipped = []\n    max_overlaps_flipped = []\n    rdb_flipped = roidb[idx + length]\n    for i in range(len(rdb['gt_classes'])):\n      cls_id = rdb['gt_classes'][i]\n      if class_count[cls_id] < shot and cls_id > 15:\n        boxes.append(rdb['boxes'][i])\n        gt_classes.append(rdb['gt_classes'][i])\n        gt_overlaps.append(rdb['gt_overlaps'][i])\n        max_classes.append(rdb['max_classes'][i])\n        max_overlaps.append(rdb['max_overlaps'][i])\n\n        boxes_flipped.append(rdb_flipped['boxes'][i])\n        gt_classes_flipped.append(rdb_flipped['gt_classes'][i])\n        gt_overlaps_flipped.append(rdb_flipped['gt_overlaps'][i])\n        max_classes_flipped.append(rdb_flipped['max_classes'][i])\n        max_overlaps_flipped.append(rdb_flipped['max_overlaps'][i])\n        class_count[cls_id] += 1\n\n      elif cls_id <= 15:\n        boxes.append(rdb['boxes'][i])\n        gt_classes.append(rdb['gt_classes'][i])\n        gt_overlaps.append(rdb['gt_overlaps'][i])\n        max_classes.append(rdb['max_classes'][i])\n        max_overlaps.append(rdb['max_overlaps'][i])\n\n        boxes_flipped.append(rdb_flipped['boxes'][i])\n        gt_classes_flipped.append(rdb_flipped['gt_classes'][i])\n        
gt_overlaps_flipped.append(rdb_flipped['gt_overlaps'][i])\n        max_classes_flipped.append(rdb_flipped['max_classes'][i])\n        max_overlaps_flipped.append(rdb_flipped['max_overlaps'][i])\n        class_count[cls_id] += 1\n\n    if len(boxes) > 0:\n      new_roidb.append(\n        {'boxes': np.array(boxes, dtype=np.uint16), 'gt_classes': np.array(gt_classes, dtype=np.int32),\n         'gt_overlaps': gt_overlaps, 'flipped': rdb['flipped'], 'img_id': rdb['img_id'],\n         'image': rdb['image'],\n         'width': rdb['width'], 'height': rdb['height'], 'max_classes': np.array(max_classes),\n         'need_crop': rdb['need_crop'],\n         'max_overlaps': np.array(max_overlaps, dtype=np.float32)})\n\n      new_roidb.append(\n        {'boxes': np.array(boxes_flipped, dtype=np.uint16),\n         'gt_classes': np.array(gt_classes_flipped, dtype=np.int32),\n         'gt_overlaps': gt_overlaps_flipped, 'flipped': rdb_flipped['flipped'],\n         'img_id': rdb_flipped['img_id'],\n         'image': rdb_flipped['image'],\n         'width': rdb_flipped['width'], 'height': rdb_flipped['height'],\n         'max_classes': np.array(max_classes_flipped),\n         'need_crop': rdb_flipped['need_crop'],\n         'max_overlaps': np.array(max_overlaps_flipped, dtype=np.float32)})\n    \n  return new_roidb\n\n\ndef rank_roidb_ratio(roidb):\n    # rank roidb based on the ratio between width and height.\n    ratio_large = 2 # largest ratio to preserve.\n    ratio_small = 0.5 # smallest ratio to preserve.    
\n    \n    ratio_list = []\n    for i in range(len(roidb)):\n      width = roidb[i]['width']\n      height = roidb[i]['height']\n      ratio = width / float(height)\n\n      if ratio > ratio_large:\n        roidb[i]['need_crop'] = 1\n        ratio = ratio_large\n      elif ratio < ratio_small:\n        roidb[i]['need_crop'] = 1\n        ratio = ratio_small        \n      else:\n        roidb[i]['need_crop'] = 0\n\n      ratio_list.append(ratio)\n\n    ratio_list = np.array(ratio_list)\n    ratio_index = np.argsort(ratio_list)\n    return ratio_list[ratio_index], ratio_index\n\ndef filter_roidb(roidb):\n    # filter the image without bounding box.\n    print('before filtering, there are %d images...' % (len(roidb)))\n    i = 0\n    while i < len(roidb):\n      if len(roidb[i]['boxes']) == 0:\n        del roidb[i]\n        i -= 1\n      i += 1\n\n    print('after filtering, there are %d images...' % (len(roidb)))\n    return roidb\n\ndef combined_roidb(imdb_names, training=True):\n  \"\"\"\n  Combine multiple roidbs\n  \"\"\"\n\n  def get_training_roidb(imdb):\n    \"\"\"Returns a roidb (Region of Interest database) for use in training.\"\"\"\n    if cfg.TRAIN.USE_FLIPPED:\n      print('Appending horizontally-flipped training examples...')\n      imdb.append_flipped_images()\n      print('done')\n\n    print('Preparing training data...')\n\n    prepare_roidb(imdb)\n    print('done')\n    return imdb.roidb\n  \n  def get_roidb(imdb_name):\n    imdb = get_imdb(imdb_name)\n    print('Loaded dataset `{:s}` for training'.format(imdb.name))\n    imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD) #gt\n    print('Set proposal method: {:s}'.format(cfg.TRAIN.PROPOSAL_METHOD))\n    roidb = get_training_roidb(imdb)\n    return roidb\n\n  roidbs = [get_roidb(s) for s in imdb_names.split('+')]\n  roidb = roidbs[0]\n\n  if len(roidbs) > 1:\n    for r in roidbs[1:]:\n      roidb.extend(r)\n    tmp = get_imdb(imdb_names.split('+')[1])\n    imdb = datasets.imdb.imdb(imdb_names, 
tmp.classes)\n  else:\n    imdb = get_imdb(imdb_names)\n\n  if training:\n    roidb = filter_roidb(roidb)\n\n  ratio_list, ratio_index = rank_roidb_ratio(roidb)\n\n  return imdb, roidb, ratio_list, ratio_index\n"
  },
  {
    "path": "lib/setup.py",
    "content": "from __future__ import print_function\n# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick\n# --------------------------------------------------------\n\nimport os\nfrom os.path import join as pjoin\nimport numpy as np\nfrom distutils.core import setup\nfrom distutils.extension import Extension\nfrom Cython.Distutils import build_ext\n\n\ndef find_in_path(name, path):\n    \"Find a file in a search path\"\n    # adapted fom http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/\n    for dir in path.split(os.pathsep):\n        binpath = pjoin(dir, name)\n        if os.path.exists(binpath):\n            return os.path.abspath(binpath)\n    return None\n\n\n# def locate_cuda():\n#     \"\"\"Locate the CUDA environment on the system\n#\n#     Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'\n#     and values giving the absolute path to each directory.\n#\n#     Starts by looking for the CUDAHOME env variable. If not found, everything\n#     is based on finding 'nvcc' in the PATH.\n#     \"\"\"\n# \n#     # first check if the CUDAHOME env variable is in use\n#     if 'CUDAHOME' in os.environ:\n#         home = os.environ['CUDAHOME']\n#         nvcc = pjoin(home, 'bin', 'nvcc')\n#     else:\n#         # otherwise, search the PATH for NVCC\n#         default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')\n#         nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)\n#         if nvcc is None:\n#             raise EnvironmentError('The nvcc binary could not be '\n#                                    'located in your $PATH. 
Either add it to your path, or set $CUDAHOME')\n#         home = os.path.dirname(os.path.dirname(nvcc))\n#\n#     cudaconfig = {'home': home, 'nvcc': nvcc,\n#                   'include': pjoin(home, 'include'),\n#                   'lib64': pjoin(home, 'lib64')}\n#     for k, v in cudaconfig.iteritems():\n#         if not os.path.exists(v):\n#             raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))\n#\n#     return cudaconfig\n\n\n# CUDA = locate_cuda()\n\n# Obtain the numpy include directory.  This logic works across numpy versions.\ntry:\n    numpy_include = np.get_include()\nexcept AttributeError:\n    numpy_include = np.get_numpy_include()\n\n\ndef customize_compiler_for_nvcc(self):\n    \"\"\"inject deep into distutils to customize how the dispatch\n    to gcc/nvcc works.\n\n    If you subclass UnixCCompiler, it's not trivial to get your subclass\n    injected in, and still have the right customizations (i.e.\n    distutils.sysconfig.customize_compiler) run on it. So instead of going\n    the OO route, I have this. Note, it's kindof like a wierd functional\n    subclassing going on.\"\"\"\n\n    # tell the compiler it can processes .cu\n    self.src_extensions.append('.cu')\n\n    # save references to the default compiler_so and _comple methods\n    default_compiler_so = self.compiler_so\n    super = self._compile\n\n    # now redefine the _compile method. 
This gets executed for each\n    # object but distutils doesn't have the ability to change compilers\n    # based on source extension: we add it.\n    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):\n        print(extra_postargs)\n        if os.path.splitext(src)[1] == '.cu':\n            # use the cuda for .cu files\n            self.set_executable('compiler_so', CUDA['nvcc'])\n            # use only a subset of the extra_postargs, which are 1-1 translated\n            # from the extra_compile_args in the Extension class\n            postargs = extra_postargs['nvcc']\n        else:\n            postargs = extra_postargs['gcc']\n\n        super(obj, src, ext, cc_args, postargs, pp_opts)\n        # reset the default compiler_so, which we might have changed for cuda\n        self.compiler_so = default_compiler_so\n\n    # inject our redefined _compile method into the class\n    self._compile = _compile\n\n\n# run the customize_compiler\nclass custom_build_ext(build_ext):\n    def build_extensions(self):\n        customize_compiler_for_nvcc(self.compiler)\n        build_ext.build_extensions(self)\n\n\next_modules = [\n    Extension(\n        \"model.utils.cython_bbox\",\n        [\"model/utils/bbox.pyx\"],\n        extra_compile_args={'gcc': [\"-Wno-cpp\", \"-Wno-unused-function\"]},\n        include_dirs=[numpy_include]\n    ),\n    Extension(\n        'pycocotools._mask',\n        sources=['pycocotools/maskApi.c', 'pycocotools/_mask.pyx'],\n        include_dirs=[numpy_include, 'pycocotools'],\n        extra_compile_args={\n            'gcc': ['-Wno-cpp', '-Wno-unused-function', '-std=c99']},\n    ),\n]\n\nsetup(\n    name='faster_rcnn',\n    ext_modules=ext_modules,\n    # inject our custom trigger\n    cmdclass={'build_ext': custom_build_ext},\n)\n"
  },
  {
    "path": "test_metarcnn.py",
    "content": "# --------------------------------------------------------\n# Pytorch Meta R-CNN\n# Written by Anny Xu, Xiaopeng Yan, based on the code from Jianwei Yang\n# --------------------------------------------------------\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport _init_paths\nimport os\nimport sys\nimport numpy as np\nimport argparse\nimport pprint\nimport pdb\nimport time\nimport torch\nimport cv2\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.optim as optim\nimport pickle\nfrom roi_data_layer.roidb import combined_roidb\nfrom roi_data_layer.roibatchLoader import roibatchLoader\nfrom model.utils.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom model.rpn.bbox_transform import clip_boxes, bbox_transform_inv\nfrom model.nms.nms_wrapper import nms\nfrom model.utils.net_utils import save_net, load_net, vis_detections, vis_detections_label_only\n\nfrom matplotlib import pyplot as plt\nimport torch.utils.data as Data\nfrom model.utils.net_utils import weights_normal_init, save_net, load_net, \\\n    adjust_learning_rate, save_checkpoint, clip_gradient\n#from tsne import plot_embedding\nimport collections\n\ntry:\n    xrange  # Python 2\nexcept NameError:\n    xrange = range  # Python 3\n\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Test a Meta R-CNN network')\n    # Define Model and data\n    parser.add_argument('--dataset', dest='dataset',\n                        help='testing dataset: coco, pascal_voc_0712',\n                        default='pascal_voc_0712', type=str)\n    parser.add_argument('--net', dest='net',\n                        help='metarcnn',\n                        default='metarcnn', type=str)\n    # Define testing parameters\n    parser.add_argument('--cuda', dest='cuda',\n                
        default=True, type=bool,\n                        help='whether use CUDA')\n    parser.add_argument('--cag', dest='class_agnostic',\n                        default=False, type=bool,\n                        help='whether perform class_agnostic bbox regression')\n    # Define meta parameters\n    parser.add_argument('--meta_test', dest='meta_test', default=False, type=bool,\n                        help='whether perform meta testing')\n    parser.add_argument('--meta_loss', dest='meta_loss', default=False, type=bool,\n                        help='whether perform adding meta loss')\n    parser.add_argument('--shots', dest='shots',\n                        help='the number of meta input',\n                        default=1, type=int)\n    parser.add_argument('--meta_type', dest='meta_type', default=1, type=int,\n                        help='choose which sets of metaclass')\n    parser.add_argument('--phase', dest='phase',\n                        help='the phase of training process',\n                        default=1, type=int)\n    # resume trained model\n    parser.add_argument('--load_dir', dest='load_dir',\n                        help='directory to load models', default=\"exps\",\n                        type=str)\n    parser.add_argument('--checksession', dest='checksession',\n                        help='checksession to load model',\n                        default=3256, type=int)\n    parser.add_argument('--checkepoch', dest='checkepoch',\n                        help='checkepoch to load network',\n                        default=12, type=int)\n    parser.add_argument('--checkpoint', dest='checkpoint',\n                        help='checkpoint to load network',\n                        default=21985, type=int)\n    # Others\n    parser.add_argument('--bs', dest='batch_size',\n                        help='batch_size',\n                        default=1, type=int)\n    parser.add_argument('--vis', dest='vis',\n                        
help='visualization mode',\n                        action='store_true')\n    parser.add_argument('--save', dest='save_dir',\n                        help='directory to save logs', default='models',\n                        type=str)\n    args = parser.parse_args()\n    return args\n\n\nlr = cfg.TRAIN.LEARNING_RATE\nmomentum = cfg.TRAIN.MOMENTUM\nweight_decay = cfg.TRAIN.WEIGHT_DECAY\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    if args.net == 'metarcnn':\n        from model.faster_rcnn.resnet import resnet\n    print('Called with args:')\n    print(args)\n    if torch.cuda.is_available() and not args.cuda:\n        print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n    np.random.seed(cfg.RNG_SEED)\n    if args.dataset == \"coco\":\n        args.imdb_name = \"coco_2014_train+coco_2014_valminusminival\"\n        args.imdbval_name = \"coco_2014_minival\"\n        args.set_cfgs = ['ANCHOR_SCALES', '[4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '50']\n    elif args.dataset == \"pascal_voc_0712\":\n        args.imdbval_name = \"voc_2007_test\"\n        args.set_cfgs = ['ANCHOR_SCALES', '[8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']\n    # the number of sets of metaclass\n    cfg.TRAIN.META_TYPE = args.meta_type\n    args.cfg_file = \"cfgs/res101_ms.yml\"\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n\n    print('Using config:')\n    pprint.pprint(cfg)\n\n    cfg.TRAIN.USE_FLIPPED = False\n    imdb, roidb, ratio_list, ratio_index = combined_roidb(args.imdbval_name, False)\n    imdb.competition_mode(on=True)\n\n    print('{:d} roidb entries'.format(len(roidb)))\n\n    input_dir = args.load_dir\n    if not os.path.exists(input_dir):\n        raise Exception('There is no input directory for loading network from ' + input_dir)\n    load_name = os.path.join(input_dir,\n                
             '{}_{}_{}_{}_{}.pth'.format(args.dataset, str(args.net), args.checksession,\n                                                         args.checkepoch, args.checkpoint))\n    # initialize the network here.\n    if args.net == 'metarcnn':\n        fasterRCNN = resnet(imdb.classes, 101, pretrained=False, class_agnostic=args.class_agnostic, meta_train=False,\n                            meta_test=args.meta_test, meta_loss=args.meta_loss)\n    else:\n        print('No module defined')\n\n    fasterRCNN.create_architecture()\n    print(\"load checkpoint %s\" % (load_name))\n    checkpoint = torch.load(load_name)\n    fasterRCNN.load_state_dict(checkpoint['model'])\n    if 'pooling_mode' in checkpoint.keys():\n        cfg.POOLING_MODE = checkpoint['pooling_mode']\n    print('load model successfully!')\n    if args.cuda:\n        cfg.CUDA = True\n        fasterRCNN.cuda()\n\n    start = time.time()\n    max_per_image = 100\n\n    vis = args.vis\n    if vis:\n        thresh = 0.5\n    else:\n        thresh = 0.0001\n\n    fasterRCNN.eval()\n    empty_array = np.transpose(np.array([[], [], [], [], []]), (1, 0))\n\n    # if meta test\n    mean_class_attentions = None\n    if args.meta_test:\n        print('loading mean class attentions!')\n        mean_class_attentions = pickle.load(open(os.path.join('attentions', str(args.phase) + '_shots_' + str(args.shots) + '_mean_class_attentions.pkl'), 'rb'))\n\n    save_name = '{}_{}'.format(args.save_dir, args.checkepoch)\n    num_images = len(imdb.image_index)\n    all_boxes = [[[] for _ in range(num_images)] for _ in range(imdb.num_classes)]\n\n    output_dir = get_output_dir(imdb, save_name)\n    dataset = roibatchLoader(roidb, ratio_list, ratio_index, 
args.batch_size,\n                             imdb.num_classes, training=False, normalize=False)\n    dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,\n                                             shuffle=False, num_workers=0, pin_memory=True)\n\n    data_iter = iter(dataloader)\n\n    _t = {'im_detect': time.time(), 'misc': time.time()}\n    det_file = os.path.join(output_dir, 'detections.pkl')\n\n    for i in range(num_images):\n        data = next(data_iter)\n        im_data_list = []\n        im_info_list = []\n        gt_boxes_list = []\n        num_boxes_list = []\n        # initilize the tensor holder here.\n        im_data = torch.FloatTensor(1)\n        im_info = torch.FloatTensor(1)\n        num_boxes = torch.LongTensor(1)\n        gt_boxes = torch.FloatTensor(1)\n        # ship to cuda\n        if args.cuda:\n            im_data = im_data.cuda()\n            im_info = im_info.cuda()\n            num_boxes = num_boxes.cuda()\n            gt_boxes = gt_boxes.cuda()\n        # make variable\n        im_data = Variable(im_data, volatile=True)\n        im_info = Variable(im_info, volatile=True)\n        num_boxes = Variable(num_boxes, volatile=True)\n        gt_boxes = Variable(gt_boxes, volatile=True)\n        im_data.data.resize_(data[0].size()).copy_(data[0])\n        im_info.data.resize_(data[1].size()).copy_(data[1])\n        gt_boxes.data.resize_(data[2].size()).copy_(data[2])\n        num_boxes.data.resize_(data[3].size()).copy_(data[3])\n\n        im_data_list.append(im_data)\n        im_info_list.append(im_info)\n        gt_boxes_list.append(gt_boxes)\n        num_boxes_list.append(num_boxes)\n        det_tic = time.time()\n        rois, \\\n        rpn_loss_cls, rpn_loss_box, \\\n        RCNN_loss_cls, RCNN_loss_bbox, \\\n        rois_label, cls_prob_list, bbox_pred_list, _ = fasterRCNN(im_data_list, im_info_list, gt_boxes_list,\n                                                                  
num_boxes_list,mean_class_attentions=mean_class_attentions)\n        if args.meta_test:\n            for clsidx in range(len(cls_prob_list)):\n                cls_prob = cls_prob_list[clsidx]\n                bbox_pred = bbox_pred_list[clsidx]\n                scores = cls_prob.data\n                boxes = rois.data[:, :, 1:5]\n                if cfg.TEST.BBOX_REG:\n                    # Apply bounding-box regression deltas\n                    box_deltas = bbox_pred.data\n                    if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n                        # Optionally normalize targets by a precomputed mean and stdev\n                        if args.class_agnostic:\n                            box_deltas = box_deltas.view(-1, 4) * torch.FloatTensor(\n                                cfg.TRAIN.BBOX_NORMALIZE_STDS).cuda() \\\n                                         + torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS).cuda()\n                            box_deltas = box_deltas.view(1, -1, 4)\n                        else:\n                            box_deltas = box_deltas.view(-1, 4) * torch.FloatTensor(\n                                cfg.TRAIN.BBOX_NORMALIZE_STDS).cuda() \\\n                                         + torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS).cuda()\n                            box_deltas = box_deltas.view(1, -1, 4 * len(imdb.classes))\n\n                    pred_boxes = bbox_transform_inv(boxes, box_deltas, 1)\n                    pred_boxes = clip_boxes(pred_boxes, im_info.data, 1)\n\n                else:\n                    # Simply repeat the boxes, once for each class\n                    pred_boxes = np.tile(boxes, (1, scores.shape[1]))\n\n                pred_boxes /= data[1][0][2]\n                scores = scores.squeeze()\n                pred_boxes = pred_boxes.squeeze()\n                if clsidx == 0:\n                    allscores = scores[:, clsidx].unsqueeze(1)\n                    allpredboxes = pred_boxes[:, 
(clsidx) * 4:(clsidx + 1) * 4]\n                    allscores = torch.cat([allscores, scores[:, (clsidx + 1)].unsqueeze(1)], dim=1)\n                    allpredboxes = torch.cat([allpredboxes, pred_boxes[:, (clsidx + 1) * 4:(clsidx + 2) * 4]], dim=1)\n                else:\n                    allscores = torch.cat([allscores, scores[:, (clsidx + 1)].unsqueeze(1)], dim=1)\n                    allpredboxes = torch.cat([allpredboxes, pred_boxes[:, (clsidx + 1) * 4:(clsidx + 2) * 4]], dim=1)\n            scores = allscores\n            pred_boxes = allpredboxes\n        else:\n            scores = cls_prob_list.data\n            boxes = rois.data[:, :, 1:5]\n            if cfg.TEST.BBOX_REG:\n                # Apply bounding-box regression deltas\n                box_deltas = bbox_pred_list.data\n                if cfg.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED:\n                    # Optionally normalize targets by a precomputed mean and stdev\n                    if args.class_agnostic:\n                        box_deltas = box_deltas.view(-1, 4) * torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_STDS).cuda() \\\n                                     + torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS).cuda()\n                        box_deltas = box_deltas.view(1, -1, 4)\n                    else:\n                        box_deltas = box_deltas.view(-1, 4) * torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_STDS).cuda() \\\n                                     + torch.FloatTensor(cfg.TRAIN.BBOX_NORMALIZE_MEANS).cuda()\n                        box_deltas = box_deltas.view(1, -1, 4 * len(imdb.classes))\n\n                pred_boxes = bbox_transform_inv(boxes, box_deltas, 1)\n                pred_boxes = clip_boxes(pred_boxes, im_info.data, 1)\n            else:\n                # Simply repeat the boxes, once for each class\n                pred_boxes = np.tile(boxes, (1, scores.shape[1]))\n            pred_boxes /= data[1][0][2]\n            scores = scores.squeeze()\n        
pred_boxes = pred_boxes.squeeze()\n        det_toc = time.time()\n        detect_time = det_toc - det_tic\n        misc_tic = time.time()\n        if vis:\n            im = cv2.imread(imdb.image_path_at(int(data[4])))\n            im2show = np.copy(im)\n        for j in range(1, imdb.num_classes):\n            inds = torch.nonzero(scores[:, j] > thresh).view(-1)\n            # if there is det\n            if inds.numel() > 0:\n                cls_scores = scores[:, j][inds]\n                _, order = torch.sort(cls_scores, 0, True)\n                if args.class_agnostic:\n                    cls_boxes = pred_boxes[inds, :]\n                else:\n                    cls_boxes = pred_boxes[inds][:, j * 4:(j + 1) * 4]\n                cls_dets = torch.cat((cls_boxes, cls_scores.unsqueeze(1)), 1)\n                cls_dets = cls_dets[order]\n                keep = nms(cls_dets, cfg.TEST.NMS)\n                cls_dets = cls_dets[keep.view(-1).long()]\n                if vis:\n                    im2show = vis_detections_label_only(im2show, imdb.classes[j], cls_dets.cpu().numpy(), 0.3)\n                all_boxes[j][i] = cls_dets.cpu().numpy()\n            else:\n                all_boxes[j][i] = empty_array\n\n        # Limit to max_per_image detections *over all classes*\n        if max_per_image > 0:\n            image_scores = np.hstack([all_boxes[j][i][:, -1] for j in range(1, imdb.num_classes)])\n            if len(image_scores) > max_per_image:\n                image_thresh = np.sort(image_scores)[-max_per_image]\n                for j in range(1, imdb.num_classes):\n                    keep = np.where(all_boxes[j][i][:, -1] >= image_thresh)[0]\n                    all_boxes[j][i] = all_boxes[j][i][keep, :]\n\n        misc_toc = time.time()\n        nms_time = misc_toc - misc_tic\n\n        sys.stdout.write('im_detect: {:d}/{:d} {:.3f}s {:.3f}s     \\n'.\n                         format(i + 1, num_images, detect_time, nms_time))\n        sys.stdout.flush()\n\n    
    if vis:\n            im_dir = 'vis/' + str(data[4].numpy()[0]) + '_metarcnn.png'\n            cv2.imwrite(im_dir, im2show)\n            plt.imshow(im2show[:, :, ::-1])\n            plt.show()\n\n    with open(det_file, 'wb') as f:\n        pickle.dump(all_boxes, f, pickle.HIGHEST_PROTOCOL)\n\n    print('Evaluating detections')\n    ############################### changed by Anny Xu 2019/1/29 begin################################\n    imdb.evaluate_detections(all_boxes, output_dir, **vars(args))\n    ############################## end ###########################################################\n    end = time.time()\n    print(\"test time: %0.4fs\" % (end - start))\n"
  },
  {
    "path": "train_metarcnn.py",
    "content": "# --------------------------------------------------------\n# Pytorch Meta R-CNN\n# Written by Anny Xu, Xiaopeng Yan, based on the code from Jianwei Yang\n# --------------------------------------------------------\nimport _init_paths\nimport os\nimport sys\nimport numpy as np\nimport argparse\nimport pprint\nimport pdb\nimport time\nimport collections\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport random\n\nfrom tensorboardX import SummaryWriter\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import Sampler\nfrom torch.autograd import Variable\nimport torch.utils.data as Data\nfrom roi_data_layer.roidb import combined_roidb, rank_roidb_ratio, filter_class_roidb\nfrom roi_data_layer.roibatchLoader import roibatchLoader\nfrom model.utils.config import cfg, cfg_from_file, cfg_from_list, get_output_dir\nfrom model.utils.net_utils import weights_normal_init, save_net, load_net, \\\n    adjust_learning_rate, save_checkpoint, clip_gradient\nfrom model.faster_rcnn.resnet import resnet\nimport pickle\nfrom datasets.metadata import MetaDataset\nfrom collections import OrderedDict\n\ndef parse_args():\n    \"\"\"\n    Parse input arguments\n    \"\"\"\n    parser = argparse.ArgumentParser(description='Train Meta R-CNN network')\n    # Define training data and Model\n    parser.add_argument('--dataset', dest='dataset',\n                        help='training dataset: coco2017, coco, pascal_voc_0712',\n                        default='pascal_voc_0712', type=str)\n    parser.add_argument('--net', dest='net',\n                        help='metarcnn',\n                        default='metarcnn', type=str)\n    # Define display and save dir\n    parser.add_argument('--start_epoch', dest='start_epoch',\n                        help='starting epoch',\n                        default=1, type=int)\n    parser.add_argument('--epochs', dest='max_epochs',\n                        help='number of epochs to train',\n           
             default=21, type=int)\n    parser.add_argument('--disp_interval', dest='disp_interval',\n                        help='number of iterations to display',\n                        default=100, type=int)\n    parser.add_argument('--checkpoint_interval', dest='checkpoint_interval',\n                        help='number of iterations between checkpoint saves',\n                        default=10000, type=int)\n    parser.add_argument('--save_dir', dest='save_dir',\n                        help='directory to save models', default=\"./models\",\n                        type=str)\n    # Define training parameters\n    parser.add_argument('--nw', dest='num_workers',\n                        help='number of workers to load data',\n                        default=0, type=int)\n    parser.add_argument('--cuda', dest='cuda', default=True, type=bool,\n                        help='whether use CUDA')\n    parser.add_argument('--bs', dest='batch_size',\n                        help='batch_size',\n                        default=1, type=int)\n    parser.add_argument('--cag', dest='class_agnostic', default=False, type=bool,\n                        help='whether perform class_agnostic bbox regression')\n    # Define meta parameters\n    parser.add_argument('--meta_train', dest='meta_train', default=False, type=bool,\n                        help='whether perform meta training')\n    parser.add_argument('--meta_loss', dest='meta_loss', default=False, type=bool,\n                        help='whether perform adding meta loss')\n    parser.add_argument('--phase', dest='phase',\n                        help='the phase of training process',\n                        default=1, type=int)\n    parser.add_argument('--shots', dest='shots',\n                        help='the number of meta inputs of the PRN network',\n                        default=1, type=int)\n    parser.add_argument('--meta_type', dest='meta_type', default=1, type=int,\n                        help='choose which sets of 
metaclass')\n    # config optimization\n    parser.add_argument('--o', dest='optimizer',\n                        help='training optimizer',\n                        default=\"sgd\", type=str)\n    parser.add_argument('--lr', dest='lr',\n                        help='starting learning rate',\n                        default=0.001, type=float)\n    parser.add_argument('--lr_decay_step', dest='lr_decay_step',\n                        help='step to do learning rate decay, unit is epoch',\n                        default=4, type=int)\n    parser.add_argument('--lr_decay_gamma', dest='lr_decay_gamma',\n                        help='learning rate decay ratio',\n                        default=0.1, type=float)\n    # set training session\n    parser.add_argument('--s', dest='session',\n                        help='training session',\n                        default=1, type=int)\n    # resume trained model\n    parser.add_argument('--r', dest='resume',\n                        help='resume checkpoint or not',\n                        default=False, type=bool)\n    parser.add_argument('--checksession', dest='checksession',\n                        help='checksession to load model',\n                        default=1, type=int)\n    parser.add_argument('--checkepoch', dest='checkepoch',\n                        help='checkepoch to load model',\n                        default=10, type=int)\n    parser.add_argument('--checkpoint', dest='checkpoint',\n                        help='checkpoint to load model',\n                        default=21985, type=int)\n    # log and display\n    parser.add_argument('--use_tfboard', dest='use_tfboard',\n                        help='whether use tensorboard',\n                        default=True, type=bool)\n    parser.add_argument('--log_dir', dest='log_dir',\n                        help='directory to save logs', default='logs',\n                        type=str)\n    args = parser.parse_args()\n    return args\n\n\nclass 
sampler(Sampler):\n    def __init__(self, train_size, batch_size):\n        self.num_data = train_size\n        self.num_per_batch = int(train_size / batch_size)\n        self.batch_size = batch_size\n        self.range = torch.arange(0, batch_size).view(1, batch_size).long()\n        self.leftover_flag = False\n        if train_size % batch_size:\n            self.leftover = torch.arange(self.num_per_batch * batch_size, train_size).long()\n            self.leftover_flag = True\n\n    def __iter__(self):\n        rand_num = torch.randperm(self.num_per_batch).view(-1, 1) * self.batch_size\n        self.rand_num = rand_num.expand(self.num_per_batch, self.batch_size) + self.range\n        self.rand_num_view = self.rand_num.view(-1)\n\n        if self.leftover_flag:\n            self.rand_num_view = torch.cat((self.rand_num_view, self.leftover), 0)\n\n        return iter(self.rand_num_view)\n\n    def __len__(self):\n        return self.num_data\n\n\nif __name__ == '__main__':\n    args = parse_args()\n\n    print('Called with args:')\n    print(args)\n    if args.use_tfboard:\n        writer = SummaryWriter(args.log_dir)\n    if args.dataset == \"coco2017\":\n        args.imdb_name = \"coco_2017_train\"\n        args.imdbval_name = \"coco_2017_val\"\n        args.set_cfgs = ['ANCHOR_SCALES', '[2, 4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '50']\n    elif args.dataset == \"coco\":\n        args.imdb_name = \"coco_2014_train+coco_2014_valminusminival\"\n        args.imdbval_name = \"coco_2014_minival\"\n        args.set_cfgs = ['ANCHOR_SCALES', '[2, 4, 8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '50']\n    elif args.dataset == \"pascal_voc_0712\":\n        if args.phase == 1: # three types of base and novel classes splits\n            if args.meta_type == 1:\n                args.imdb_name = \"voc_2007_train_first_split+voc_2012_train_first_split\"\n            elif args.meta_type == 2:\n                args.imdb_name = 
\"voc_2007_train_second_split+voc_2012_train_second_split\"\n            elif args.meta_type == 3:\n                args.imdb_name = \"voc_2007_train_third_split+voc_2012_train_third_split\"\n        else:\n            args.imdb_name = \"voc_2007_shots\" # the default sampled shots  saved path of meta classes in the first phase\n        args.imdbval_name = \"voc_2007_test\"\n        args.set_cfgs = ['ANCHOR_SCALES', '[8, 16, 32]', 'ANCHOR_RATIOS', '[0.5,1,2]', 'MAX_NUM_GT_BOXES', '20']\n     # the number of sets of metaclass\n    cfg.TRAIN.META_TYPE = args.meta_type\n\n    cfg.USE_GPU_NMS = args.cuda\n    if args.cuda:\n        cfg.CUDA = True\n\n    args.cfg_file = \"cfgs/res101_ms.yml\"\n    if args.cfg_file is not None:\n        cfg_from_file(args.cfg_file)\n    if args.set_cfgs is not None:\n        cfg_from_list(args.set_cfgs)\n\n    print('Using config:')\n    pprint.pprint(cfg)\n    np.random.seed(cfg.RNG_SEED)\n    if torch.cuda.is_available() and not args.cuda:\n        print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n    if args.phase == 1:\n        # First phase only use the base classes\n        shots = 200\n        if args.meta_type == 1:  #  use the first sets of base classes\n            metaclass = cfg.TRAIN.BASECLASSES_FIRST\n        if args.meta_type == 2:  #  use the second sets of base classes\n            metaclass = cfg.TRAIN.BASECLASSES_SECOND\n        if args.meta_type == 3:  #  use the third sets of base classes\n            metaclass = cfg.TRAIN.BASECLASSES_THIRD\n    else:\n        # Second phase only use fewshot number of base and novel classes\n        shots = args.shots\n        if args.meta_type == 1:  #  use the first sets of all classes\n            metaclass = cfg.TRAIN.ALLCLASSES_FIRST\n        if args.meta_type == 2:  #  use the second sets of all classes\n            metaclass = cfg.TRAIN.ALLCLASSES_SECOND\n        if args.meta_type == 3:  #  use the third sets of all classes\n            
metaclass = cfg.TRAIN.ALLCLASSES_THIRD\n    # prepare meta sets for meta training\n    if args.meta_train:\n        # construct the input dataset of PRN network\n        img_size = 224\n        if args.phase == 1:\n            img_set = [('2007', 'trainval'), ('2012', 'trainval')]\n        else:\n            img_set = [('2007', 'trainval')]\n        metadataset = MetaDataset('data/VOCdevkit2007',\n                                     img_set, metaclass, img_size, shots=shots, shuffle=True, phase=args.phase)\n\n        metaloader = torch.utils.data.DataLoader(metadataset, batch_size=1, shuffle=False, num_workers=0,\n                                                 pin_memory=True)\n\n    imdb, roidb, ratio_list, ratio_index = combined_roidb(args.imdb_name)\n    # filter roidb for the second phase\n    if args.phase == 2:\n        roidb = filter_class_roidb(roidb, args.shots, imdb)\n        ratio_list, ratio_index = rank_roidb_ratio(roidb)\n        imdb.set_roidb(roidb)\n\n    train_size = len(roidb)\n    print('{:d} roidb entries'.format(len(roidb)))\n    sys.stdout.flush()\n\n    output_dir = args.save_dir\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir)\n\n    sampler_batch = sampler(train_size, args.batch_size)\n    dataset = roibatchLoader(roidb, ratio_list, ratio_index, args.batch_size, imdb.num_classes, training=True)\n    dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size,\n                                             sampler=sampler_batch, num_workers=args.num_workers, pin_memory=False)\n\n    # initialize the network here\n    if args.net == 'metarcnn':\n        fasterRCNN = resnet(imdb.classes, 101, pretrained=True, class_agnostic=args.class_agnostic,\n                            meta_train=args.meta_train, meta_loss=args.meta_loss)\n    fasterRCNN.create_architecture()\n\n    # initialize the optimizer here\n    lr = args.lr\n    params = []\n    for key, value in 
dict(fasterRCNN.named_parameters()).items():\n        if value.requires_grad:\n            if 'bias' in key:\n                params += [{'params': [value], 'lr': lr * (cfg.TRAIN.DOUBLE_BIAS + 1), \\\n                            'weight_decay': cfg.TRAIN.BIAS_DECAY and cfg.TRAIN.WEIGHT_DECAY or 0}]\n            else:\n                params += [{'params': [value], 'lr': lr, 'weight_decay': cfg.TRAIN.WEIGHT_DECAY}]\n    if args.optimizer == \"adam\":\n        lr = lr * 0.1\n        optimizer = torch.optim.Adam(params)\n    elif args.optimizer == \"sgd\":\n        optimizer = torch.optim.SGD(params, momentum=cfg.TRAIN.MOMENTUM)\n\n    if args.cuda:\n        fasterRCNN.cuda()\n\n    if args.resume:\n        load_name = os.path.join(output_dir,\n                                 '{}_metarcnn_{}_{}_{}.pth'.format(args.dataset, args.checksession,\n                                                                   args.checkepoch, args.checkpoint))\n        print(\"loading checkpoint %s\" % (load_name))\n        checkpoint = torch.load(load_name)\n        args.session = checkpoint['session']\n        args.start_epoch = checkpoint['epoch']\n        # the number of classes in second phase is different from first phase\n        if args.phase == 2:\n            new_state_dict = OrderedDict()\n            # initilize params of RCNN_cls_score and RCNN_bbox_pred for second phase\n            RCNN_cls_score = nn.Linear(2048, imdb.num_classes)\n            RCNN_bbox_pred = nn.Linear(2048, 4 * imdb.num_classes)\n            for k, v in checkpoint['model'].items():\n                name = k\n                new_state_dict[name] = v\n                if 'RCNN_cls_score.weight' in k:\n                    new_state_dict[name] = RCNN_cls_score.weight\n                if 'RCNN_cls_score.bias' in k:\n                    new_state_dict[name] = RCNN_cls_score.bias\n                if 'RCNN_bbox_pred.weight' in k:\n                    new_state_dict[name] = RCNN_bbox_pred.weight\n              
  if 'RCNN_bbox_pred.bias' in k:\n                    new_state_dict[name] = RCNN_bbox_pred.bias\n            fasterRCNN.load_state_dict(new_state_dict)\n        elif args.phase == 1:\n            fasterRCNN.load_state_dict(checkpoint['model'])\n            optimizer.load_state_dict(checkpoint['optimizer'])\n            lr = optimizer.param_groups[0]['lr']\n\n        if 'pooling_mode' in checkpoint.keys():\n            cfg.POOLING_MODE = checkpoint['pooling_mode']\n        print(\"loaded checkpoint %s\" % (load_name))\n\n    iters_per_epoch = int(train_size / args.batch_size)\n\n    for epoch in range(args.start_epoch, args.max_epochs):\n        fasterRCNN.train()\n        loss_temp = 0\n        start = time.time()\n\n        if epoch % (args.lr_decay_step + 1) == 0:\n            adjust_learning_rate(optimizer, args.lr_decay_gamma)\n            lr *= args.lr_decay_gamma\n\n        data_iter = iter(dataloader)\n        meta_iter = iter(metaloader)\n        for step in range(iters_per_epoch):\n            try:\n                data = next(data_iter)\n            except:\n                data_iter = iter(dataloader)\n                data = next(data_iter)\n\n            im_data_list = []\n            im_info_list = []\n            gt_boxes_list = []\n            num_boxes_list = []\n\n            # initilize the tensor holder here.\n            im_data = torch.FloatTensor(1)\n            im_info = torch.FloatTensor(1)\n            num_boxes = torch.LongTensor(1)\n            gt_boxes = torch.FloatTensor(1)\n            # ship to cuda\n            if args.cuda:\n                im_data = im_data.cuda()\n                im_info = im_info.cuda()\n                num_boxes = num_boxes.cuda()\n                gt_boxes = gt_boxes.cuda()\n            # make variable\n            im_data = Variable(im_data)\n            im_info = Variable(im_info)\n            num_boxes = Variable(num_boxes)\n            gt_boxes = Variable(gt_boxes)\n\n            if args.meta_train:\n       
         # get the PRN (predictor-head remodeling network) input data\n                try:\n                    prndata, prncls = next(meta_iter)\n                except StopIteration:\n                    meta_iter = iter(metaloader)\n                    prndata, prncls = next(meta_iter)\n\n                im_data_list.append(Variable(torch.cat(prndata, dim=0).cuda()))\n                im_info_list.append(prncls)\n                im_data.data.resize_(data[0].size()).copy_(data[0])\n                im_info.data.resize_(data[1].size()).copy_(data[1])\n                gt_boxes.data.resize_(data[2].size()).copy_(data[2])\n                num_boxes.data.resize_(data[3].size()).copy_(data[3])\n                im_data_list.append(im_data)\n                im_info_list.append(im_info)\n                gt_boxes_list.append(gt_boxes)\n                num_boxes_list.append(num_boxes)\n\n            else:\n\n                im_data.data.resize_(data[0].size()).copy_(data[0])\n                im_info.data.resize_(data[1].size()).copy_(data[1])\n                gt_boxes.data.resize_(data[2].size()).copy_(data[2])\n                num_boxes.data.resize_(data[3].size()).copy_(data[3])\n                im_data_list.append(im_data)\n                im_info_list.append(im_info)\n                gt_boxes_list.append(gt_boxes)\n                num_boxes_list.append(num_boxes)\n\n            fasterRCNN.zero_grad()\n\n            rois, rpn_loss_cls, rpn_loss_box, \\\n            RCNN_loss_cls, RCNN_loss_bbox, \\\n            rois_label, cls_prob, bbox_pred, meta_loss = fasterRCNN(im_data_list, im_info_list, gt_boxes_list,\n                                                                    num_boxes_list)\n\n            if args.meta_train:\n                loss = rpn_loss_cls.mean() + rpn_loss_box.mean() + sum(RCNN_loss_cls) / args.batch_size + sum(\n                    RCNN_loss_bbox) / args.batch_size + meta_loss / len(metaclass)\n            else:\n                loss = rpn_loss_cls.mean() + rpn_loss_box.mean() \\\n              
          + RCNN_loss_cls.mean() + RCNN_loss_bbox.mean()\n\n            loss_temp += loss.data[0]\n\n            # backward\n            optimizer.zero_grad()\n            loss.backward()\n            # if args.net in (\"vgg16\", \"res101\"):\n            #     clip_gradient(fasterRCNN, 10.)\n            optimizer.step()\n\n            torch.cuda.empty_cache()\n\n            if step % args.disp_interval == 0:\n                end = time.time()\n                if step > 0:\n                    loss_temp /= args.disp_interval  # average loss over the display interval\n\n                loss_rpn_cls = rpn_loss_cls.data[0]\n                loss_rpn_box = rpn_loss_box.data[0]\n                if not args.meta_train:\n                    loss_rcnn_cls = RCNN_loss_cls.data[0]\n                    loss_rcnn_box = RCNN_loss_bbox.data[0]\n                else:\n                    loss_rcnn_cls = sum(RCNN_loss_cls) / args.batch_size\n                    loss_rcnn_box = sum(RCNN_loss_bbox) / args.batch_size\n                    loss_metarcnn = meta_loss / len(metaclass)\n\n                fg_cnt = torch.sum(rois_label.data.ne(0))\n                bg_cnt = rois_label.data.numel() - fg_cnt\n\n                print(\"[session %d][epoch %2d][iter %4d] loss: %.4f, lr: %.2e\" \\\n                      % (args.session, epoch, step, loss_temp, lr))\n                print(\"\\t\\t\\tfg/bg=(%d/%d), time cost: %f\" % (fg_cnt, bg_cnt, end - start))\n                if args.meta_train:\n                    print(\"\\t\\t\\trpn_cls: %.4f, rpn_box: %.4f, rcnn_cls: %.4f, rcnn_box %.4f, meta_loss %.4f\" \\\n                          % (loss_rpn_cls, loss_rpn_box, loss_rcnn_cls, loss_rcnn_box, loss_metarcnn))\n                else:\n                    print(\"\\t\\t\\trpn_cls: %.4f, rpn_box: %.4f, rcnn_cls: %.4f, rcnn_box %.4f\" \\\n                          % (loss_rpn_cls, loss_rpn_box, loss_rcnn_cls, loss_rcnn_box))\n\n                sys.stdout.flush()\n\n                if args.use_tfboard:\n           
          info = {\n                        'loss': loss_temp,\n                        'loss_rpn_cls': loss_rpn_cls,\n                        'loss_rpn_box': loss_rpn_box,\n                        'loss_rcnn_cls': loss_rcnn_cls,\n                        'loss_rcnn_box': loss_rcnn_box\n                    }\n                    niter = (epoch - 1) * iters_per_epoch + step\n                    for tag, value in info.items():\n                        writer.add_scalar(tag, value, niter)\n\n                loss_temp = 0\n                start = time.time()\n\n        if args.meta_train:\n            save_name = os.path.join(output_dir,\n                                     '{}_{}_{}_{}_{}.pth'.format(str(args.dataset), str(args.net), shots, epoch,\n                                                                 step))\n        else:\n            save_name = os.path.join(output_dir, '{}_{}_{}_{}.pth'.format(str(args.dataset), str(args.net),\n                                                                          epoch, step))\n        save_checkpoint({\n            'session': args.session,\n            'epoch': epoch + 1,\n            'model': fasterRCNN.state_dict(),\n            'optimizer': optimizer.state_dict(),\n            'pooling_mode': cfg.POOLING_MODE,\n            'class_agnostic': args.class_agnostic,\n        }, save_name)\n    print('saved model: {}'.format(save_name))\n    end = time.time()\n    print(end - start)\n\n    if args.meta_train:  # extract the mean class attentions of the shots for testing\n        class_attentions = collections.defaultdict(list)\n        meta_iter = iter(metaloader)\n        for i in range(shots):\n            prndata, prncls = next(meta_iter)\n            im_data_list = []\n            im_info_list = []\n            gt_boxes_list = []\n            num_boxes_list = []\n            im_data = torch.FloatTensor(1)\n            if args.cuda:\n                im_data = im_data.cuda()\n            im_data = Variable(im_data, 
volatile=True)\n            im_data.data.resize_(prndata.squeeze(0).size()).copy_(prndata.squeeze(0))\n            im_data_list.append(im_data)\n            attentions = fasterRCNN(im_data_list, im_info_list, gt_boxes_list, num_boxes_list,\n                                            average_shot=True)\n            for idx, cls in enumerate(prncls):\n                class_attentions[int(cls)].append(attentions[idx])\n        # calculate the mean attention vector of every class\n        mean_class_attentions = {k: sum(v) / len(v) for k, v in class_attentions.items()}\n        save_path = 'attentions'\n        if not os.path.exists(save_path):\n            os.mkdir(save_path)\n        with open(os.path.join(save_path, str(args.phase) + '_shots_' + str(args.shots) + '_mean_class_attentions.pkl'), 'wb') as f:\n            pickle.dump(mean_class_attentions, f, pickle.HIGHEST_PROTOCOL)\n        print('saved mean class attentions for ' + str(args.shots) + ' shots')\n"
  }
]