[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nenv/\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# pyenv\n.python-version\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# dotenv\n.env\n\n# virtualenv\n.venv\nvenv/\nENV/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Rowan Zellers\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Makefile",
    "content": "export PATH := /usr/local/cuda-9.1/bin:$(PATH)\n\nall: draw_rectangles box_intersections nms roi_align lstm\n\ndraw_rectangles:\n\tcd lib/draw_rectangles; python setup.py build_ext --inplace\nbox_intersections:\n\tcd lib/fpn/box_intersections_cpu; python setup.py build_ext --inplace\nnms:\n\tcd lib/fpn/nms; make\nroi_align:\n\tcd lib/fpn/roi_align; make\nlstm:\n\tcd lib/lstm/highway_lstm_cuda; ./make.sh"
  },
  {
    "path": "README.md",
    "content": "# neural-motifs\n\n### Like this work, or scene understanding in general? You might be interested in checking out my brand new dataset VCR: Visual Commonsense Reasoning, at [visualcommonsense.com](https://visualcommonsense.com)!\n\nThis repository contains data and code for the paper [Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018)](https://arxiv.org/abs/1711.06640v2) For the project page (as well as links to the baseline checkpoints), check out [rowanzellers.com/neuralmotifs](https://rowanzellers.com/neuralmotifs). If the paper significantly inspires you, we request that you cite our work:\n\n### Bibtex\n\n```\n@inproceedings{zellers2018scenegraphs,\n  title={Neural Motifs: Scene Graph Parsing with Global Context},\n  author={Zellers, Rowan and Yatskar, Mark and Thomson, Sam and Choi, Yejin},\n  booktitle = \"Conference on Computer Vision and Pattern Recognition\",  \n  year={2018}\n}\n```\n# Setup\n\n\n0. Install python3.6 and pytorch 3. I recommend the [Anaconda distribution](https://repo.continuum.io/archive/). To install PyTorch if you haven't already, use\n ```conda install pytorch=0.3.0 torchvision=0.2.0 cuda90 -c pytorch```.\n \n1. Update the config file with the dataset paths. Specifically:\n    - Visual Genome (the VG_100K folder, image_data.json, VG-SGG.h5, and VG-SGG-dicts.json). See data/stanford_filtered/README.md for the steps I used to download these.\n    - You'll also need to fix your PYTHONPATH: ```export PYTHONPATH=/home/rowan/code/scene-graph``` \n\n2. Compile everything. run ```make``` in the main directory: this compiles the Bilinear Interpolation operation for the RoIs as well as the Highway LSTM.\n\n3. Pretrain VG detection. The old version involved pretraining COCO as well, but we got rid of that for simplicity. Run ./scripts/pretrain_detector.sh\nNote: You might have to modify the learning rate and batch size, particularly if you don't have 3 Titan X GPUs (which is what I used). 
[You can also download the pretrained detector checkpoint here.](https://drive.google.com/open?id=11zKRr2OF5oclFL47kjFYBOxScotQzArX)\n\n4. Train VG scene graph classification: run ```./scripts/train_models_sgcls.sh 2``` (will run on GPU 2). Or, download the MotifNet-cls checkpoint here: [Motifnet-SGCls/PredCls](https://drive.google.com/open?id=12qziGKYjFD3LAnoy4zDT3bcg5QLC0qN6).\n5. Refine for detection: run ```./scripts/refine_for_detection.sh 2``` or download the [Motifnet-SGDet](https://drive.google.com/open?id=1thd_5uSamJQaXAPVGVOUZGAOfGCYZYmb) checkpoint.\n6. Evaluate: refer to the scripts ```./scripts/eval_models_sg[cls/det].sh```.\n\n# Help\n\nFeel free to open an issue if you encounter trouble getting it to work!\n"
  },
  {
    "path": "config.py",
    "content": "\"\"\"\nConfiguration file!\n\"\"\"\nimport os\nfrom argparse import ArgumentParser\nimport numpy as np\n\nROOT_PATH = os.path.dirname(os.path.realpath(__file__))\nDATA_PATH = os.path.join(ROOT_PATH, 'data')\n\ndef path(fn):\n    return os.path.join(DATA_PATH, fn)\n\ndef stanford_path(fn):\n    return os.path.join(DATA_PATH, 'stanford_filtered', fn)\n\n# =============================================================================\n# Update these with where your data is stored ~~~~~~~~~~~~~~~~~~~~~~~~~\n\nVG_IMAGES = '/home/rowan/datasets2/VG_100K_2/VG_100K'\nRCNN_CHECKPOINT_FN = path('faster_rcnn_500k.h5')\n\nIM_DATA_FN = stanford_path('image_data.json')\nVG_SGG_FN = stanford_path('VG-SGG.h5')\nVG_SGG_DICT_FN = stanford_path('VG-SGG-dicts.json')\nPROPOSAL_FN = stanford_path('proposals.h5')\n\nCOCO_PATH = '/home/rowan/datasets/mscoco'\n# =============================================================================\n# =============================================================================\n\n\nMODES = ('sgdet', 'sgcls', 'predcls')\n\nBOX_SCALE = 1024  # Scale at which we have the boxes\nIM_SCALE = 592      # Our images will be resized to this res without padding\n\n# Proposal assignments\nBG_THRESH_HI = 0.5\nBG_THRESH_LO = 0.0\n\nRPN_POSITIVE_OVERLAP = 0.7\n# IOU < thresh: negative example\nRPN_NEGATIVE_OVERLAP = 0.3\n\n# Max number of foreground examples\nRPN_FG_FRACTION = 0.5\nFG_FRACTION = 0.25\n# Total number of examples\nRPN_BATCHSIZE = 256\nROIS_PER_IMG = 256\nREL_FG_FRACTION = 0.25\nRELS_PER_IMG = 256\n\nRELS_PER_IMG_REFINE = 64\n\nBATCHNORM_MOMENTUM = 0.01\nANCHOR_SIZE = 16\n\nANCHOR_RATIOS = (0.23232838, 0.63365731, 1.28478321, 3.15089189) #(0.5, 1, 2)\nANCHOR_SCALES = (2.22152954, 4.12315647, 7.21692515, 12.60263013, 22.7102731) #(4, 8, 16, 32)\n\nclass ModelConfig(object):\n    \"\"\"Wrapper class for model hyperparameters.\"\"\"\n    def __init__(self):\n        \"\"\"\n        Defaults\n        \"\"\"\n        self.coco = None\n    
    self.ckpt = None\n        self.save_dir = None\n        self.lr = None\n        self.batch_size = None\n        self.val_size = None\n        self.l2 = None\n        self.clip = None\n        self.num_gpus = None\n        self.num_workers = None\n        self.print_interval = None\n        self.gt_box = None\n        self.mode = None\n        self.refine = None\n        self.ad3 = False\n        self.test = False\n        self.adam = False\n        self.multi_pred=False\n        self.cache = None\n        self.model = None\n        self.use_proposals=False\n        self.use_resnet=False\n        self.use_tanh=False\n        self.use_bias = False\n        self.limit_vision=False\n        self.num_epochs=None\n        self.old_feats=False\n        self.order=None\n        self.det_ckpt=None\n        self.nl_edge=None\n        self.nl_obj=None\n        self.hidden_dim=None\n        self.pass_in_obj_feats_to_decoder = None\n        self.pass_in_obj_feats_to_edge = None\n        self.pooling_dim = None\n        self.rec_dropout = None\n        self.parser = self.setup_parser()\n        self.args = vars(self.parser.parse_args())\n\n        print(\"~~~~~~~~ Hyperparameters used: ~~~~~~~\")\n        for x, y in self.args.items():\n            print(\"{} : {}\".format(x, y))\n\n        self.__dict__.update(self.args)\n\n        if len(self.ckpt) != 0:\n            self.ckpt = os.path.join(ROOT_PATH, self.ckpt)\n        else:\n            self.ckpt = None\n\n        if len(self.cache) != 0:\n            self.cache = os.path.join(ROOT_PATH, self.cache)\n        else:\n            self.cache = None\n\n        if len(self.save_dir) == 0:\n            self.save_dir = None\n        else:\n            self.save_dir = os.path.join(ROOT_PATH, self.save_dir)\n            if not os.path.exists(self.save_dir):\n                os.mkdir(self.save_dir)\n\n        assert self.val_size >= 0\n\n        if self.mode not in MODES:\n            raise ValueError(\"Invalid mode: mode must be 
in {}\".format(MODES))\n\n        if self.model not in ('motifnet', 'stanford'):\n            raise ValueError(\"Invalid model {}\".format(self.model))\n\n\n        if self.ckpt is not None and not os.path.exists(self.ckpt):\n            raise ValueError(\"Ckpt file ({}) doesnt exist\".format(self.ckpt))\n\n    def setup_parser(self):\n        \"\"\"\n        Sets up an argument parser\n        :return:\n        \"\"\"\n        parser = ArgumentParser(description='training code')\n\n\n        # Options to deprecate\n        parser.add_argument('-coco', dest='coco', help='Use COCO (default to VG)', action='store_true')\n        parser.add_argument('-ckpt', dest='ckpt', help='Filename to load from', type=str, default='')\n        parser.add_argument('-det_ckpt', dest='det_ckpt', help='Filename to load detection parameters from', type=str, default='')\n\n        parser.add_argument('-save_dir', dest='save_dir',\n                            help='Directory to save things to, such as checkpoints/save', default='', type=str)\n\n        parser.add_argument('-ngpu', dest='num_gpus', help='cuantos GPUs tienes', type=int, default=3)\n        parser.add_argument('-nwork', dest='num_workers', help='num processes to use as workers', type=int, default=1)\n\n        parser.add_argument('-lr', dest='lr', help='learning rate', type=float, default=1e-3)\n\n        parser.add_argument('-b', dest='batch_size', help='batch size per GPU',type=int, default=2)\n        parser.add_argument('-val_size', dest='val_size', help='val size to use (if 0 we wont use val)', type=int, default=5000)\n\n        parser.add_argument('-l2', dest='l2', help='weight decay', type=float, default=1e-4)\n        parser.add_argument('-clip', dest='clip', help='gradients will be clipped to have norm less than this', type=float, default=5.0)\n        parser.add_argument('-p', dest='print_interval', help='print during training', type=int,\n                            default=100)\n        parser.add_argument('-m', 
dest='mode', help='mode \\in {sgdet, sgcls, predcls}', type=str,\n                            default='sgdet')\n        parser.add_argument('-model', dest='model', help='which model to use? (motifnet, stanford). If you want to use the baseline (NoContext) model, then pass in motifnet here, and nl_obj, nl_edge=0', type=str,\n                            default='motifnet')\n        parser.add_argument('-old_feats', dest='old_feats', help='Use the original image features for the edges', action='store_true')\n        parser.add_argument('-order', dest='order', help='Linearization order for Rois (confidence -default, size, random)',\n                            type=str, default='confidence')\n        parser.add_argument('-cache', dest='cache', help='where should we cache predictions', type=str,\n                            default='')\n        parser.add_argument('-gt_box', dest='gt_box', help='use gt boxes during training', action='store_true')\n        parser.add_argument('-adam', dest='adam', help='use adam. 
Not recommended', action='store_true')\n        parser.add_argument('-test', dest='test', help='evaluate on the test set', action='store_true')\n        parser.add_argument('-multipred', dest='multi_pred', help='Allow multiple predicates per pair of box0, box1.', action='store_true')\n        parser.add_argument('-nepoch', dest='num_epochs', help='Number of epochs to train the model for', type=int, default=25)\n        parser.add_argument('-resnet', dest='use_resnet', help='use resnet instead of VGG', action='store_true')\n        parser.add_argument('-proposals', dest='use_proposals', help='Use the region proposals from Xu et al.', action='store_true')\n        parser.add_argument('-nl_obj', dest='nl_obj', help='Num object layers', type=int, default=1)\n        parser.add_argument('-nl_edge', dest='nl_edge', help='Num edge layers', type=int, default=2)\n        parser.add_argument('-hidden_dim', dest='hidden_dim', help='Hidden dimension size', type=int, default=256)\n        parser.add_argument('-pooling_dim', dest='pooling_dim', help='Dimension of pooling', type=int, default=4096)\n        parser.add_argument('-pass_in_obj_feats_to_decoder', dest='pass_in_obj_feats_to_decoder', action='store_true')\n        parser.add_argument('-pass_in_obj_feats_to_edge', dest='pass_in_obj_feats_to_edge', action='store_true')\n        parser.add_argument('-rec_dropout', dest='rec_dropout', help='recurrent dropout to add', type=float, default=0.1)\n        parser.add_argument('-use_bias', dest='use_bias',  action='store_true')\n        parser.add_argument('-use_tanh', dest='use_tanh',  action='store_true')\n        parser.add_argument('-limit_vision', dest='limit_vision',  action='store_true')\n        return parser\n"
  },
  {
    "path": "data/stanford_filtered/README.md",
    "content": "# Filtered data\nAdapted from [Danfei Xu](https://github.com/danfeiX/scene-graph-TF-release/blob/master/data_tools/README.md).\n\nFollow the folling steps to get the dataset set up.\n1. Download the VG images [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip) [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip). Extract these images to a file and link to them in `config.py` (eg. currently I have `VG_IMAGES=data/visual_genome/VG_100K`). \n2. Download the [VG metadata](http://cvgl.stanford.edu/scene-graph/VG/image_data.json). I recommend extracting it to this directory (e.g. `data/stanford_filtered/image_data.json`), or you can edit the path in `config.py`.\n3. Download the [scene graphs](http://cvgl.stanford.edu/scene-graph/dataset/VG-SGG.h5) and extract them to `data/stanford_filtered/VG-SGG.h5`\n4. Download the [scene graph dataset metadata](http://cvgl.stanford.edu/scene-graph/dataset/VG-SGG-dicts.json) and extract it to `data/stanford_filtered/VG-SGG-dicts.json`\n"
  },
  {
    "path": "dataloaders/__init__.py",
    "content": ""
  },
  {
    "path": "dataloaders/blob.py",
    "content": "\"\"\"\nData blob, hopefully to make collating less painful and MGPU training possible\n\"\"\"\nfrom lib.fpn.anchor_targets import anchor_target_layer\nimport numpy as np\nimport torch\nfrom torch.autograd import Variable\n\n\nclass Blob(object):\n    def __init__(self, mode='det', is_train=False, num_gpus=1, primary_gpu=0, batch_size_per_gpu=3):\n        \"\"\"\n        Initializes an empty Blob object.\n        :param mode: 'det' for detection and 'rel' for det+relationship\n        :param is_train: True if it's training\n        \"\"\"\n        assert mode in ('det', 'rel')\n        assert num_gpus >= 1\n        self.mode = mode\n        self.is_train = is_train\n        self.num_gpus = num_gpus\n        self.batch_size_per_gpu = batch_size_per_gpu\n        self.primary_gpu = primary_gpu\n\n        self.imgs = []  # [num_images, 3, IM_SCALE, IM_SCALE] array\n        self.im_sizes = []  # [num_images, 4] array of (h, w, scale, num_valid_anchors)\n        self.all_anchor_inds = []  # [all_anchors, 2] array of (img_ind, anchor_idx). Only has valid\n        # boxes (meaning some are gonna get cut out)\n        self.all_anchors = []  # [num_im, IM_SCALE/4, IM_SCALE/4, num_anchors, 4] shapes. Anchors outside get squashed\n                               # to 0\n        self.gt_boxes = []  # [num_gt, 4] boxes\n        self.gt_classes = []  # [num_gt,2] array of img_ind, class\n        self.gt_rels = []  # [num_rels, 3]. 
Each row is (gtbox0, gtbox1, rel).\n\n        self.gt_sents = []\n        self.gt_nodes = []\n        self.sent_lengths = []\n\n        self.train_anchor_labels = []  # [train_anchors, 5] array of (img_ind, h, w, A, labels)\n        self.train_anchors = []  # [train_anchors, 8] shapes with anchor, target\n\n        self.train_anchor_inds = None  # This will be split into GPUs, just (img_ind, h, w, A).\n\n        self.batch_size = None\n        self.gt_box_chunks = None\n        self.anchor_chunks = None\n        self.train_chunks = None\n        self.proposal_chunks = None\n        self.proposals = []\n\n    @property\n    def is_flickr(self):\n        return self.mode == 'flickr'\n\n    @property\n    def is_rel(self):\n        return self.mode == 'rel'\n\n    @property\n    def volatile(self):\n        return not self.is_train\n\n    def append(self, d):\n        \"\"\"\n        Adds a single image to the blob\n        :param d: dict of data for a single image\n        :return:\n        \"\"\"\n        i = len(self.imgs)\n        self.imgs.append(d['img'])\n\n        h, w, scale = d['img_size']\n\n        # all anchors\n        self.im_sizes.append((h, w, scale))\n\n        gt_boxes_ = d['gt_boxes'].astype(np.float32) * d['scale']\n        self.gt_boxes.append(gt_boxes_)\n\n        self.gt_classes.append(np.column_stack((\n            i * np.ones(d['gt_classes'].shape[0], dtype=np.int64),\n            d['gt_classes'],\n        )))\n\n        # Add relationship info\n        if self.is_rel:\n            self.gt_rels.append(np.column_stack((\n                i * np.ones(d['gt_relations'].shape[0], dtype=np.int64),\n                d['gt_relations'])))\n\n        # Augment with anchor targets\n        if self.is_train:\n            train_anchors_, train_anchor_inds_, train_anchor_targets_, train_anchor_labels_ = \\\n                anchor_target_layer(gt_boxes_, (h, w))\n\n            self.train_anchors.append(np.hstack((train_anchors_, train_anchor_targets_)))\n\n            
self.train_anchor_labels.append(np.column_stack((\n                i * np.ones(train_anchor_inds_.shape[0], dtype=np.int64),\n                train_anchor_inds_,\n                train_anchor_labels_,\n            )))\n\n        if 'proposals' in d:\n            self.proposals.append(np.column_stack((i * np.ones(d['proposals'].shape[0], dtype=np.float32),\n                                                   d['scale'] * d['proposals'].astype(np.float32))))\n\n\n\n    def _chunkize(self, datom, tensor=torch.LongTensor):\n        \"\"\"\n        Turn data list into chunks, one per GPU\n        :param datom: List of lists of numpy arrays that will be concatenated.\n        :return:\n        \"\"\"\n        chunk_sizes = [0] * self.num_gpus\n        for i in range(self.num_gpus):\n            for j in range(self.batch_size_per_gpu):\n                chunk_sizes[i] += datom[i * self.batch_size_per_gpu + j].shape[0]\n        return Variable(tensor(np.concatenate(datom, 0)), volatile=self.volatile), chunk_sizes\n\n    def reduce(self):\n        \"\"\" Merges all the detections into flat lists + numbers of how many are in each\"\"\"\n        if len(self.imgs) != self.batch_size_per_gpu * self.num_gpus:\n            raise ValueError(\"Wrong batch size? 
imgs len {} bsize/gpu {} numgpus {}\".format(\n                len(self.imgs), self.batch_size_per_gpu, self.num_gpus\n            ))\n\n        self.imgs = Variable(torch.stack(self.imgs, 0), volatile=self.volatile)\n        self.im_sizes = np.stack(self.im_sizes).reshape(\n            (self.num_gpus, self.batch_size_per_gpu, 3))\n\n        if self.is_rel:\n            self.gt_rels, self.gt_rel_chunks = self._chunkize(self.gt_rels)\n\n        self.gt_boxes, self.gt_box_chunks = self._chunkize(self.gt_boxes, tensor=torch.FloatTensor)\n        self.gt_classes, _ = self._chunkize(self.gt_classes)\n        if self.is_train:\n            self.train_anchor_labels, self.train_chunks = self._chunkize(self.train_anchor_labels)\n            self.train_anchors, _ = self._chunkize(self.train_anchors, tensor=torch.FloatTensor)\n            self.train_anchor_inds = self.train_anchor_labels[:, :-1].contiguous()\n\n        if len(self.proposals) != 0:\n            self.proposals, self.proposal_chunks = self._chunkize(self.proposals, tensor=torch.FloatTensor)\n\n\n\n    def _scatter(self, x, chunk_sizes, dim=0):\n        \"\"\" Helper function\"\"\"\n        if self.num_gpus == 1:\n            return x.cuda(self.primary_gpu, async=True)\n        return torch.nn.parallel.scatter_gather.Scatter.apply(\n            list(range(self.num_gpus)), chunk_sizes, dim, x)\n\n    def scatter(self):\n        \"\"\" Assigns everything to the GPUs\"\"\"\n        self.imgs = self._scatter(self.imgs, [self.batch_size_per_gpu] * self.num_gpus)\n\n        self.gt_classes_primary = self.gt_classes.cuda(self.primary_gpu, async=True)\n        self.gt_boxes_primary = self.gt_boxes.cuda(self.primary_gpu, async=True)\n\n        # Predcls might need these\n        self.gt_classes = self._scatter(self.gt_classes, self.gt_box_chunks)\n        self.gt_boxes = self._scatter(self.gt_boxes, self.gt_box_chunks)\n\n        if self.is_train:\n\n            self.train_anchor_inds = 
self._scatter(self.train_anchor_inds,\n                                                   self.train_chunks)\n            self.train_anchor_labels = self.train_anchor_labels.cuda(self.primary_gpu, async=True)\n            self.train_anchors = self.train_anchors.cuda(self.primary_gpu, async=True)\n\n            if self.is_rel:\n                self.gt_rels = self._scatter(self.gt_rels, self.gt_rel_chunks)\n        else:\n            if self.is_rel:\n                self.gt_rels = self.gt_rels.cuda(self.primary_gpu, async=True)\n\n        if self.proposal_chunks is not None:\n            self.proposals = self._scatter(self.proposals, self.proposal_chunks)\n\n    def __getitem__(self, index):\n        \"\"\"\n        Returns a tuple containing data\n        :param index: Which GPU we're on, or 0 if no GPUs\n        :return: If training:\n        (image, im_size, img_start_ind, anchor_inds, anchors, gt_boxes, gt_classes, \n        train_anchor_inds)\n        test:\n        (image, im_size, img_start_ind, anchor_inds, anchors)\n        \"\"\"\n        if index not in list(range(self.num_gpus)):\n            raise ValueError(\"Out of bounds with index {} and {} gpus\".format(index, self.num_gpus))\n\n        if self.is_rel:\n            rels = self.gt_rels\n            if index > 0 or self.num_gpus != 1:\n                rels_i = rels[index] if self.is_rel else None\n        elif self.is_flickr:\n            rels = (self.gt_sents, self.gt_nodes)\n            if index > 0 or self.num_gpus != 1:\n                rels_i = (self.gt_sents[index], self.gt_nodes[index])\n        else:\n            rels = None\n            rels_i = None\n\n        if self.proposal_chunks is None:\n            proposals = None\n        else:\n            proposals = self.proposals\n\n        if index == 0 and self.num_gpus == 1:\n            image_offset = 0\n            if self.is_train:\n                return (self.imgs, self.im_sizes[0], image_offset,\n                        self.gt_boxes, 
self.gt_classes, rels, proposals, self.train_anchor_inds)\n            return self.imgs, self.im_sizes[0], image_offset, self.gt_boxes, self.gt_classes, rels, proposals\n\n        # Otherwise proposals is None\n        assert proposals is None\n\n        image_offset = self.batch_size_per_gpu * index\n        # TODO: Return a namedtuple\n        if self.is_train:\n            return (\n            self.imgs[index], self.im_sizes[index], image_offset,\n            self.gt_boxes[index], self.gt_classes[index], rels_i, None, self.train_anchor_inds[index])\n        return (self.imgs[index], self.im_sizes[index], image_offset,\n                self.gt_boxes[index], self.gt_classes[index], rels_i, None)\n\n"
  },
  {
    "path": "dataloaders/image_transforms.py",
    "content": "# Some image transforms\n\nfrom PIL import Image, ImageOps, ImageFilter, ImageEnhance\nimport numpy as np\nfrom random import randint\n# All of these need to be called on PIL imagez\n\nclass SquarePad(object):\n    def __call__(self, img):\n        w, h = img.size\n        img_padded = ImageOps.expand(img, border=(0, 0, max(h - w, 0), max(w - h, 0)),\n                                     fill=(int(0.485 * 256), int(0.456 * 256), int(0.406 * 256)))\n        return img_padded\n\n\nclass Grayscale(object):\n    \"\"\"\n    Converts to grayscale (not always, sometimes).\n    \"\"\"\n    def __call__(self, img):\n        factor = np.sqrt(np.sqrt(np.random.rand(1)))\n        # print(\"gray {}\".format(factor))\n        enhancer = ImageEnhance.Color(img)\n        return enhancer.enhance(factor)\n\n\nclass Brightness(object):\n    \"\"\"\n    Converts to grayscale (not always, sometimes).\n    \"\"\"\n    def __call__(self, img):\n        factor = np.random.randn(1)/6+1\n        factor = min(max(factor, 0.5), 1.5)\n        # print(\"brightness {}\".format(factor))\n\n        enhancer = ImageEnhance.Brightness(img)\n        return enhancer.enhance(factor)\n\n\nclass Contrast(object):\n    \"\"\"\n    Converts to grayscale (not always, sometimes).\n    \"\"\"\n    def __call__(self, img):\n        factor = np.random.randn(1)/8+1.0\n        factor = min(max(factor, 0.5), 1.5)\n        # print(\"contrast {}\".format(factor))\n\n        enhancer = ImageEnhance.Contrast(img)\n        return enhancer.enhance(factor)\n\n\nclass Hue(object):\n    \"\"\"\n    Converts to grayscale\n    \"\"\"\n    def __call__(self, img):\n        # 30 seems good\n        factor = int(np.random.randn(1)*8)\n        factor = min(max(factor, -30), 30)\n        factor = np.array(factor, dtype=np.uint8)\n\n        hsv = np.array(img.convert('HSV'))\n        hsv[:,:,0] += factor\n        new_img = Image.fromarray(hsv, 'HSV').convert('RGB')\n\n        return new_img\n\n\nclass 
Sharpness(object):\n    \"\"\"\n    Converts to grayscale\n    \"\"\"\n    def __call__(self, img):\n        factor = 1.0 + np.random.randn(1)/5\n        # print(\"sharpness {}\".format(factor))\n        enhancer = ImageEnhance.Sharpness(img)\n        return enhancer.enhance(factor)\n\n\ndef random_crop(img, boxes, box_scale, round_boxes=True, max_crop_fraction=0.1):\n    \"\"\"\n    Randomly crops the image\n    :param img: PIL image\n    :param boxes: Ground truth boxes\n    :param box_scale: This is the scale that the boxes are at (e.g. 1024 wide). We'll preserve that ratio\n    :param round_boxes: Set this to true if we're going to round the boxes to ints\n    :return: Cropped image, new boxes\n    \"\"\"\n\n    w, h = img.size\n\n    max_crop_w = int(w*max_crop_fraction)\n    max_crop_h = int(h*max_crop_fraction)\n    boxes_scaled = boxes * max(w,h) / box_scale\n    max_to_crop_top = min(int(boxes_scaled[:, 1].min()), max_crop_h)\n    max_to_crop_left = min(int(boxes_scaled[:, 0].min()), max_crop_w)\n    max_to_crop_right = min(int(w - boxes_scaled[:, 2].max()), max_crop_w)\n    max_to_crop_bottom = min(int(h - boxes_scaled[:, 3].max()), max_crop_h)\n\n    crop_top = randint(0, max(max_to_crop_top, 0))\n    crop_left = randint(0, max(max_to_crop_left, 0))\n    crop_right = randint(0, max(max_to_crop_right, 0))\n    crop_bottom = randint(0, max(max_to_crop_bottom, 0))\n    img_cropped = img.crop((crop_left, crop_top, w - crop_right, h - crop_bottom))\n\n    new_boxes = box_scale / max(img_cropped.size) * np.column_stack(\n        (boxes_scaled[:,0]-crop_left, boxes_scaled[:,1]-crop_top, boxes_scaled[:,2]-crop_left, boxes_scaled[:,3]-crop_top))\n\n    if round_boxes:\n        new_boxes = np.round(new_boxes).astype(np.int32)\n    return img_cropped, new_boxes\n\n\nclass RandomOrder(object):\n    \"\"\" Composes several transforms together in random order - or not at all!\n    \"\"\"\n\n    def __init__(self, transforms):\n        self.transforms = transforms\n\n  
  def __call__(self, img):\n        if self.transforms is None:\n            return img\n        num_to_pick = np.random.choice(len(self.transforms))\n        if num_to_pick == 0:\n            return img\n\n        order = np.random.choice(len(self.transforms), size=num_to_pick, replace=False)\n        for i in order:\n            img = self.transforms[i](img)\n        return img"
  },
  {
    "path": "dataloaders/mscoco.py",
    "content": "from config import COCO_PATH, IM_SCALE, BOX_SCALE\nimport os\nfrom torch.utils.data import Dataset\nfrom pycocotools.coco import COCO\nfrom PIL import Image\nfrom lib.fpn.anchor_targets import anchor_target_layer\nfrom torchvision.transforms import Resize, Compose, ToTensor, Normalize\nfrom dataloaders.image_transforms import SquarePad, Grayscale, Brightness, Sharpness, Contrast, RandomOrder, Hue, random_crop\nimport numpy as np\nfrom dataloaders.blob import Blob\nimport torch\n\nclass CocoDetection(Dataset):\n    \"\"\"\n    Adapted from the torchvision code\n    \"\"\"\n\n    def __init__(self, mode):\n        \"\"\"\n        :param mode: train2014 or val2014\n        \"\"\"\n        self.mode = mode\n        self.root = os.path.join(COCO_PATH, mode)\n        self.ann_file = os.path.join(COCO_PATH, 'annotations', 'instances_{}.json'.format(mode))\n        self.coco = COCO(self.ann_file)\n        self.ids = [k for k in self.coco.imgs.keys() if len(self.coco.imgToAnns[k]) > 0]\n\n\n        tform = []\n        if self.is_train:\n             tform.append(RandomOrder([\n                 Grayscale(),\n                 Brightness(),\n                 Contrast(),\n                 Sharpness(),\n                 Hue(),\n             ]))\n\n        tform += [\n            SquarePad(),\n            Resize(IM_SCALE),\n            ToTensor(),\n            Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n        ]\n\n        self.transform_pipeline = Compose(tform)\n        self.ind_to_classes = ['__background__'] + [v['name'] for k, v in self.coco.cats.items()]\n        # COCO inds are weird (84 inds in total but a bunch of numbers are skipped)\n        self.id_to_ind = {coco_id:(ind+1) for ind, coco_id in enumerate(self.coco.cats.keys())}\n        self.id_to_ind[0] = 0\n\n        self.ind_to_id = {x:y for y,x in self.id_to_ind.items()}\n\n    @property\n    def is_train(self):\n        return self.mode.startswith('train')\n\n    def 
__getitem__(self, index):\n        \"\"\"\n        Args:\n            index (int): Index\n\n        Returns: entry dict\n        \"\"\"\n        img_id = self.ids[index]\n        path = self.coco.loadImgs(img_id)[0]['file_name']\n        image_unpadded = Image.open(os.path.join(self.root, path)).convert('RGB')\n        ann_ids = self.coco.getAnnIds(imgIds=img_id)\n        anns = self.coco.loadAnns(ann_ids)\n        gt_classes = np.array([self.id_to_ind[x['category_id']] for x in anns], dtype=np.int64)\n\n        if np.any(gt_classes >= len(self.ind_to_classes)):\n            raise ValueError(\"OH NO {}\".format(index))\n\n        if len(anns) == 0:\n            raise ValueError(\"Annotations should not be empty\")\n        #     gt_boxes = np.array((0, 4), dtype=np.float32)\n        # else:\n        gt_boxes = np.array([x['bbox'] for x in anns], dtype=np.float32)\n\n        if np.any(gt_boxes[:, [0,1]] < 0):\n            raise ValueError(\"GT boxes empty columns\")\n        if np.any(gt_boxes[:, [2,3]] < 0):\n            raise ValueError(\"GT boxes empty h/w\")\n        gt_boxes[:, [2, 3]] += gt_boxes[:, [0, 1]]\n\n        # Rescale so that the boxes are at BOX_SCALE\n        if self.is_train:\n            image_unpadded, gt_boxes = random_crop(image_unpadded,\n                                                   gt_boxes * BOX_SCALE / max(image_unpadded.size),\n                                                   BOX_SCALE,\n                                                   round_boxes=False,\n                                                   )\n        else:\n            # Seems a bit silly because we won't be using GT boxes then but whatever\n            gt_boxes = gt_boxes * BOX_SCALE / max(image_unpadded.size)\n        w, h = image_unpadded.size\n        box_scale_factor = BOX_SCALE / max(w, h)\n\n        # Optionally flip the image if we're doing training\n        flipped = self.is_train and np.random.random() > 0.5\n        if flipped:\n            scaled_w = 
int(box_scale_factor * float(w))\n            image_unpadded = image_unpadded.transpose(Image.FLIP_LEFT_RIGHT)\n            gt_boxes[:, [0, 2]] = scaled_w - gt_boxes[:, [2, 0]]\n\n        img_scale_factor = IM_SCALE / max(w, h)\n        if h > w:\n            im_size = (IM_SCALE, int(w*img_scale_factor), img_scale_factor)\n        elif h < w:\n            im_size = (int(h*img_scale_factor), IM_SCALE, img_scale_factor)\n        else:\n            im_size = (IM_SCALE, IM_SCALE, img_scale_factor)\n\n        entry = {\n            'img': self.transform_pipeline(image_unpadded),\n            'img_size': im_size,\n            'gt_boxes': gt_boxes,\n            'gt_classes': gt_classes,\n            'scale': IM_SCALE / BOX_SCALE,\n            'index': index,\n            'image_id': img_id,\n            'flipped': flipped,\n            'fn': path,\n        }\n\n        return entry\n\n    @classmethod\n    def splits(cls, *args, **kwargs):\n        \"\"\" Helper method to generate splits of the dataset\"\"\"\n        train = cls('train2014', *args, **kwargs)\n        val = cls('val2014', *args, **kwargs)\n        return train, val\n\n    def __len__(self):\n        return len(self.ids)\n\n\ndef coco_collate(data, num_gpus=3, is_train=False):\n    blob = Blob(mode='det', is_train=is_train, num_gpus=num_gpus,\n                batch_size_per_gpu=len(data) // num_gpus)\n    for d in data:\n        blob.append(d)\n    blob.reduce()\n    return blob\n\n\nclass CocoDataLoader(torch.utils.data.DataLoader):\n    \"\"\"\n    Iterates through the data, filtering out None,\n     but also loads everything as a (cuda) variable\n    \"\"\"\n    # def __iter__(self):\n    #     for x in super(CocoDataLoader, self).__iter__():\n    #         if isinstance(x, tuple) or isinstance(x, list):\n    #             yield tuple(y.cuda(async=True) if hasattr(y, 'cuda') else y for y in x)\n    #         else:\n    #             yield x.cuda(async=True)\n\n    @classmethod\n    def splits(cls, 
train_data, val_data, batch_size=3, num_workers=1, num_gpus=3, **kwargs):\n        train_load = cls(\n            dataset=train_data,\n            batch_size=batch_size*num_gpus,\n            shuffle=True,\n            num_workers=num_workers,\n            collate_fn=lambda x: coco_collate(x, num_gpus=num_gpus, is_train=True),\n            drop_last=True,\n            # pin_memory=True,\n            **kwargs,\n        )\n        val_load = cls(\n            dataset=val_data,\n            batch_size=batch_size*num_gpus,\n            shuffle=False,\n            num_workers=num_workers,\n            collate_fn=lambda x: coco_collate(x, num_gpus=num_gpus, is_train=False),\n            drop_last=True,\n            # pin_memory=True,\n            **kwargs,\n        )\n        return train_load, val_load\n\n\nif __name__ == '__main__':\n    train, val = CocoDetection.splits()\n    gtbox = train[0]['gt_boxes']\n    img_size = train[0]['img_size']\n    anchor_strides, labels, bbox_targets = anchor_target_layer(gtbox, img_size)\n"
  },
  {
    "path": "dataloaders/visual_genome.py",
    "content": "\"\"\"\nFile that involves dataloaders for the Visual Genome dataset.\n\"\"\"\n\nimport json\nimport os\n\nimport h5py\nimport numpy as np\nimport torch\nfrom PIL import Image\nfrom torch.utils.data import Dataset\nfrom torchvision.transforms import Resize, Compose, ToTensor, Normalize\nfrom dataloaders.blob import Blob\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom config import VG_IMAGES, IM_DATA_FN, VG_SGG_FN, VG_SGG_DICT_FN, BOX_SCALE, IM_SCALE, PROPOSAL_FN\nfrom dataloaders.image_transforms import SquarePad, Grayscale, Brightness, Sharpness, Contrast, \\\n    RandomOrder, Hue, random_crop\nfrom collections import defaultdict\nfrom pycocotools.coco import COCO\n\n\nclass VG(Dataset):\n    def __init__(self, mode, roidb_file=VG_SGG_FN, dict_file=VG_SGG_DICT_FN,\n                 image_file=IM_DATA_FN, filter_empty_rels=True, num_im=-1, num_val_im=5000,\n                 filter_duplicate_rels=True, filter_non_overlap=True,\n                 use_proposals=False):\n        \"\"\"\n        Torch dataset for VisualGenome\n        :param mode: Must be train, test, or val\n        :param roidb_file:  HDF5 containing the GT boxes, classes, and relationships\n        :param dict_file: JSON Contains mapping of classes/relationships to words\n        :param image_file: HDF5 containing image filenames\n        :param filter_empty_rels: True if we filter out images without relationships between\n                             boxes. One might want to set this to false if training a detector.\n        :param filter_duplicate_rels: Whenever we see a duplicate relationship we'll sample instead\n        :param num_im: Number of images in the entire dataset. -1 for all images.\n        :param num_val_im: Number of images in the validation set (must be less than num_im\n               unless num_im is -1.)\n        :param proposal_file: If None, we don't provide proposals. 
Otherwise file for where we get RPN\n            proposals\n        \"\"\"\n        if mode not in ('test', 'train', 'val'):\n            raise ValueError(\"Mode must be in test, train, or val. Supplied {}\".format(mode))\n        self.mode = mode\n\n        # Initialize\n        self.roidb_file = roidb_file\n        self.dict_file = dict_file\n        self.image_file = image_file\n        self.filter_non_overlap = filter_non_overlap\n        self.filter_duplicate_rels = filter_duplicate_rels and self.mode == 'train'\n\n        self.split_mask, self.gt_boxes, self.gt_classes, self.relationships = load_graphs(\n            self.roidb_file, self.mode, num_im, num_val_im=num_val_im,\n            filter_empty_rels=filter_empty_rels,\n            filter_non_overlap=self.filter_non_overlap and self.is_train,\n        )\n\n        self.filenames = load_image_filenames(image_file)\n        self.filenames = [self.filenames[i] for i in np.where(self.split_mask)[0]]\n\n        self.ind_to_classes, self.ind_to_predicates = load_info(dict_file)\n\n        if use_proposals:\n            print(\"Loading proposals\", flush=True)\n            p_h5 = h5py.File(PROPOSAL_FN, 'r')\n            rpn_rois = p_h5['rpn_rois']\n            rpn_scores = p_h5['rpn_scores']\n            rpn_im_to_roi_idx = np.array(p_h5['im_to_roi_idx'][self.split_mask])\n            rpn_num_rois = np.array(p_h5['num_rois'][self.split_mask])\n\n            self.rpn_rois = []\n            for i in range(len(self.filenames)):\n                rpn_i = np.column_stack((\n                    rpn_scores[rpn_im_to_roi_idx[i]:rpn_im_to_roi_idx[i] + rpn_num_rois[i]],\n                    rpn_rois[rpn_im_to_roi_idx[i]:rpn_im_to_roi_idx[i] + rpn_num_rois[i]],\n                ))\n                self.rpn_rois.append(rpn_i)\n        else:\n            self.rpn_rois = None\n\n        # You could add data augmentation here. 
But we didn't.\n        # tform = []\n        # if self.is_train:\n        #     tform.append(RandomOrder([\n        #         Grayscale(),\n        #         Brightness(),\n        #         Contrast(),\n        #         Sharpness(),\n        #         Hue(),\n        #     ]))\n\n        tform = [\n            SquarePad(),\n            Resize(IM_SCALE),\n            ToTensor(),\n            Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n        ]\n        self.transform_pipeline = Compose(tform)\n\n    @property\n    def coco(self):\n        \"\"\"\n        :return: a Coco-like object that we can use to evaluate detection!\n        \"\"\"\n        anns = []\n        for i, (cls_array, box_array) in enumerate(zip(self.gt_classes, self.gt_boxes)):\n            for cls, box in zip(cls_array.tolist(), box_array.tolist()):\n                anns.append({\n                    'area': (box[3] - box[1] + 1) * (box[2] - box[0] + 1),\n                    'bbox': [box[0], box[1], box[2] - box[0] + 1, box[3] - box[1] + 1],\n                    'category_id': cls,\n                    'id': len(anns),\n                    'image_id': i,\n                    'iscrowd': 0,\n                })\n        fauxcoco = COCO()\n        fauxcoco.dataset = {\n            'info': {'description': 'ayy lmao'},\n            'images': [{'id': i} for i in range(self.__len__())],\n            'categories': [{'supercategory': 'person',\n                               'id': i, 'name': name} for i, name in enumerate(self.ind_to_classes) if name != '__background__'],\n            'annotations': anns,\n        }\n        fauxcoco.createIndex()\n        return fauxcoco\n\n    @property\n    def is_train(self):\n        return self.mode.startswith('train')\n\n    @classmethod\n    def splits(cls, *args, **kwargs):\n        \"\"\" Helper method to generate splits of the dataset\"\"\"\n        train = cls('train', *args, **kwargs)\n        val = cls('val', *args, **kwargs)\n        
test = cls('test', *args, **kwargs)\n        return train, val, test\n\n    def __getitem__(self, index):\n        image_unpadded = Image.open(self.filenames[index]).convert('RGB')\n\n        # Optionally flip the image if we're doing training\n        flipped = self.is_train and np.random.random() > 0.5\n        gt_boxes = self.gt_boxes[index].copy()\n\n        # Boxes are already at BOX_SCALE\n        if self.is_train:\n            # crop boxes that are too large. This seems to be only a problem for image heights, but whatevs\n            gt_boxes[:, [1, 3]] = gt_boxes[:, [1, 3]].clip(\n                None, BOX_SCALE / max(image_unpadded.size) * image_unpadded.size[1])\n            gt_boxes[:, [0, 2]] = gt_boxes[:, [0, 2]].clip(\n                None, BOX_SCALE / max(image_unpadded.size) * image_unpadded.size[0])\n\n            # # crop the image for data augmentation\n            # image_unpadded, gt_boxes = random_crop(image_unpadded, gt_boxes, BOX_SCALE, round_boxes=True)\n\n        w, h = image_unpadded.size\n        box_scale_factor = BOX_SCALE / max(w, h)\n\n        if flipped:\n            scaled_w = int(box_scale_factor * float(w))\n            # print(\"Scaled w is {}\".format(scaled_w))\n            image_unpadded = image_unpadded.transpose(Image.FLIP_LEFT_RIGHT)\n            gt_boxes[:, [0, 2]] = scaled_w - gt_boxes[:, [2, 0]]\n\n        img_scale_factor = IM_SCALE / max(w, h)\n        if h > w:\n            im_size = (IM_SCALE, int(w * img_scale_factor), img_scale_factor)\n        elif h < w:\n            im_size = (int(h * img_scale_factor), IM_SCALE, img_scale_factor)\n        else:\n            im_size = (IM_SCALE, IM_SCALE, img_scale_factor)\n\n        gt_rels = self.relationships[index].copy()\n        if self.filter_duplicate_rels:\n            # Filter out dupes!\n            assert self.mode == 'train'\n            old_size = gt_rels.shape[0]\n            all_rel_sets = defaultdict(list)\n            for (o0, o1, r) in gt_rels:\n              
  all_rel_sets[(o0, o1)].append(r)\n            gt_rels = [(k[0], k[1], np.random.choice(v)) for k,v in all_rel_sets.items()]\n            gt_rels = np.array(gt_rels)\n\n        entry = {\n            'img': self.transform_pipeline(image_unpadded),\n            'img_size': im_size,\n            'gt_boxes': gt_boxes,\n            'gt_classes': self.gt_classes[index].copy(),\n            'gt_relations': gt_rels,\n            'scale': IM_SCALE / BOX_SCALE,  # Multiply the boxes by this.\n            'index': index,\n            'flipped': flipped,\n            'fn': self.filenames[index],\n        }\n\n        if self.rpn_rois is not None:\n            entry['proposals'] = self.rpn_rois[index]\n\n        assertion_checks(entry)\n        return entry\n\n    def __len__(self):\n        return len(self.filenames)\n\n    @property\n    def num_predicates(self):\n        return len(self.ind_to_predicates)\n\n    @property\n    def num_classes(self):\n        return len(self.ind_to_classes)\n\n\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n# MISC. 
HELPER FUNCTIONS ~~~~~~~~~~~~~~~~~~~~~\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\ndef assertion_checks(entry):\n    im_size = tuple(entry['img'].size())\n    if len(im_size) != 3:\n        raise ValueError(\"Img must be dim-3\")\n\n    c, h, w = entry['img'].size()\n    if c != 3:\n        raise ValueError(\"Must have 3 color channels\")\n\n    num_gt = entry['gt_boxes'].shape[0]\n    if entry['gt_classes'].shape[0] != num_gt:\n        raise ValueError(\"GT classes and GT boxes must have same number of examples\")\n\n    assert (entry['gt_boxes'][:, 2] >= entry['gt_boxes'][:, 0]).all()\n    assert (entry['gt_boxes'] >= -1).all()\n\n\ndef load_image_filenames(image_file, image_dir=VG_IMAGES):\n    \"\"\"\n    Loads the image filenames from visual genome from the JSON file that contains them.\n    This matches the preprocessing in scene-graph-TF-release/data_tools/vg_to_imdb.py.\n    :param image_file: JSON file. Elements contain the param \"image_id\".\n    :param image_dir: directory where the VisualGenome images are located\n    :return: List of filenames corresponding to the good images\n    \"\"\"\n    with open(image_file, 'r') as f:\n        im_data = json.load(f)\n\n    corrupted_ims = ['1592.jpg', '1722.jpg', '4616.jpg', '4617.jpg']\n    fns = []\n    for i, img in enumerate(im_data):\n        basename = '{}.jpg'.format(img['image_id'])\n        if basename in corrupted_ims:\n            continue\n\n        filename = os.path.join(image_dir, basename)\n        if os.path.exists(filename):\n            fns.append(filename)\n    assert len(fns) == 108073\n    return fns\n\n\ndef load_graphs(graphs_file, mode='train', num_im=-1, num_val_im=0, filter_empty_rels=True,\n                filter_non_overlap=False):\n    \"\"\"\n    Load the file containing the GT boxes and relations, as well as the dataset split\n    :param graphs_file: HDF5\n    :param mode: (train, val, or test)\n    :param num_im: Number of images we want\n    :param num_val_im: Number of 
validation images\n    :param filter_empty_rels: If True, filter out images without relationships.\n    :param filter_non_overlap: If training, filter out relations between boxes that don't overlap.\n    :return: image_index: numpy array corresponding to the index of images we're using\n             boxes: List where each element is a [num_gt, 4] array of ground \n                    truth boxes (x1, y1, x2, y2)\n             gt_classes: List where each element is a [num_gt] array of classes\n             relationships: List where each element is a [num_r, 3] array of \n                    (box_ind_1, box_ind_2, predicate) relationships\n    \"\"\"\n    if mode not in ('train', 'val', 'test'):\n        raise ValueError('{} invalid'.format(mode))\n\n    roi_h5 = h5py.File(graphs_file, 'r')\n    data_split = roi_h5['split'][:]\n    split = 2 if mode == 'test' else 0\n    split_mask = data_split == split\n\n    # Filter out images without bounding boxes\n    split_mask &= roi_h5['img_to_first_box'][:] >= 0\n    if filter_empty_rels:\n        split_mask &= roi_h5['img_to_first_rel'][:] >= 0\n\n    image_index = np.where(split_mask)[0]\n    if num_im > -1:\n        image_index = image_index[:num_im]\n    if num_val_im > 0:\n        if mode == 'val':\n            image_index = image_index[:num_val_im]\n        elif mode == 'train':\n            image_index = image_index[num_val_im:]\n\n    split_mask = np.zeros_like(data_split).astype(bool)\n    split_mask[image_index] = True\n\n    # Get box information\n    all_labels = roi_h5['labels'][:, 0]\n    all_boxes = roi_h5['boxes_{}'.format(BOX_SCALE)][:]  # will index later\n    assert np.all(all_boxes[:, :2] >= 0)  # sanity check\n    assert np.all(all_boxes[:, 2:] > 0)  # no empty box\n\n    # convert from xc, yc, w, h to x1, y1, x2, y2\n    all_boxes[:, :2] = all_boxes[:, :2] - all_boxes[:, 2:] / 2\n    all_boxes[:, 2:] = all_boxes[:, :2] + all_boxes[:, 2:]\n\n    im_to_first_box = roi_h5['img_to_first_box'][split_mask]\n    im_to_last_box = 
roi_h5['img_to_last_box'][split_mask]\n    im_to_first_rel = roi_h5['img_to_first_rel'][split_mask]\n    im_to_last_rel = roi_h5['img_to_last_rel'][split_mask]\n\n    # load relation labels\n    _relations = roi_h5['relationships'][:]\n    _relation_predicates = roi_h5['predicates'][:, 0]\n    assert (im_to_first_rel.shape[0] == im_to_last_rel.shape[0])\n    assert (_relations.shape[0] == _relation_predicates.shape[0])  # sanity check\n\n    # Get everything by image.\n    boxes = []\n    gt_classes = []\n    relationships = []\n    for i in range(len(image_index)):\n        boxes_i = all_boxes[im_to_first_box[i]:im_to_last_box[i] + 1, :]\n        gt_classes_i = all_labels[im_to_first_box[i]:im_to_last_box[i] + 1]\n\n        if im_to_first_rel[i] >= 0:\n            predicates = _relation_predicates[im_to_first_rel[i]:im_to_last_rel[i] + 1]\n            obj_idx = _relations[im_to_first_rel[i]:im_to_last_rel[i] + 1] - im_to_first_box[i]\n            assert np.all(obj_idx >= 0)\n            assert np.all(obj_idx < boxes_i.shape[0])\n            rels = np.column_stack((obj_idx, predicates))\n        else:\n            assert not filter_empty_rels\n            rels = np.zeros((0, 3), dtype=np.int32)\n\n        if filter_non_overlap:\n            assert mode == 'train'\n            inters = bbox_overlaps(boxes_i, boxes_i)\n            rel_overs = inters[rels[:, 0], rels[:, 1]]\n            inc = np.where(rel_overs > 0.0)[0]\n\n            if inc.size > 0:\n                rels = rels[inc]\n            else:\n                split_mask[image_index[i]] = 0\n                continue\n\n        boxes.append(boxes_i)\n        gt_classes.append(gt_classes_i)\n        relationships.append(rels)\n\n    return split_mask, boxes, gt_classes, relationships\n\n\ndef load_info(info_file):\n    \"\"\"\n    Loads the file containing the visual genome label meanings\n    :param info_file: JSON\n    :return: ind_to_classes: sorted list of classes\n             ind_to_predicates: sorted 
list of predicates\n    \"\"\"\n    info = json.load(open(info_file, 'r'))\n    info['label_to_idx']['__background__'] = 0\n    info['predicate_to_idx']['__background__'] = 0\n\n    class_to_ind = info['label_to_idx']\n    predicate_to_ind = info['predicate_to_idx']\n    ind_to_classes = sorted(class_to_ind, key=lambda k: class_to_ind[k])\n    ind_to_predicates = sorted(predicate_to_ind, key=lambda k: predicate_to_ind[k])\n\n    return ind_to_classes, ind_to_predicates\n\n\ndef vg_collate(data, num_gpus=3, is_train=False, mode='det'):\n    assert mode in ('det', 'rel')\n    blob = Blob(mode=mode, is_train=is_train, num_gpus=num_gpus,\n                batch_size_per_gpu=len(data) // num_gpus)\n    for d in data:\n        blob.append(d)\n    blob.reduce()\n    return blob\n\n\nclass VGDataLoader(torch.utils.data.DataLoader):\n    \"\"\"\n    Iterates through the data, filtering out None,\n     but also loads everything as a (cuda) variable\n    \"\"\"\n\n    @classmethod\n    def splits(cls, train_data, val_data, batch_size=3, num_workers=1, num_gpus=3, mode='det',\n               **kwargs):\n        assert mode in ('det', 'rel')\n        train_load = cls(\n            dataset=train_data,\n            batch_size=batch_size * num_gpus,\n            shuffle=True,\n            num_workers=num_workers,\n            collate_fn=lambda x: vg_collate(x, mode=mode, num_gpus=num_gpus, is_train=True),\n            drop_last=True,\n            # pin_memory=True,\n            **kwargs,\n        )\n        val_load = cls(\n            dataset=val_data,\n            batch_size=batch_size * num_gpus if mode=='det' else num_gpus,\n            shuffle=False,\n            num_workers=num_workers,\n            collate_fn=lambda x: vg_collate(x, mode=mode, num_gpus=num_gpus, is_train=False),\n            drop_last=True,\n            # pin_memory=True,\n            **kwargs,\n        )\n        return train_load, val_load\n"
  },
  {
    "path": "docs/LICENSE.md",
    "content": "MIT License\n\nCopyright (c) 2017 Heiswayi Nrird\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "docs/_config.yaml",
    "content": "exclude: [README.md, LICENSE.md]\n\ndefaults:\n  - values:\n      layout: default\n"
  },
  {
    "path": "docs/_includes/image.html",
    "content": "<div class=\"image-wrapper\">\n    <img src=\"{{ include.url }}\" alt=\"{{ include.description }}\" />\n</div>\n\n"
  },
  {
    "path": "docs/_layouts/default.html",
    "content": "<!DOCTYPE html>\n<html>\n\n<head>\n    <meta charset=\"UTF-8\">\n    <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge,chrome=1\">\n    <meta name=\"viewport\" content=\"width=device-width,initial-scale=1\">\n    <title>{{ page.title }}</title>\n    <meta name=\"author\" content=\"{{ page.author }}\">\n    <meta name=\"description\" content=\"{{ page.description }}\">\n    <meta name=\"keywords\" content=\"{{ page.keywords }}\">\n    <link href=\"https://fonts.googleapis.com/css?family=Bungee+Shade|Droid+Sans\" rel=\"stylesheet\">\n    <style type=\"text/css\">\n        a {\n            color: #357edd;\n        }\n        a:hover {\n            color: #e7040f;\n        }\n        body {\n            font-size: 1.1em;\n            line-height: 1.2em;\n            margin: auto;\n            max-width: 700px;\n            width: 90%;\n            font-family: 'Droid Sans', sans-serif;\n        }\n        h1,\n        h2 {\n            letter-spacing: -.028em;\n            margin-top: 1em;\n            line-height: 2em;\n        }\n        pre,\n        code {\n            font-size: .9em;\n            font-family: Monaco, 'Lucida Console', monospace;\n            color: #f14e32;\n        }\n        pre {\n            overflow-x: auto;\n            margin: 0 0 2em 0;\n        }\n        .image-wrapper{\n          max-width:90%;\n          height:auto;\n          position: relative;\n          display:block;\n          margin:0 auto;\n        }\n\n        .image-wrapper img{\n          max-width:100% !important;\n          height:auto;\n          display:block;\n        }\n\n        /*.site-title {*/\n          /*font-family: 'Bungee Shade', cursive;*/\n        /*}*/\n    </style>\n    <script type=\"text/javascript\">\n        var _gaq = _gaq || [];\n        _gaq.push(['_setAccount', '{{ page.google_analytics_id }}']);\n        _gaq.push(['_trackPageview']);\n        (function() {\n            var ga = document.createElement('script');\n            
ga.type = 'text/javascript';\n            ga.async = true;\n            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';\n            var s = document.getElementsByTagName('script')[0];\n            s.parentNode.insertBefore(ga, s);\n        })();\n    </script>\n</head>\n\n<body>\n    <!--<h1 class=\"site-title\">{{ page.title }}</h1>-->\n    {{ content }}\n</body>\n\n</html>\n"
  },
  {
    "path": "docs/index.md",
    "content": "---\npermalink: /\ntitle: Neural Motifs\nauthor: Rowan Zellers\ndescription: Scene Graph Parsing with Global Context (CVPR 2018)\ngoogle_analytics_id: UA-84290243-3\n---\n# Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018)\n\n### by [Rowan Zellers](https://rowanzellers.com), [Mark Yatskar](https://homes.cs.washington.edu/~my89/), [Sam Thomson](https://http://samthomson.com/), [Yejin Choi](https://homes.cs.washington.edu/~yejin/)\n\n\n{% include image.html url=\"teaser.png\" description=\"teaser\" %} \n\n# Overview\n\n* In this work, we investigate the problem of producing structured graph representations of visual scenes. Similar to object detection, we must predict a box around each object. Here, we also need to predict an edge (with one of several labels, possibly `background`) between every ordered pair of boxes, producing a directed graph where the edges hopefully represent the semantics and interactions present in the scene.\n* We present an analysis of the [Visual Genome Scene Graphs dataset](http://visualgenome.org/). In particular:\n    * Object labels (e.g. person, shirt) are highly predictive of edge labels (e.g. wearing), but **not vice versa**.\n    * Over 90% of the edges in the dataset are non-semantic.\n    * There is a significant amount of structure in the dataset, in the form of graph motifs (regularly appearing substructures). \n* Motivated by our analysis, we present a simple baseline that outperforms previous approaches.\n* We introduce Stacked Motif Networks (MotifNet), which is a novel architecture that is designed to capture higher order motifs in scene graphs. 
In doing so, it achieves a sizeable performance gain over prior state-of-the-art.\n\n# Read the paper!\nThe old version of the paper is available at [arxiv link](https://arxiv.org/abs/1711.06640) - camera ready version coming soon!\n\n# Bibtex\n```\n@inproceedings{zellers2018scenegraphs,\n  title={Neural Motifs: Scene Graph Parsing with Global Context},\n  author={Zellers, Rowan and Yatskar, Mark and Thomson, Sam and Choi, Yejin},\n  booktitle = \"Conference on Computer Vision and Pattern Recognition\",  \n  year={2018}\n}\n```\n\n# View some examples!\n\nCheck out [this tool](https://rowanzellers.com/scenegraph2/) I made to visualize the scene graph predictions. Disclaimer: the predictions are from an earlier version of the model, but hopefully they're still helpful!\n\n# Code\n\nVisit the [`neural-motifs` GitHub repository](https://github.com/rowanz/neural-motifs) for our reference implementation and instructions for running our code.\n\nIt is released under the MIT license.\n\n# Checkpoints available for download\n* [Pretrained Detector](https://drive.google.com/open?id=11zKRr2OF5oclFL47kjFYBOxScotQzArX)\n* [Motifnet-SGDet](https://drive.google.com/open?id=1thd_5uSamJQaXAPVGVOUZGAOfGCYZYmb)\n* [Motifnet-SGCls/PredCls](https://drive.google.com/open?id=12qziGKYjFD3LAnoy4zDT3bcg5QLC0qN6)\n\n# questions?\n\nFeel free to get in touch! My main website is at [rowanzellers.com](https://rowanzellers.com)\n"
  },
  {
    "path": "docs/upload.sh",
    "content": "#!/usr/bin/env bash\n\nscp -r _site/* USERNAME@SITE:~/rowanzellers.com/neuralmotifs"
  },
  {
    "path": "lib/__init__.py",
    "content": ""
  },
  {
    "path": "lib/draw_rectangles/draw_rectangles.c",
    "content": "/* Generated by Cython 0.25.2 */\n\n/* BEGIN: Cython Metadata\n{\n    \"distutils\": {\n        \"depends\": []\n    },\n    \"module_name\": \"draw_rectangles\"\n}\nEND: Cython Metadata */\n\n#define PY_SSIZE_T_CLEAN\n#include \"Python.h\"\n#ifndef Py_PYTHON_H\n    #error Python headers needed to compile C extensions, please install development version of Python.\n#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000)\n    #error Cython requires Python 2.6+ or Python 3.2+.\n#else\n#define CYTHON_ABI \"0_25_2\"\n#include <stddef.h>\n#ifndef offsetof\n  #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )\n#endif\n#if !defined(WIN32) && !defined(MS_WINDOWS)\n  #ifndef __stdcall\n    #define __stdcall\n  #endif\n  #ifndef __cdecl\n    #define __cdecl\n  #endif\n  #ifndef __fastcall\n    #define __fastcall\n  #endif\n#endif\n#ifndef DL_IMPORT\n  #define DL_IMPORT(t) t\n#endif\n#ifndef DL_EXPORT\n  #define DL_EXPORT(t) t\n#endif\n#ifndef HAVE_LONG_LONG\n  #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000)\n    #define HAVE_LONG_LONG\n  #endif\n#endif\n#ifndef PY_LONG_LONG\n  #define PY_LONG_LONG LONG_LONG\n#endif\n#ifndef Py_HUGE_VAL\n  #define Py_HUGE_VAL HUGE_VAL\n#endif\n#ifdef PYPY_VERSION\n  #define CYTHON_COMPILING_IN_PYPY 1\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #undef CYTHON_USE_TYPE_SLOTS\n  #define CYTHON_USE_TYPE_SLOTS 0\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_INTERNALS\n  #define CYTHON_USE_UNICODE_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #undef CYTHON_AVOID_BORROWED_REFS\n  #define CYTHON_AVOID_BORROWED_REFS 1\n  #undef 
CYTHON_ASSUME_SAFE_MACROS\n  #define CYTHON_ASSUME_SAFE_MACROS 0\n  #undef CYTHON_UNPACK_METHODS\n  #define CYTHON_UNPACK_METHODS 0\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#elif defined(PYSTON_VERSION)\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 1\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#else\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 1\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #if PY_MAJOR_VERSION < 3\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYLONG_INTERNALS\n    #define CYTHON_USE_PYLONG_INTERNALS 0\n  #elif !defined(CYTHON_USE_PYLONG_INTERNALS)\n    #define CYTHON_USE_PYLONG_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_PYLIST_INTERNALS\n    #define CYTHON_USE_PYLIST_INTERNALS 1\n  #endif\n  
#ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #if PY_VERSION_HEX < 0x030300F0\n    #undef CYTHON_USE_UNICODE_WRITER\n    #define CYTHON_USE_UNICODE_WRITER 0\n  #elif !defined(CYTHON_USE_UNICODE_WRITER)\n    #define CYTHON_USE_UNICODE_WRITER 1\n  #endif\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #ifndef CYTHON_FAST_THREAD_STATE\n    #define CYTHON_FAST_THREAD_STATE 1\n  #endif\n  #ifndef CYTHON_FAST_PYCALL\n    #define CYTHON_FAST_PYCALL 1\n  #endif\n#endif\n#if !defined(CYTHON_FAST_PYCCALL)\n#define CYTHON_FAST_PYCCALL  (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)\n#endif\n#if CYTHON_USE_PYLONG_INTERNALS\n  #include \"longintrepr.h\"\n  #undef SHIFT\n  #undef BASE\n  #undef MASK\n#endif\n#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)\n  #define Py_OptimizeFlag 0\n#endif\n#define __PYX_BUILD_PY_SSIZE_T \"n\"\n#define CYTHON_FORMAT_SSIZE_T \"z\"\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_BUILTIN_MODULE_NAME \"__builtin__\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyClass_Type\n#else\n  #define __Pyx_BUILTIN_MODULE_NAME \"builtins\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyType_Type\n#endif\n#ifndef Py_TPFLAGS_CHECKTYPES\n  #define Py_TPFLAGS_CHECKTYPES 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_INDEX\n  #define Py_TPFLAGS_HAVE_INDEX 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_NEWBUFFER\n  #define Py_TPFLAGS_HAVE_NEWBUFFER 
0\n#endif\n#ifndef Py_TPFLAGS_HAVE_FINALIZE\n  #define Py_TPFLAGS_HAVE_FINALIZE 0\n#endif\n#ifndef METH_FASTCALL\n  #define METH_FASTCALL 0x80\n  typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args,\n                                              Py_ssize_t nargs, PyObject *kwnames);\n#else\n  #define __Pyx_PyCFunctionFast _PyCFunctionFast\n#endif\n#if CYTHON_FAST_PYCCALL\n#define __Pyx_PyFastCFunction_Check(func)\\\n    ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST)))))\n#else\n#define __Pyx_PyFastCFunction_Check(func) 0\n#endif\n#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)\n  #define CYTHON_PEP393_ENABLED 1\n  #define __Pyx_PyUnicode_READY(op)       (likely(PyUnicode_IS_READY(op)) ?\\\n                                              0 : _PyUnicode_Ready((PyObject *)(op)))\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_LENGTH(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   PyUnicode_MAX_CHAR_VALUE(u)\n  #define __Pyx_PyUnicode_KIND(u)         PyUnicode_KIND(u)\n  #define __Pyx_PyUnicode_DATA(u)         PyUnicode_DATA(u)\n  #define __Pyx_PyUnicode_READ(k, d, i)   PyUnicode_READ(k, d, i)\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  PyUnicode_WRITE(k, d, i, ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))\n#else\n  #define CYTHON_PEP393_ENABLED 0\n  #define PyUnicode_1BYTE_KIND  1\n  #define PyUnicode_2BYTE_KIND  2\n  #define PyUnicode_4BYTE_KIND  4\n  #define __Pyx_PyUnicode_READY(op)       (0)\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_SIZE(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111)\n  #define __Pyx_PyUnicode_KIND(u)         (sizeof(Py_UNICODE))\n  #define __Pyx_PyUnicode_DATA(u)         ((void*)PyUnicode_AS_UNICODE(u))\n  #define __Pyx_PyUnicode_READ(k, d, i)   ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  (((void)(k)), ((Py_UNICODE*)d)[i] = ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != PyUnicode_GET_SIZE(u))\n#endif\n#if CYTHON_COMPILING_IN_PYPY\n  #define __Pyx_PyUnicode_Concat(a, b)      PyNumber_Add(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  PyNumber_Add(a, b)\n#else\n  #define __Pyx_PyUnicode_Concat(a, b)      PyUnicode_Concat(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\\\n      PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)\n  #define PyUnicode_Contains(u, s)  PySequence_Contains(u, s)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)\n  #define PyByteArray_Check(obj)  PyObject_TypeCheck(obj, &PyByteArray_Type)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)\n  #define PyObject_Format(obj, fmt)  PyObject_CallMethod(obj, \"__format__\", \"O\", fmt)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)\n  #define PyObject_Malloc(s)   PyMem_Malloc(s)\n  #define PyObject_Free(p)     PyMem_Free(p)\n  #define PyObject_Realloc(p)  PyMem_Realloc(p)\n#endif\n#if CYTHON_COMPILING_IN_PYSTON\n  #define __Pyx_PyCode_HasFreeVars(co)  PyCode_HasFreeVars(co)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)\n#else\n  #define __Pyx_PyCode_HasFreeVars(co)  (PyCode_GetNumFree(co) > 0)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno)  (frame)->f_lineno = (lineno)\n#endif\n#define __Pyx_PyString_FormatSafe(a, b)   ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))\n#define __Pyx_PyUnicode_FormatSafe(a, b)  ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyString_Format(a, b)  PyUnicode_Format(a, b)\n#else\n  #define __Pyx_PyString_Format(a, b)  PyString_Format(a, b)\n#endif\n#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)\n  #define PyObject_ASCII(o)            PyObject_Repr(o)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBaseString_Type            PyUnicode_Type\n  #define PyStringObject               PyUnicodeObject\n  #define PyString_Type                PyUnicode_Type\n  #define PyString_Check               PyUnicode_Check\n  #define PyString_CheckExact          PyUnicode_CheckExact\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)\n  #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)\n#else\n  #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))\n  #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))\n#endif\n#ifndef PySet_CheckExact\n  #define PySet_CheckExact(obj)        (Py_TYPE(obj) == &PySet_Type)\n#endif\n#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)\n#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)\n#if PY_MAJOR_VERSION >= 3\n  #define PyIntObject                  PyLongObject\n  #define PyInt_Type                   PyLong_Type\n  #define PyInt_Check(op)              PyLong_Check(op)\n  #define PyInt_CheckExact(op)         PyLong_CheckExact(op)\n  #define PyInt_FromString             PyLong_FromString\n  #define PyInt_FromUnicode            PyLong_FromUnicode\n  #define PyInt_FromLong               PyLong_FromLong\n  #define PyInt_FromSize_t             PyLong_FromSize_t\n  #define PyInt_FromSsize_t            PyLong_FromSsize_t\n  #define PyInt_AsLong                 
PyLong_AsLong\n  #define PyInt_AS_LONG                PyLong_AS_LONG\n  #define PyInt_AsSsize_t              PyLong_AsSsize_t\n  #define PyInt_AsUnsignedLongMask     PyLong_AsUnsignedLongMask\n  #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask\n  #define PyNumber_Int                 PyNumber_Long\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBoolObject                 PyLongObject\n#endif\n#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY\n  #ifndef PyUnicode_InternFromString\n    #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)\n  #endif\n#endif\n#if PY_VERSION_HEX < 0x030200A4\n  typedef long Py_hash_t;\n  #define __Pyx_PyInt_FromHash_t PyInt_FromLong\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsLong\n#else\n  #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsSsize_t\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func))\n#else\n  #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)\n#endif\n#if CYTHON_USE_ASYNC_SLOTS\n  #if PY_VERSION_HEX >= 0x030500B1\n    #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods\n    #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)\n  #else\n    typedef struct {\n        unaryfunc am_await;\n        unaryfunc am_aiter;\n        unaryfunc am_anext;\n    } __Pyx_PyAsyncMethodsStruct;\n    #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))\n  #endif\n#else\n  #define __Pyx_PyType_AsAsync(obj) NULL\n#endif\n#ifndef CYTHON_RESTRICT\n  #if defined(__GNUC__)\n    #define CYTHON_RESTRICT __restrict__\n  #elif defined(_MSC_VER) && _MSC_VER >= 1400\n    #define CYTHON_RESTRICT __restrict\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_RESTRICT restrict\n  #else\n    #define CYTHON_RESTRICT\n  #endif\n#endif\n#ifndef CYTHON_UNUSED\n# if defined(__GNUC__)\n#   if 
!(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))\n#     define CYTHON_UNUSED __attribute__ ((__unused__))\n#   else\n#     define CYTHON_UNUSED\n#   endif\n# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))\n#   define CYTHON_UNUSED __attribute__ ((__unused__))\n# else\n#   define CYTHON_UNUSED\n# endif\n#endif\n#ifndef CYTHON_MAYBE_UNUSED_VAR\n#  if defined(__cplusplus)\n     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }\n#  else\n#    define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)\n#  endif\n#endif\n#ifndef CYTHON_NCP_UNUSED\n# if CYTHON_COMPILING_IN_CPYTHON\n#  define CYTHON_NCP_UNUSED\n# else\n#  define CYTHON_NCP_UNUSED CYTHON_UNUSED\n# endif\n#endif\n#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)\n\n#ifndef CYTHON_INLINE\n  #if defined(__clang__)\n    #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))\n  #elif defined(__GNUC__)\n    #define CYTHON_INLINE __inline__\n  #elif defined(_MSC_VER)\n    #define CYTHON_INLINE __inline\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_INLINE inline\n  #else\n    #define CYTHON_INLINE\n  #endif\n#endif\n\n#if defined(WIN32) || defined(MS_WINDOWS)\n  #define _USE_MATH_DEFINES\n#endif\n#include <math.h>\n#ifdef NAN\n#define __PYX_NAN() ((float) NAN)\n#else\nstatic CYTHON_INLINE float __PYX_NAN() {\n  float value;\n  memset(&value, 0xFF, sizeof(value));\n  return value;\n}\n#endif\n#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)\n#define __Pyx_truncl trunc\n#else\n#define __Pyx_truncl truncl\n#endif\n\n\n#define __PYX_ERR(f_index, lineno, Ln_error) \\\n{ \\\n  __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \\\n}\n\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_TrueDivide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceTrueDivide(x,y)\n#else\n  #define 
__Pyx_PyNumber_Divide(x,y)         PyNumber_Divide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceDivide(x,y)\n#endif\n\n#ifndef __PYX_EXTERN_C\n  #ifdef __cplusplus\n    #define __PYX_EXTERN_C extern \"C\"\n  #else\n    #define __PYX_EXTERN_C extern\n  #endif\n#endif\n\n#define __PYX_HAVE__draw_rectangles\n#define __PYX_HAVE_API__draw_rectangles\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include \"numpy/arrayobject.h\"\n#include \"numpy/ufuncobject.h\"\n#ifdef _OPENMP\n#include <omp.h>\n#endif /* _OPENMP */\n\n#ifdef PYREX_WITHOUT_ASSERTIONS\n#define CYTHON_WITHOUT_ASSERTIONS\n#endif\n\ntypedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;\n                const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;\n\n#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0\n#define __PYX_DEFAULT_STRING_ENCODING \"\"\n#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString\n#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#define __Pyx_uchar_cast(c) ((unsigned char)c)\n#define __Pyx_long_cast(x) ((long)x)\n#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\\\n    (sizeof(type) < sizeof(Py_ssize_t))  ||\\\n    (sizeof(type) > sizeof(Py_ssize_t) &&\\\n          likely(v < (type)PY_SSIZE_T_MAX ||\\\n                 v == (type)PY_SSIZE_T_MAX)  &&\\\n          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\\\n                                v == (type)PY_SSIZE_T_MIN)))  ||\\\n    (sizeof(type) == sizeof(Py_ssize_t) &&\\\n          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\\\n                               v == (type)PY_SSIZE_T_MAX)))  )\n#if defined (__cplusplus) && __cplusplus >= 201103L\n    #include <cstdlib>\n    #define __Pyx_sst_abs(value) std::abs(value)\n#elif SIZEOF_INT >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) abs(value)\n#elif SIZEOF_LONG >= SIZEOF_SIZE_T\n    #define 
__Pyx_sst_abs(value) labs(value)\n#elif defined (_MSC_VER) && defined (_M_X64)\n    #define __Pyx_sst_abs(value) _abs64(value)\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define __Pyx_sst_abs(value) llabs(value)\n#elif defined (__GNUC__)\n    #define __Pyx_sst_abs(value) __builtin_llabs(value)\n#else\n    #define __Pyx_sst_abs(value) ((value<0) ? -value : value)\n#endif\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*);\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);\n#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))\n#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)\n#define __Pyx_PyBytes_FromString        PyBytes_FromString\n#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);\n#if PY_MAJOR_VERSION < 3\n    #define __Pyx_PyStr_FromString        __Pyx_PyBytes_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#else\n    #define __Pyx_PyStr_FromString        __Pyx_PyUnicode_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize\n#endif\n#define __Pyx_PyObject_AsSString(s)    ((signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsUString(s)    ((unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_FromCString(s)  __Pyx_PyObject_FromString((const char*)s)\n#define __Pyx_PyBytes_FromCString(s)   __Pyx_PyBytes_FromString((const char*)s)\n#define __Pyx_PyByteArray_FromCString(s)   __Pyx_PyByteArray_FromString((const char*)s)\n#define __Pyx_PyStr_FromCString(s)     __Pyx_PyStr_FromString((const char*)s)\n#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)\n#if PY_MAJOR_VERSION < 3\nstatic CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u)\n{\n    const 
Py_UNICODE *u_end = u;\n    while (*u_end++) ;\n    return (size_t)(u_end - u - 1);\n}\n#else\n#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen\n#endif\n#define __Pyx_PyUnicode_FromUnicode(u)       PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))\n#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode\n#define __Pyx_PyUnicode_AsUnicode            PyUnicode_AsUnicode\n#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)\n#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)\n#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False))\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);\n#if CYTHON_ASSUME_SAFE_MACROS\n#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))\n#else\n#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)\n#endif\n#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))\n#else\n#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))\n#endif\n#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x))\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\nstatic int __Pyx_sys_getdefaultencoding_not_ascii;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    PyObject* ascii_chars_u = NULL;\n    PyObject* ascii_chars_b = NULL;\n    const char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    if (strcmp(default_encoding_c, \"ascii\") == 0) {\n        __Pyx_sys_getdefaultencoding_not_ascii = 0;\n    } else {\n        char ascii_chars[128];\n        int c;\n        for (c = 0; c < 128; c++) {\n            ascii_chars[c] = c;\n        }\n        __Pyx_sys_getdefaultencoding_not_ascii = 1;\n        ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);\n        if (!ascii_chars_u) goto bad;\n        ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);\n        if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {\n            PyErr_Format(\n                PyExc_ValueError,\n                \"This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.\",\n                default_encoding_c);\n            goto bad;\n        }\n        Py_DECREF(ascii_chars_u);\n        Py_DECREF(ascii_chars_b);\n    }\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    Py_XDECREF(ascii_chars_u);\n    Py_XDECREF(ascii_chars_b);\n    return -1;\n}\n#endif\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) 
PyUnicode_DecodeUTF8(c_str, size, NULL)\n#else\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\nstatic char* __PYX_DEFAULT_STRING_ENCODING;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) (const char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    /* allocate strlen + 1 so strcpy has room for the terminating NUL */\n    __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);\n    if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;\n    strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    return -1;\n}\n#endif\n#endif\n\n\n/* Test for GCC > 2.95 */\n#if defined(__GNUC__)     && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))\n  #define likely(x)   __builtin_expect(!!(x), 1)\n  #define unlikely(x) __builtin_expect(!!(x), 0)\n#else /* !__GNUC__ or GCC < 2.95 */\n  #define likely(x)   (x)\n  #define unlikely(x) (x)\n#endif /* __GNUC__ */\n\nstatic PyObject *__pyx_m;\nstatic PyObject *__pyx_d;\nstatic PyObject *__pyx_b;\nstatic PyObject *__pyx_empty_tuple;\nstatic PyObject *__pyx_empty_bytes;\nstatic PyObject *__pyx_empty_unicode;\nstatic int __pyx_lineno;\nstatic int __pyx_clineno = 0;\nstatic const char * __pyx_cfilenm = __FILE__;\nstatic const char *__pyx_filename;\n\n/* Header.proto */\n#if !defined(CYTHON_CCOMPLEX)\n  #if defined(__cplusplus)\n    #define CYTHON_CCOMPLEX 1\n  #elif defined(_Complex_I)\n    #define CYTHON_CCOMPLEX 1\n  #else\n    #define CYTHON_CCOMPLEX 0\n  #endif\n#endif\n#if CYTHON_CCOMPLEX\n  #ifdef 
__cplusplus\n    #include <complex>\n  #else\n    #include <complex.h>\n  #endif\n#endif\n#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)\n  #undef _Complex_I\n  #define _Complex_I 1.0fj\n#endif\n\n\nstatic const char *__pyx_f[] = {\n  \"draw_rectangles.pyx\",\n  \"__init__.pxd\",\n  \"type.pxd\",\n};\n/* BufferFormatStructs.proto */\n#define IS_UNSIGNED(type) (((type) -1) > 0)\nstruct __Pyx_StructField_;\n#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)\ntypedef struct {\n  const char* name;\n  struct __Pyx_StructField_* fields;\n  size_t size;\n  size_t arraysize[8];\n  int ndim;\n  char typegroup;\n  char is_unsigned;\n  int flags;\n} __Pyx_TypeInfo;\ntypedef struct __Pyx_StructField_ {\n  __Pyx_TypeInfo* type;\n  const char* name;\n  size_t offset;\n} __Pyx_StructField;\ntypedef struct {\n  __Pyx_StructField* field;\n  size_t parent_offset;\n} __Pyx_BufFmt_StackElem;\ntypedef struct {\n  __Pyx_StructField root;\n  __Pyx_BufFmt_StackElem* head;\n  size_t fmt_offset;\n  size_t new_count, enc_count;\n  size_t struct_alignment;\n  int is_complex;\n  char enc_type;\n  char new_packmode;\n  char enc_packmode;\n  char is_valid_array;\n} __Pyx_BufFmt_Context;\n\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":725\n * # in Cython to enable them only on the right systems.\n * \n * ctypedef npy_int8       int8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n */\ntypedef npy_int8 __pyx_t_5numpy_int8_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":726\n * \n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t\n */\ntypedef npy_int16 __pyx_t_5numpy_int16_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":727\n * ctypedef npy_int8   
    int8_t\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int64      int64_t\n * #ctypedef npy_int96      int96_t\n */\ntypedef npy_int32 __pyx_t_5numpy_int32_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":728\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_int96      int96_t\n * #ctypedef npy_int128     int128_t\n */\ntypedef npy_int64 __pyx_t_5numpy_int64_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":732\n * #ctypedef npy_int128     int128_t\n * \n * ctypedef npy_uint8      uint8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n */\ntypedef npy_uint8 __pyx_t_5numpy_uint8_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":733\n * \n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t\n */\ntypedef npy_uint16 __pyx_t_5numpy_uint16_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":734\n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint64     uint64_t\n * #ctypedef npy_uint96     uint96_t\n */\ntypedef npy_uint32 __pyx_t_5numpy_uint32_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":735\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_uint96     uint96_t\n * #ctypedef npy_uint128    uint128_t\n */\ntypedef npy_uint64 __pyx_t_5numpy_uint64_t;\n\n/* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":739\n * #ctypedef npy_uint128    uint128_t\n * \n * ctypedef npy_float32    float32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_float64    float64_t\n * #ctypedef npy_float80    float80_t\n */\ntypedef npy_float32 __pyx_t_5numpy_float32_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":740\n * \n * ctypedef npy_float32    float32_t\n * ctypedef npy_float64    float64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_float80    float80_t\n * #ctypedef npy_float128   float128_t\n */\ntypedef npy_float64 __pyx_t_5numpy_float64_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":749\n * # The int types are mapped a bit surprising --\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t\n */\ntypedef npy_long __pyx_t_5numpy_int_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":750\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   longlong_t\n * \n */\ntypedef npy_longlong __pyx_t_5numpy_long_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":751\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_ulong      uint_t\n */\ntypedef npy_longlong __pyx_t_5numpy_longlong_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":753\n * ctypedef npy_longlong   longlong_t\n * \n * ctypedef npy_ulong      uint_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef 
npy_ulonglong  ulonglong_t\n */\ntypedef npy_ulong __pyx_t_5numpy_uint_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":754\n * \n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulonglong_t\n * \n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulong_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":755\n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_intp       intp_t\n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":757\n * ctypedef npy_ulonglong  ulonglong_t\n * \n * ctypedef npy_intp       intp_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uintp      uintp_t\n * \n */\ntypedef npy_intp __pyx_t_5numpy_intp_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":758\n * \n * ctypedef npy_intp       intp_t\n * ctypedef npy_uintp      uintp_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_double     float_t\n */\ntypedef npy_uintp __pyx_t_5numpy_uintp_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":760\n * ctypedef npy_uintp      uintp_t\n * \n * ctypedef npy_double     float_t             # <<<<<<<<<<<<<<\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t\n */\ntypedef npy_double __pyx_t_5numpy_float_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":761\n * \n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longdouble longdouble_t\n * \n */\ntypedef npy_double __pyx_t_5numpy_double_t;\n\n/* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":762\n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cfloat      cfloat_t\n */\ntypedef npy_longdouble __pyx_t_5numpy_longdouble_t;\n\n/* \"draw_rectangles.pyx\":10\n * \n * DTYPE = np.float32\n * ctypedef np.float32_t DTYPE_t             # <<<<<<<<<<<<<<\n * \n * def draw_union_boxes(bbox_pairs, pooling_size, padding=0):\n */\ntypedef __pyx_t_5numpy_float32_t __pyx_t_15draw_rectangles_DTYPE_t;\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< float > __pyx_t_float_complex;\n  #else\n    typedef float _Complex __pyx_t_float_complex;\n  #endif\n#else\n    typedef struct { float real, imag; } __pyx_t_float_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);\n\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< double > __pyx_t_double_complex;\n  #else\n    typedef double _Complex __pyx_t_double_complex;\n  #endif\n#else\n    typedef struct { double real, imag; } __pyx_t_double_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);\n\n\n/*--- Type declarations ---*/\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":764\n * ctypedef npy_longdouble longdouble_t\n * \n * ctypedef npy_cfloat      cfloat_t             # <<<<<<<<<<<<<<\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t\n */\ntypedef npy_cfloat __pyx_t_5numpy_cfloat_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":765\n * \n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t             # <<<<<<<<<<<<<<\n * ctypedef npy_clongdouble clongdouble_t\n * \n 
*/\ntypedef npy_cdouble __pyx_t_5numpy_cdouble_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":766\n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cdouble     complex_t\n */\ntypedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":768\n * ctypedef npy_clongdouble clongdouble_t\n * \n * ctypedef npy_cdouble     complex_t             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n */\ntypedef npy_cdouble __pyx_t_5numpy_complex_t;\n\n/* --- Runtime support code (head) --- */\n/* Refnanny.proto */\n#ifndef CYTHON_REFNANNY\n  #define CYTHON_REFNANNY 0\n#endif\n#if CYTHON_REFNANNY\n  typedef struct {\n    void (*INCREF)(void*, PyObject*, int);\n    void (*DECREF)(void*, PyObject*, int);\n    void (*GOTREF)(void*, PyObject*, int);\n    void (*GIVEREF)(void*, PyObject*, int);\n    void* (*SetupContext)(const char*, int, const char*);\n    void (*FinishContext)(void**);\n  } __Pyx_RefNannyAPIStruct;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);\n  #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;\n#ifdef WITH_THREAD\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          if (acquire_gil) {\\\n              PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n              PyGILState_Release(__pyx_gilstate_save);\\\n          } else {\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n          }\n#else\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), 
__LINE__, __FILE__)\n#endif\n  #define __Pyx_RefNannyFinishContext()\\\n          __Pyx_RefNanny->FinishContext(&__pyx_refnanny)\n  #define __Pyx_INCREF(r)  __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_DECREF(r)  __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GOTREF(r)  __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_XINCREF(r)  do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)\n  #define __Pyx_XDECREF(r)  do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)\n  #define __Pyx_XGOTREF(r)  do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)\n  #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)\n#else\n  #define __Pyx_RefNannyDeclarations\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\n  #define __Pyx_RefNannyFinishContext()\n  #define __Pyx_INCREF(r) Py_INCREF(r)\n  #define __Pyx_DECREF(r) Py_DECREF(r)\n  #define __Pyx_GOTREF(r)\n  #define __Pyx_GIVEREF(r)\n  #define __Pyx_XINCREF(r) Py_XINCREF(r)\n  #define __Pyx_XDECREF(r) Py_XDECREF(r)\n  #define __Pyx_XGOTREF(r)\n  #define __Pyx_XGIVEREF(r)\n#endif\n#define __Pyx_XDECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_XDECREF(tmp);\\\n    } while (0)\n#define __Pyx_DECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_DECREF(tmp);\\\n    } while (0)\n#define __Pyx_CLEAR(r)    do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)\n#define __Pyx_XCLEAR(r)   do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)\n\n/* PyObjectGetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_getattro))\n        return 
tp->tp_getattro(obj, attr_name);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_getattr))\n        return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));\n#endif\n    return PyObject_GetAttr(obj, attr_name);\n}\n#else\n#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)\n#endif\n\n/* GetBuiltinName.proto */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name);\n\n/* RaiseArgTupleInvalid.proto */\nstatic void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,\n    Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);\n\n/* RaiseDoubleKeywords.proto */\nstatic void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);\n\n/* ParseKeywords.proto */\nstatic int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\\\n    PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\\\n    const char* function_name);\n\n/* PyIntBinop.proto */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, long intval, int inplace);\n#else\n#define __Pyx_PyInt_EqObjC(op1, op2, intval, inplace)\\\n    PyObject_RichCompare(op1, op2, Py_EQ)\n    #endif\n\n/* ExtTypeTest.proto */\nstatic CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);\n\n/* BufferFormatCheck.proto */\nstatic CYTHON_INLINE int  __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj,\n    __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type); // PROTO\n\n/* GetModuleGlobalName.proto */\nstatic CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name);\n\n/* PyObjectCall.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic 
CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);\n#else\n#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)\n#endif\n\n/* BufferIndexError.proto */\nstatic void __Pyx_RaiseBufferIndexError(int axis);\n\n#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1)\n#define __Pyx_BufPtrStrided4d(type, buf, i0, s0, i1, s1, i2, s2, i3, s3) (type)((char*)buf + i0 * s0 + i1 * s1 + i2 * s2 + i3 * s3)\n/* PyThreadStateGet.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyThreadState_declare  PyThreadState *__pyx_tstate;\n#define __Pyx_PyThreadState_assign  __pyx_tstate = PyThreadState_GET();\n#else\n#define __Pyx_PyThreadState_declare\n#define __Pyx_PyThreadState_assign\n#endif\n\n/* PyErrFetchRestore.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ErrRestoreWithState(type, value, tb)  __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)    __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)    __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\n#define __Pyx_ErrRestoreWithState(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)  PyErr_Fetch(type, value, tb)\n#endif\n\n/* RaiseException.proto */\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);\n\n/* DictGetItem.proto */\n#if 
PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY\nstatic PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {\n    PyObject *value;\n    value = PyDict_GetItemWithError(d, key);\n    if (unlikely(!value)) {\n        if (!PyErr_Occurred()) {\n            PyObject* args = PyTuple_Pack(1, key);\n            if (likely(args))\n                PyErr_SetObject(PyExc_KeyError, args);\n            Py_XDECREF(args);\n        }\n        return NULL;\n    }\n    Py_INCREF(value);\n    return value;\n}\n#else\n    #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)\n#endif\n\n/* RaiseTooManyValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);\n\n/* RaiseNeedMoreValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);\n\n/* RaiseNoneIterError.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);\n\n/* SaveResetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSave(type, value, tb)  __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#define __Pyx_ExceptionReset(type, value, tb)  __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\n#else\n#define __Pyx_ExceptionSave(type, value, tb)   PyErr_GetExcInfo(type, value, tb)\n#define __Pyx_ExceptionReset(type, value, tb)  PyErr_SetExcInfo(type, value, tb)\n#endif\n\n/* PyErrExceptionMatches.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);\n#else\n#define __Pyx_PyErr_ExceptionMatches(err)  PyErr_ExceptionMatches(err)\n#endif\n\n/* GetException.proto */\n#if 
CYTHON_FAST_THREAD_STATE\n#define __Pyx_GetException(type, value, tb)  __Pyx__GetException(__pyx_tstate, type, value, tb)\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* Import.proto */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);\n\n/* CodeObjectCache.proto */\ntypedef struct {\n    PyCodeObject* code_object;\n    int code_line;\n} __Pyx_CodeObjectCacheEntry;\nstruct __Pyx_CodeObjectCache {\n    int count;\n    int max_count;\n    __Pyx_CodeObjectCacheEntry* entries;\n};\nstatic struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);\nstatic PyCodeObject *__pyx_find_code_object(int code_line);\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);\n\n/* AddTraceback.proto */\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename);\n\n/* BufferStructDeclare.proto */\ntypedef struct {\n  Py_ssize_t shape, strides, suboffsets;\n} __Pyx_Buf_DimInfo;\ntypedef struct {\n  size_t refcount;\n  Py_buffer pybuffer;\n} __Pyx_Buffer;\ntypedef struct {\n  __Pyx_Buffer *rcbuffer;\n  char *data;\n  __Pyx_Buf_DimInfo diminfo[8];\n} __Pyx_LocalBuf_ND;\n\n#if PY_MAJOR_VERSION < 3\n    static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);\n    static void __Pyx_ReleaseBuffer(Py_buffer *view);\n#else\n    #define __Pyx_GetBuffer PyObject_GetBuffer\n    #define __Pyx_ReleaseBuffer PyBuffer_Release\n#endif\n\n\n/* None.proto */\nstatic Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0};\nstatic Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1};\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_unsigned_int(unsigned int value);\n\n/* 
RealImag.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    #define __Pyx_CREAL(z) ((z).real())\n    #define __Pyx_CIMAG(z) ((z).imag())\n  #else\n    #define __Pyx_CREAL(z) (__real__(z))\n    #define __Pyx_CIMAG(z) (__imag__(z))\n  #endif\n#else\n    #define __Pyx_CREAL(z) ((z).real)\n    #define __Pyx_CIMAG(z) ((z).imag)\n#endif\n#if defined(__cplusplus) && CYTHON_CCOMPLEX\\\n        && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)\n    #define __Pyx_SET_CREAL(z,x) ((z).real(x))\n    #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))\n#else\n    #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)\n    #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_float(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_float(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_float(a, b) ((a)-(b))\n    #define __Pyx_c_prod_float(a, b) ((a)*(b))\n    #define __Pyx_c_quot_float(a, b) ((a)/(b))\n    #define __Pyx_c_neg_float(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_float(z) ((z)==(float)0)\n    #define __Pyx_c_conj_float(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (::std::abs(z))\n        #define __Pyx_c_pow_float(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_float(z) ((z)==0)\n    #define __Pyx_c_conj_float(z)    (conjf(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (cabsf(z))\n        #define __Pyx_c_pow_float(a, b)  (cpowf(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex 
__Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    #endif\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_double(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_double(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_double(a, b) ((a)-(b))\n    #define __Pyx_c_prod_double(a, b) ((a)*(b))\n    #define __Pyx_c_quot_double(a, b) ((a)/(b))\n    #define __Pyx_c_neg_double(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_double(z) ((z)==(double)0)\n    #define __Pyx_c_conj_double(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (::std::abs(z))\n        #define __Pyx_c_pow_double(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_double(z) ((z)==0)\n    #define __Pyx_c_conj_double(z)    (conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (cabs(z))\n        #define __Pyx_c_pow_double(a, b)  (cpow(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static 
CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    #endif\n#endif\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE unsigned int __Pyx_PyInt_As_unsigned_int(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);\n\n/* CheckBinaryVersion.proto */\nstatic int __Pyx_check_binary_version(void);\n\n/* PyIdentifierFromString.proto */\n#if !defined(__Pyx_PyIdentifier_FromString)\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s)\n#else\n  #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s)\n#endif\n#endif\n\n/* ModuleImport.proto */\nstatic PyObject *__Pyx_ImportModule(const char *name);\n\n/* TypeImport.proto */\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict);\n\n/* InitStrings.proto */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t);\n\n\n/* Module declarations from 'cython' */\n\n/* Module declarations from 'cpython.buffer' */\n\n/* Module declarations from 'libc.string' */\n\n/* Module declarations from 
'libc.stdio' */\n\n/* Module declarations from '__builtin__' */\n\n/* Module declarations from 'cpython.type' */\nstatic PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;\n\n/* Module declarations from 'cpython' */\n\n/* Module declarations from 'cpython.object' */\n\n/* Module declarations from 'cpython.ref' */\n\n/* Module declarations from 'libc.stdlib' */\n\n/* Module declarations from 'numpy' */\n\n/* Module declarations from 'numpy' */\nstatic PyTypeObject *__pyx_ptype_5numpy_dtype = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/\n\n/* Module declarations from 'draw_rectangles' */\nstatic __pyx_t_15draw_rectangles_DTYPE_t __pyx_f_15draw_rectangles_minmax(__pyx_t_15draw_rectangles_DTYPE_t); /*proto*/\nstatic PyArrayObject *__pyx_f_15draw_rectangles_draw_union_boxes_c(PyArrayObject *, unsigned int); /*proto*/\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_15draw_rectangles_DTYPE_t = { \"DTYPE_t\", NULL, sizeof(__pyx_t_15draw_rectangles_DTYPE_t), { 0 }, 0, 'R', 0, 0 };\n#define __Pyx_MODULE_NAME \"draw_rectangles\"\nint __pyx_module_is_main_draw_rectangles = 0;\n\n/* Implementation of 'draw_rectangles' */\nstatic PyObject *__pyx_builtin_range;\nstatic PyObject *__pyx_builtin_ValueError;\nstatic PyObject *__pyx_builtin_RuntimeError;\nstatic PyObject *__pyx_builtin_ImportError;\nstatic const char __pyx_k_np[] = \"np\";\nstatic const char __pyx_k_main[] = \"__main__\";\nstatic const char __pyx_k_test[] = \"__test__\";\nstatic const char __pyx_k_DTYPE[] = \"DTYPE\";\nstatic const char __pyx_k_dtype[] = \"dtype\";\nstatic const char __pyx_k_numpy[] = \"numpy\";\nstatic const char __pyx_k_range[] = \"range\";\nstatic const char __pyx_k_zeros[] = \"zeros\";\nstatic const char 
__pyx_k_import[] = \"__import__\";\nstatic const char __pyx_k_float32[] = \"float32\";\nstatic const char __pyx_k_padding[] = \"padding\";\nstatic const char __pyx_k_ValueError[] = \"ValueError\";\nstatic const char __pyx_k_bbox_pairs[] = \"bbox_pairs\";\nstatic const char __pyx_k_ImportError[] = \"ImportError\";\nstatic const char __pyx_k_RuntimeError[] = \"RuntimeError\";\nstatic const char __pyx_k_pooling_size[] = \"pooling_size\";\nstatic const char __pyx_k_draw_rectangles[] = \"draw_rectangles\";\nstatic const char __pyx_k_draw_union_boxes[] = \"draw_union_boxes\";\nstatic const char __pyx_k_Padding_0_not_supported_yet[] = \"Padding>0 not supported yet\";\nstatic const char __pyx_k_ndarray_is_not_C_contiguous[] = \"ndarray is not C contiguous\";\nstatic const char __pyx_k_Users_rowanz_code_scene_graph_l[] = \"/Users/rowanz/code/scene-graph/lib/draw_rectangles/draw_rectangles.pyx\";\nstatic const char __pyx_k_numpy_core_multiarray_failed_to[] = \"numpy.core.multiarray failed to import\";\nstatic const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = \"unknown dtype code in numpy.pxd (%d)\";\nstatic const char __pyx_k_Format_string_allocated_too_shor[] = \"Format string allocated too short, see comment in numpy.pxd\";\nstatic const char __pyx_k_Non_native_byte_order_not_suppor[] = \"Non-native byte order not supported\";\nstatic const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = \"ndarray is not Fortran contiguous\";\nstatic const char __pyx_k_numpy_core_umath_failed_to_impor[] = \"numpy.core.umath failed to import\";\nstatic const char __pyx_k_Format_string_allocated_too_shor_2[] = \"Format string allocated too short.\";\nstatic PyObject *__pyx_n_s_DTYPE;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;\nstatic PyObject *__pyx_n_s_ImportError;\nstatic PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;\nstatic PyObject *__pyx_kp_s_Padding_0_not_supported_yet;\nstatic PyObject 
*__pyx_n_s_RuntimeError;\nstatic PyObject *__pyx_kp_s_Users_rowanz_code_scene_graph_l;\nstatic PyObject *__pyx_n_s_ValueError;\nstatic PyObject *__pyx_n_s_bbox_pairs;\nstatic PyObject *__pyx_n_s_draw_rectangles;\nstatic PyObject *__pyx_n_s_draw_union_boxes;\nstatic PyObject *__pyx_n_s_dtype;\nstatic PyObject *__pyx_n_s_float32;\nstatic PyObject *__pyx_n_s_import;\nstatic PyObject *__pyx_n_s_main;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;\nstatic PyObject *__pyx_n_s_np;\nstatic PyObject *__pyx_n_s_numpy;\nstatic PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;\nstatic PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor;\nstatic PyObject *__pyx_n_s_padding;\nstatic PyObject *__pyx_n_s_pooling_size;\nstatic PyObject *__pyx_n_s_range;\nstatic PyObject *__pyx_n_s_test;\nstatic PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;\nstatic PyObject *__pyx_n_s_zeros;\nstatic PyObject *__pyx_pf_15draw_rectangles_draw_union_boxes(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_bbox_pairs, PyObject *__pyx_v_pooling_size, PyObject *__pyx_v_padding); /* proto */\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */\nstatic PyObject *__pyx_int_0;\nstatic PyObject *__pyx_int_2;\nstatic PyObject *__pyx_tuple_;\nstatic PyObject *__pyx_tuple__2;\nstatic PyObject *__pyx_tuple__3;\nstatic PyObject *__pyx_tuple__4;\nstatic PyObject *__pyx_tuple__5;\nstatic PyObject *__pyx_tuple__6;\nstatic PyObject *__pyx_tuple__7;\nstatic PyObject *__pyx_tuple__8;\nstatic PyObject *__pyx_tuple__9;\nstatic PyObject *__pyx_tuple__10;\nstatic PyObject *__pyx_codeobj__11;\n\n/* \"draw_rectangles.pyx\":12\n * ctypedef np.float32_t DTYPE_t\n * \n * def draw_union_boxes(bbox_pairs, pooling_size, padding=0):         
    # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Draws union boxes for the image.\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_15draw_rectangles_1draw_union_boxes(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic char __pyx_doc_15draw_rectangles_draw_union_boxes[] = \"\\n    Draws union boxes for the image.\\n    :param box_pairs: [num_pairs, 8]\\n    :param fmap_size: Size of the original feature map\\n    :param stride: ratio between fmap size and original img (<1)\\n    :param pooling_size: resize everything to this size\\n    :return: [num_pairs, 2, pooling_size, pooling_size arr\\n    \";\nstatic PyMethodDef __pyx_mdef_15draw_rectangles_1draw_union_boxes = {\"draw_union_boxes\", (PyCFunction)__pyx_pw_15draw_rectangles_1draw_union_boxes, METH_VARARGS|METH_KEYWORDS, __pyx_doc_15draw_rectangles_draw_union_boxes};\nstatic PyObject *__pyx_pw_15draw_rectangles_1draw_union_boxes(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_bbox_pairs = 0;\n  PyObject *__pyx_v_pooling_size = 0;\n  PyObject *__pyx_v_padding = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"draw_union_boxes (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_bbox_pairs,&__pyx_n_s_pooling_size,&__pyx_n_s_padding,0};\n    PyObject* values[3] = {0,0,0};\n    values[2] = ((PyObject *)__pyx_int_0);\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, 
__pyx_n_s_bbox_pairs)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_pooling_size)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"draw_union_boxes\", 0, 2, 3, 1); __PYX_ERR(0, 12, __pyx_L3_error)\n        }\n        case  2:\n        if (kw_args > 0) {\n          PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_padding);\n          if (value) { values[2] = value; kw_args--; }\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"draw_union_boxes\") < 0)) __PYX_ERR(0, 12, __pyx_L3_error)\n      }\n    } else {\n      switch (PyTuple_GET_SIZE(__pyx_args)) {\n        case  3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n    }\n    __pyx_v_bbox_pairs = values[0];\n    __pyx_v_pooling_size = values[1];\n    __pyx_v_padding = values[2];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"draw_union_boxes\", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 12, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"draw_rectangles.draw_union_boxes\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_15draw_rectangles_draw_union_boxes(__pyx_self, __pyx_v_bbox_pairs, __pyx_v_pooling_size, __pyx_v_padding);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_15draw_rectangles_draw_union_boxes(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_bbox_pairs, PyObject *__pyx_v_pooling_size, PyObject *__pyx_v_padding) {\n  
PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  int __pyx_t_2;\n  unsigned int __pyx_t_3;\n  __Pyx_RefNannySetupContext(\"draw_union_boxes\", 0);\n\n  /* \"draw_rectangles.pyx\":21\n *     :return: [num_pairs, 2, pooling_size, pooling_size arr\n *     \"\"\"\n *     assert padding == 0, \"Padding>0 not supported yet\"             # <<<<<<<<<<<<<<\n *     return draw_union_boxes_c(bbox_pairs, pooling_size)\n * \n */\n  #ifndef CYTHON_WITHOUT_ASSERTIONS\n  if (unlikely(!Py_OptimizeFlag)) {\n    __pyx_t_1 = __Pyx_PyInt_EqObjC(__pyx_v_padding, __pyx_int_0, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 21, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_1);\n    __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 21, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n    if (unlikely(!__pyx_t_2)) {\n      PyErr_SetObject(PyExc_AssertionError, __pyx_kp_s_Padding_0_not_supported_yet);\n      __PYX_ERR(0, 21, __pyx_L1_error)\n    }\n  }\n  #endif\n\n  /* \"draw_rectangles.pyx\":22\n *     \"\"\"\n *     assert padding == 0, \"Padding>0 not supported yet\"\n *     return draw_union_boxes_c(bbox_pairs, pooling_size)             # <<<<<<<<<<<<<<\n * \n * cdef DTYPE_t minmax(DTYPE_t x):\n */\n  __Pyx_XDECREF(__pyx_r);\n  if (!(likely(((__pyx_v_bbox_pairs) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_bbox_pairs, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 22, __pyx_L1_error)\n  __pyx_t_3 = __Pyx_PyInt_As_unsigned_int(__pyx_v_pooling_size); if (unlikely((__pyx_t_3 == (unsigned int)-1) && PyErr_Occurred())) __PYX_ERR(0, 22, __pyx_L1_error)\n  __pyx_t_1 = ((PyObject *)__pyx_f_15draw_rectangles_draw_union_boxes_c(((PyArrayObject *)__pyx_v_bbox_pairs), __pyx_t_3)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 22, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"draw_rectangles.pyx\":12\n * ctypedef np.float32_t DTYPE_t\n * \n * def 
draw_union_boxes(bbox_pairs, pooling_size, padding=0):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Draws union boxes for the image.\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"draw_rectangles.draw_union_boxes\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"draw_rectangles.pyx\":24\n *     return draw_union_boxes_c(bbox_pairs, pooling_size)\n * \n * cdef DTYPE_t minmax(DTYPE_t x):             # <<<<<<<<<<<<<<\n *     return min(max(x, 0), 1)\n * \n */\n\nstatic __pyx_t_15draw_rectangles_DTYPE_t __pyx_f_15draw_rectangles_minmax(__pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x) {\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_r;\n  __Pyx_RefNannyDeclarations\n  long __pyx_t_1;\n  long __pyx_t_2;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_t_3;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_t_4;\n  __Pyx_RefNannySetupContext(\"minmax\", 0);\n\n  /* \"draw_rectangles.pyx\":25\n * \n * cdef DTYPE_t minmax(DTYPE_t x):\n *     return min(max(x, 0), 1)             # <<<<<<<<<<<<<<\n * \n * cdef np.ndarray[DTYPE_t, ndim=4] draw_union_boxes_c(\n */\n  __pyx_t_1 = 1;\n  __pyx_t_2 = 0;\n  __pyx_t_3 = __pyx_v_x;\n  if (((__pyx_t_2 > __pyx_t_3) != 0)) {\n    __pyx_t_4 = __pyx_t_2;\n  } else {\n    __pyx_t_4 = __pyx_t_3;\n  }\n  __pyx_t_3 = __pyx_t_4;\n  if (((__pyx_t_1 < __pyx_t_3) != 0)) {\n    __pyx_t_4 = __pyx_t_1;\n  } else {\n    __pyx_t_4 = __pyx_t_3;\n  }\n  __pyx_r = __pyx_t_4;\n  goto __pyx_L0;\n\n  /* \"draw_rectangles.pyx\":24\n *     return draw_union_boxes_c(bbox_pairs, pooling_size)\n * \n * cdef DTYPE_t minmax(DTYPE_t x):             # <<<<<<<<<<<<<<\n *     return min(max(x, 0), 1)\n * \n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"draw_rectangles.pyx\":27\n *     return min(max(x, 0), 1)\n * \n * cdef 
np.ndarray[DTYPE_t, ndim=4] draw_union_boxes_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] box_pairs, unsigned int pooling_size):\n *     \"\"\"\n */\n\nstatic PyArrayObject *__pyx_f_15draw_rectangles_draw_union_boxes_c(PyArrayObject *__pyx_v_box_pairs, unsigned int __pyx_v_pooling_size) {\n  unsigned int __pyx_v_N;\n  PyArrayObject *__pyx_v_uboxes = 0;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x1_union;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_y1_union;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x2_union;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_y2_union;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_w;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_h;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x1_box;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_y1_box;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x2_box;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_y2_box;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_y_contrib;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_v_x_contrib;\n  unsigned int __pyx_v_n;\n  unsigned int __pyx_v_i;\n  unsigned int __pyx_v_j;\n  unsigned int __pyx_v_k;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_box_pairs;\n  __Pyx_Buffer __pyx_pybuffer_box_pairs;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_uboxes;\n  __Pyx_Buffer __pyx_pybuffer_uboxes;\n  PyArrayObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyObject *__pyx_t_5 = NULL;\n  PyArrayObject *__pyx_t_6 = NULL;\n  unsigned int __pyx_t_7;\n  unsigned int __pyx_t_8;\n  size_t __pyx_t_9;\n  Py_ssize_t __pyx_t_10;\n  int __pyx_t_11;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_t_12;\n  size_t __pyx_t_13;\n  Py_ssize_t __pyx_t_14;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_t_15;\n  __pyx_t_15draw_rectangles_DTYPE_t __pyx_t_16;\n  size_t __pyx_t_17;\n  Py_ssize_t __pyx_t_18;\n  size_t __pyx_t_19;\n  Py_ssize_t __pyx_t_20;\n  size_t __pyx_t_21;\n  
Py_ssize_t __pyx_t_22;\n  size_t __pyx_t_23;\n  Py_ssize_t __pyx_t_24;\n  size_t __pyx_t_25;\n  Py_ssize_t __pyx_t_26;\n  size_t __pyx_t_27;\n  Py_ssize_t __pyx_t_28;\n  unsigned int __pyx_t_29;\n  size_t __pyx_t_30;\n  Py_ssize_t __pyx_t_31;\n  size_t __pyx_t_32;\n  Py_ssize_t __pyx_t_33;\n  size_t __pyx_t_34;\n  Py_ssize_t __pyx_t_35;\n  size_t __pyx_t_36;\n  Py_ssize_t __pyx_t_37;\n  unsigned int __pyx_t_38;\n  unsigned int __pyx_t_39;\n  unsigned int __pyx_t_40;\n  unsigned int __pyx_t_41;\n  size_t __pyx_t_42;\n  size_t __pyx_t_43;\n  size_t __pyx_t_44;\n  size_t __pyx_t_45;\n  __Pyx_RefNannySetupContext(\"draw_union_boxes_c\", 0);\n  __pyx_pybuffer_uboxes.pybuffer.buf = NULL;\n  __pyx_pybuffer_uboxes.refcount = 0;\n  __pyx_pybuffernd_uboxes.data = NULL;\n  __pyx_pybuffernd_uboxes.rcbuffer = &__pyx_pybuffer_uboxes;\n  __pyx_pybuffer_box_pairs.pybuffer.buf = NULL;\n  __pyx_pybuffer_box_pairs.refcount = 0;\n  __pyx_pybuffernd_box_pairs.data = NULL;\n  __pyx_pybuffernd_box_pairs.rcbuffer = &__pyx_pybuffer_box_pairs;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_box_pairs.rcbuffer->pybuffer, (PyObject*)__pyx_v_box_pairs, &__Pyx_TypeInfo_nn___pyx_t_15draw_rectangles_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 27, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_box_pairs.diminfo[0].strides = __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_box_pairs.diminfo[0].shape = __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_box_pairs.diminfo[1].strides = __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_box_pairs.diminfo[1].shape = __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.shape[1];\n\n  /* \"draw_rectangles.pyx\":38\n *     overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n *     \"\"\"\n *     cdef unsigned int N = box_pairs.shape[0]             # <<<<<<<<<<<<<<\n * \n *     cdef 
np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(\n */\n  __pyx_v_N = (__pyx_v_box_pairs->dimensions[0]);\n\n  /* \"draw_rectangles.pyx\":40\n *     cdef unsigned int N = box_pairs.shape[0]\n * \n *     cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(             # <<<<<<<<<<<<<<\n *         (N, 2, pooling_size, pooling_size), dtype=DTYPE)\n *     cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"draw_rectangles.pyx\":41\n * \n *     cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(\n *         (N, 2, pooling_size, pooling_size), dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n *     cdef unsigned int n, i, j, k\n */\n  __pyx_t_1 = __Pyx_PyInt_From_unsigned_int(__pyx_v_N); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = __Pyx_PyInt_From_unsigned_int(__pyx_v_pooling_size); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_PyInt_From_unsigned_int(__pyx_v_pooling_size); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1);\n  __Pyx_INCREF(__pyx_int_2);\n  __Pyx_GIVEREF(__pyx_int_2);\n  PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_int_2);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_5, 2, 
__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4);\n  __pyx_t_1 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_4 = 0;\n\n  /* \"draw_rectangles.pyx\":40\n *     cdef unsigned int N = box_pairs.shape[0]\n * \n *     cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(             # <<<<<<<<<<<<<<\n *         (N, 2, pooling_size, pooling_size), dtype=DTYPE)\n *     cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n */\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_5);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5);\n  __pyx_t_5 = 0;\n\n  /* \"draw_rectangles.pyx\":41\n * \n *     cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(\n *         (N, 2, pooling_size, pooling_size), dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n *     cdef unsigned int n, i, j, k\n */\n  __pyx_t_5 = PyDict_New(); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_5);\n  __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_3) < 0) __PYX_ERR(0, 41, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n\n  /* \"draw_rectangles.pyx\":40\n *     cdef unsigned int N = box_pairs.shape[0]\n * \n *     cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(             # <<<<<<<<<<<<<<\n *         (N, 2, pooling_size, pooling_size), dtype=DTYPE)\n *     cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n */\n  __pyx_t_3 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 40, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n 
 __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;\n  if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 40, __pyx_L1_error)\n  __pyx_t_6 = ((PyArrayObject *)__pyx_t_3);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_uboxes.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_15draw_rectangles_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 4, 0, __pyx_stack) == -1)) {\n      __pyx_v_uboxes = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 40, __pyx_L1_error)\n    } else {__pyx_pybuffernd_uboxes.diminfo[0].strides = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_uboxes.diminfo[0].shape = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_uboxes.diminfo[1].strides = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_uboxes.diminfo[1].shape = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.shape[1]; __pyx_pybuffernd_uboxes.diminfo[2].strides = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.strides[2]; __pyx_pybuffernd_uboxes.diminfo[2].shape = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.shape[2]; __pyx_pybuffernd_uboxes.diminfo[3].strides = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.strides[3]; __pyx_pybuffernd_uboxes.diminfo[3].shape = __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.shape[3];\n    }\n  }\n  __pyx_t_6 = 0;\n  __pyx_v_uboxes = ((PyArrayObject *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"draw_rectangles.pyx\":45\n *     cdef unsigned int n, i, j, k\n * \n *     for n in range(N):             # <<<<<<<<<<<<<<\n *         x1_union = min(box_pairs[n, 0], box_pairs[n, 4])\n *         y1_union = min(box_pairs[n, 1], box_pairs[n, 5])\n */\n  __pyx_t_7 = __pyx_v_N;\n  for (__pyx_t_8 = 0; __pyx_t_8 < __pyx_t_7; 
__pyx_t_8+=1) {\n    __pyx_v_n = __pyx_t_8;\n\n    /* \"draw_rectangles.pyx\":46\n * \n *     for n in range(N):\n *         x1_union = min(box_pairs[n, 0], box_pairs[n, 4])             # <<<<<<<<<<<<<<\n *         y1_union = min(box_pairs[n, 1], box_pairs[n, 5])\n *         x2_union = max(box_pairs[n, 2], box_pairs[n, 6])\n */\n    __pyx_t_9 = __pyx_v_n;\n    __pyx_t_10 = 4;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_9 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_10 < 0) {\n      __pyx_t_10 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_10 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_10 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 46, __pyx_L1_error)\n    }\n    __pyx_t_12 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_9, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_10, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    __pyx_t_13 = __pyx_v_n;\n    __pyx_t_14 = 0;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_13 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_14 < 0) {\n      __pyx_t_14 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_14 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_14 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 46, __pyx_L1_error)\n    }\n    __pyx_t_15 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_13, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_14, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    if (((__pyx_t_12 < __pyx_t_15) != 0)) {\n      __pyx_t_16 
= __pyx_t_12;\n    } else {\n      __pyx_t_16 = __pyx_t_15;\n    }\n    __pyx_v_x1_union = __pyx_t_16;\n\n    /* \"draw_rectangles.pyx\":47\n *     for n in range(N):\n *         x1_union = min(box_pairs[n, 0], box_pairs[n, 4])\n *         y1_union = min(box_pairs[n, 1], box_pairs[n, 5])             # <<<<<<<<<<<<<<\n *         x2_union = max(box_pairs[n, 2], box_pairs[n, 6])\n *         y2_union = max(box_pairs[n, 3], box_pairs[n, 7])\n */\n    __pyx_t_17 = __pyx_v_n;\n    __pyx_t_18 = 5;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_17 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_18 < 0) {\n      __pyx_t_18 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_18 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_18 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 47, __pyx_L1_error)\n    }\n    __pyx_t_16 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_17, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_18, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    __pyx_t_19 = __pyx_v_n;\n    __pyx_t_20 = 1;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_19 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_20 < 0) {\n      __pyx_t_20 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_20 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_20 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 47, __pyx_L1_error)\n    }\n    __pyx_t_12 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_19, __pyx_pybuffernd_box_pairs.diminfo[0].strides, 
__pyx_t_20, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    if (((__pyx_t_16 < __pyx_t_12) != 0)) {\n      __pyx_t_15 = __pyx_t_16;\n    } else {\n      __pyx_t_15 = __pyx_t_12;\n    }\n    __pyx_v_y1_union = __pyx_t_15;\n\n    /* \"draw_rectangles.pyx\":48\n *         x1_union = min(box_pairs[n, 0], box_pairs[n, 4])\n *         y1_union = min(box_pairs[n, 1], box_pairs[n, 5])\n *         x2_union = max(box_pairs[n, 2], box_pairs[n, 6])             # <<<<<<<<<<<<<<\n *         y2_union = max(box_pairs[n, 3], box_pairs[n, 7])\n * \n */\n    __pyx_t_21 = __pyx_v_n;\n    __pyx_t_22 = 6;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_21 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_22 < 0) {\n      __pyx_t_22 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_22 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_22 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 48, __pyx_L1_error)\n    }\n    __pyx_t_15 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_21, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_22, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    __pyx_t_23 = __pyx_v_n;\n    __pyx_t_24 = 2;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_23 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_24 < 0) {\n      __pyx_t_24 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_24 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_24 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 48, __pyx_L1_error)\n    }\n    __pyx_t_16 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, 
__pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_23, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_24, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    if (((__pyx_t_15 > __pyx_t_16) != 0)) {\n      __pyx_t_12 = __pyx_t_15;\n    } else {\n      __pyx_t_12 = __pyx_t_16;\n    }\n    __pyx_v_x2_union = __pyx_t_12;\n\n    /* \"draw_rectangles.pyx\":49\n *         y1_union = min(box_pairs[n, 1], box_pairs[n, 5])\n *         x2_union = max(box_pairs[n, 2], box_pairs[n, 6])\n *         y2_union = max(box_pairs[n, 3], box_pairs[n, 7])             # <<<<<<<<<<<<<<\n * \n *         w = x2_union - x1_union\n */\n    __pyx_t_25 = __pyx_v_n;\n    __pyx_t_26 = 7;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_25 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_26 < 0) {\n      __pyx_t_26 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_26 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_26 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 49, __pyx_L1_error)\n    }\n    __pyx_t_12 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_25, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_26, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    __pyx_t_27 = __pyx_v_n;\n    __pyx_t_28 = 3;\n    __pyx_t_11 = -1;\n    if (unlikely(__pyx_t_27 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n    if (__pyx_t_28 < 0) {\n      __pyx_t_28 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      if (unlikely(__pyx_t_28 < 0)) __pyx_t_11 = 1;\n    } else if (unlikely(__pyx_t_28 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n    if (unlikely(__pyx_t_11 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_11);\n      __PYX_ERR(0, 49, __pyx_L1_error)\n    }\n    
__pyx_t_15 = (*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_27, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_28, __pyx_pybuffernd_box_pairs.diminfo[1].strides));\n    if (((__pyx_t_12 > __pyx_t_15) != 0)) {\n      __pyx_t_16 = __pyx_t_12;\n    } else {\n      __pyx_t_16 = __pyx_t_15;\n    }\n    __pyx_v_y2_union = __pyx_t_16;\n\n    /* \"draw_rectangles.pyx\":51\n *         y2_union = max(box_pairs[n, 3], box_pairs[n, 7])\n * \n *         w = x2_union - x1_union             # <<<<<<<<<<<<<<\n *         h = y2_union - y1_union\n * \n */\n    __pyx_v_w = (__pyx_v_x2_union - __pyx_v_x1_union);\n\n    /* \"draw_rectangles.pyx\":52\n * \n *         w = x2_union - x1_union\n *         h = y2_union - y1_union             # <<<<<<<<<<<<<<\n * \n *         for i in range(2):\n */\n    __pyx_v_h = (__pyx_v_y2_union - __pyx_v_y1_union);\n\n    /* \"draw_rectangles.pyx\":54\n *         h = y2_union - y1_union\n * \n *         for i in range(2):             # <<<<<<<<<<<<<<\n *             # Now everything is in the range [0, pooling_size].\n *             x1_box = (box_pairs[n, 0+4*i] - x1_union)*pooling_size / w\n */\n    for (__pyx_t_29 = 0; __pyx_t_29 < 2; __pyx_t_29+=1) {\n      __pyx_v_i = __pyx_t_29;\n\n      /* \"draw_rectangles.pyx\":56\n *         for i in range(2):\n *             # Now everything is in the range [0, pooling_size].\n *             x1_box = (box_pairs[n, 0+4*i] - x1_union)*pooling_size / w             # <<<<<<<<<<<<<<\n *             y1_box = (box_pairs[n, 1+4*i] - y1_union)*pooling_size / h\n *             x2_box = (box_pairs[n, 2+4*i] - x1_union)*pooling_size / w\n */\n      __pyx_t_30 = __pyx_v_n;\n      __pyx_t_31 = (0 + (4 * __pyx_v_i));\n      __pyx_t_11 = -1;\n      if (unlikely(__pyx_t_30 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n      if (__pyx_t_31 < 0) {\n        __pyx_t_31 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n      
  if (unlikely(__pyx_t_31 < 0)) __pyx_t_11 = 1;\n      } else if (unlikely(__pyx_t_31 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n      if (unlikely(__pyx_t_11 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_11);\n        __PYX_ERR(0, 56, __pyx_L1_error)\n      }\n      __pyx_t_16 = (((*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_30, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_31, __pyx_pybuffernd_box_pairs.diminfo[1].strides)) - __pyx_v_x1_union) * __pyx_v_pooling_size);\n      if (unlikely(__pyx_v_w == 0)) {\n        PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n        __PYX_ERR(0, 56, __pyx_L1_error)\n      }\n      __pyx_v_x1_box = (__pyx_t_16 / __pyx_v_w);\n\n      /* \"draw_rectangles.pyx\":57\n *             # Now everything is in the range [0, pooling_size].\n *             x1_box = (box_pairs[n, 0+4*i] - x1_union)*pooling_size / w\n *             y1_box = (box_pairs[n, 1+4*i] - y1_union)*pooling_size / h             # <<<<<<<<<<<<<<\n *             x2_box = (box_pairs[n, 2+4*i] - x1_union)*pooling_size / w\n *             y2_box = (box_pairs[n, 3+4*i] - y1_union)*pooling_size / h\n */\n      __pyx_t_32 = __pyx_v_n;\n      __pyx_t_33 = (1 + (4 * __pyx_v_i));\n      __pyx_t_11 = -1;\n      if (unlikely(__pyx_t_32 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n      if (__pyx_t_33 < 0) {\n        __pyx_t_33 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n        if (unlikely(__pyx_t_33 < 0)) __pyx_t_11 = 1;\n      } else if (unlikely(__pyx_t_33 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n      if (unlikely(__pyx_t_11 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_11);\n        __PYX_ERR(0, 57, __pyx_L1_error)\n      }\n      __pyx_t_16 = (((*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_32, 
__pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_33, __pyx_pybuffernd_box_pairs.diminfo[1].strides)) - __pyx_v_y1_union) * __pyx_v_pooling_size);\n      if (unlikely(__pyx_v_h == 0)) {\n        PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n        __PYX_ERR(0, 57, __pyx_L1_error)\n      }\n      __pyx_v_y1_box = (__pyx_t_16 / __pyx_v_h);\n\n      /* \"draw_rectangles.pyx\":58\n *             x1_box = (box_pairs[n, 0+4*i] - x1_union)*pooling_size / w\n *             y1_box = (box_pairs[n, 1+4*i] - y1_union)*pooling_size / h\n *             x2_box = (box_pairs[n, 2+4*i] - x1_union)*pooling_size / w             # <<<<<<<<<<<<<<\n *             y2_box = (box_pairs[n, 3+4*i] - y1_union)*pooling_size / h\n *             # print(\"{:.3f}, {:.3f}, {:.3f}, {:.3f}\".format(x1_box, y1_box, x2_box, y2_box))\n */\n      __pyx_t_34 = __pyx_v_n;\n      __pyx_t_35 = (2 + (4 * __pyx_v_i));\n      __pyx_t_11 = -1;\n      if (unlikely(__pyx_t_34 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n      if (__pyx_t_35 < 0) {\n        __pyx_t_35 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n        if (unlikely(__pyx_t_35 < 0)) __pyx_t_11 = 1;\n      } else if (unlikely(__pyx_t_35 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n      if (unlikely(__pyx_t_11 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_11);\n        __PYX_ERR(0, 58, __pyx_L1_error)\n      }\n      __pyx_t_16 = (((*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_34, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_35, __pyx_pybuffernd_box_pairs.diminfo[1].strides)) - __pyx_v_x1_union) * __pyx_v_pooling_size);\n      if (unlikely(__pyx_v_w == 0)) {\n        PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n        __PYX_ERR(0, 58, __pyx_L1_error)\n      }\n      __pyx_v_x2_box = (__pyx_t_16 / __pyx_v_w);\n\n      /* \"draw_rectangles.pyx\":59\n *             
y1_box = (box_pairs[n, 1+4*i] - y1_union)*pooling_size / h\n *             x2_box = (box_pairs[n, 2+4*i] - x1_union)*pooling_size / w\n *             y2_box = (box_pairs[n, 3+4*i] - y1_union)*pooling_size / h             # <<<<<<<<<<<<<<\n *             # print(\"{:.3f}, {:.3f}, {:.3f}, {:.3f}\".format(x1_box, y1_box, x2_box, y2_box))\n *             for j in range(pooling_size):\n */\n      __pyx_t_36 = __pyx_v_n;\n      __pyx_t_37 = (3 + (4 * __pyx_v_i));\n      __pyx_t_11 = -1;\n      if (unlikely(__pyx_t_36 >= (size_t)__pyx_pybuffernd_box_pairs.diminfo[0].shape)) __pyx_t_11 = 0;\n      if (__pyx_t_37 < 0) {\n        __pyx_t_37 += __pyx_pybuffernd_box_pairs.diminfo[1].shape;\n        if (unlikely(__pyx_t_37 < 0)) __pyx_t_11 = 1;\n      } else if (unlikely(__pyx_t_37 >= __pyx_pybuffernd_box_pairs.diminfo[1].shape)) __pyx_t_11 = 1;\n      if (unlikely(__pyx_t_11 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_11);\n        __PYX_ERR(0, 59, __pyx_L1_error)\n      }\n      __pyx_t_16 = (((*__Pyx_BufPtrStrided2d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_box_pairs.rcbuffer->pybuffer.buf, __pyx_t_36, __pyx_pybuffernd_box_pairs.diminfo[0].strides, __pyx_t_37, __pyx_pybuffernd_box_pairs.diminfo[1].strides)) - __pyx_v_y1_union) * __pyx_v_pooling_size);\n      if (unlikely(__pyx_v_h == 0)) {\n        PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n        __PYX_ERR(0, 59, __pyx_L1_error)\n      }\n      __pyx_v_y2_box = (__pyx_t_16 / __pyx_v_h);\n\n      /* \"draw_rectangles.pyx\":61\n *             y2_box = (box_pairs[n, 3+4*i] - y1_union)*pooling_size / h\n *             # print(\"{:.3f}, {:.3f}, {:.3f}, {:.3f}\".format(x1_box, y1_box, x2_box, y2_box))\n *             for j in range(pooling_size):             # <<<<<<<<<<<<<<\n *                 y_contrib = minmax(j+1-y1_box)*minmax(y2_box-j)\n *                 for k in range(pooling_size):\n */\n      __pyx_t_38 = __pyx_v_pooling_size;\n      for (__pyx_t_39 = 0; __pyx_t_39 < 
__pyx_t_38; __pyx_t_39+=1) {\n        __pyx_v_j = __pyx_t_39;\n\n        /* \"draw_rectangles.pyx\":62\n *             # print(\"{:.3f}, {:.3f}, {:.3f}, {:.3f}\".format(x1_box, y1_box, x2_box, y2_box))\n *             for j in range(pooling_size):\n *                 y_contrib = minmax(j+1-y1_box)*minmax(y2_box-j)             # <<<<<<<<<<<<<<\n *                 for k in range(pooling_size):\n *                     x_contrib = minmax(k+1-x1_box)*minmax(x2_box-k)\n */\n        __pyx_v_y_contrib = (__pyx_f_15draw_rectangles_minmax(((__pyx_v_j + 1) - __pyx_v_y1_box)) * __pyx_f_15draw_rectangles_minmax((__pyx_v_y2_box - __pyx_v_j)));\n\n        /* \"draw_rectangles.pyx\":63\n *             for j in range(pooling_size):\n *                 y_contrib = minmax(j+1-y1_box)*minmax(y2_box-j)\n *                 for k in range(pooling_size):             # <<<<<<<<<<<<<<\n *                     x_contrib = minmax(k+1-x1_box)*minmax(x2_box-k)\n *                     # print(\"j {} yc {} k {} xc {}\".format(j, y_contrib, k, x_contrib))\n */\n        __pyx_t_40 = __pyx_v_pooling_size;\n        for (__pyx_t_41 = 0; __pyx_t_41 < __pyx_t_40; __pyx_t_41+=1) {\n          __pyx_v_k = __pyx_t_41;\n\n          /* \"draw_rectangles.pyx\":64\n *                 y_contrib = minmax(j+1-y1_box)*minmax(y2_box-j)\n *                 for k in range(pooling_size):\n *                     x_contrib = minmax(k+1-x1_box)*minmax(x2_box-k)             # <<<<<<<<<<<<<<\n *                     # print(\"j {} yc {} k {} xc {}\".format(j, y_contrib, k, x_contrib))\n *                     uboxes[n,i,j,k] = x_contrib*y_contrib\n */\n          __pyx_v_x_contrib = (__pyx_f_15draw_rectangles_minmax(((__pyx_v_k + 1) - __pyx_v_x1_box)) * __pyx_f_15draw_rectangles_minmax((__pyx_v_x2_box - __pyx_v_k)));\n\n          /* \"draw_rectangles.pyx\":66\n *                     x_contrib = minmax(k+1-x1_box)*minmax(x2_box-k)\n *                     # print(\"j {} yc {} k {} xc {}\".format(j, y_contrib, k, x_contrib))\n *   
                  uboxes[n,i,j,k] = x_contrib*y_contrib             # <<<<<<<<<<<<<<\n *     return uboxes\n */\n          __pyx_t_42 = __pyx_v_n;\n          __pyx_t_43 = __pyx_v_i;\n          __pyx_t_44 = __pyx_v_j;\n          __pyx_t_45 = __pyx_v_k;\n          __pyx_t_11 = -1;\n          if (unlikely(__pyx_t_42 >= (size_t)__pyx_pybuffernd_uboxes.diminfo[0].shape)) __pyx_t_11 = 0;\n          if (unlikely(__pyx_t_43 >= (size_t)__pyx_pybuffernd_uboxes.diminfo[1].shape)) __pyx_t_11 = 1;\n          if (unlikely(__pyx_t_44 >= (size_t)__pyx_pybuffernd_uboxes.diminfo[2].shape)) __pyx_t_11 = 2;\n          if (unlikely(__pyx_t_45 >= (size_t)__pyx_pybuffernd_uboxes.diminfo[3].shape)) __pyx_t_11 = 3;\n          if (unlikely(__pyx_t_11 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_11);\n            __PYX_ERR(0, 66, __pyx_L1_error)\n          }\n          *__Pyx_BufPtrStrided4d(__pyx_t_15draw_rectangles_DTYPE_t *, __pyx_pybuffernd_uboxes.rcbuffer->pybuffer.buf, __pyx_t_42, __pyx_pybuffernd_uboxes.diminfo[0].strides, __pyx_t_43, __pyx_pybuffernd_uboxes.diminfo[1].strides, __pyx_t_44, __pyx_pybuffernd_uboxes.diminfo[2].strides, __pyx_t_45, __pyx_pybuffernd_uboxes.diminfo[3].strides) = (__pyx_v_x_contrib * __pyx_v_y_contrib);\n        }\n      }\n    }\n  }\n\n  /* \"draw_rectangles.pyx\":67\n *                     # print(\"j {} yc {} k {} xc {}\".format(j, y_contrib, k, x_contrib))\n *                     uboxes[n,i,j,k] = x_contrib*y_contrib\n *     return uboxes             # <<<<<<<<<<<<<<\n */\n  __Pyx_XDECREF(((PyObject *)__pyx_r));\n  __Pyx_INCREF(((PyObject *)__pyx_v_uboxes));\n  __pyx_r = ((PyArrayObject *)__pyx_v_uboxes);\n  goto __pyx_L0;\n\n  /* \"draw_rectangles.pyx\":27\n *     return min(max(x, 0), 1)\n * \n * cdef np.ndarray[DTYPE_t, ndim=4] draw_union_boxes_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] box_pairs, unsigned int pooling_size):\n *     \"\"\"\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  
__Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_XDECREF(__pyx_t_5);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_box_pairs.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_uboxes.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"draw_rectangles.draw_union_boxes_c\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_box_pairs.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_uboxes.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_uboxes);\n  __Pyx_XGIVEREF((PyObject *)__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getbuffer__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), 
((int)__pyx_v_flags));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_v_copy_shape;\n  int __pyx_v_i;\n  int __pyx_v_ndim;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  int __pyx_v_t;\n  char *__pyx_v_f;\n  PyArray_Descr *__pyx_v_descr = 0;\n  int __pyx_v_offset;\n  int __pyx_v_hasfields;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  int __pyx_t_5;\n  PyObject *__pyx_t_6 = NULL;\n  char *__pyx_t_7;\n  __Pyx_RefNannySetupContext(\"__getbuffer__\", 0);\n  if (__pyx_v_info != NULL) {\n    __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(__pyx_v_info->obj);\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":203\n *             # of flags\n * \n *             if info == NULL: return             # <<<<<<<<<<<<<<\n * \n *             cdef int copy_shape, i, ndim\n */\n  __pyx_t_1 = ((__pyx_v_info == NULL) != 0);\n  if (__pyx_t_1) {\n    __pyx_r = 0;\n    goto __pyx_L0;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":206\n * \n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n */\n  __pyx_v_endian_detector = 1;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":207\n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n * \n *             ndim = PyArray_NDIM(self)\n */\n  __pyx_v_little_endian = ((((char 
*)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":209\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n *             ndim = PyArray_NDIM(self)             # <<<<<<<<<<<<<<\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":212\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 copy_shape = 1             # <<<<<<<<<<<<<<\n *             else:\n *                 copy_shape = 0\n */\n    __pyx_v_copy_shape = 1;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n    goto __pyx_L4;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":214\n *                 copy_shape = 1\n *             else:\n *                 copy_shape = 0             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n */\n  /*else*/ {\n    __pyx_v_copy_shape = 0;\n  }\n  __pyx_L4:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & 
pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L6_bool_binop_done;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":217\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L6_bool_binop_done:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 
0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 218, __pyx_L1_error)\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L9_bool_binop_done;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":221\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L9_bool_binop_done:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *     
            raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":222\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 222, __pyx_L1_error)\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":224\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n *             info.buf = PyArray_DATA(self)             # <<<<<<<<<<<<<<\n *             info.ndim = ndim\n *             if copy_shape:\n */\n  __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":225\n * \n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim             # <<<<<<<<<<<<<<\n *             if copy_shape:\n *                 # Allocate new buffer for strides and shape info.\n */\n  __pyx_v_info->ndim = __pyx_v_ndim;\n\n  /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n  __pyx_t_1 = (__pyx_v_copy_shape != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":229\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)             # <<<<<<<<<<<<<<\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n */\n    __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2)));\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":230\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim             # <<<<<<<<<<<<<<\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n */\n    __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":231\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):             # <<<<<<<<<<<<<<\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = 
PyArray_DIMS(self)[i]\n */\n    __pyx_t_4 = __pyx_v_ndim;\n    for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {\n      __pyx_v_i = __pyx_t_5;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":232\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]             # <<<<<<<<<<<<<<\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n */\n      (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":233\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]             # <<<<<<<<<<<<<<\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n */\n      (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n    goto __pyx_L11;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":235\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)             # <<<<<<<<<<<<<<\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n */\n  /*else*/ {\n    __pyx_v_info->strides = ((Py_ssize_t 
*)PyArray_STRIDES(__pyx_v_self));\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":236\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)             # <<<<<<<<<<<<<<\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n */\n    __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));\n  }\n  __pyx_L11:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":237\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL             # <<<<<<<<<<<<<<\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n */\n  __pyx_v_info->suboffsets = NULL;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":238\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)             # <<<<<<<<<<<<<<\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n * \n */\n  __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":239\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)             # <<<<<<<<<<<<<<\n * \n *             cdef int t\n */\n  __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":242\n * \n *             cdef int t\n *             cdef char* f = NULL             # <<<<<<<<<<<<<<\n *            
 cdef dtype descr = self.descr\n *             cdef int offset\n */\n  __pyx_v_f = NULL;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":243\n *             cdef int t\n *             cdef char* f = NULL\n *             cdef dtype descr = self.descr             # <<<<<<<<<<<<<<\n *             cdef int offset\n * \n */\n  __pyx_t_3 = ((PyObject *)__pyx_v_self->descr);\n  __Pyx_INCREF(__pyx_t_3);\n  __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":246\n *             cdef int offset\n * \n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)             # <<<<<<<<<<<<<<\n * \n *             if not hasfields and not copy_shape:\n */\n  __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n  __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L15_bool_binop_done;\n  }\n  __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L15_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":250\n *             if not hasfields and not copy_shape:\n *                 # do not call releasebuffer\n *                 info.obj = None             # <<<<<<<<<<<<<<\n *             else:\n *                 # need to call releasebuffer\n */\n    __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(Py_None);\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = 
Py_None;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n    goto __pyx_L14;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":253\n *             else:\n *                 # need to call releasebuffer\n *                 info.obj = self             # <<<<<<<<<<<<<<\n * \n *             if not hasfields:\n */\n  /*else*/ {\n    __Pyx_INCREF(((PyObject *)__pyx_v_self));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_self));\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = ((PyObject *)__pyx_v_self);\n  }\n  __pyx_L14:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":256\n * \n *             if not hasfields:\n *                 t = descr.type_num             # <<<<<<<<<<<<<<\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n */\n    __pyx_t_4 = __pyx_v_descr->type_num;\n    __pyx_v_t = __pyx_t_4;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # 
<<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);\n    if (!__pyx_t_2) {\n      goto __pyx_L20_next_or;\n    } else {\n    }\n    __pyx_t_2 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_L20_next_or:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":258\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);\n    if (__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_1 = __pyx_t_2;\n    __pyx_L19_bool_binop_done:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (__pyx_t_1) {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise 
ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 259, __pyx_L1_error)\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":260\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n */\n    switch (__pyx_v_t) {\n      case NPY_BYTE:\n      __pyx_v_f = ((char *)\"b\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":261\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n */\n      case NPY_UBYTE:\n      
__pyx_v_f = ((char *)\"B\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":262\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n */\n      case NPY_SHORT:\n      __pyx_v_f = ((char *)\"h\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":263\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n */\n      case NPY_USHORT:\n      __pyx_v_f = ((char *)\"H\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":264\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n */\n      case NPY_INT:\n      __pyx_v_f = ((char *)\"i\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":265\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n */\n      case NPY_UINT:\n      __pyx_v_f = 
((char *)\"I\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":266\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n */\n      case NPY_LONG:\n      __pyx_v_f = ((char *)\"l\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":267\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n */\n      case NPY_ULONG:\n      __pyx_v_f = ((char *)\"L\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":268\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n */\n      case NPY_LONGLONG:\n      __pyx_v_f = ((char *)\"q\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":269\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n */\n      case NPY_ULONGLONG:\n      __pyx_v_f = ((char 
*)\"Q\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":270\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n */\n      case NPY_FLOAT:\n      __pyx_v_f = ((char *)\"f\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":271\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n */\n      case NPY_DOUBLE:\n      __pyx_v_f = ((char *)\"d\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":272\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n */\n      case NPY_LONGDOUBLE:\n      __pyx_v_f = ((char *)\"g\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":273\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n */\n      case NPY_CFLOAT:\n      __pyx_v_f = ((char 
*)\"Zf\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"\n */\n      case NPY_CDOUBLE:\n      __pyx_v_f = ((char *)\"Zd\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":275\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n */\n      case NPY_CLONGDOUBLE:\n      __pyx_v_f = ((char *)\"Zg\");\n      break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":276\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"             # <<<<<<<<<<<<<<\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      case NPY_OBJECT:\n      __pyx_v_f = ((char *)\"O\");\n      break;\n      default:\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":278\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *                 info.format = f\n *                 return\n */\n      __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, 
__pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_GIVEREF(__pyx_t_6);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6);\n      __pyx_t_6 = 0;\n      __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_Raise(__pyx_t_6, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;\n      __PYX_ERR(1, 278, __pyx_L1_error)\n      break;\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":279\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f             # <<<<<<<<<<<<<<\n *                 return\n *             else:\n */\n    __pyx_v_info->format = __pyx_v_f;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":280\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f\n *                 return             # <<<<<<<<<<<<<<\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n */\n    __pyx_r = 0;\n    goto __pyx_L0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) 
or\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":282\n *                 return\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)             # <<<<<<<<<<<<<<\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n */\n  /*else*/ {\n    __pyx_v_info->format = ((char *)malloc(0xFF));\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":283\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment             # <<<<<<<<<<<<<<\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,\n */\n    (__pyx_v_info->format[0]) = '^';\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":284\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0             # <<<<<<<<<<<<<<\n *                 f = _util_dtypestring(descr, info.format + 1,\n *                                       info.format + _buffer_format_string_len,\n */\n    __pyx_v_offset = 0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":285\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,             # <<<<<<<<<<<<<<\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n */\n    __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), 
(&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error)\n    __pyx_v_f = __pyx_t_7;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":288\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n *                 f[0] = c'\\0' # Terminate format string             # <<<<<<<<<<<<<<\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n */\n    (__pyx_v_f[0]) = '\\x00';\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"numpy.ndarray.__getbuffer__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) {\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL;\n  }\n  goto __pyx_L2;\n  __pyx_L0:;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) {\n    __Pyx_GOTREF(Py_None);\n    __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL;\n  }\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_descr);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # 
<<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__releasebuffer__ (wrapper)\", 0);\n  __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__releasebuffer__\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":292\n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)             # <<<<<<<<<<<<<<\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)\n */\n    free(__pyx_v_info->format);\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *              
   stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":294\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)             # <<<<<<<<<<<<<<\n *                 # info.shape was stored after info.strides in the same block\n * \n */\n    free(__pyx_v_info->strides);\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline 
object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew1\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":771\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n *     return PyArray_MultiIterNew(1, <void*>a)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew1\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  
__Pyx_RefNannySetupContext(\"PyArray_MultiIterNew2\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":774\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew2\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew3\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":777\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n * 
    return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew3\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew4\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":780\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)             # <<<<<<<<<<<<<<\n * \n 
* cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew4\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew5\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":783\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)             # <<<<<<<<<<<<<<\n 
* \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew5\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. 
The new location in the format string is returned.\n */\n\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) {\n  PyArray_Descr *__pyx_v_child = 0;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  PyObject *__pyx_v_fields = 0;\n  PyObject *__pyx_v_childname = NULL;\n  PyObject *__pyx_v_new_offset = NULL;\n  PyObject *__pyx_v_t = NULL;\n  char *__pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  Py_ssize_t __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  int __pyx_t_7;\n  long __pyx_t_8;\n  char *__pyx_t_9;\n  __Pyx_RefNannySetupContext(\"_util_dtypestring\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":790\n * \n *     cdef dtype child\n *     cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n *     cdef tuple fields\n */\n  __pyx_v_endian_detector = 1;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":791\n *     cdef dtype child\n *     cdef int endian_detector = 1\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n *     cdef tuple fields\n * \n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  if (unlikely(__pyx_v_descr->names == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n    __PYX_ERR(1, 794, __pyx_L1_error)\n  }\n  __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); 
__pyx_t_2 = 0;\n  for (;;) {\n    if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error)\n    #else\n    __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    #endif\n    __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":795\n * \n *     for childname in descr.names:\n *         fields = descr.fields[childname]             # <<<<<<<<<<<<<<\n *         child, new_offset = fields\n * \n */\n    if (unlikely(__pyx_v_descr->fields == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 795, __pyx_L1_error)\n    }\n    __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"tuple\", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3));\n    __pyx_t_3 = 0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":796\n *     for childname in descr.names:\n *         fields = descr.fields[childname]\n *         child, new_offset = fields             # <<<<<<<<<<<<<<\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n */\n    if (likely(__pyx_v_fields != Py_None)) {\n      PyObject* sequence = __pyx_v_fields;\n      #if !CYTHON_COMPILING_IN_PYPY\n      Py_ssize_t size = Py_SIZE(sequence);\n      
#else\n      Py_ssize_t size = PySequence_Size(sequence);\n      #endif\n      if (unlikely(size != 2)) {\n        if (size > 2) __Pyx_RaiseTooManyValuesError(2);\n        else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);\n        __PYX_ERR(1, 796, __pyx_L1_error)\n      }\n      #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n      __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); \n      __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); \n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_4);\n      #else\n      __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      #endif\n    } else {\n      __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error)\n    }\n    if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3));\n    __pyx_t_3 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4);\n    __pyx_t_4 = 0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if 
(unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 799, __pyx_L1_error)\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0);\n    if (!__pyx_t_7) {\n      goto __pyx_L8_next_or;\n    } else {\n    }\n    __pyx_t_7 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto 
__pyx_L7_bool_binop_done;\n    }\n    __pyx_L8_next_or:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":802\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *             raise ValueError(u\"Non-native byte order not supported\")\n *             # One could encode it in the format string and have Cython\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0);\n    if (__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_6 = __pyx_t_7;\n    __pyx_L7_bool_binop_done:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (__pyx_t_6) {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n     
 __PYX_ERR(1, 803, __pyx_L1_error)\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":813\n * \n *         # Output padding bytes\n *         while offset[0] < new_offset:             # <<<<<<<<<<<<<<\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n */\n    while (1) {\n      __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (!__pyx_t_6) break;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":814\n *         # Output padding bytes\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte             # <<<<<<<<<<<<<<\n *             f += 1\n *             offset[0] += 1\n */\n      (__pyx_v_f[0]) = 0x78;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":815\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1             # <<<<<<<<<<<<<<\n *             offset[0] += 1\n * \n */\n      __pyx_v_f = 
(__pyx_v_f + 1);\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":816\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n *             offset[0] += 1             # <<<<<<<<<<<<<<\n * \n *         offset[0] += child.itemsize\n */\n      __pyx_t_8 = 0;\n      (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1);\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":818\n *             offset[0] += 1\n * \n *         offset[0] += child.itemsize             # <<<<<<<<<<<<<<\n * \n *         if not PyDataType_HASFIELDS(child):\n */\n    __pyx_t_8 = 0;\n    (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize);\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n    __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":821\n * \n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num             # <<<<<<<<<<<<<<\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n */\n      __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4);\n      __pyx_t_4 = 0;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 
raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0);\n      if (__pyx_t_6) {\n\n        /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n        __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __PYX_ERR(1, 823, __pyx_L1_error)\n\n        /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":826\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = 
__Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 98;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":827\n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":828\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x68;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":829\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 72;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":830\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error)\n      
__Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x69;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":831\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 73;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":832\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n */\n      __pyx_t_4 = 
__Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x6C;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":833\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 76;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":834\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"             # <<<<<<<<<<<<<<\n *             elif t == 
NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x71;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":835\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 81;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":836\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  
#\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":837\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x64;\n        goto __pyx_L15;\n      }\n\n      /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":838\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x67;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":839\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf             # <<<<<<<<<<<<<<\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if 
(unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x66;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":840\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd             # <<<<<<<<<<<<<<\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x64;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":841\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg             # <<<<<<<<<<<<<<\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) 
__PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x67;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":842\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"             # <<<<<<<<<<<<<<\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 79;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":844\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *             f += 1\n *         
else:\n */\n      /*else*/ {\n        __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_GIVEREF(__pyx_t_3);\n        PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3);\n        __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __PYX_ERR(1, 844, __pyx_L1_error)\n      }\n      __pyx_L15:;\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":845\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *             f += 1             # <<<<<<<<<<<<<<\n *         else:\n *             # Cython ignores struct boundary information (\"T{...}\"),\n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n      goto __pyx_L13;\n    }\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":849\n *             # Cython ignores struct boundary information (\"T{...}\"),\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)             # <<<<<<<<<<<<<<\n *     return f\n * \n */\n    /*else*/ {\n      __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, 
__pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error)\n      __pyx_v_f = __pyx_t_9;\n    }\n    __pyx_L13:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":850\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)\n *     return f             # <<<<<<<<<<<<<<\n * \n * \n */\n  __pyx_r = __pyx_v_f;\n  goto __pyx_L0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. 
The new location in the format string is returned.\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"numpy._util_dtypestring\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_child);\n  __Pyx_XDECREF(__pyx_v_fields);\n  __Pyx_XDECREF(__pyx_v_childname);\n  __Pyx_XDECREF(__pyx_v_new_offset);\n  __Pyx_XDECREF(__pyx_v_t);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *      if base is None:\n */\n\nstatic CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) {\n  PyObject *__pyx_v_baseptr;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  __Pyx_RefNannySetupContext(\"set_array_base\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n  __pyx_t_1 = (__pyx_v_base == Py_None);\n  __pyx_t_2 = (__pyx_t_1 != 0);\n  if (__pyx_t_2) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":969\n *      cdef PyObject* baseptr\n *      if base is None:\n *          baseptr = NULL             # <<<<<<<<<<<<<<\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n */\n    __pyx_v_baseptr = NULL;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef 
PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":971\n *          baseptr = NULL\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!             # <<<<<<<<<<<<<<\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n */\n  /*else*/ {\n    Py_INCREF(__pyx_v_base);\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":972\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base             # <<<<<<<<<<<<<<\n *      Py_XDECREF(arr.base)\n *      arr.base = baseptr\n */\n    __pyx_v_baseptr = ((PyObject *)__pyx_v_base);\n  }\n  __pyx_L3:;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":973\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)             # <<<<<<<<<<<<<<\n *      arr.base = baseptr\n * \n */\n  Py_XDECREF(__pyx_v_arr->base);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":974\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n *      arr.base = baseptr             # <<<<<<<<<<<<<<\n * \n * cdef inline object get_array_base(ndarray arr):\n */\n  __pyx_v_arr->base = __pyx_v_baseptr;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *      if base is None:\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"get_array_base\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     else:\n */\n  __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":978\n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:\n *         return None             # <<<<<<<<<<<<<<\n *     else:\n *         return <object>arr.base\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(Py_None);\n    __pyx_r = Py_None;\n    goto __pyx_L0;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     else:\n */\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":980\n *         return None\n *     else:\n *         return <object>arr.base             # <<<<<<<<<<<<<<\n * \n * \n */\n  /*else*/ {\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(((PyObject *)__pyx_v_arr->base));\n    __pyx_r = ((PyObject *)__pyx_v_arr->base);\n    goto __pyx_L0;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * 
cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_array\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":987\n * cdef inline int import_array() except -1:\n *     try:\n *         _import_array()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n */\n      __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 987, __pyx_L3_error)\n\n      /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":988\n *     try:\n *         _import_array()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 988, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 989, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 989, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_umath\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef 
inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":993\n * cdef inline int import_umath() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 993, __pyx_L3_error)\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":994\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 994, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 995, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 995, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise 
ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_ufunc\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":999\n * cdef inline int import_ufunc() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 999, __pyx_L3_error)\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* 
\"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1000\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 1000, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 1001, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 1001, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath 
failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyMethodDef __pyx_methods[] = {\n  {0, 0, 0, 0}\n};\n\n#if PY_MAJOR_VERSION >= 3\nstatic struct PyModuleDef __pyx_moduledef = {\n  #if PY_VERSION_HEX < 0x03020000\n    { PyObject_HEAD_INIT(NULL) NULL, 0, NULL },\n  #else\n    PyModuleDef_HEAD_INIT,\n  #endif\n    \"draw_rectangles\",\n    0, /* m_doc */\n    -1, /* m_size */\n    __pyx_methods /* m_methods */,\n    NULL, /* m_reload */\n    NULL, /* m_traverse */\n    NULL, /* m_clear */\n    NULL /* m_free */\n};\n#endif\n\nstatic __Pyx_StringTabEntry __pyx_string_tab[] = {\n  {&__pyx_n_s_DTYPE, __pyx_k_DTYPE, sizeof(__pyx_k_DTYPE), 0, 0, 1, 1},\n  {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0},\n  {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0},\n  {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1},\n  {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0},\n  {&__pyx_kp_s_Padding_0_not_supported_yet, __pyx_k_Padding_0_not_supported_yet, sizeof(__pyx_k_Padding_0_not_supported_yet), 0, 0, 1, 0},\n  {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1},\n  {&__pyx_kp_s_Users_rowanz_code_scene_graph_l, 
__pyx_k_Users_rowanz_code_scene_graph_l, sizeof(__pyx_k_Users_rowanz_code_scene_graph_l), 0, 0, 1, 0},\n  {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},\n  {&__pyx_n_s_bbox_pairs, __pyx_k_bbox_pairs, sizeof(__pyx_k_bbox_pairs), 0, 0, 1, 1},\n  {&__pyx_n_s_draw_rectangles, __pyx_k_draw_rectangles, sizeof(__pyx_k_draw_rectangles), 0, 0, 1, 1},\n  {&__pyx_n_s_draw_union_boxes, __pyx_k_draw_union_boxes, sizeof(__pyx_k_draw_union_boxes), 0, 0, 1, 1},\n  {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1},\n  {&__pyx_n_s_float32, __pyx_k_float32, sizeof(__pyx_k_float32), 0, 0, 1, 1},\n  {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},\n  {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},\n  {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0},\n  {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0},\n  {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1},\n  {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1},\n  {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0},\n  {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0},\n  {&__pyx_n_s_padding, __pyx_k_padding, sizeof(__pyx_k_padding), 0, 0, 1, 1},\n  {&__pyx_n_s_pooling_size, __pyx_k_pooling_size, sizeof(__pyx_k_pooling_size), 0, 0, 1, 1},\n  {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1},\n  {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},\n  {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0},\n  
{&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1},\n  {0, 0, 0, 0, 0, 0, 0}\n};\nstatic int __Pyx_InitCachedBuiltins(void) {\n  __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 45, __pyx_L1_error)\n  __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error)\n  __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error)\n  __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 989, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic int __Pyx_InitCachedConstants(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_InitCachedConstants\", 0);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n  __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 218, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple_);\n  __Pyx_GIVEREF(__pyx_tuple_);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":222\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n  __pyx_tuple__2 = PyTuple_Pack(1, 
__pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 222, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__2);\n  __Pyx_GIVEREF(__pyx_tuple__2);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n  __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__3);\n  __Pyx_GIVEREF(__pyx_tuple__3);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n  __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 799, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__4);\n  __Pyx_GIVEREF(__pyx_tuple__4);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n  __pyx_tuple__5 = PyTuple_Pack(1, 
__pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 803, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__5);\n  __Pyx_GIVEREF(__pyx_tuple__5);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n  __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 823, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__6);\n  __Pyx_GIVEREF(__pyx_tuple__6);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n  __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 989, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__7);\n  __Pyx_GIVEREF(__pyx_tuple__7);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n  __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 995, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__8);\n  __Pyx_GIVEREF(__pyx_tuple__8);\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise 
ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 1001, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__9);\n  __Pyx_GIVEREF(__pyx_tuple__9);\n\n  /* \"draw_rectangles.pyx\":12\n * ctypedef np.float32_t DTYPE_t\n * \n * def draw_union_boxes(bbox_pairs, pooling_size, padding=0):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Draws union boxes for the image.\n */\n  __pyx_tuple__10 = PyTuple_Pack(3, __pyx_n_s_bbox_pairs, __pyx_n_s_pooling_size, __pyx_n_s_padding); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__10);\n  __Pyx_GIVEREF(__pyx_tuple__10);\n  __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(3, 0, 3, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Users_rowanz_code_scene_graph_l, __pyx_n_s_draw_union_boxes, 12, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_InitGlobals(void) {\n  if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\n#if PY_MAJOR_VERSION < 3\nPyMODINIT_FUNC initdraw_rectangles(void); /*proto*/\nPyMODINIT_FUNC initdraw_rectangles(void)\n#else\nPyMODINIT_FUNC PyInit_draw_rectangles(void); /*proto*/\nPyMODINIT_FUNC PyInit_draw_rectangles(void)\n#endif\n{\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  __Pyx_RefNannyDeclarations\n  #if CYTHON_REFNANNY\n  __Pyx_RefNanny = 
__Pyx_RefNannyImportAPI(\"refnanny\");\n  if (!__Pyx_RefNanny) {\n      PyErr_Clear();\n      __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");\n      if (!__Pyx_RefNanny)\n          Py_FatalError(\"failed to import 'refnanny' module\");\n  }\n  #endif\n  __Pyx_RefNannySetupContext(\"PyMODINIT_FUNC PyInit_draw_rectangles(void)\", 0);\n  if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_bytes = PyBytes_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_unicode = PyUnicode_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pyx_CyFunction_USED\n  if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_FusedFunction_USED\n  if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Coroutine_USED\n  if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Generator_USED\n  if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_StopAsyncIteration_USED\n  if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  /*--- Library function declarations ---*/\n  /*--- Threads initialization code ---*/\n  #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS\n  #ifdef WITH_THREAD /* Python build with threading support? 
*/\n  PyEval_InitThreads();\n  #endif\n  #endif\n  /*--- Module creation code ---*/\n  #if PY_MAJOR_VERSION < 3\n  __pyx_m = Py_InitModule4(\"draw_rectangles\", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);\n  #else\n  __pyx_m = PyModule_Create(&__pyx_moduledef);\n  #endif\n  if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_d);\n  __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if CYTHON_COMPILING_IN_PYPY\n  Py_INCREF(__pyx_b);\n  #endif\n  if (PyObject_SetAttrString(__pyx_m, \"__builtins__\", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  /*--- Initialize various global constants etc. ---*/\n  if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n  if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  if (__pyx_module_is_main_draw_rectangles) {\n    if (PyObject_SetAttrString(__pyx_m, \"__name__\", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  }\n  #if PY_MAJOR_VERSION >= 3\n  {\n    PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)\n    if (!PyDict_GetItemString(modules, \"draw_rectangles\")) {\n      if (unlikely(PyDict_SetItemString(modules, \"draw_rectangles\", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)\n    }\n  }\n  #endif\n  /*--- Builtin init code ---*/\n  if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Constants init code ---*/\n  if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Global init code ---*/\n  /*--- Variable export code ---*/\n  /*--- Function export code ---*/\n  /*--- Type init code ---*/\n  /*--- Type import code ---*/\n  
__pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, \"type\", \n  #if CYTHON_COMPILING_IN_PYPY\n  sizeof(PyTypeObject),\n  #else\n  sizeof(PyHeapTypeObject),\n  #endif\n  0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error)\n  __pyx_ptype_5numpy_dtype = __Pyx_ImportType(\"numpy\", \"dtype\", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error)\n  __pyx_ptype_5numpy_flatiter = __Pyx_ImportType(\"numpy\", \"flatiter\", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error)\n  __pyx_ptype_5numpy_broadcast = __Pyx_ImportType(\"numpy\", \"broadcast\", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error)\n  __pyx_ptype_5numpy_ndarray = __Pyx_ImportType(\"numpy\", \"ndarray\", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error)\n  __pyx_ptype_5numpy_ufunc = __Pyx_ImportType(\"numpy\", \"ufunc\", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error)\n  /*--- Variable import code ---*/\n  /*--- Function import code ---*/\n  /*--- Execution code ---*/\n  #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)\n  if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n\n  /* \"draw_rectangles.pyx\":6\n * \n * cimport cython\n * import numpy as np             # <<<<<<<<<<<<<<\n * cimport numpy as np\n * \n */\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 6, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"draw_rectangles.pyx\":9\n * cimport numpy as np\n * \n * DTYPE = np.float32             # <<<<<<<<<<<<<<\n * ctypedef np.float32_t DTYPE_t\n * \n */\n 
 __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 9, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 9, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_DTYPE, __pyx_t_2) < 0) __PYX_ERR(0, 9, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"draw_rectangles.pyx\":12\n * ctypedef np.float32_t DTYPE_t\n * \n * def draw_union_boxes(bbox_pairs, pooling_size, padding=0):             # <<<<<<<<<<<<<<\n *     \"\"\"\n *     Draws union boxes for the image.\n */\n  __pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_15draw_rectangles_1draw_union_boxes, NULL, __pyx_n_s_draw_rectangles); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_draw_union_boxes, __pyx_t_2) < 0) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"draw_rectangles.pyx\":1\n * ######             # <<<<<<<<<<<<<<\n * # Draws rectangles\n * ######\n */\n  __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /*--- Wrapped vars code ---*/\n\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  if (__pyx_m) {\n    if (__pyx_d) {\n      __Pyx_AddTraceback(\"init draw_rectangles\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n    }\n 
   Py_DECREF(__pyx_m); __pyx_m = 0;\n  } else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_ImportError, \"init draw_rectangles\");\n  }\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  #if PY_MAJOR_VERSION < 3\n  return;\n  #else\n  return __pyx_m;\n  #endif\n}\n\n/* --- Runtime support code --- */\n/* Refnanny */\n#if CYTHON_REFNANNY\nstatic __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {\n    PyObject *m = NULL, *p = NULL;\n    void *r = NULL;\n    m = PyImport_ImportModule((char *)modname);\n    if (!m) goto end;\n    p = PyObject_GetAttrString(m, (char *)\"RefNannyAPI\");\n    if (!p) goto end;\n    r = PyLong_AsVoidPtr(p);\nend:\n    Py_XDECREF(p);\n    Py_XDECREF(m);\n    return (__Pyx_RefNannyAPIStruct *)r;\n}\n#endif\n\n/* GetBuiltinName */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name) {\n    PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);\n    if (unlikely(!result)) {\n        PyErr_Format(PyExc_NameError,\n#if PY_MAJOR_VERSION >= 3\n            \"name '%U' is not defined\", name);\n#else\n            \"name '%.200s' is not defined\", PyString_AS_STRING(name));\n#endif\n    }\n    return result;\n}\n\n/* RaiseArgTupleInvalid */\nstatic void __Pyx_RaiseArgtupleInvalid(\n    const char* func_name,\n    int exact,\n    Py_ssize_t num_min,\n    Py_ssize_t num_max,\n    Py_ssize_t num_found)\n{\n    Py_ssize_t num_expected;\n    const char *more_or_less;\n    if (num_found < num_min) {\n        num_expected = num_min;\n        more_or_less = \"at least\";\n    } else {\n        num_expected = num_max;\n        more_or_less = \"at most\";\n    }\n    if (exact) {\n        more_or_less = \"exactly\";\n    }\n    PyErr_Format(PyExc_TypeError,\n                 \"%.200s() takes %.8s %\" CYTHON_FORMAT_SSIZE_T \"d positional argument%.1s (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                 func_name, more_or_less, num_expected,\n                 (num_expected == 1) ? 
\"\" : \"s\", num_found);\n}\n\n/* RaiseDoubleKeywords */\nstatic void __Pyx_RaiseDoubleKeywordsError(\n    const char* func_name,\n    PyObject* kw_name)\n{\n    PyErr_Format(PyExc_TypeError,\n        #if PY_MAJOR_VERSION >= 3\n        \"%s() got multiple values for keyword argument '%U'\", func_name, kw_name);\n        #else\n        \"%s() got multiple values for keyword argument '%s'\", func_name,\n        PyString_AsString(kw_name));\n        #endif\n}\n\n/* ParseKeywords */\nstatic int __Pyx_ParseOptionalKeywords(\n    PyObject *kwds,\n    PyObject **argnames[],\n    PyObject *kwds2,\n    PyObject *values[],\n    Py_ssize_t num_pos_args,\n    const char* function_name)\n{\n    PyObject *key = 0, *value = 0;\n    Py_ssize_t pos = 0;\n    PyObject*** name;\n    PyObject*** first_kw_arg = argnames + num_pos_args;\n    while (PyDict_Next(kwds, &pos, &key, &value)) {\n        name = first_kw_arg;\n        while (*name && (**name != key)) name++;\n        if (*name) {\n            values[name-argnames] = value;\n            continue;\n        }\n        name = first_kw_arg;\n        #if PY_MAJOR_VERSION < 3\n        if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) {\n            while (*name) {\n                if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))\n                        && _PyString_Eq(**name, key)) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    if ((**argname == key) || (\n                            (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))\n                             && _PyString_Eq(**argname, key))) {\n                        goto arg_passed_twice;\n                    }\n            
        argname++;\n                }\n            }\n        } else\n        #endif\n        if (likely(PyUnicode_Check(key))) {\n            while (*name) {\n                int cmp = (**name == key) ? 0 :\n                #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\n                #endif\n                    PyUnicode_Compare(**name, key);\n                if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                if (cmp == 0) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    int cmp = (**argname == key) ? 0 :\n                    #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                        (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 :\n                    #endif\n                        PyUnicode_Compare(**argname, key);\n                    if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                    if (cmp == 0) goto arg_passed_twice;\n                    argname++;\n                }\n            }\n        } else\n            goto invalid_keyword_type;\n        if (kwds2) {\n            if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;\n        } else {\n            goto invalid_keyword;\n        }\n    }\n    return 0;\narg_passed_twice:\n    __Pyx_RaiseDoubleKeywordsError(function_name, key);\n    goto bad;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    goto bad;\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\nbad:\n    return -1;\n}\n\n/* PyIntBinop */\n#if !CYTHON_COMPILING_IN_PYPY\nstatic PyObject* __Pyx_PyInt_EqObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, CYTHON_UNUSED int inplace) {\n    if (op1 == op2) {\n        Py_RETURN_TRUE;\n    }\n    #if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_CheckExact(op1))) {\n        const long b = intval;\n        long a = PyInt_AS_LONG(op1);\n        if (a == b) {\n            Py_RETURN_TRUE;\n        } else {\n            Py_RETURN_FALSE;\n        }\n    }\n    #endif\n    #if CYTHON_USE_PYLONG_INTERNALS\n    if (likely(PyLong_CheckExact(op1))) {\n        const long b = intval;\n        long a;\n        const digit* digits = ((PyLongObject*)op1)->ob_digit;\n        const Py_ssize_t size = Py_SIZE(op1);\n        if (likely(__Pyx_sst_abs(size) <= 1)) {\n            a = likely(size) ? 
digits[0] : 0;\n            if (size == -1) a = -a;\n        } else {\n            switch (size) {\n                case -2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 2:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case -3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 3:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case -4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                case 4:\n                    if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                        a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]));\n                        break;\n                    }\n                #if PyLong_SHIFT < 30 && 
PyLong_SHIFT != 15\n                default: return PyLong_Type.tp_richcompare(op1, op2, Py_EQ);\n                #else\n                default: Py_RETURN_FALSE;\n                #endif\n            }\n        }\n            if (a == b) {\n                Py_RETURN_TRUE;\n            } else {\n                Py_RETURN_FALSE;\n            }\n    }\n    #endif\n    if (PyFloat_CheckExact(op1)) {\n        const long b = intval;\n        double a = PyFloat_AS_DOUBLE(op1);\n            if ((double)a == (double)b) {\n                Py_RETURN_TRUE;\n            } else {\n                Py_RETURN_FALSE;\n            }\n    }\n    return PyObject_RichCompare(op1, op2, Py_EQ);\n}\n#endif\n\n/* ExtTypeTest */\nstatic CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {\n    if (unlikely(!type)) {\n        PyErr_SetString(PyExc_SystemError, \"Missing type object\");\n        return 0;\n    }\n    if (likely(PyObject_TypeCheck(obj, type)))\n        return 1;\n    PyErr_Format(PyExc_TypeError, \"Cannot convert %.200s to %.200s\",\n                 Py_TYPE(obj)->tp_name, type->tp_name);\n    return 0;\n}\n\n/* BufferFormatCheck */\nstatic CYTHON_INLINE int __Pyx_IsLittleEndian(void) {\n  unsigned int n = 1;\n  return *(unsigned char*)(&n) != 0;\n}\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type) {\n  stack[0].field = &ctx->root;\n  stack[0].parent_offset = 0;\n  ctx->root.type = type;\n  ctx->root.name = \"buffer dtype\";\n  ctx->root.offset = 0;\n  ctx->head = stack;\n  ctx->head->field = &ctx->root;\n  ctx->fmt_offset = 0;\n  ctx->head->parent_offset = 0;\n  ctx->new_packmode = '@';\n  ctx->enc_packmode = '@';\n  ctx->new_count = 1;\n  ctx->enc_count = 0;\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  ctx->is_valid_array = 0;\n  ctx->struct_alignment = 0;\n  while (type->typegroup == 'S') {\n    ++ctx->head;\n    ctx->head->field = 
type->fields;\n    ctx->head->parent_offset = 0;\n    type = type->fields->type;\n  }\n}\nstatic int __Pyx_BufFmt_ParseNumber(const char** ts) {\n    int count;\n    const char* t = *ts;\n    if (*t < '0' || *t > '9') {\n      return -1;\n    } else {\n        count = *t++ - '0';\n        while (*t >= '0' && *t <= '9') {\n            count *= 10;\n            count += *t++ - '0';\n        }\n    }\n    *ts = t;\n    return count;\n}\nstatic int __Pyx_BufFmt_ExpectNumber(const char **ts) {\n    int number = __Pyx_BufFmt_ParseNumber(ts);\n    if (number == -1)\n        PyErr_Format(PyExc_ValueError,\\\n                     \"Does not understand character buffer dtype format string ('%c')\", **ts);\n    return number;\n}\nstatic void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {\n  PyErr_Format(PyExc_ValueError,\n               \"Unexpected format string character: '%c'\", ch);\n}\nstatic const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': return \"'char'\";\n    case 'b': return \"'signed char'\";\n    case 'B': return \"'unsigned char'\";\n    case 'h': return \"'short'\";\n    case 'H': return \"'unsigned short'\";\n    case 'i': return \"'int'\";\n    case 'I': return \"'unsigned int'\";\n    case 'l': return \"'long'\";\n    case 'L': return \"'unsigned long'\";\n    case 'q': return \"'long long'\";\n    case 'Q': return \"'unsigned long long'\";\n    case 'f': return (is_complex ? \"'complex float'\" : \"'float'\");\n    case 'd': return (is_complex ? \"'complex double'\" : \"'double'\");\n    case 'g': return (is_complex ? 
\"'complex long double'\" : \"'long double'\");\n    case 'T': return \"a struct\";\n    case 'O': return \"Python object\";\n    case 'P': return \"a pointer\";\n    case 's': case 'p': return \"a string\";\n    case 0: return \"end\";\n    default: return \"unparseable format string\";\n  }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return 2;\n    case 'i': case 'I': case 'l': case 'L': return 4;\n    case 'q': case 'Q': return 8;\n    case 'f': return (is_complex ? 8 : 4);\n    case 'd': return (is_complex ? 16 : 8);\n    case 'g': {\n      PyErr_SetString(PyExc_ValueError, \"Python does not define a standard format string size for long double ('g')..\");\n      return 0;\n    }\n    case 'O': case 'P': return sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(short);\n    case 'i': case 'I': return sizeof(int);\n    case 'l': case 'L': return sizeof(long);\n    #ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(PY_LONG_LONG);\n    #endif\n    case 'f': return sizeof(float) * (is_complex ? 2 : 1);\n    case 'd': return sizeof(double) * (is_complex ? 2 : 1);\n    case 'g': return sizeof(long double) * (is_complex ? 
2 : 1);\n    case 'O': case 'P': return sizeof(void*);\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\ntypedef struct { char c; short x; } __Pyx_st_short;\ntypedef struct { char c; int x; } __Pyx_st_int;\ntypedef struct { char c; long x; } __Pyx_st_long;\ntypedef struct { char c; float x; } __Pyx_st_float;\ntypedef struct { char c; double x; } __Pyx_st_double;\ntypedef struct { char c; long double x; } __Pyx_st_longdouble;\ntypedef struct { char c; void *x; } __Pyx_st_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_st_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_st_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\n/* These are for computing the padding at the end of the struct to align\n   on the first member of the struct. 
This will probably be the same as above,\n   but we don't have any guarantees.\n */\ntypedef struct { short x; char c; } __Pyx_pad_short;\ntypedef struct { int x; char c; } __Pyx_pad_int;\ntypedef struct { long x; char c; } __Pyx_pad_long;\ntypedef struct { float x; char c; } __Pyx_pad_float;\ntypedef struct { double x; char c; } __Pyx_pad_double;\ntypedef struct { long double x; char c; } __Pyx_pad_longdouble;\ntypedef struct { void *x; char c; } __Pyx_pad_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) {\n  switch (ch) {\n    case 'c':\n        return 'H';\n    case 'b': case 'h': case 'i':\n    case 'l': case 'q': case 's': case 'p':\n        return 'I';\n    case 'B': case 'H': case 'I': case 'L': case 'Q':\n        return 'U';\n    case 'f': case 'd': case 'g':\n        return (is_complex ? 
'C' : 'R');\n    case 'O':\n        return 'O';\n    case 'P':\n        return 'P';\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\nstatic void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {\n  if (ctx->head == NULL || ctx->head->field == &ctx->root) {\n    const char* expected;\n    const char* quote;\n    if (ctx->head == NULL) {\n      expected = \"end\";\n      quote = \"\";\n    } else {\n      expected = ctx->head->field->type->name;\n      quote = \"'\";\n    }\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected %s%s%s but got %s\",\n                 quote, expected, quote,\n                 __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex));\n  } else {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_StructField* parent = (ctx->head - 1)->field;\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'\",\n                 field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex),\n                 parent->type->name, field->name);\n  }\n}\nstatic int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {\n  char group;\n  size_t size, offset, arraysize = 1;\n  if (ctx->enc_type == 0) return 0;\n  if (ctx->head->field->type->arraysize[0]) {\n    int i, ndim = 0;\n    if (ctx->enc_type == 's' || ctx->enc_type == 'p') {\n        ctx->is_valid_array = ctx->head->field->type->ndim == 1;\n        ndim = 1;\n        if (ctx->enc_count != ctx->head->field->type->arraysize[0]) {\n            PyErr_Format(PyExc_ValueError,\n                         \"Expected a dimension of size %zu, got %zu\",\n                         ctx->head->field->type->arraysize[0], ctx->enc_count);\n            return -1;\n        }\n    }\n    if (!ctx->is_valid_array) {\n      PyErr_Format(PyExc_ValueError, \"Expected %d dimensions, got %d\",\n                   ctx->head->field->type->ndim, 
ndim);\n      return -1;\n    }\n    for (i = 0; i < ctx->head->field->type->ndim; i++) {\n      arraysize *= ctx->head->field->type->arraysize[i];\n    }\n    ctx->is_valid_array = 0;\n    ctx->enc_count = 1;\n  }\n  group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex);\n  do {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_TypeInfo* type = field->type;\n    if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') {\n      size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex);\n    } else {\n      size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex);\n    }\n    if (ctx->enc_packmode == '@') {\n      size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex);\n      size_t align_mod_offset;\n      if (align_at == 0) return -1;\n      align_mod_offset = ctx->fmt_offset % align_at;\n      if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset;\n      if (ctx->struct_alignment == 0)\n          ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type,\n                                                                 ctx->is_complex);\n    }\n    if (type->size != size || type->typegroup != group) {\n      if (type->typegroup == 'C' && type->fields != NULL) {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        ++ctx->head;\n        ctx->head->field = type->fields;\n        ctx->head->parent_offset = parent_offset;\n        continue;\n      }\n      if ((type->typegroup == 'H' || group == 'H') && type->size == size) {\n      } else {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n      }\n    }\n    offset = ctx->head->parent_offset + field->offset;\n    if (ctx->fmt_offset != offset) {\n      PyErr_Format(PyExc_ValueError,\n                   \"Buffer dtype mismatch; next field is at offset %\" CYTHON_FORMAT_SSIZE_T \"d but %\" CYTHON_FORMAT_SSIZE_T \"d expected\",\n                   
(Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset);\n      return -1;\n    }\n    ctx->fmt_offset += size;\n    if (arraysize)\n      ctx->fmt_offset += (arraysize - 1) * size;\n    --ctx->enc_count;\n    while (1) {\n      if (field == &ctx->root) {\n        ctx->head = NULL;\n        if (ctx->enc_count != 0) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n        }\n        break;\n      }\n      ctx->head->field = ++field;\n      if (field->type == NULL) {\n        --ctx->head;\n        field = ctx->head->field;\n        continue;\n      } else if (field->type->typegroup == 'S') {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        if (field->type->fields->type == NULL) continue;\n        field = field->type->fields;\n        ++ctx->head;\n        ctx->head->field = field;\n        ctx->head->parent_offset = parent_offset;\n        break;\n      } else {\n        break;\n      }\n    }\n  } while (ctx->enc_count);\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  return 0;\n}\nstatic CYTHON_INLINE PyObject *\n__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)\n{\n    const char *ts = *tsp;\n    int i = 0, number;\n    int ndim = ctx->head->field->type->ndim;\n    ++ts;\n    if (ctx->new_count != 1) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Cannot handle repeated arrays in format string\");\n        return NULL;\n    }\n    if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n    while (*ts && *ts != ')') {\n        switch (*ts) {\n            case ' ': case '\\f': case '\\r': case '\\n': case '\\t': case '\\v':  ++ts; continue;\n            default:  break;\n        }\n        number = __Pyx_BufFmt_ExpectNumber(&ts);\n        if (number == -1) return NULL;\n        if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])\n            return PyErr_Format(PyExc_ValueError,\n                        \"Expected a dimension of size %zu, got %d\",\n     
                   ctx->head->field->type->arraysize[i], number);\n        if (*ts != ',' && *ts != ')')\n            return PyErr_Format(PyExc_ValueError,\n                                \"Expected a comma in format string, got '%c'\", *ts);\n        if (*ts == ',') ts++;\n        i++;\n    }\n    if (i != ndim)\n        return PyErr_Format(PyExc_ValueError, \"Expected %d dimension(s), got %d\",\n                            ctx->head->field->type->ndim, i);\n    if (!*ts) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Unexpected end of format string, expected ')'\");\n        return NULL;\n    }\n    ctx->is_valid_array = 1;\n    ctx->new_count = 1;\n    *tsp = ++ts;\n    return Py_None;\n}\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {\n  int got_Z = 0;\n  while (1) {\n    switch(*ts) {\n      case 0:\n        if (ctx->enc_type != 0 && ctx->head == NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        if (ctx->head != NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        return ts;\n      case ' ':\n      case '\\r':\n      case '\\n':\n        ++ts;\n        break;\n      case '<':\n        if (!__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Little-endian buffer not supported on big-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '>':\n      case '!':\n        if (__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Big-endian buffer not supported on little-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '=':\n      case '@':\n      case '^':\n        ctx->new_packmode = *ts++;\n        break;\n      case 'T':\n        {\n          
const char* ts_after_sub;\n          size_t i, struct_count = ctx->new_count;\n          size_t struct_alignment = ctx->struct_alignment;\n          ctx->new_count = 1;\n          ++ts;\n          if (*ts != '{') {\n            PyErr_SetString(PyExc_ValueError, \"Buffer acquisition: Expected '{' after 'T'\");\n            return NULL;\n          }\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          ctx->enc_count = 0;\n          ctx->struct_alignment = 0;\n          ++ts;\n          ts_after_sub = ts;\n          for (i = 0; i != struct_count; ++i) {\n            ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);\n            if (!ts_after_sub) return NULL;\n          }\n          ts = ts_after_sub;\n          if (struct_alignment) ctx->struct_alignment = struct_alignment;\n        }\n        break;\n      case '}':\n        {\n          size_t alignment = ctx->struct_alignment;\n          ++ts;\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          if (alignment && ctx->fmt_offset % alignment) {\n            ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment);\n          }\n        }\n        return ts;\n      case 'x':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->fmt_offset += ctx->new_count;\n        ctx->new_count = 1;\n        ctx->enc_count = 0;\n        ctx->enc_type = 0;\n        ctx->enc_packmode = ctx->new_packmode;\n        ++ts;\n        break;\n      case 'Z':\n        got_Z = 1;\n        ++ts;\n        if (*ts != 'f' && *ts != 'd' && *ts != 'g') {\n          __Pyx_BufFmt_RaiseUnexpectedChar('Z');\n          return NULL;\n        }\n      case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':\n      case 'l': case 'L': case 'q': case 'Q':\n      case 'f': case 'd': case 'g':\n      case 'O': case 'p':\n        if (ctx->enc_type == *ts && got_Z == ctx->is_complex &&\n            
ctx->enc_packmode == ctx->new_packmode) {\n          ctx->enc_count += ctx->new_count;\n          ctx->new_count = 1;\n          got_Z = 0;\n          ++ts;\n          break;\n        }\n      case 's':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->enc_count = ctx->new_count;\n        ctx->enc_packmode = ctx->new_packmode;\n        ctx->enc_type = *ts;\n        ctx->is_complex = got_Z;\n        ++ts;\n        ctx->new_count = 1;\n        got_Z = 0;\n        break;\n      case ':':\n        ++ts;\n        while(*ts != ':') ++ts;\n        ++ts;\n        break;\n      case '(':\n        if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL;\n        break;\n      default:\n        {\n          int number = __Pyx_BufFmt_ExpectNumber(&ts);\n          if (number == -1) return NULL;\n          ctx->new_count = (size_t)number;\n        }\n    }\n  }\n}\nstatic CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) {\n  buf->buf = NULL;\n  buf->obj = NULL;\n  buf->strides = __Pyx_zeros;\n  buf->shape = __Pyx_zeros;\n  buf->suboffsets = __Pyx_minusones;\n}\nstatic CYTHON_INLINE int __Pyx_GetBufferAndValidate(\n        Py_buffer* buf, PyObject* obj,  __Pyx_TypeInfo* dtype, int flags,\n        int nd, int cast, __Pyx_BufFmt_StackElem* stack)\n{\n  if (obj == Py_None || obj == NULL) {\n    __Pyx_ZeroBuffer(buf);\n    return 0;\n  }\n  buf->buf = NULL;\n  if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail;\n  if (buf->ndim != nd) {\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer has wrong number of dimensions (expected %d, got %d)\",\n                 nd, buf->ndim);\n    goto fail;\n  }\n  if (!cast) {\n    __Pyx_BufFmt_Context ctx;\n    __Pyx_BufFmt_Init(&ctx, stack, dtype);\n    if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;\n  }\n  if ((unsigned)buf->itemsize != dtype->size) {\n    PyErr_Format(PyExc_ValueError,\n      \"Item size of buffer (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s) does not match size of '%s' (%\" 
CYTHON_FORMAT_SSIZE_T \"d byte%s)\",\n      buf->itemsize, (buf->itemsize > 1) ? \"s\" : \"\",\n      dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? \"s\" : \"\");\n    goto fail;\n  }\n  if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;\n  return 0;\nfail:;\n  __Pyx_ZeroBuffer(buf);\n  return -1;\n}\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {\n  if (info->buf == NULL) return;\n  if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;\n  __Pyx_ReleaseBuffer(info);\n}\n\n/* GetModuleGlobalName */\n  static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) {\n    PyObject *result;\n#if !CYTHON_AVOID_BORROWED_REFS\n    result = PyDict_GetItem(__pyx_d, name);\n    if (likely(result)) {\n        Py_INCREF(result);\n    } else {\n#else\n    result = PyObject_GetItem(__pyx_d, name);\n    if (!result) {\n        PyErr_Clear();\n#endif\n        result = __Pyx_GetBuiltinName(name);\n    }\n    return result;\n}\n\n/* PyObjectCall */\n    #if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    PyObject *result;\n    ternaryfunc call = func->ob_type->tp_call;\n    if (unlikely(!call))\n        return PyObject_Call(func, arg, kw);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = (*call)(func, arg, kw);\n    Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return result;\n}\n#endif\n\n/* BufferIndexError */\n    static void __Pyx_RaiseBufferIndexError(int axis) {\n  PyErr_Format(PyExc_IndexError,\n     \"Out of bounds on buffer access (axis %d)\", axis);\n}\n\n/* PyErrFetchRestore */\n    #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState 
*tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->curexc_type;\n    tmp_value = tstate->curexc_value;\n    tmp_tb = tstate->curexc_traceback;\n    tstate->curexc_type = type;\n    tstate->curexc_value = value;\n    tstate->curexc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->curexc_type;\n    *value = tstate->curexc_value;\n    *tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n}\n#endif\n\n/* RaiseException */\n    #if PY_MAJOR_VERSION < 3\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,\n                        CYTHON_UNUSED PyObject *cause) {\n    __Pyx_PyThreadState_declare\n    Py_XINCREF(type);\n    if (!value || value == Py_None)\n        value = NULL;\n    else\n        Py_INCREF(value);\n    if (!tb || tb == Py_None)\n        tb = NULL;\n    else {\n        Py_INCREF(tb);\n        if (!PyTraceBack_Check(tb)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: arg 3 must be a traceback or None\");\n            goto raise_error;\n        }\n    }\n    if (PyType_Check(type)) {\n#if CYTHON_COMPILING_IN_PYPY\n        if (!value) {\n            Py_INCREF(Py_None);\n            value = Py_None;\n        }\n#endif\n        PyErr_NormalizeException(&type, &value, &tb);\n    } else {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto raise_error;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(type);\n        Py_INCREF(type);\n        if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {\n            
PyErr_SetString(PyExc_TypeError,\n                \"raise: exception class must be a subclass of BaseException\");\n            goto raise_error;\n        }\n    }\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrRestore(type, value, tb);\n    return;\nraise_error:\n    Py_XDECREF(value);\n    Py_XDECREF(type);\n    Py_XDECREF(tb);\n    return;\n}\n#else\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {\n    PyObject* owned_instance = NULL;\n    if (tb == Py_None) {\n        tb = 0;\n    } else if (tb && !PyTraceBack_Check(tb)) {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: arg 3 must be a traceback or None\");\n        goto bad;\n    }\n    if (value == Py_None)\n        value = 0;\n    if (PyExceptionInstance_Check(type)) {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto bad;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(value);\n    } else if (PyExceptionClass_Check(type)) {\n        PyObject *instance_class = NULL;\n        if (value && PyExceptionInstance_Check(value)) {\n            instance_class = (PyObject*) Py_TYPE(value);\n            if (instance_class != type) {\n                int is_subclass = PyObject_IsSubclass(instance_class, type);\n                if (!is_subclass) {\n                    instance_class = NULL;\n                } else if (unlikely(is_subclass == -1)) {\n                    goto bad;\n                } else {\n                    type = instance_class;\n                }\n            }\n        }\n        if (!instance_class) {\n            PyObject *args;\n            if (!value)\n                args = PyTuple_New(0);\n            else if (PyTuple_Check(value)) {\n                Py_INCREF(value);\n                args = value;\n            } else\n                args = PyTuple_Pack(1, value);\n            if (!args)\n                goto 
bad;\n            owned_instance = PyObject_Call(type, args, NULL);\n            Py_DECREF(args);\n            if (!owned_instance)\n                goto bad;\n            value = owned_instance;\n            if (!PyExceptionInstance_Check(value)) {\n                PyErr_Format(PyExc_TypeError,\n                             \"calling %R should have returned an instance of \"\n                             \"BaseException, not %R\",\n                             type, Py_TYPE(value));\n                goto bad;\n            }\n        }\n    } else {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: exception class must be a subclass of BaseException\");\n        goto bad;\n    }\n#if PY_VERSION_HEX >= 0x03030000\n    if (cause) {\n#else\n    if (cause && cause != Py_None) {\n#endif\n        PyObject *fixed_cause;\n        if (cause == Py_None) {\n            fixed_cause = NULL;\n        } else if (PyExceptionClass_Check(cause)) {\n            fixed_cause = PyObject_CallObject(cause, NULL);\n            if (fixed_cause == NULL)\n                goto bad;\n        } else if (PyExceptionInstance_Check(cause)) {\n            fixed_cause = cause;\n            Py_INCREF(fixed_cause);\n        } else {\n            PyErr_SetString(PyExc_TypeError,\n                            \"exception causes must derive from \"\n                            \"BaseException\");\n            goto bad;\n        }\n        PyException_SetCause(value, fixed_cause);\n    }\n    PyErr_SetObject(type, value);\n    if (tb) {\n#if CYTHON_COMPILING_IN_PYPY\n        PyObject *tmp_type, *tmp_value, *tmp_tb;\n        PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);\n        Py_INCREF(tb);\n        PyErr_Restore(tmp_type, tmp_value, tb);\n        Py_XDECREF(tmp_tb);\n#else\n        PyThreadState *tstate = PyThreadState_GET();\n        PyObject* tmp_tb = tstate->curexc_traceback;\n        if (tb != tmp_tb) {\n            Py_INCREF(tb);\n            tstate->curexc_traceback = tb;\n            
Py_XDECREF(tmp_tb);\n        }\n#endif\n    }\nbad:\n    Py_XDECREF(owned_instance);\n    return;\n}\n#endif\n\n/* RaiseTooManyValuesToUnpack */\n      static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {\n    PyErr_Format(PyExc_ValueError,\n                 \"too many values to unpack (expected %\" CYTHON_FORMAT_SSIZE_T \"d)\", expected);\n}\n\n/* RaiseNeedMoreValuesToUnpack */\n      static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {\n    PyErr_Format(PyExc_ValueError,\n                 \"need more than %\" CYTHON_FORMAT_SSIZE_T \"d value%.1s to unpack\",\n                 index, (index == 1) ? \"\" : \"s\");\n}\n\n/* RaiseNoneIterError */\n      static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n}\n\n/* SaveResetException */\n      #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->exc_type;\n    *value = tstate->exc_value;\n    *tb = tstate->exc_traceback;\n    Py_XINCREF(*type);\n    Py_XINCREF(*value);\n    Py_XINCREF(*tb);\n}\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = type;\n    tstate->exc_value = value;\n    tstate->exc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\n#endif\n\n/* PyErrExceptionMatches */\n      #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {\n    PyObject *exc_type = tstate->curexc_type;\n    if (exc_type == err) return 1;\n    if (unlikely(!exc_type)) return 0;\n    return 
PyErr_GivenExceptionMatches(exc_type, err);\n}\n#endif\n\n/* GetException */\n      #if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) {\n#endif\n    PyObject *local_type, *local_value, *local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    local_type = tstate->curexc_type;\n    local_value = tstate->curexc_value;\n    local_tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n#else\n    PyErr_Fetch(&local_type, &local_value, &local_tb);\n#endif\n    PyErr_NormalizeException(&local_type, &local_value, &local_tb);\n#if CYTHON_FAST_THREAD_STATE\n    if (unlikely(tstate->curexc_type))\n#else\n    if (unlikely(PyErr_Occurred()))\n#endif\n        goto bad;\n    #if PY_MAJOR_VERSION >= 3\n    if (local_tb) {\n        if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))\n            goto bad;\n    }\n    #endif\n    Py_XINCREF(local_tb);\n    Py_XINCREF(local_type);\n    Py_XINCREF(local_value);\n    *type = local_type;\n    *value = local_value;\n    *tb = local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = local_type;\n    tstate->exc_value = local_value;\n    tstate->exc_traceback = local_tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n#else\n    PyErr_SetExcInfo(local_type, local_value, local_tb);\n#endif\n    return 0;\nbad:\n    *type = 0;\n    *value = 0;\n    *tb = 0;\n    Py_XDECREF(local_type);\n    Py_XDECREF(local_value);\n    Py_XDECREF(local_tb);\n    return -1;\n}\n\n/* Import */\n        static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {\n    PyObject *empty_list = 0;\n    PyObject 
*module = 0;\n    PyObject *global_dict = 0;\n    PyObject *empty_dict = 0;\n    PyObject *list;\n    #if PY_VERSION_HEX < 0x03030000\n    PyObject *py_import;\n    py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);\n    if (!py_import)\n        goto bad;\n    #endif\n    if (from_list)\n        list = from_list;\n    else {\n        empty_list = PyList_New(0);\n        if (!empty_list)\n            goto bad;\n        list = empty_list;\n    }\n    global_dict = PyModule_GetDict(__pyx_m);\n    if (!global_dict)\n        goto bad;\n    empty_dict = PyDict_New();\n    if (!empty_dict)\n        goto bad;\n    {\n        #if PY_MAJOR_VERSION >= 3\n        if (level == -1) {\n            if (strchr(__Pyx_MODULE_NAME, '.')) {\n                #if PY_VERSION_HEX < 0x03030000\n                PyObject *py_level = PyInt_FromLong(1);\n                if (!py_level)\n                    goto bad;\n                module = PyObject_CallFunctionObjArgs(py_import,\n                    name, global_dict, empty_dict, list, py_level, NULL);\n                Py_DECREF(py_level);\n                #else\n                module = PyImport_ImportModuleLevelObject(\n                    name, global_dict, empty_dict, list, 1);\n                #endif\n                if (!module) {\n                    if (!PyErr_ExceptionMatches(PyExc_ImportError))\n                        goto bad;\n                    PyErr_Clear();\n                }\n            }\n            level = 0;\n        }\n        #endif\n        if (!module) {\n            #if PY_VERSION_HEX < 0x03030000\n            PyObject *py_level = PyInt_FromLong(level);\n            if (!py_level)\n                goto bad;\n            module = PyObject_CallFunctionObjArgs(py_import,\n                name, global_dict, empty_dict, list, py_level, NULL);\n            Py_DECREF(py_level);\n            #else\n            module = PyImport_ImportModuleLevelObject(\n                name, global_dict, empty_dict, list, 
level);\n            #endif\n        }\n    }\nbad:\n    #if PY_VERSION_HEX < 0x03030000\n    Py_XDECREF(py_import);\n    #endif\n    Py_XDECREF(empty_list);\n    Py_XDECREF(empty_dict);\n    return module;\n}\n\n/* CodeObjectCache */\n        static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {\n    int start = 0, mid = 0, end = count - 1;\n    if (end >= 0 && code_line > entries[end].code_line) {\n        return count;\n    }\n    while (start < end) {\n        mid = start + (end - start) / 2;\n        if (code_line < entries[mid].code_line) {\n            end = mid;\n        } else if (code_line > entries[mid].code_line) {\n             start = mid + 1;\n        } else {\n            return mid;\n        }\n    }\n    if (code_line <= entries[mid].code_line) {\n        return mid;\n    } else {\n        return mid + 1;\n    }\n}\nstatic PyCodeObject *__pyx_find_code_object(int code_line) {\n    PyCodeObject* code_object;\n    int pos;\n    if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {\n        return NULL;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {\n        return NULL;\n    }\n    code_object = __pyx_code_cache.entries[pos].code_object;\n    Py_INCREF(code_object);\n    return code_object;\n}\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {\n    int pos, i;\n    __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;\n    if (unlikely(!code_line)) {\n        return;\n    }\n    if (unlikely(!entries)) {\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (likely(entries)) {\n            __pyx_code_cache.entries = entries;\n            __pyx_code_cache.max_count = 64;\n            __pyx_code_cache.count = 1;\n       
     entries[0].code_line = code_line;\n            entries[0].code_object = code_object;\n            Py_INCREF(code_object);\n        }\n        return;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {\n        PyCodeObject* tmp = entries[pos].code_object;\n        entries[pos].code_object = code_object;\n        Py_DECREF(tmp);\n        return;\n    }\n    if (__pyx_code_cache.count == __pyx_code_cache.max_count) {\n        int new_max = __pyx_code_cache.max_count + 64;\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(\n            __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (unlikely(!entries)) {\n            return;\n        }\n        __pyx_code_cache.entries = entries;\n        __pyx_code_cache.max_count = new_max;\n    }\n    for (i=__pyx_code_cache.count; i>pos; i--) {\n        entries[i] = entries[i-1];\n    }\n    entries[pos].code_line = code_line;\n    entries[pos].code_object = code_object;\n    __pyx_code_cache.count++;\n    Py_INCREF(code_object);\n}\n\n/* AddTraceback */\n        #include \"compile.h\"\n#include \"frameobject.h\"\n#include \"traceback.h\"\nstatic PyCodeObject* __Pyx_CreateCodeObjectForTraceback(\n            const char *funcname, int c_line,\n            int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyObject *py_srcfile = 0;\n    PyObject *py_funcname = 0;\n    #if PY_MAJOR_VERSION < 3\n    py_srcfile = PyString_FromString(filename);\n    #else\n    py_srcfile = PyUnicode_FromString(filename);\n    #endif\n    if (!py_srcfile) goto bad;\n    if (c_line) {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #else\n        py_funcname = PyUnicode_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, 
c_line);\n        #endif\n    }\n    else {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromString(funcname);\n        #else\n        py_funcname = PyUnicode_FromString(funcname);\n        #endif\n    }\n    if (!py_funcname) goto bad;\n    py_code = __Pyx_PyCode_New(\n        0,\n        0,\n        0,\n        0,\n        0,\n        __pyx_empty_bytes, /*PyObject *code,*/\n        __pyx_empty_tuple, /*PyObject *consts,*/\n        __pyx_empty_tuple, /*PyObject *names,*/\n        __pyx_empty_tuple, /*PyObject *varnames,*/\n        __pyx_empty_tuple, /*PyObject *freevars,*/\n        __pyx_empty_tuple, /*PyObject *cellvars,*/\n        py_srcfile,   /*PyObject *filename,*/\n        py_funcname,  /*PyObject *name,*/\n        py_line,\n        __pyx_empty_bytes  /*PyObject *lnotab*/\n    );\n    Py_DECREF(py_srcfile);\n    Py_DECREF(py_funcname);\n    return py_code;\nbad:\n    Py_XDECREF(py_srcfile);\n    Py_XDECREF(py_funcname);\n    return NULL;\n}\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyFrameObject *py_frame = 0;\n    py_code = __pyx_find_code_object(c_line ? c_line : py_line);\n    if (!py_code) {\n        py_code = __Pyx_CreateCodeObjectForTraceback(\n            funcname, c_line, py_line, filename);\n        if (!py_code) goto bad;\n        __pyx_insert_code_object(c_line ? 
c_line : py_line, py_code);\n    }\n    py_frame = PyFrame_New(\n        PyThreadState_GET(), /*PyThreadState *tstate,*/\n        py_code,             /*PyCodeObject *code,*/\n        __pyx_d,      /*PyObject *globals,*/\n        0                    /*PyObject *locals*/\n    );\n    if (!py_frame) goto bad;\n    __Pyx_PyFrame_SetLineNumber(py_frame, py_line);\n    PyTraceBack_Here(py_frame);\nbad:\n    Py_XDECREF(py_code);\n    Py_XDECREF(py_frame);\n}\n\n#if PY_MAJOR_VERSION < 3\nstatic int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {\n    if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags);\n    PyErr_Format(PyExc_TypeError, \"'%.200s' does not have the buffer interface\", Py_TYPE(obj)->tp_name);\n    return -1;\n}\nstatic void __Pyx_ReleaseBuffer(Py_buffer *view) {\n    PyObject *obj = view->obj;\n    if (!obj) return;\n    if (PyObject_CheckBuffer(obj)) {\n        PyBuffer_Release(view);\n        return;\n    }\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { __pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; }\n    Py_DECREF(obj);\n    view->obj = NULL;\n}\n#endif\n\n\n        /* CIntFromPyVerify */\n        #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)\n#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)\n#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\\\n    {\\\n        func_type value = func_value;\\\n        if (sizeof(target_type) < sizeof(func_type)) {\\\n            if (unlikely(value != (func_type) (target_type) value)) {\\\n                func_type zero = 0;\\\n                if (exc && unlikely(value == (func_type)-1 && 
PyErr_Occurred()))\\\n                    return (target_type) -1;\\\n                if (is_unsigned && unlikely(value < zero))\\\n                    goto raise_neg_overflow;\\\n                else\\\n                    goto raise_overflow;\\\n            }\\\n        }\\\n        return (target_type) value;\\\n    }\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_unsigned_int(unsigned int value) {\n    const unsigned int neg_one = (unsigned int) -1, const_zero = (unsigned int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(unsigned int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(unsigned int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(unsigned int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(unsigned int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(unsigned int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(unsigned int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* Declarations */\n        #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return ::std::complex< float >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return x + y*(__pyx_t_float_complex)_Complex_I;\n    }\n  
#endif\n#else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      __pyx_t_float_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n        #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabsf(b.real) >= fabsf(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                float r = b.imag / b.real;\n                float s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_float_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            float r = b.real / b.imag;\n            
float s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_float_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            float denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_float_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrtf(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypotf(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n            __pyx_t_float_complex z;\n            float r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    float denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    
a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = powf(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2f(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_float(a);\n                theta = atan2f(a.imag, a.real);\n            }\n            lnr = logf(r);\n            z_r = expf(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cosf(z_theta);\n            z.imag = z_r * sinf(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* Declarations */\n        #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return ::std::complex< double >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_double_complex 
__pyx_t_double_complex_from_parts(double x, double y) {\n      return x + y*(__pyx_t_double_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      __pyx_t_double_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n        #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabs(b.real) >= fabs(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                double r = b.imag / b.real;\n                double s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_double_complex_from_parts(\n          
          (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            double r = b.real / b.imag;\n            double s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_double_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            double denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_double_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrt(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypot(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n            __pyx_t_double_complex z;\n            double r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == 
(int)b.real) {\n                if (b.real < 0) {\n                    double denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = pow(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_double(a);\n                theta = atan2(a.imag, a.real);\n            }\n            lnr = log(r);\n            z_r = exp(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cos(z_theta);\n            z.imag = z_r * sin(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {\n    const int neg_one = (int) -1, 
const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) {\n    const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, const_zero = (enum NPY_TYPES) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(enum NPY_TYPES) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(enum NPY_TYPES) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) {\n            return 
PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE unsigned int __Pyx_PyInt_As_unsigned_int(PyObject *x) {\n    const unsigned int neg_one = (unsigned int) -1, const_zero = (unsigned int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(unsigned int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(unsigned int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (unsigned int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (unsigned int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(unsigned int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(unsigned int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 2 * PyLong_SHIFT) {\n                            return (unsigned int) (((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                
    if (8 * sizeof(unsigned int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 3 * PyLong_SHIFT) {\n                            return (unsigned int) (((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(unsigned int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 4 * PyLong_SHIFT) {\n                            return (unsigned int) (((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (unsigned int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(unsigned int) <= sizeof(unsigned 
long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(unsigned int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (unsigned int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(unsigned int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(unsigned int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(unsigned int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                            return (unsigned int) (((unsigned int)-1)*(((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(unsigned int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                            return (unsigned int) ((((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n               
     break;\n                case -3:\n                    if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 3 * PyLong_SHIFT) {\n                            return (unsigned int) (((unsigned int)-1)*(((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(unsigned int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 3 * PyLong_SHIFT) {\n                            return (unsigned int) ((((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(unsigned int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 4 * PyLong_SHIFT) {\n        
                    return (unsigned int) (((unsigned int)-1)*(((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(unsigned int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 4 * PyLong_SHIFT) {\n                            return (unsigned int) ((((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(unsigned int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(unsigned int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            unsigned int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n            
    Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (unsigned int) -1;\n        }\n    } else {\n        unsigned int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (unsigned int) -1;\n        val = __Pyx_PyInt_As_unsigned_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to unsigned int\");\n    return (unsigned int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to unsigned int\");\n    return (unsigned int) -1;\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {\n    const int neg_one = (int) -1, const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case  1: 
__PYX_VERIFY_RETURN_INT(int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {\n                            return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {\n                            return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {\n                            return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    
break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, 
unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << 
PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n       
         Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (int) -1;\n        }\n    } else {\n        int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (int) -1;\n        val = __Pyx_PyInt_As_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to int\");\n    return (int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to int\");\n    return (int) -1;\n}\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {\n    const long neg_one = (long) -1, const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(long) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(long) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(long) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    
{\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(long),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {\n    const long neg_one = (long) -1, const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(long) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (long) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {\n                            return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, 
(((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {\n                            return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {\n                            return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (long) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(long) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } 
else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(long,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                           
 return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << 
PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(long) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            long val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (long) -1;\n        }\n    } else {\n        long val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (long) -1;\n        val = __Pyx_PyInt_As_long(tmp);\n        Py_DECREF(tmp);\n      
  return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to long\");\n    return (long) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to long\");\n    return (long) -1;\n}\n\n/* CheckBinaryVersion */\n        static int __Pyx_check_binary_version(void) {\n    char ctversion[4], rtversion[4];\n    PyOS_snprintf(ctversion, 4, \"%d.%d\", PY_MAJOR_VERSION, PY_MINOR_VERSION);\n    PyOS_snprintf(rtversion, 4, \"%s\", Py_GetVersion());\n    if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {\n        char message[200];\n        PyOS_snprintf(message, sizeof(message),\n                      \"compiletime version %s of module '%.100s' \"\n                      \"does not match runtime version %s\",\n                      ctversion, __Pyx_MODULE_NAME, rtversion);\n        return PyErr_WarnEx(NULL, message, 1);\n    }\n    return 0;\n}\n\n/* ModuleImport */\n        #ifndef __PYX_HAVE_RT_ImportModule\n#define __PYX_HAVE_RT_ImportModule\nstatic PyObject *__Pyx_ImportModule(const char *name) {\n    PyObject *py_name = 0;\n    PyObject *py_module = 0;\n    py_name = __Pyx_PyIdentifier_FromString(name);\n    if (!py_name)\n        goto bad;\n    py_module = PyImport_Import(py_name);\n    Py_DECREF(py_name);\n    return py_module;\nbad:\n    Py_XDECREF(py_name);\n    return 0;\n}\n#endif\n\n/* TypeImport */\n        #ifndef __PYX_HAVE_RT_ImportType\n#define __PYX_HAVE_RT_ImportType\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name,\n    size_t size, int strict)\n{\n    PyObject *py_module = 0;\n    PyObject *result = 0;\n    PyObject *py_name = 0;\n    char warning[200];\n    Py_ssize_t basicsize;\n#ifdef Py_LIMITED_API\n    PyObject *py_basicsize;\n#endif\n    py_module = __Pyx_ImportModule(module_name);\n    if (!py_module)\n        goto bad;\n    py_name = __Pyx_PyIdentifier_FromString(class_name);\n    if 
(!py_name)\n        goto bad;\n    result = PyObject_GetAttr(py_module, py_name);\n    Py_DECREF(py_name);\n    py_name = 0;\n    Py_DECREF(py_module);\n    py_module = 0;\n    if (!result)\n        goto bad;\n    if (!PyType_Check(result)) {\n        PyErr_Format(PyExc_TypeError,\n            \"%.200s.%.200s is not a type object\",\n            module_name, class_name);\n        goto bad;\n    }\n#ifndef Py_LIMITED_API\n    basicsize = ((PyTypeObject *)result)->tp_basicsize;\n#else\n    py_basicsize = PyObject_GetAttrString(result, \"__basicsize__\");\n    if (!py_basicsize)\n        goto bad;\n    basicsize = PyLong_AsSsize_t(py_basicsize);\n    Py_DECREF(py_basicsize);\n    py_basicsize = 0;\n    if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred())\n        goto bad;\n#endif\n    if (!strict && (size_t)basicsize > size) {\n        PyOS_snprintf(warning, sizeof(warning),\n            \"%s.%s size changed, may indicate binary incompatibility. Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;\n    }\n    else if ((size_t)basicsize != size) {\n        PyErr_Format(PyExc_ValueError,\n            \"%.200s.%.200s has the wrong size, try recompiling. 
Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        goto bad;\n    }\n    return (PyTypeObject *)result;\nbad:\n    Py_XDECREF(py_module);\n    Py_XDECREF(result);\n    return NULL;\n}\n#endif\n\n/* InitStrings */\n        static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {\n    while (t->p) {\n        #if PY_MAJOR_VERSION < 3\n        if (t->is_unicode) {\n            *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);\n        } else if (t->intern) {\n            *t->p = PyString_InternFromString(t->s);\n        } else {\n            *t->p = PyString_FromStringAndSize(t->s, t->n - 1);\n        }\n        #else\n        if (t->is_unicode | t->is_str) {\n            if (t->intern) {\n                *t->p = PyUnicode_InternFromString(t->s);\n            } else if (t->encoding) {\n                *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);\n            } else {\n                *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);\n            }\n        } else {\n            *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);\n        }\n        #endif\n        if (!*t->p)\n            return -1;\n        ++t;\n    }\n    return 0;\n}\n\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {\n    return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) {\n    Py_ssize_t ignore;\n    return __Pyx_PyObject_AsStringAndSize(o, &ignore);\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n    if (\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n            __Pyx_sys_getdefaultencoding_not_ascii &&\n#endif\n            PyUnicode_Check(o)) {\n#if PY_VERSION_HEX < 0x03030000\n        char* defenc_c;\n        PyObject* 
defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);\n        if (!defenc) return NULL;\n        defenc_c = PyBytes_AS_STRING(defenc);\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        {\n            char* end = defenc_c + PyBytes_GET_SIZE(defenc);\n            char* c;\n            for (c = defenc_c; c < end; c++) {\n                if ((unsigned char) (*c) >= 128) {\n                    PyUnicode_AsASCIIString(o);\n                    return NULL;\n                }\n            }\n        }\n#endif\n        *length = PyBytes_GET_SIZE(defenc);\n        return defenc_c;\n#else\n        if (__Pyx_PyUnicode_READY(o) == -1) return NULL;\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        if (PyUnicode_IS_ASCII(o)) {\n            *length = PyUnicode_GET_LENGTH(o);\n            return PyUnicode_AsUTF8(o);\n        } else {\n            PyUnicode_AsASCIIString(o);\n            return NULL;\n        }\n#else\n        return PyUnicode_AsUTF8AndSize(o, length);\n#endif\n#endif\n    } else\n#endif\n#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))\n    if (PyByteArray_Check(o)) {\n        *length = PyByteArray_GET_SIZE(o);\n        return PyByteArray_AS_STRING(o);\n    } else\n#endif\n    {\n        char* result;\n        int r = PyBytes_AsStringAndSize(o, &result, length);\n        if (unlikely(r < 0)) {\n            return NULL;\n        } else {\n            return result;\n        }\n    }\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {\n   int is_true = x == Py_True;\n   if (is_true | (x == Py_False) | (x == Py_None)) return is_true;\n   else return PyObject_IsTrue(x);\n}\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {\n#if CYTHON_USE_TYPE_SLOTS\n  PyNumberMethods *m;\n#endif\n  const char *name = NULL;\n  PyObject *res = NULL;\n#if PY_MAJOR_VERSION < 3\n  if (PyInt_Check(x) || PyLong_Check(x))\n#else\n  if (PyLong_Check(x))\n#endif\n    return __Pyx_NewRef(x);\n#if 
CYTHON_USE_TYPE_SLOTS\n  m = Py_TYPE(x)->tp_as_number;\n  #if PY_MAJOR_VERSION < 3\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Int(x);\n  }\n  else if (m && m->nb_long) {\n    name = \"long\";\n    res = PyNumber_Long(x);\n  }\n  #else\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Long(x);\n  }\n  #endif\n#else\n  res = PyNumber_Int(x);\n#endif\n  if (res) {\n#if PY_MAJOR_VERSION < 3\n    if (!PyInt_Check(res) && !PyLong_Check(res)) {\n#else\n    if (!PyLong_Check(res)) {\n#endif\n      PyErr_Format(PyExc_TypeError,\n                   \"__%.4s__ returned non-%.4s (type %.200s)\",\n                   name, name, Py_TYPE(res)->tp_name);\n      Py_DECREF(res);\n      return NULL;\n    }\n  }\n  else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_TypeError,\n                    \"an integer is required\");\n  }\n  return res;\n}\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {\n  Py_ssize_t ival;\n  PyObject *x;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_CheckExact(b))) {\n    if (sizeof(Py_ssize_t) >= sizeof(long))\n        return PyInt_AS_LONG(b);\n    else\n        return PyInt_AsSsize_t(b);\n  }\n#endif\n  if (likely(PyLong_CheckExact(b))) {\n    #if CYTHON_USE_PYLONG_INTERNALS\n    const digit* digits = ((PyLongObject*)b)->ob_digit;\n    const Py_ssize_t size = Py_SIZE(b);\n    if (likely(__Pyx_sst_abs(size) <= 1)) {\n        ival = likely(size) ? 
digits[0] : 0;\n        if (size == -1) ival = -ival;\n        return ival;\n    } else {\n      switch (size) {\n         case 2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n      }\n    }\n    #endif\n    return PyLong_AsSsize_t(b);\n  }\n  x = PyNumber_Index(b);\n  if (!x) return -1;\n  ival = PyInt_AsSsize_t(x);\n  Py_DECREF(x);\n  return ival;\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {\n    return PyInt_FromSize_t(ival);\n}\n\n\n#endif /* Py_PYTHON_H */\n"
  },
  {
    "path": "lib/draw_rectangles/draw_rectangles.pyx",
    "content": "######\n# Draws rectangles\n######\n\ncimport cython\nimport numpy as np\ncimport numpy as np\n\nDTYPE = np.float32\nctypedef np.float32_t DTYPE_t\n\ndef draw_union_boxes(bbox_pairs, pooling_size, padding=0):\n    \"\"\"\n    Draws union boxes for the image.\n    :param box_pairs: [num_pairs, 8]\n    :param fmap_size: Size of the original feature map\n    :param stride: ratio between fmap size and original img (<1)\n    :param pooling_size: resize everything to this size\n    :return: [num_pairs, 2, pooling_size, pooling_size arr\n    \"\"\"\n    assert padding == 0, \"Padding>0 not supported yet\"\n    return draw_union_boxes_c(bbox_pairs, pooling_size)\n\ncdef DTYPE_t minmax(DTYPE_t x):\n    return min(max(x, 0), 1)\n\ncdef np.ndarray[DTYPE_t, ndim=4] draw_union_boxes_c(\n        np.ndarray[DTYPE_t, ndim=2] box_pairs, unsigned int pooling_size):\n    \"\"\"\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float. everything has arbitrary ratios\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = box_pairs.shape[0]\n\n    cdef np.ndarray[DTYPE_t, ndim = 4] uboxes = np.zeros(\n        (N, 2, pooling_size, pooling_size), dtype=DTYPE)\n    cdef DTYPE_t x1_union, y1_union, x2_union, y2_union, w, h, x1_box, y1_box, x2_box, y2_box, y_contrib, x_contrib\n    cdef unsigned int n, i, j, k\n\n    for n in range(N):\n        x1_union = min(box_pairs[n, 0], box_pairs[n, 4])\n        y1_union = min(box_pairs[n, 1], box_pairs[n, 5])\n        x2_union = max(box_pairs[n, 2], box_pairs[n, 6])\n        y2_union = max(box_pairs[n, 3], box_pairs[n, 7])\n\n        w = x2_union - x1_union\n        h = y2_union - y1_union\n       \n        for i in range(2):\n            # Now everything is in the range [0, pooling_size].\n            x1_box = (box_pairs[n, 0+4*i] - x1_union)*pooling_size / w\n            y1_box = (box_pairs[n, 1+4*i] 
- y1_union)*pooling_size / h\n            x2_box = (box_pairs[n, 2+4*i] - x1_union)*pooling_size / w\n            y2_box = (box_pairs[n, 3+4*i] - y1_union)*pooling_size / h\n            # print(\"{:.3f}, {:.3f}, {:.3f}, {:.3f}\".format(x1_box, y1_box, x2_box, y2_box))\n            for j in range(pooling_size):\n                y_contrib = minmax(j+1-y1_box)*minmax(y2_box-j)\n                for k in range(pooling_size):\n                    x_contrib = minmax(k+1-x1_box)*minmax(x2_box-k)                \n                    # print(\"j {} yc {} k {} xc {}\".format(j, y_contrib, k, x_contrib))\n                    uboxes[n,i,j,k] = x_contrib*y_contrib\n    return uboxes\n"
  },
  {
    "path": "lib/draw_rectangles/setup.py",
    "content": "from distutils.core import setup\nfrom Cython.Build import cythonize\nimport numpy\n\nsetup(name=\"draw_rectangles_cython\", ext_modules=cythonize('draw_rectangles.pyx'), include_dirs=[numpy.get_include()])"
  },
  {
    "path": "lib/evaluation/__init__.py",
    "content": ""
  },
  {
    "path": "lib/evaluation/sg_eval.py",
    "content": "\"\"\"\nAdapted from Danfei Xu. In particular, slow code was removed\n\"\"\"\nimport numpy as np\nfrom functools import reduce\nfrom lib.pytorch_misc import intersect_2d, argsort_desc\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom config import MODES\nnp.set_printoptions(precision=3)\n\nclass BasicSceneGraphEvaluator:\n    def __init__(self, mode, multiple_preds=False):\n        self.result_dict = {}\n        self.mode = mode\n        self.result_dict[self.mode + '_recall'] = {20: [], 50: [], 100: []}\n        self.multiple_preds = multiple_preds\n\n    @classmethod\n    def all_modes(cls, **kwargs):\n        evaluators = {m: cls(mode=m, **kwargs) for m in MODES}\n        return evaluators\n\n    @classmethod\n    def vrd_modes(cls, **kwargs):\n        evaluators = {m: cls(mode=m, multiple_preds=True, **kwargs) for m in ('preddet', 'phrdet')}\n        return evaluators\n\n    def evaluate_scene_graph_entry(self, gt_entry, pred_scores, viz_dict=None, iou_thresh=0.5):\n        res = evaluate_from_dict(gt_entry, pred_scores, self.mode, self.result_dict,\n                                  viz_dict=viz_dict, iou_thresh=iou_thresh, multiple_preds=self.multiple_preds)\n        # self.print_stats()\n        return res\n\n    def save(self, fn):\n        np.save(fn, self.result_dict)\n\n    def print_stats(self):\n        print('======================' + self.mode + '============================')\n        for k, v in self.result_dict[self.mode + '_recall'].items():\n            print('R@%i: %f' % (k, np.mean(v)))\n\n\ndef evaluate_from_dict(gt_entry, pred_entry, mode, result_dict, multiple_preds=False,\n                       viz_dict=None, **kwargs):\n    \"\"\"\n    Shortcut to doing evaluate_recall from dict\n    :param gt_entry: Dictionary containing gt_relations, gt_boxes, gt_classes\n    :param pred_entry: Dictionary containing pred_rels, pred_boxes (if detection), pred_classes\n    :param mode: 'det' or 'cls'\n    :param 
result_dict: \n    :param viz_dict: \n    :param kwargs: \n    :return: \n    \"\"\"\n    gt_rels = gt_entry['gt_relations']\n    gt_boxes = gt_entry['gt_boxes'].astype(float)\n    gt_classes = gt_entry['gt_classes']\n\n    pred_rel_inds = pred_entry['pred_rel_inds']\n    rel_scores = pred_entry['rel_scores']\n\n    if mode == 'predcls':\n        pred_boxes = gt_boxes\n        pred_classes = gt_classes\n        obj_scores = np.ones(gt_classes.shape[0])\n    elif mode == 'sgcls':\n        pred_boxes = gt_boxes\n        pred_classes = pred_entry['pred_classes']\n        obj_scores = pred_entry['obj_scores']\n    elif mode == 'sgdet' or mode == 'phrdet':\n        pred_boxes = pred_entry['pred_boxes'].astype(float)\n        pred_classes = pred_entry['pred_classes']\n        obj_scores = pred_entry['obj_scores']\n    elif mode == 'preddet':\n        # Only extract the indices that appear in GT\n        prc = intersect_2d(pred_rel_inds, gt_rels[:, :2])\n        if prc.size == 0:\n            for k in result_dict[mode + '_recall']:\n                result_dict[mode + '_recall'][k].append(0.0)\n            return None, None, None\n        pred_inds_per_gt = prc.argmax(0)\n        pred_rel_inds = pred_rel_inds[pred_inds_per_gt]\n        rel_scores = rel_scores[pred_inds_per_gt]\n\n        # Now sort the matching ones\n        rel_scores_sorted = argsort_desc(rel_scores[:,1:])\n        rel_scores_sorted[:,1] += 1\n        rel_scores_sorted = np.column_stack((pred_rel_inds[rel_scores_sorted[:,0]], rel_scores_sorted[:,1]))\n\n        matches = intersect_2d(rel_scores_sorted, gt_rels)\n        for k in result_dict[mode + '_recall']:\n            rec_i = float(matches[:k].any(0).sum()) / float(gt_rels.shape[0])\n            result_dict[mode + '_recall'][k].append(rec_i)\n        return None, None, None\n    else:\n        raise ValueError('invalid mode')\n\n    if multiple_preds:\n        obj_scores_per_rel = obj_scores[pred_rel_inds].prod(1)\n        overall_scores = 
obj_scores_per_rel[:,None] * rel_scores[:,1:]\n        score_inds = argsort_desc(overall_scores)[:100]\n        pred_rels = np.column_stack((pred_rel_inds[score_inds[:,0]], score_inds[:,1]+1))\n        predicate_scores = rel_scores[score_inds[:,0], score_inds[:,1]+1]\n    else:\n        pred_rels = np.column_stack((pred_rel_inds, 1+rel_scores[:,1:].argmax(1)))\n        predicate_scores = rel_scores[:,1:].max(1)\n\n    pred_to_gt, pred_5ples, rel_scores = evaluate_recall(\n                gt_rels, gt_boxes, gt_classes,\n                pred_rels, pred_boxes, pred_classes,\n                predicate_scores, obj_scores, phrdet= mode=='phrdet',\n                **kwargs)\n\n    for k in result_dict[mode + '_recall']:\n\n        match = reduce(np.union1d, pred_to_gt[:k])\n\n        rec_i = float(len(match)) / float(gt_rels.shape[0])\n        result_dict[mode + '_recall'][k].append(rec_i)\n    return pred_to_gt, pred_5ples, rel_scores\n\n    # print(\" \".join([\"R@{:2d}: {:.3f}\".format(k, v[-1]) for k, v in result_dict[mode + '_recall'].items()]))\n    # Deal with visualization later\n    # # Optionally, log things to a separate dictionary\n    # if viz_dict is not None:\n    #     # Caution: pred scores has changed (we took off the 0 class)\n    #     gt_rels_scores = pred_scores[\n    #         gt_rels[:, 0],\n    #         gt_rels[:, 1],\n    #         gt_rels[:, 2] - 1,\n    #     ]\n    #     # gt_rels_scores_cls = gt_rels_scores * pred_class_scores[\n    #     #         gt_rels[:, 0]] * pred_class_scores[gt_rels[:, 1]]\n    #\n    #     viz_dict[mode + '_pred_rels'] = pred_5ples.tolist()\n    #     viz_dict[mode + '_pred_rels_scores'] = max_pred_scores.tolist()\n    #     viz_dict[mode + '_pred_rels_scores_cls'] = max_rel_scores.tolist()\n    #     viz_dict[mode + '_gt_rels_scores'] = gt_rels_scores.tolist()\n    #     viz_dict[mode + '_gt_rels_scores_cls'] = gt_rels_scores_cls.tolist()\n    #\n    #     # Serialize pred2gt matching as a list of lists, where each 
sublist is of the form\n    #     # pred_ind, gt_ind1, gt_ind2, ....\n    #     viz_dict[mode + '_pred2gt_rel'] = pred_to_gt\n\n\n###########################\ndef evaluate_recall(gt_rels, gt_boxes, gt_classes,\n                    pred_rels, pred_boxes, pred_classes, rel_scores=None, cls_scores=None,\n                    iou_thresh=0.5, phrdet=False):\n    \"\"\"\n    Evaluates the recall\n    :param gt_rels: [#gt_rel, 3] array of GT relations\n    :param gt_boxes: [#gt_box, 4] array of GT boxes\n    :param gt_classes: [#gt_box] array of GT classes\n    :param pred_rels: [#pred_rel, 3] array of pred rels. Assumed these are in sorted order\n                      and refer to IDs in pred classes / pred boxes\n                      (id0, id1, rel)\n    :param pred_boxes:  [#pred_box, 4] array of pred boxes\n    :param pred_classes: [#pred_box] array of predicted classes for these boxes\n    :return: pred_to_gt: Matching from predicate to GT\n             pred_5ples: the predicted (id0, id1, cls0, cls1, rel)\n             rel_scores: [cls_0score, cls1_score, relscore]\n                   \"\"\"\n    if pred_rels.size == 0:\n        return [[]], np.zeros((0,5)), np.zeros(0)\n\n    num_gt_boxes = gt_boxes.shape[0]\n    num_gt_relations = gt_rels.shape[0]\n    assert num_gt_relations != 0\n\n    gt_triplets, gt_triplet_boxes, _ = _triplet(gt_rels[:, 2],\n                                                gt_rels[:, :2],\n                                                gt_classes,\n                                                gt_boxes)\n    num_boxes = pred_boxes.shape[0]\n    assert pred_rels[:,:2].max() < pred_classes.shape[0]\n\n    # Exclude self rels\n    # assert np.all(pred_rels[:,0] != pred_rels[:,1])\n    assert np.all(pred_rels[:,2] > 0)\n\n    pred_triplets, pred_triplet_boxes, relation_scores = \\\n        _triplet(pred_rels[:,2], pred_rels[:,:2], pred_classes, pred_boxes,\n                 rel_scores, cls_scores)\n\n    scores_overall = 
relation_scores.prod(1)\n    if not np.all(scores_overall[1:] <= scores_overall[:-1] + 1e-5):\n        print(\"Somehow the relations weren't sorted properly: \\n{}\".format(scores_overall))\n        # raise ValueError(\"Somehow the relations werent sorted properly\")\n\n    # Compute recall. It's most efficient to match once and then do recall after\n    pred_to_gt = _compute_pred_matches(\n        gt_triplets,\n        pred_triplets,\n        gt_triplet_boxes,\n        pred_triplet_boxes,\n        iou_thresh,\n        phrdet=phrdet,\n    )\n\n    # Contains some extra stuff for visualization. Not needed.\n    pred_5ples = np.column_stack((\n        pred_rels[:,:2],\n        pred_triplets[:, [0, 2, 1]],\n    ))\n\n    return pred_to_gt, pred_5ples, relation_scores\n\n\ndef _triplet(predicates, relations, classes, boxes,\n             predicate_scores=None, class_scores=None):\n    \"\"\"\n    format predictions into triplets\n    :param predicates: A 1d numpy array of num_boxes*(num_boxes-1) predicates, corresponding to\n                       each pair of possibilities\n    :param relations: A (num_boxes*(num_boxes-1), 2) array, where each row represents the boxes\n                      in that relation\n    :param classes: A (num_boxes) array of the classes for each thing.\n    :param boxes: A (num_boxes,4) array of the bounding boxes for everything.\n    :param predicate_scores: A (num_boxes*(num_boxes-1)) array of the scores for each predicate\n    :param class_scores: A (num_boxes) array of the likelihood for each object.\n    :return: Triplets: (num_relations, 3) array of class, relation, class\n             Triplet boxes: (num_relation, 8) array of boxes for the parts\n             Triplet scores: num_relation array of the scores overall for the triplets\n    \"\"\"\n    assert (predicates.shape[0] == relations.shape[0])\n\n    sub_ob_classes = classes[relations[:, :2]]\n    triplets = np.column_stack((sub_ob_classes[:, 0], predicates, sub_ob_classes[:, 
1]))\n    triplet_boxes = np.column_stack((boxes[relations[:, 0]], boxes[relations[:, 1]]))\n\n    triplet_scores = None\n    if predicate_scores is not None and class_scores is not None:\n        triplet_scores = np.column_stack((\n            class_scores[relations[:, 0]],\n            class_scores[relations[:, 1]],\n            predicate_scores,\n        ))\n\n    return triplets, triplet_boxes, triplet_scores\n\n\ndef _compute_pred_matches(gt_triplets, pred_triplets,\n                 gt_boxes, pred_boxes, iou_thresh, phrdet=False):\n    \"\"\"\n    Given a set of predicted triplets, return the list of matching GT's for each of the\n    given predictions\n    :param gt_triplets: \n    :param pred_triplets: \n    :param gt_boxes: \n    :param pred_boxes: \n    :param iou_thresh: \n    :return: \n    \"\"\"\n    # This performs a matrix multiplication-esque thing between the two arrays\n    # Instead of summing, we want the equality, so we reduce in that way\n    # The rows correspond to GT triplets, columns to pred triplets\n    keeps = intersect_2d(gt_triplets, pred_triplets)\n    gt_has_match = keeps.any(1)\n    pred_to_gt = [[] for x in range(pred_boxes.shape[0])]\n    for gt_ind, gt_box, keep_inds in zip(np.where(gt_has_match)[0],\n                                         gt_boxes[gt_has_match],\n                                         keeps[gt_has_match],\n                                         ):\n        boxes = pred_boxes[keep_inds]\n        if phrdet:\n            # Evaluate where the union box > 0.5\n            gt_box_union = gt_box.reshape((2, 4))\n            gt_box_union = np.concatenate((gt_box_union.min(0)[:2], gt_box_union.max(0)[2:]), 0)\n\n            box_union = boxes.reshape((-1, 2, 4))\n            box_union = np.concatenate((box_union.min(1)[:,:2], box_union.max(1)[:,2:]), 1)\n\n            inds = bbox_overlaps(gt_box_union[None], box_union)[0] >= iou_thresh\n\n        else:\n            sub_iou = bbox_overlaps(gt_box[None,:4], boxes[:, 
:4])[0]\n            obj_iou = bbox_overlaps(gt_box[None,4:], boxes[:, 4:])[0]\n\n            inds = (sub_iou >= iou_thresh) & (obj_iou >= iou_thresh)\n\n        for i in np.where(keep_inds)[0][inds]:\n            pred_to_gt[i].append(int(gt_ind))\n    return pred_to_gt\n"
  },
  {
    "path": "lib/evaluation/sg_eval_all_rel_cates.py",
    "content": "\"\"\"\nAdapted from Danfei Xu. In particular, slow code was removed\n\"\"\"\nimport numpy as np\nfrom functools import reduce\nfrom lib.pytorch_misc import intersect_2d, argsort_desc\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom config import MODES\nimport sys\nnp.set_printoptions(precision=3)\n\nclass BasicSceneGraphEvaluator:\n    def __init__(self, mode, multiple_preds=False):\n        self.result_dict = {}\n        self.mode = mode\n        rel_cats = {\n            0: 'all_rel_cates',\n            1: \"above\",\n            2: \"across\",\n            3: \"against\",\n            4: \"along\",\n            5: \"and\",\n            6: \"at\",\n            7: \"attached to\",\n            8: \"behind\",\n            9: \"belonging to\",\n            10: \"between\",\n            11: \"carrying\",\n            12: \"covered in\",\n            13: \"covering\",\n            14: \"eating\",\n            15: \"flying in\",\n            16: \"for\",\n            17: \"from\",\n            18: \"growing on\",\n            19: \"hanging from\",\n            20: \"has\",\n            21: \"holding\",\n            22: \"in\",\n            23: \"in front of\",\n            24: \"laying on\",\n            25: \"looking at\",\n            26: \"lying on\",\n            27: \"made of\",\n            28: \"mounted on\",\n            29: \"near\",\n            30: \"of\",\n            31: \"on\",\n            32: \"on back of\",\n            33: \"over\",\n            34: \"painted on\",\n            35: \"parked on\",\n            36: \"part of\",\n            37: \"playing\",\n            38: \"riding\",\n            39: \"says\",\n            40: \"sitting on\",\n            41: \"standing on\",\n            42: \"to\",\n            43: \"under\",\n            44: \"using\",\n            45: \"walking in\",\n            46: \"walking on\",\n            47: \"watching\",\n            48: \"wearing\",\n            49: \"wears\",\n        
    50: \"with\"\n        }\n        self.rel_cats = rel_cats\n        self.result_dict[self.mode + '_recall'] = {20: {}, 50: {}, 100: []}\n        for key, value in self.result_dict[self.mode + '_recall'].items():\n            self.result_dict[self.mode + '_recall'][key] = {}\n            for rel_cat_id, rel_cat_name in rel_cats.items():\n                self.result_dict[self.mode + '_recall'][key][rel_cat_name] = []\n        self.multiple_preds = multiple_preds\n\n    @classmethod\n    def all_modes(cls, **kwargs):\n        evaluators = {m: cls(mode=m, **kwargs) for m in MODES}\n        return evaluators\n\n    @classmethod\n    def vrd_modes(cls, **kwargs):\n        evaluators = {m: cls(mode=m, multiple_preds=True, **kwargs) for m in ('preddet', 'phrdet')}\n        return evaluators\n\n    def evaluate_scene_graph_entry(self, gt_entry, pred_scores, viz_dict=None, iou_thresh=0.5):\n        res = evaluate_from_dict(gt_entry, pred_scores, self.mode, self.result_dict,\n                                  viz_dict=viz_dict, iou_thresh=iou_thresh, multiple_preds=self.multiple_preds, rel_cats=self.rel_cats)\n        # self.print_stats()\n        return res\n\n    def save(self, fn):\n        np.save(fn, self.result_dict)\n\n    def print_stats(self):\n        print('======================' + self.mode + '============================')\n        for k, v in self.result_dict[self.mode + '_recall'].items():\n            for rel_cat_id, rel_cat_name in self.rel_cats.items():\n                print('R@%i: %f' % (k, np.mean(v[rel_cat_name])), rel_cat_name)\n            print('~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~')\n\ndef evaluate_from_dict(gt_entry, pred_entry, mode, result_dict, multiple_preds=False,\n                       viz_dict=None, rel_cats=None, **kwargs):\n    \"\"\"\n    Shortcut to doing evaluate_recall from dict\n    :param gt_entry: Dictionary containing gt_relations, gt_boxes, gt_classes\n    :param pred_entry: Dictionary containing pred_rels, pred_boxes (if 
detection), pred_classes\n    :param mode: one of 'predcls', 'sgcls', 'sgdet', 'phrdet', 'preddet'\n    :param result_dict:\n    :param viz_dict:\n    :param kwargs:\n    :return:\n    \"\"\"\n    gt_rels = gt_entry['gt_relations']\n    gt_boxes = gt_entry['gt_boxes'].astype(float)\n    gt_classes = gt_entry['gt_classes']\n\n    gt_rels_nums = [0 for x in range(len(rel_cats))]\n    for rel in gt_rels:\n        gt_rels_nums[rel[2]] += 1\n        gt_rels_nums[0] += 1\n\n\n    pred_rel_inds = pred_entry['pred_rel_inds']\n    rel_scores = pred_entry['rel_scores']\n\n    if mode == 'predcls':\n        pred_boxes = gt_boxes\n        pred_classes = gt_classes\n        obj_scores = np.ones(gt_classes.shape[0])\n    elif mode == 'sgcls':\n        pred_boxes = gt_boxes\n        pred_classes = pred_entry['pred_classes']\n        obj_scores = pred_entry['obj_scores']\n    elif mode == 'sgdet' or mode == 'phrdet':\n        pred_boxes = pred_entry['pred_boxes'].astype(float)\n        pred_classes = pred_entry['pred_classes']\n        obj_scores = pred_entry['obj_scores']\n    elif mode == 'preddet':\n        # Only extract the indices that appear in GT\n        prc = intersect_2d(pred_rel_inds, gt_rels[:, :2])\n        if prc.size == 0:\n            for k in result_dict[mode + '_recall']:\n                result_dict[mode + '_recall'][k].append(0.0)\n            return None, None, None\n        pred_inds_per_gt = prc.argmax(0)\n        pred_rel_inds = pred_rel_inds[pred_inds_per_gt]\n        rel_scores = rel_scores[pred_inds_per_gt]\n\n        # Now sort the matching ones\n        rel_scores_sorted = argsort_desc(rel_scores[:,1:])\n        rel_scores_sorted[:,1] += 1\n        rel_scores_sorted = np.column_stack((pred_rel_inds[rel_scores_sorted[:,0]], rel_scores_sorted[:,1]))\n\n        matches = intersect_2d(rel_scores_sorted, gt_rels)\n        for k in result_dict[mode + '_recall']:\n            rec_i = float(matches[:k].any(0).sum()) / float(gt_rels.shape[0])\n            result_dict[mode + 
'_recall'][k].append(rec_i)\n        return None, None, None\n    else:\n        raise ValueError('invalid mode')\n\n    if multiple_preds:\n        obj_scores_per_rel = obj_scores[pred_rel_inds].prod(1)\n        overall_scores = obj_scores_per_rel[:,None] * rel_scores[:,1:]\n        score_inds = argsort_desc(overall_scores)[:100]\n        pred_rels = np.column_stack((pred_rel_inds[score_inds[:,0]], score_inds[:,1]+1))\n        predicate_scores = rel_scores[score_inds[:,0], score_inds[:,1]+1]\n    else:\n        pred_rels = np.column_stack((pred_rel_inds, 1+rel_scores[:,1:].argmax(1)))\n        predicate_scores = rel_scores[:,1:].max(1)\n\n    pred_to_gt, pred_5ples, rel_scores = evaluate_recall(\n                gt_rels, gt_boxes, gt_classes,\n                pred_rels, pred_boxes, pred_classes,\n                predicate_scores, obj_scores, phrdet= mode=='phrdet',rel_cats=rel_cats,\n                **kwargs)\n\n    for k in result_dict[mode + '_recall']:\n        for rel_cat_id, rel_cat_name in rel_cats.items():\n            match = reduce(np.union1d, pred_to_gt[rel_cat_name][:k])\n            rec_i = float(len(match)) / (float(gt_rels_nums[rel_cat_id]) + sys.float_info.min) #float(gt_rels.shape[0])\n            result_dict[mode + '_recall'][k][rel_cat_name].append(rec_i)\n\n    return pred_to_gt, pred_5ples, rel_scores\n\n    # print(\" \".join([\"R@{:2d}: {:.3f}\".format(k, v[-1]) for k, v in result_dict[mode + '_recall'].items()]))\n    # Deal with visualization later\n    # # Optionally, log things to a separate dictionary\n    # if viz_dict is not None:\n    #     # Caution: pred scores has changed (we took off the 0 class)\n    #     gt_rels_scores = pred_scores[\n    #         gt_rels[:, 0],\n    #         gt_rels[:, 1],\n    #         gt_rels[:, 2] - 1,\n    #     ]\n    #     # gt_rels_scores_cls = gt_rels_scores * pred_class_scores[\n    #     #         gt_rels[:, 0]] * pred_class_scores[gt_rels[:, 1]]\n    #\n    #     viz_dict[mode + '_pred_rels'] = 
pred_5ples.tolist()\n    #     viz_dict[mode + '_pred_rels_scores'] = max_pred_scores.tolist()\n    #     viz_dict[mode + '_pred_rels_scores_cls'] = max_rel_scores.tolist()\n    #     viz_dict[mode + '_gt_rels_scores'] = gt_rels_scores.tolist()\n    #     viz_dict[mode + '_gt_rels_scores_cls'] = gt_rels_scores_cls.tolist()\n    #\n    #     # Serialize pred2gt matching as a list of lists, where each sublist is of the form\n    #     # pred_ind, gt_ind1, gt_ind2, ....\n    #     viz_dict[mode + '_pred2gt_rel'] = pred_to_gt\n\n\n###########################\ndef evaluate_recall(gt_rels, gt_boxes, gt_classes,\n                    pred_rels, pred_boxes, pred_classes, rel_scores=None, cls_scores=None,\n                    iou_thresh=0.5, phrdet=False, rel_cats=None):\n    \"\"\"\n    Evaluates the recall\n    :param gt_rels: [#gt_rel, 3] array of GT relations\n    :param gt_boxes: [#gt_box, 4] array of GT boxes\n    :param gt_classes: [#gt_box] array of GT classes\n    :param pred_rels: [#pred_rel, 3] array of pred rels. 
Assumed these are in sorted order\n                      and refer to IDs in pred classes / pred boxes\n                      (id0, id1, rel)\n    :param pred_boxes:  [#pred_box, 4] array of pred boxes\n    :param pred_classes: [#pred_box] array of predicted classes for these boxes\n    :return: pred_to_gt: Matching from predicate to GT\n             pred_5ples: the predicted (id0, id1, cls0, cls1, rel)\n             rel_scores: [cls_0score, cls1_score, relscore]\n                   \"\"\"\n    if pred_rels.size == 0:\n        return [[]], np.zeros((0,5)), np.zeros(0)\n\n    num_gt_boxes = gt_boxes.shape[0]\n    num_gt_relations = gt_rels.shape[0]\n    assert num_gt_relations != 0\n\n    gt_triplets, gt_triplet_boxes, _ = _triplet(gt_rels[:, 2],\n                                                gt_rels[:, :2],\n                                                gt_classes,\n                                                gt_boxes)\n    num_boxes = pred_boxes.shape[0]\n    assert pred_rels[:,:2].max() < pred_classes.shape[0]\n\n    # Exclude self rels\n    # assert np.all(pred_rels[:,0] != pred_rels[:,1])\n    assert np.all(pred_rels[:,2] > 0)\n\n    pred_triplets, pred_triplet_boxes, relation_scores = \\\n        _triplet(pred_rels[:,2], pred_rels[:,:2], pred_classes, pred_boxes,\n                 rel_scores, cls_scores)\n\n    scores_overall = relation_scores.prod(1)\n    if not np.all(scores_overall[1:] <= scores_overall[:-1] + 1e-5):\n        print(\"Somehow the relations weren't sorted properly: \\n{}\".format(scores_overall))\n        # raise ValueError(\"Somehow the relations werent sorted properly\")\n\n    # Compute recall. It's most efficient to match once and then do recall after\n    pred_to_gt = _compute_pred_matches(\n        gt_triplets,\n        pred_triplets,\n        gt_triplet_boxes,\n        pred_triplet_boxes,\n        iou_thresh,\n        phrdet=phrdet,\n        rel_cats=rel_cats,\n    )\n\n    # Contains some extra stuff for visualization. 
Not needed.\n    pred_5ples = np.column_stack((\n        pred_rels[:,:2],\n        pred_triplets[:, [0, 2, 1]],\n    ))\n\n    return pred_to_gt, pred_5ples, relation_scores\n\n\ndef _triplet(predicates, relations, classes, boxes,\n             predicate_scores=None, class_scores=None):\n    \"\"\"\n    format predictions into triplets\n    :param predicates: A 1d numpy array of num_boxes*(num_boxes-1) predicates, corresponding to\n                       each pair of possibilities\n    :param relations: A (num_boxes*(num_boxes-1), 2) array, where each row represents the boxes\n                      in that relation\n    :param classes: A (num_boxes) array of the classes for each thing.\n    :param boxes: A (num_boxes,4) array of the bounding boxes for everything.\n    :param predicate_scores: A (num_boxes*(num_boxes-1)) array of the scores for each predicate\n    :param class_scores: A (num_boxes) array of the likelihood for each object.\n    :return: Triplets: (num_relations, 3) array of class, relation, class\n             Triplet boxes: (num_relation, 8) array of boxes for the parts\n             Triplet scores: num_relation array of the scores overall for the triplets\n    \"\"\"\n    assert (predicates.shape[0] == relations.shape[0])\n\n    sub_ob_classes = classes[relations[:, :2]]\n    triplets = np.column_stack((sub_ob_classes[:, 0], predicates, sub_ob_classes[:, 1]))\n    triplet_boxes = np.column_stack((boxes[relations[:, 0]], boxes[relations[:, 1]]))\n\n    triplet_scores = None\n    if predicate_scores is not None and class_scores is not None:\n        triplet_scores = np.column_stack((\n            class_scores[relations[:, 0]],\n            class_scores[relations[:, 1]],\n            predicate_scores,\n        ))\n\n    return triplets, triplet_boxes, triplet_scores\n\n\ndef _compute_pred_matches(gt_triplets, pred_triplets,\n                 gt_boxes, pred_boxes, iou_thresh, phrdet=False, rel_cats=None):\n    \"\"\"\n    Given a set of predicted 
triplets, return the list of matching GT's for each of the\n    given predictions\n    :param gt_triplets:\n    :param pred_triplets:\n    :param gt_boxes:\n    :param pred_boxes:\n    :param iou_thresh:\n    :return:\n    \"\"\"\n    # This performs a matrix multiplication-esque thing between the two arrays\n    # Instead of summing, we want the equality, so we reduce in that way\n    # The rows correspond to GT triplets, columns to pred triplets\n    keeps = intersect_2d(gt_triplets, pred_triplets)\n    gt_has_match = keeps.any(1)\n    pred_to_gt = {}\n    for rel_cat_id, rel_cat_name in rel_cats.items():\n        pred_to_gt[rel_cat_name] = [[] for x in range(pred_boxes.shape[0])]\n    for gt_ind, gt_box, keep_inds in zip(np.where(gt_has_match)[0],\n                                         gt_boxes[gt_has_match],\n                                         keeps[gt_has_match],\n                                         ):\n        boxes = pred_boxes[keep_inds]\n        if phrdet:\n            # Evaluate where the union box > 0.5\n            gt_box_union = gt_box.reshape((2, 4))\n            gt_box_union = np.concatenate((gt_box_union.min(0)[:2], gt_box_union.max(0)[2:]), 0)\n\n            box_union = boxes.reshape((-1, 2, 4))\n            box_union = np.concatenate((box_union.min(1)[:,:2], box_union.max(1)[:,2:]), 1)\n\n            inds = bbox_overlaps(gt_box_union[None], box_union)[0] >= iou_thresh\n\n        else:\n            sub_iou = bbox_overlaps(gt_box[None,:4], boxes[:, :4])[0]\n            obj_iou = bbox_overlaps(gt_box[None,4:], boxes[:, 4:])[0]\n\n            inds = (sub_iou >= iou_thresh) & (obj_iou >= iou_thresh)\n\n        for i in np.where(keep_inds)[0][inds]:\n            pred_to_gt['all_rel_cates'][i].append(int(gt_ind))\n            pred_to_gt[rel_cats[gt_triplets[int(gt_ind), 1]]][i].append(int(gt_ind))\n    return pred_to_gt\n"
  },
  {
    "path": "lib/evaluation/sg_eval_slow.py",
    "content": "# JUST TO CHECK THAT IT IS EXACTLY THE SAME..................................\nimport numpy as np\nfrom config import MODES\n\nclass BasicSceneGraphEvaluator:\n\n    def __init__(self, mode):\n        self.result_dict = {}\n        self.mode = {'sgdet':'sg_det', 'sgcls':'sg_cls', 'predcls':'pred_cls'}[mode]\n\n        self.result_dict = {}\n        self.result_dict[self.mode + '_recall'] = {20:[], 50:[], 100:[]}\n\n\n    @classmethod\n    def all_modes(cls):\n        evaluators = {m: cls(mode=m) for m in MODES}\n        return evaluators\n    def evaluate_scene_graph_entry(self, gt_entry, pred_entry, iou_thresh=0.5):\n\n        roidb_entry = {\n            'max_overlaps': np.ones(gt_entry['gt_classes'].shape[0], dtype=np.int64),\n            'boxes': gt_entry['gt_boxes'],\n            'gt_relations': gt_entry['gt_relations'],\n            'gt_classes': gt_entry['gt_classes'],\n        }\n        sg_entry = {\n            'boxes': pred_entry['pred_boxes'],\n            'relations': pred_entry['pred_rels'],\n            'obj_scores': pred_entry['obj_scores'],\n            'rel_scores': pred_entry['rel_scores'],\n            'pred_classes': pred_entry['pred_classes'],\n        }\n\n        pred_triplets, triplet_boxes = \\\n            eval_relation_recall(sg_entry, roidb_entry,\n                                self.result_dict,\n                                self.mode,\n                                iou_thresh=iou_thresh)\n        return pred_triplets, triplet_boxes\n\n\n    def save(self, fn):\n        np.save(fn, self.result_dict)\n\n\n    def print_stats(self):\n        print('======================' + self.mode + '============================')\n        for k, v in self.result_dict[self.mode + '_recall'].items():\n            print('R@%i: %f' % (k, np.mean(v)))\n\n    def save(self, fn):\n        np.save(fn, self.result_dict)\n\n    def print_stats(self):\n        print('======================' + self.mode + '============================')\n   
     for k, v in self.result_dict[self.mode + '_recall'].items():\n            print('R@%i: %f' % (k, np.mean(v)))\n\n\ndef eval_relation_recall(sg_entry,\n                         roidb_entry,\n                         result_dict,\n                         mode,\n                         iou_thresh):\n\n    # gt\n    gt_inds = np.where(roidb_entry['max_overlaps'] == 1)[0]\n    gt_boxes = roidb_entry['boxes'][gt_inds].copy().astype(float)\n    num_gt_boxes = gt_boxes.shape[0]\n    gt_relations = roidb_entry['gt_relations'].copy()\n    gt_classes = roidb_entry['gt_classes'].copy()\n\n    num_gt_relations = gt_relations.shape[0]\n    if num_gt_relations == 0:\n        return (None, None)\n    gt_class_scores = np.ones(num_gt_boxes)\n    gt_predicate_scores = np.ones(num_gt_relations)\n    gt_triplets, gt_triplet_boxes, _ = _triplet(gt_relations[:,2],\n                                             gt_relations[:,:2],\n                                             gt_classes,\n                                             gt_boxes,\n                                             gt_predicate_scores,\n                                             gt_class_scores)\n\n    # pred\n    box_preds = sg_entry['boxes']\n    num_boxes = box_preds.shape[0]\n    relations = sg_entry['relations']\n    classes = sg_entry['pred_classes'].copy()\n    class_scores = sg_entry['obj_scores'].copy()\n\n    num_relations = relations.shape[0]\n\n    if mode =='pred_cls':\n        # if predicate classification task\n        # use ground truth bounding boxes\n        assert(num_boxes == num_gt_boxes)\n        classes = gt_classes\n        class_scores = gt_class_scores\n        boxes = gt_boxes\n    elif mode =='sg_cls':\n        assert(num_boxes == num_gt_boxes)\n        # if scene graph classification task\n        # use gt boxes, but predicted classes\n        # classes = np.argmax(class_preds, 1)\n        # class_scores = class_preds.max(axis=1)\n        boxes = gt_boxes\n    elif mode 
=='sg_det':\n        # if scene graph detection task\n        # use predicted boxes and predicted classes\n        # classes = np.argmax(class_preds, 1)\n        # class_scores = class_preds.max(axis=1)\n        boxes = box_preds\n    else:\n        raise NotImplementedError('Incorrect Mode! %s' % mode)\n\n    pred_triplets = np.column_stack((\n        classes[relations[:, 0]],\n        relations[:,2],\n        classes[relations[:, 1]],\n    ))\n    pred_triplet_boxes = np.column_stack((\n        boxes[relations[:, 0]],\n        boxes[relations[:, 1]],\n    ))\n    relation_scores = np.column_stack((\n        class_scores[relations[:, 0]],\n        sg_entry['rel_scores'],\n        class_scores[relations[:, 1]],\n    )).prod(1)\n\n    sorted_inds = np.argsort(relation_scores)[::-1]\n    # compute recall\n    for k in result_dict[mode + '_recall']:\n        this_k = min(k, num_relations)\n        keep_inds = sorted_inds[:this_k]\n        recall = _relation_recall(gt_triplets,\n                                  pred_triplets[keep_inds,:],\n                                  gt_triplet_boxes,\n                                  pred_triplet_boxes[keep_inds,:],\n                                  iou_thresh)\n        result_dict[mode + '_recall'][k].append(recall)\n\n    # for visualization\n    return pred_triplets[sorted_inds, :], pred_triplet_boxes[sorted_inds, :]\n\n\ndef _triplet(predicates, relations, classes, boxes,\n             predicate_scores, class_scores):\n\n    # format predictions into triplets\n    assert(predicates.shape[0] == relations.shape[0])\n    num_relations = relations.shape[0]\n    triplets = np.zeros([num_relations, 3]).astype(np.int32)\n    triplet_boxes = np.zeros([num_relations, 8]).astype(np.int32)\n    triplet_scores = np.zeros([num_relations]).astype(np.float32)\n    for i in range(num_relations):\n        triplets[i, 1] = predicates[i]\n        sub_i, obj_i = relations[i,:2]\n        triplets[i, 0] = classes[sub_i]\n        triplets[i, 2] = 
classes[obj_i]\n        triplet_boxes[i, :4] = boxes[sub_i, :]\n        triplet_boxes[i, 4:] = boxes[obj_i, :]\n        # compute triplet score\n        score = class_scores[sub_i]\n        score *= class_scores[obj_i]\n        score *= predicate_scores[i]\n        triplet_scores[i] = score\n    return triplets, triplet_boxes, triplet_scores\n\n\ndef _relation_recall(gt_triplets, pred_triplets,\n                     gt_boxes, pred_boxes, iou_thresh):\n\n    # compute the R@K metric for a set of predicted triplets\n\n    num_gt = gt_triplets.shape[0]\n    num_correct_pred_gt = 0\n\n    for gt, gt_box in zip(gt_triplets, gt_boxes):\n        keep = np.zeros(pred_triplets.shape[0]).astype(bool)\n        for i, pred in enumerate(pred_triplets):\n            if gt[0] == pred[0] and gt[1] == pred[1] and gt[2] == pred[2]:\n                keep[i] = True\n        if not np.any(keep):\n            continue\n        boxes = pred_boxes[keep,:]\n        sub_iou = iou(gt_box[:4], boxes[:,:4])\n        obj_iou = iou(gt_box[4:], boxes[:,4:])\n        inds = np.intersect1d(np.where(sub_iou >= iou_thresh)[0],\n                              np.where(obj_iou >= iou_thresh)[0])\n        if inds.size > 0:\n            num_correct_pred_gt += 1\n    return float(num_correct_pred_gt) / float(num_gt)\n\n\ndef iou(gt_box, pred_boxes):\n    # compute Intersection-over-Union between two sets of boxes\n    ixmin = np.maximum(gt_box[0], pred_boxes[:,0])\n    iymin = np.maximum(gt_box[1], pred_boxes[:,1])\n    ixmax = np.minimum(gt_box[2], pred_boxes[:,2])\n    iymax = np.minimum(gt_box[3], pred_boxes[:,3])\n    iw = np.maximum(ixmax - ixmin + 1., 0.)\n    ih = np.maximum(iymax - iymin + 1., 0.)\n    inters = iw * ih\n\n    # union\n    uni = ((gt_box[2] - gt_box[0] + 1.) * (gt_box[3] - gt_box[1] + 1.) +\n            (pred_boxes[:, 2] - pred_boxes[:, 0] + 1.) *\n            (pred_boxes[:, 3] - pred_boxes[:, 1] + 1.) - inters)\n\n    overlaps = inters / uni\n    return overlaps\n"
  },
  {
    "path": "lib/evaluation/test_sg_eval.py",
    "content": "# Just some tests so you can be assured that sg_eval.py works the same as the (original) Stanford evaluation\n\nimport numpy as np\nfrom six.moves import xrange\nfrom dataloaders.visual_genome import VG\nfrom lib.evaluation.sg_eval import evaluate_from_dict\nfrom tqdm import trange\nfrom lib.fpn.box_utils import center_size, point_form\n\n\ndef eval_relation_recall(sg_entry,\n                         roidb_entry,\n                         result_dict,\n                         mode,\n                         iou_thresh):\n\n    # gt\n    gt_inds = np.where(roidb_entry['max_overlaps'] == 1)[0]\n    gt_boxes = roidb_entry['boxes'][gt_inds].copy().astype(float)\n    num_gt_boxes = gt_boxes.shape[0]\n    gt_relations = roidb_entry['gt_relations'].copy()\n    gt_classes = roidb_entry['gt_classes'].copy()\n\n    num_gt_relations = gt_relations.shape[0]\n    if num_gt_relations == 0:\n        return (None, None)\n    gt_class_scores = np.ones(num_gt_boxes)\n    gt_predicate_scores = np.ones(num_gt_relations)\n    gt_triplets, gt_triplet_boxes, _ = _triplet(gt_relations[:,2],\n                                             gt_relations[:,:2],\n                                             gt_classes,\n                                             gt_boxes,\n                                             gt_predicate_scores,\n                                             gt_class_scores)\n\n    # pred\n    box_preds = sg_entry['boxes']\n    num_boxes = box_preds.shape[0]\n    predicate_preds = sg_entry['relations']\n    class_preds = sg_entry['scores']\n    predicate_preds = predicate_preds.reshape(num_boxes, num_boxes, -1)\n\n    # no bg\n    predicate_preds = predicate_preds[:, :, 1:]\n    predicates = np.argmax(predicate_preds, 2).ravel() + 1\n    predicate_scores = predicate_preds.max(axis=2).ravel()\n    relations = []\n    keep = []\n    for i in xrange(num_boxes):\n        for j in xrange(num_boxes):\n            if i != j:\n                
keep.append(num_boxes*i + j)\n                relations.append([i, j])\n    # take out self relations\n    predicates = predicates[keep]\n    predicate_scores = predicate_scores[keep]\n\n    relations = np.array(relations)\n    assert(relations.shape[0] == num_boxes * (num_boxes - 1))\n    assert(predicates.shape[0] == relations.shape[0])\n    num_relations = relations.shape[0]\n\n    if mode == 'predcls':\n        # if predicate classification task\n        # use ground truth bounding boxes\n        assert(num_boxes == num_gt_boxes)\n        classes = gt_classes\n        class_scores = gt_class_scores\n        boxes = gt_boxes\n    elif mode == 'sgcls':\n        assert(num_boxes == num_gt_boxes)\n        # if scene graph classification task\n        # use gt boxes, but predicted classes\n        classes = np.argmax(class_preds, 1)\n        class_scores = class_preds.max(axis=1)\n        boxes = gt_boxes\n    elif mode == 'sgdet':\n        # if scene graph detection task\n        # use predicted boxes and predicted classes\n        classes = np.argmax(class_preds, 1)\n        class_scores = class_preds.max(axis=1)\n        boxes = []\n        for i, c in enumerate(classes):\n            boxes.append(box_preds[i]) # no bbox regression, c*4:(c+1)*4])\n        boxes = np.vstack(boxes)\n    else:\n        raise NotImplementedError('Incorrect Mode! 
%s' % mode)\n\n    pred_triplets, pred_triplet_boxes, relation_scores = \\\n        _triplet(predicates, relations, classes, boxes,\n                 predicate_scores, class_scores)\n\n\n    sorted_inds = np.argsort(relation_scores)[::-1]\n    # compute recall\n    for k in result_dict[mode + '_recall']:\n        this_k = min(k, num_relations)\n        keep_inds = sorted_inds[:this_k]\n        recall = _relation_recall(gt_triplets,\n                                  pred_triplets[keep_inds,:],\n                                  gt_triplet_boxes,\n                                  pred_triplet_boxes[keep_inds,:],\n                                  iou_thresh)\n        result_dict[mode + '_recall'][k].append(recall)\n\n    # for visualization\n    return pred_triplets[sorted_inds, :], pred_triplet_boxes[sorted_inds, :]\n\n\ndef _triplet(predicates, relations, classes, boxes,\n             predicate_scores, class_scores):\n\n    # format predictions into triplets\n    assert(predicates.shape[0] == relations.shape[0])\n    num_relations = relations.shape[0]\n    triplets = np.zeros([num_relations, 3]).astype(np.int32)\n    triplet_boxes = np.zeros([num_relations, 8]).astype(np.int32)\n    triplet_scores = np.zeros([num_relations]).astype(np.float32)\n    for i in xrange(num_relations):\n        triplets[i, 1] = predicates[i]\n        sub_i, obj_i = relations[i,:2]\n        triplets[i, 0] = classes[sub_i]\n        triplets[i, 2] = classes[obj_i]\n        triplet_boxes[i, :4] = boxes[sub_i, :]\n        triplet_boxes[i, 4:] = boxes[obj_i, :]\n        # compute triplet score\n        score = class_scores[sub_i]\n        score *= class_scores[obj_i]\n        score *= predicate_scores[i]\n        triplet_scores[i] = score\n    return triplets, triplet_boxes, triplet_scores\n\n\ndef _relation_recall(gt_triplets, pred_triplets,\n                     gt_boxes, pred_boxes, iou_thresh):\n\n    # compute the R@K metric for a set of predicted triplets\n\n    num_gt = 
gt_triplets.shape[0]\n    num_correct_pred_gt = 0\n\n    for gt, gt_box in zip(gt_triplets, gt_boxes):\n        keep = np.zeros(pred_triplets.shape[0]).astype(bool)\n        for i, pred in enumerate(pred_triplets):\n            if gt[0] == pred[0] and gt[1] == pred[1] and gt[2] == pred[2]:\n                keep[i] = True\n        if not np.any(keep):\n            continue\n        boxes = pred_boxes[keep,:]\n        sub_iou = iou(gt_box[:4], boxes[:,:4])\n        obj_iou = iou(gt_box[4:], boxes[:,4:])\n        inds = np.intersect1d(np.where(sub_iou >= iou_thresh)[0],\n                              np.where(obj_iou >= iou_thresh)[0])\n        if inds.size > 0:\n            num_correct_pred_gt += 1\n    return float(num_correct_pred_gt) / float(num_gt)\n\n\ndef iou(gt_box, pred_boxes):\n    # compute Intersection-over-Union between two sets of boxes\n    ixmin = np.maximum(gt_box[0], pred_boxes[:,0])\n    iymin = np.maximum(gt_box[1], pred_boxes[:,1])\n    ixmax = np.minimum(gt_box[2], pred_boxes[:,2])\n    iymax = np.minimum(gt_box[3], pred_boxes[:,3])\n    iw = np.maximum(ixmax - ixmin + 1., 0.)\n    ih = np.maximum(iymax - iymin + 1., 0.)\n    inters = iw * ih\n\n    # union\n    uni = ((gt_box[2] - gt_box[0] + 1.) * (gt_box[3] - gt_box[1] + 1.) +\n            (pred_boxes[:, 2] - pred_boxes[:, 0] + 1.) *\n            (pred_boxes[:, 3] - pred_boxes[:, 1] + 1.) 
- inters)\n\n    overlaps = inters / uni\n    return overlaps\n\ntrain, val, test = VG.splits()\n\nresult_dict_mine = {'sgdet_recall': {20: [], 50: [], 100: []}}\nresult_dict_theirs = {'sgdet_recall': {20: [], 50: [], 100: []}}\n\nfor img_i in trange(len(val)):\n    gt_entry = {\n        'gt_classes': val.gt_classes[img_i].copy(),\n        'gt_relations': val.relationships[img_i].copy(),\n        'gt_boxes': val.gt_boxes[img_i].copy(),\n    }\n\n    # Use shuffled GT boxes\n    gt_indices = np.arange(gt_entry['gt_boxes'].shape[0]) #np.random.choice(gt_entry['gt_boxes'].shape[0], 20)\n    pred_boxes = gt_entry['gt_boxes'][gt_indices]\n\n    # Jitter the boxes a bit\n    pred_boxes = center_size(pred_boxes)\n    pred_boxes[:,:2] += np.random.rand(pred_boxes.shape[0], 2)*128\n    pred_boxes[:,2:] *= (1+np.random.randn(pred_boxes.shape[0], 2).clip(-0.1, 0.1))\n    pred_boxes = point_form(pred_boxes)\n\n    obj_scores = np.random.rand(pred_boxes.shape[0])\n\n    rels_to_use = np.column_stack(np.where(1 - np.diag(np.ones(pred_boxes.shape[0], dtype=np.int32))))\n    rel_scores = np.random.rand(min(100, rels_to_use.shape[0]), 51)\n    rel_scores = rel_scores / rel_scores.sum(1, keepdims=True)\n    pred_rel_inds = rels_to_use[np.random.choice(rels_to_use.shape[0], rel_scores.shape[0],\n                                                               replace=False)]\n\n    # We must sort by P(o, o, r)\n    rel_order = np.argsort(-rel_scores[:,1:].max(1) * obj_scores[pred_rel_inds[:,0]] * obj_scores[pred_rel_inds[:,1]])\n\n    pred_entry = {\n        'pred_boxes': pred_boxes,\n        'pred_classes': gt_entry['gt_classes'][gt_indices], #1+np.random.choice(150, pred_boxes.shape[0], replace=True),\n        'obj_scores': obj_scores,\n        'pred_rel_inds': pred_rel_inds[rel_order],\n        'rel_scores': rel_scores[rel_order],\n    }\n\n    # def check_whether_they_are_the_same(gt_entry, pred_entry):\n    evaluate_from_dict(gt_entry, pred_entry, 'sgdet', result_dict_mine, 
multiple_preds=False,\n                       viz_dict=None)\n\n    #########################\n    predicate_scores_theirs = np.zeros((pred_boxes.shape[0], pred_boxes.shape[0], 51), dtype=np.float64)\n    for (o1, o2), s in zip(pred_entry['pred_rel_inds'], pred_entry['rel_scores']):\n        predicate_scores_theirs[o1, o2] = s\n\n    obj_scores_theirs = np.zeros((obj_scores.shape[0], 151), dtype=np.float64)\n    obj_scores_theirs[np.arange(obj_scores.shape[0]), pred_entry['pred_classes']] = obj_scores\n\n    sg_entry_orig_format = {\n        'boxes': pred_entry['pred_boxes'],\n        # 'gt_classes': gt_entry['gt_classes'],\n        # 'gt_relations': gt_entry['gt_relations'],\n        'relations': predicate_scores_theirs,\n        'scores': obj_scores_theirs\n    }\n    roidb_entry = {\n        'max_overlaps': np.concatenate((np.ones(gt_entry['gt_boxes'].shape[0]), np.zeros(pred_entry['pred_boxes'].shape[0])), 0),\n        'boxes': np.concatenate((gt_entry['gt_boxes'], pred_entry['pred_boxes']), 0),\n        'gt_classes': gt_entry['gt_classes'],\n        'gt_relations': gt_entry['gt_relations'],\n    }\n    eval_relation_recall(sg_entry_orig_format, roidb_entry, result_dict_theirs, 'sgdet', iou_thresh=0.5)\n\nmy_results = np.array(result_dict_mine['sgdet_recall'][20])\ntheir_results = np.array(result_dict_theirs['sgdet_recall'][20])\n\nassert np.all(my_results == their_results)"
  },
  {
    "path": "lib/fpn/anchor_targets.py",
    "content": "\"\"\"\nGenerates anchor targets to train the detector. Does this during the collate step in training\nas it's much cheaper to do this on a separate thread.\n\nHeavily adapted from faster_rcnn/rpn_msr/anchor_target_layer.py.\n\"\"\"\nimport numpy as np\nimport numpy.random as npr\n\nfrom config import IM_SCALE, RPN_NEGATIVE_OVERLAP, RPN_POSITIVE_OVERLAP, \\\n    RPN_BATCHSIZE, RPN_FG_FRACTION, ANCHOR_SIZE, ANCHOR_SCALES, ANCHOR_RATIOS\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom lib.fpn.generate_anchors import generate_anchors\n\n\ndef anchor_target_layer(gt_boxes, im_size, \n                        allowed_border=0):\n    \"\"\"\n    Assign anchors to ground-truth targets. Produces anchor classification\n    labels and bounding-box regression targets.\n\n    for each (H, W) location i\n      generate 3 anchor boxes centered on cell i\n    filter out-of-image anchors\n    measure GT overlap\n\n    :param gt_boxes: [x1, y1, x2, y2] boxes. These are assumed to be at the same scale as\n                     the image (IM_SCALE)\n    :param im_size: Size of the image (h, w). 
This is assumed to be scaled to IM_SCALE\n    \"\"\"\n    if max(im_size) != IM_SCALE:\n        raise ValueError(\"im size is {}\".format(im_size))\n    h, w = im_size\n\n    # Get the indices of the anchors in the feature map.\n    # h, w, A, 4\n    ans_np = generate_anchors(base_size=ANCHOR_SIZE,\n                              feat_stride=16,\n                              anchor_scales=ANCHOR_SCALES,\n                              anchor_ratios=ANCHOR_RATIOS,\n                              )\n    ans_np_flat = ans_np.reshape((-1, 4))\n    inds_inside = np.where(\n        (ans_np_flat[:, 0] >= -allowed_border) &\n        (ans_np_flat[:, 1] >= -allowed_border) &\n        (ans_np_flat[:, 2] < w + allowed_border) &  # width\n        (ans_np_flat[:, 3] < h + allowed_border)  # height\n    )[0]\n    good_ans_flat = ans_np_flat[inds_inside]\n    if good_ans_flat.size == 0:\n        raise ValueError(\"There were no good anchors for an image of size {} with boxes {}\".format(im_size, gt_boxes))\n\n    # overlaps between the anchors and the gt boxes [num_anchors, num_gtboxes]\n    overlaps = bbox_overlaps(good_ans_flat, gt_boxes)\n    anchor_to_gtbox = overlaps.argmax(axis=1)\n    max_overlaps = overlaps[np.arange(anchor_to_gtbox.shape[0]), anchor_to_gtbox]\n    gtbox_to_anchor = overlaps.argmax(axis=0)\n    gt_max_overlaps = overlaps[gtbox_to_anchor, np.arange(overlaps.shape[1])]\n    gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]\n\n    # Good anchors are those that match SOMEWHERE within a decent tolerance\n    # label: 1 is positive, 0 is negative, -1 is don't care.\n    # assign bg labels first so that positive labels can clobber them\n    labels = (-1) * np.ones(overlaps.shape[0], dtype=np.int64)\n    labels[max_overlaps < RPN_NEGATIVE_OVERLAP] = 0\n    labels[gt_argmax_overlaps] = 1\n    labels[max_overlaps >= RPN_POSITIVE_OVERLAP] = 1\n\n    # subsample positive labels if we have too many\n    num_fg = int(RPN_FG_FRACTION * RPN_BATCHSIZE)\n    
fg_inds = np.where(labels == 1)[0]\n    if len(fg_inds) > num_fg:\n        labels[npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False)] = -1\n\n    # subsample negative labels if we have too many\n    num_bg = RPN_BATCHSIZE - np.sum(labels == 1)\n    bg_inds = np.where(labels == 0)[0]\n    if len(bg_inds) > num_bg:\n        labels[npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)] = -1\n    # print(\"{} fg {} bg ratio{:.3f} inds inside {}\".format(RPN_BATCHSIZE-num_bg, num_bg, (RPN_BATCHSIZE-num_bg)/RPN_BATCHSIZE, inds_inside.shape[0]))\n\n\n    # Get the labels at the original size\n    labels_unmap = (-1) * np.ones(ans_np_flat.shape[0], dtype=np.int64)\n    labels_unmap[inds_inside] = labels\n\n    # h, w, A\n    labels_unmap_res = labels_unmap.reshape(ans_np.shape[:-1])\n    anchor_inds = np.column_stack(np.where(labels_unmap_res >= 0))\n\n    # These ought to be in the same order\n    anchor_inds_flat = np.where(labels >= 0)[0]\n    anchors = good_ans_flat[anchor_inds_flat]\n    bbox_targets = gt_boxes[anchor_to_gtbox[anchor_inds_flat]]\n    labels = labels[anchor_inds_flat]\n\n    assert np.all(labels >= 0)\n\n\n    # Anchors: [num_used, 4]\n    # Anchor_inds: [num_used, 3] (h, w, A)\n    # bbox_targets: [num_used, 4]\n    # labels: [num_used]\n\n    return anchors, anchor_inds, bbox_targets, labels\n"
  },
  {
    "path": "lib/fpn/box_intersections_cpu/bbox.c",
    "content": "/* Generated by Cython 0.25.2 */\n\n/* BEGIN: Cython Metadata\n{\n    \"distutils\": {\n        \"depends\": []\n    },\n    \"module_name\": \"bbox\"\n}\nEND: Cython Metadata */\n\n#define PY_SSIZE_T_CLEAN\n#include \"Python.h\"\n#ifndef Py_PYTHON_H\n    #error Python headers needed to compile C extensions, please install development version of Python.\n#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03020000)\n    #error Cython requires Python 2.6+ or Python 3.2+.\n#else\n#define CYTHON_ABI \"0_25_2\"\n#include <stddef.h>\n#ifndef offsetof\n  #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )\n#endif\n#if !defined(WIN32) && !defined(MS_WINDOWS)\n  #ifndef __stdcall\n    #define __stdcall\n  #endif\n  #ifndef __cdecl\n    #define __cdecl\n  #endif\n  #ifndef __fastcall\n    #define __fastcall\n  #endif\n#endif\n#ifndef DL_IMPORT\n  #define DL_IMPORT(t) t\n#endif\n#ifndef DL_EXPORT\n  #define DL_EXPORT(t) t\n#endif\n#ifndef HAVE_LONG_LONG\n  #if PY_VERSION_HEX >= 0x03030000 || (PY_MAJOR_VERSION == 2 && PY_VERSION_HEX >= 0x02070000)\n    #define HAVE_LONG_LONG\n  #endif\n#endif\n#ifndef PY_LONG_LONG\n  #define PY_LONG_LONG LONG_LONG\n#endif\n#ifndef Py_HUGE_VAL\n  #define Py_HUGE_VAL HUGE_VAL\n#endif\n#ifdef PYPY_VERSION\n  #define CYTHON_COMPILING_IN_PYPY 1\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #undef CYTHON_USE_TYPE_SLOTS\n  #define CYTHON_USE_TYPE_SLOTS 0\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_INTERNALS\n  #define CYTHON_USE_UNICODE_INTERNALS 0\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #undef CYTHON_AVOID_BORROWED_REFS\n  #define CYTHON_AVOID_BORROWED_REFS 1\n  #undef CYTHON_ASSUME_SAFE_MACROS\n  
#define CYTHON_ASSUME_SAFE_MACROS 0\n  #undef CYTHON_UNPACK_METHODS\n  #define CYTHON_UNPACK_METHODS 0\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#elif defined(PYSTON_VERSION)\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 1\n  #define CYTHON_COMPILING_IN_CPYTHON 0\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #undef CYTHON_USE_ASYNC_SLOTS\n  #define CYTHON_USE_ASYNC_SLOTS 0\n  #undef CYTHON_USE_PYLIST_INTERNALS\n  #define CYTHON_USE_PYLIST_INTERNALS 0\n  #ifndef CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #undef CYTHON_USE_UNICODE_WRITER\n  #define CYTHON_USE_UNICODE_WRITER 0\n  #undef CYTHON_USE_PYLONG_INTERNALS\n  #define CYTHON_USE_PYLONG_INTERNALS 0\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #undef CYTHON_FAST_THREAD_STATE\n  #define CYTHON_FAST_THREAD_STATE 0\n  #undef CYTHON_FAST_PYCALL\n  #define CYTHON_FAST_PYCALL 0\n#else\n  #define CYTHON_COMPILING_IN_PYPY 0\n  #define CYTHON_COMPILING_IN_PYSTON 0\n  #define CYTHON_COMPILING_IN_CPYTHON 1\n  #ifndef CYTHON_USE_TYPE_SLOTS\n    #define CYTHON_USE_TYPE_SLOTS 1\n  #endif\n  #if PY_MAJOR_VERSION < 3\n    #undef CYTHON_USE_ASYNC_SLOTS\n    #define CYTHON_USE_ASYNC_SLOTS 0\n  #elif !defined(CYTHON_USE_ASYNC_SLOTS)\n    #define CYTHON_USE_ASYNC_SLOTS 1\n  #endif\n  #if PY_VERSION_HEX < 0x02070000\n    #undef CYTHON_USE_PYLONG_INTERNALS\n    #define CYTHON_USE_PYLONG_INTERNALS 0\n  #elif !defined(CYTHON_USE_PYLONG_INTERNALS)\n    #define CYTHON_USE_PYLONG_INTERNALS 1\n  #endif\n  #ifndef CYTHON_USE_PYLIST_INTERNALS\n    #define CYTHON_USE_PYLIST_INTERNALS 1\n  #endif\n  #ifndef 
CYTHON_USE_UNICODE_INTERNALS\n    #define CYTHON_USE_UNICODE_INTERNALS 1\n  #endif\n  #if PY_VERSION_HEX < 0x030300F0\n    #undef CYTHON_USE_UNICODE_WRITER\n    #define CYTHON_USE_UNICODE_WRITER 0\n  #elif !defined(CYTHON_USE_UNICODE_WRITER)\n    #define CYTHON_USE_UNICODE_WRITER 1\n  #endif\n  #ifndef CYTHON_AVOID_BORROWED_REFS\n    #define CYTHON_AVOID_BORROWED_REFS 0\n  #endif\n  #ifndef CYTHON_ASSUME_SAFE_MACROS\n    #define CYTHON_ASSUME_SAFE_MACROS 1\n  #endif\n  #ifndef CYTHON_UNPACK_METHODS\n    #define CYTHON_UNPACK_METHODS 1\n  #endif\n  #ifndef CYTHON_FAST_THREAD_STATE\n    #define CYTHON_FAST_THREAD_STATE 1\n  #endif\n  #ifndef CYTHON_FAST_PYCALL\n    #define CYTHON_FAST_PYCALL 1\n  #endif\n#endif\n#if !defined(CYTHON_FAST_PYCCALL)\n#define CYTHON_FAST_PYCCALL  (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)\n#endif\n#if CYTHON_USE_PYLONG_INTERNALS\n  #include \"longintrepr.h\"\n  #undef SHIFT\n  #undef BASE\n  #undef MASK\n#endif\n#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)\n  #define Py_OptimizeFlag 0\n#endif\n#define __PYX_BUILD_PY_SSIZE_T \"n\"\n#define CYTHON_FORMAT_SSIZE_T \"z\"\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_BUILTIN_MODULE_NAME \"__builtin__\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyClass_Type\n#else\n  #define __Pyx_BUILTIN_MODULE_NAME \"builtins\"\n  #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\\\n          PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\n  #define __Pyx_DefaultClassType PyType_Type\n#endif\n#ifndef Py_TPFLAGS_CHECKTYPES\n  #define Py_TPFLAGS_CHECKTYPES 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_INDEX\n  #define Py_TPFLAGS_HAVE_INDEX 0\n#endif\n#ifndef Py_TPFLAGS_HAVE_NEWBUFFER\n  #define Py_TPFLAGS_HAVE_NEWBUFFER 0\n#endif\n#ifndef 
Py_TPFLAGS_HAVE_FINALIZE\n  #define Py_TPFLAGS_HAVE_FINALIZE 0\n#endif\n#ifndef METH_FASTCALL\n  #define METH_FASTCALL 0x80\n  typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args,\n                                              Py_ssize_t nargs, PyObject *kwnames);\n#else\n  #define __Pyx_PyCFunctionFast _PyCFunctionFast\n#endif\n#if CYTHON_FAST_PYCCALL\n#define __Pyx_PyFastCFunction_Check(func)\\\n    ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST)))))\n#else\n#define __Pyx_PyFastCFunction_Check(func) 0\n#endif\n#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)\n  #define CYTHON_PEP393_ENABLED 1\n  #define __Pyx_PyUnicode_READY(op)       (likely(PyUnicode_IS_READY(op)) ?\\\n                                              0 : _PyUnicode_Ready((PyObject *)(op)))\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_LENGTH(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   PyUnicode_MAX_CHAR_VALUE(u)\n  #define __Pyx_PyUnicode_KIND(u)         PyUnicode_KIND(u)\n  #define __Pyx_PyUnicode_DATA(u)         PyUnicode_DATA(u)\n  #define __Pyx_PyUnicode_READ(k, d, i)   PyUnicode_READ(k, d, i)\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  PyUnicode_WRITE(k, d, i, ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))\n#else\n  #define CYTHON_PEP393_ENABLED 0\n  #define PyUnicode_1BYTE_KIND  1\n  #define PyUnicode_2BYTE_KIND  2\n  #define PyUnicode_4BYTE_KIND  4\n  #define __Pyx_PyUnicode_READY(op)       (0)\n  #define __Pyx_PyUnicode_GET_LENGTH(u)   PyUnicode_GET_SIZE(u)\n  #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))\n  #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u)   ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111)\n  #define __Pyx_PyUnicode_KIND(u)         (sizeof(Py_UNICODE))\n  #define __Pyx_PyUnicode_DATA(u)         ((void*)PyUnicode_AS_UNICODE(u))\n  #define __Pyx_PyUnicode_READ(k, d, i)   ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))\n  #define __Pyx_PyUnicode_WRITE(k, d, i, ch)  (((void)(k)), ((Py_UNICODE*)d)[i] = ch)\n  #define __Pyx_PyUnicode_IS_TRUE(u)      (0 != PyUnicode_GET_SIZE(u))\n#endif\n#if CYTHON_COMPILING_IN_PYPY\n  #define __Pyx_PyUnicode_Concat(a, b)      PyNumber_Add(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  PyNumber_Add(a, b)\n#else\n  #define __Pyx_PyUnicode_Concat(a, b)      PyUnicode_Concat(a, b)\n  #define __Pyx_PyUnicode_ConcatSafe(a, b)  ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\\\n      PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)\n  #define PyUnicode_Contains(u, s)  PySequence_Contains(u, s)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)\n  #define PyByteArray_Check(obj)  PyObject_TypeCheck(obj, &PyByteArray_Type)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)\n  #define PyObject_Format(obj, fmt)  PyObject_CallMethod(obj, \"__format__\", \"O\", fmt)\n#endif\n#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)\n  #define PyObject_Malloc(s)   PyMem_Malloc(s)\n  #define PyObject_Free(p)     PyMem_Free(p)\n  #define PyObject_Realloc(p)  PyMem_Realloc(p)\n#endif\n#if CYTHON_COMPILING_IN_PYSTON\n  #define __Pyx_PyCode_HasFreeVars(co)  PyCode_HasFreeVars(co)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)\n#else\n  #define __Pyx_PyCode_HasFreeVars(co)  (PyCode_GetNumFree(co) > 0)\n  #define __Pyx_PyFrame_SetLineNumber(frame, lineno)  (frame)->f_lineno = (lineno)\n#endif\n#define __Pyx_PyString_FormatSafe(a, b)   ((unlikely((a) == Py_None)) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))\n#define __Pyx_PyUnicode_FormatSafe(a, b)  ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyString_Format(a, b)  PyUnicode_Format(a, b)\n#else\n  #define __Pyx_PyString_Format(a, b)  PyString_Format(a, b)\n#endif\n#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)\n  #define PyObject_ASCII(o)            PyObject_Repr(o)\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBaseString_Type            PyUnicode_Type\n  #define PyStringObject               PyUnicodeObject\n  #define PyString_Type                PyUnicode_Type\n  #define PyString_Check               PyUnicode_Check\n  #define PyString_CheckExact          PyUnicode_CheckExact\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)\n  #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)\n#else\n  #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))\n  #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))\n#endif\n#ifndef PySet_CheckExact\n  #define PySet_CheckExact(obj)        (Py_TYPE(obj) == &PySet_Type)\n#endif\n#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)\n#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)\n#if PY_MAJOR_VERSION >= 3\n  #define PyIntObject                  PyLongObject\n  #define PyInt_Type                   PyLong_Type\n  #define PyInt_Check(op)              PyLong_Check(op)\n  #define PyInt_CheckExact(op)         PyLong_CheckExact(op)\n  #define PyInt_FromString             PyLong_FromString\n  #define PyInt_FromUnicode            PyLong_FromUnicode\n  #define PyInt_FromLong               PyLong_FromLong\n  #define PyInt_FromSize_t             PyLong_FromSize_t\n  #define PyInt_FromSsize_t            PyLong_FromSsize_t\n  #define PyInt_AsLong                 
PyLong_AsLong\n  #define PyInt_AS_LONG                PyLong_AS_LONG\n  #define PyInt_AsSsize_t              PyLong_AsSsize_t\n  #define PyInt_AsUnsignedLongMask     PyLong_AsUnsignedLongMask\n  #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask\n  #define PyNumber_Int                 PyNumber_Long\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define PyBoolObject                 PyLongObject\n#endif\n#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY\n  #ifndef PyUnicode_InternFromString\n    #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)\n  #endif\n#endif\n#if PY_VERSION_HEX < 0x030200A4\n  typedef long Py_hash_t;\n  #define __Pyx_PyInt_FromHash_t PyInt_FromLong\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsLong\n#else\n  #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t\n  #define __Pyx_PyInt_AsHash_t   PyInt_AsSsize_t\n#endif\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func))\n#else\n  #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)\n#endif\n#if CYTHON_USE_ASYNC_SLOTS\n  #if PY_VERSION_HEX >= 0x030500B1\n    #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods\n    #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)\n  #else\n    typedef struct {\n        unaryfunc am_await;\n        unaryfunc am_aiter;\n        unaryfunc am_anext;\n    } __Pyx_PyAsyncMethodsStruct;\n    #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))\n  #endif\n#else\n  #define __Pyx_PyType_AsAsync(obj) NULL\n#endif\n#ifndef CYTHON_RESTRICT\n  #if defined(__GNUC__)\n    #define CYTHON_RESTRICT __restrict__\n  #elif defined(_MSC_VER) && _MSC_VER >= 1400\n    #define CYTHON_RESTRICT __restrict\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_RESTRICT restrict\n  #else\n    #define CYTHON_RESTRICT\n  #endif\n#endif\n#ifndef CYTHON_UNUSED\n# if defined(__GNUC__)\n#   if 
!(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))\n#     define CYTHON_UNUSED __attribute__ ((__unused__))\n#   else\n#     define CYTHON_UNUSED\n#   endif\n# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))\n#   define CYTHON_UNUSED __attribute__ ((__unused__))\n# else\n#   define CYTHON_UNUSED\n# endif\n#endif\n#ifndef CYTHON_MAYBE_UNUSED_VAR\n#  if defined(__cplusplus)\n     template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }\n#  else\n#    define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)\n#  endif\n#endif\n#ifndef CYTHON_NCP_UNUSED\n# if CYTHON_COMPILING_IN_CPYTHON\n#  define CYTHON_NCP_UNUSED\n# else\n#  define CYTHON_NCP_UNUSED CYTHON_UNUSED\n# endif\n#endif\n#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)\n\n#ifndef CYTHON_INLINE\n  #if defined(__clang__)\n    #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))\n  #elif defined(__GNUC__)\n    #define CYTHON_INLINE __inline__\n  #elif defined(_MSC_VER)\n    #define CYTHON_INLINE __inline\n  #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define CYTHON_INLINE inline\n  #else\n    #define CYTHON_INLINE\n  #endif\n#endif\n\n#if defined(WIN32) || defined(MS_WINDOWS)\n  #define _USE_MATH_DEFINES\n#endif\n#include <math.h>\n#ifdef NAN\n#define __PYX_NAN() ((float) NAN)\n#else\nstatic CYTHON_INLINE float __PYX_NAN() {\n  float value;\n  memset(&value, 0xFF, sizeof(value));\n  return value;\n}\n#endif\n#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)\n#define __Pyx_truncl trunc\n#else\n#define __Pyx_truncl truncl\n#endif\n\n\n#define __PYX_ERR(f_index, lineno, Ln_error) \\\n{ \\\n  __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \\\n}\n\n#if PY_MAJOR_VERSION >= 3\n  #define __Pyx_PyNumber_Divide(x,y)         PyNumber_TrueDivide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceTrueDivide(x,y)\n#else\n  #define 
__Pyx_PyNumber_Divide(x,y)         PyNumber_Divide(x,y)\n  #define __Pyx_PyNumber_InPlaceDivide(x,y)  PyNumber_InPlaceDivide(x,y)\n#endif\n\n#ifndef __PYX_EXTERN_C\n  #ifdef __cplusplus\n    #define __PYX_EXTERN_C extern \"C\"\n  #else\n    #define __PYX_EXTERN_C extern\n  #endif\n#endif\n\n#define __PYX_HAVE__bbox\n#define __PYX_HAVE_API__bbox\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include \"numpy/arrayobject.h\"\n#include \"numpy/ufuncobject.h\"\n#ifdef _OPENMP\n#include <omp.h>\n#endif /* _OPENMP */\n\n#ifdef PYREX_WITHOUT_ASSERTIONS\n#define CYTHON_WITHOUT_ASSERTIONS\n#endif\n\ntypedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;\n                const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;\n\n#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0\n#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0\n#define __PYX_DEFAULT_STRING_ENCODING \"\"\n#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString\n#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#define __Pyx_uchar_cast(c) ((unsigned char)c)\n#define __Pyx_long_cast(x) ((long)x)\n#define __Pyx_fits_Py_ssize_t(v, type, is_signed)  (\\\n    (sizeof(type) < sizeof(Py_ssize_t))  ||\\\n    (sizeof(type) > sizeof(Py_ssize_t) &&\\\n          likely(v < (type)PY_SSIZE_T_MAX ||\\\n                 v == (type)PY_SSIZE_T_MAX)  &&\\\n          (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\\\n                                v == (type)PY_SSIZE_T_MIN)))  ||\\\n    (sizeof(type) == sizeof(Py_ssize_t) &&\\\n          (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\\\n                               v == (type)PY_SSIZE_T_MAX)))  )\n#if defined (__cplusplus) && __cplusplus >= 201103L\n    #include <cstdlib>\n    #define __Pyx_sst_abs(value) std::abs(value)\n#elif SIZEOF_INT >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) abs(value)\n#elif SIZEOF_LONG >= SIZEOF_SIZE_T\n    #define __Pyx_sst_abs(value) 
labs(value)\n#elif defined (_MSC_VER) && defined (_M_X64)\n    #define __Pyx_sst_abs(value) _abs64(value)\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L\n    #define __Pyx_sst_abs(value) llabs(value)\n#elif defined (__GNUC__)\n    #define __Pyx_sst_abs(value) __builtin_llabs(value)\n#else\n    #define __Pyx_sst_abs(value) ((value<0) ? -value : value)\n#endif\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject*);\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);\n#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))\n#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)\n#define __Pyx_PyBytes_FromString        PyBytes_FromString\n#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);\n#if PY_MAJOR_VERSION < 3\n    #define __Pyx_PyStr_FromString        __Pyx_PyBytes_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize\n#else\n    #define __Pyx_PyStr_FromString        __Pyx_PyUnicode_FromString\n    #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize\n#endif\n#define __Pyx_PyObject_AsSString(s)    ((signed char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_AsUString(s)    ((unsigned char*) __Pyx_PyObject_AsString(s))\n#define __Pyx_PyObject_FromCString(s)  __Pyx_PyObject_FromString((const char*)s)\n#define __Pyx_PyBytes_FromCString(s)   __Pyx_PyBytes_FromString((const char*)s)\n#define __Pyx_PyByteArray_FromCString(s)   __Pyx_PyByteArray_FromString((const char*)s)\n#define __Pyx_PyStr_FromCString(s)     __Pyx_PyStr_FromString((const char*)s)\n#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)\n#if PY_MAJOR_VERSION < 3\nstatic CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u)\n{\n    const Py_UNICODE *u_end = u;\n  
  while (*u_end++) ;\n    return (size_t)(u_end - u - 1);\n}\n#else\n#define __Pyx_Py_UNICODE_strlen Py_UNICODE_strlen\n#endif\n#define __Pyx_PyUnicode_FromUnicode(u)       PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))\n#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode\n#define __Pyx_PyUnicode_AsUnicode            PyUnicode_AsUnicode\n#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)\n#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)\n#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False))\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);\nstatic CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);\n#if CYTHON_ASSUME_SAFE_MACROS\n#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))\n#else\n#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)\n#endif\n#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))\n#if PY_MAJOR_VERSION >= 3\n#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))\n#else\n#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))\n#endif\n#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? 
__Pyx_NewRef(x) : PyNumber_Float(x))\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\nstatic int __Pyx_sys_getdefaultencoding_not_ascii;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    PyObject* ascii_chars_u = NULL;\n    PyObject* ascii_chars_b = NULL;\n    const char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    if (strcmp(default_encoding_c, \"ascii\") == 0) {\n        __Pyx_sys_getdefaultencoding_not_ascii = 0;\n    } else {\n        char ascii_chars[128];\n        int c;\n        for (c = 0; c < 128; c++) {\n            ascii_chars[c] = c;\n        }\n        __Pyx_sys_getdefaultencoding_not_ascii = 1;\n        ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);\n        if (!ascii_chars_u) goto bad;\n        ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);\n        if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {\n            PyErr_Format(\n                PyExc_ValueError,\n                \"This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.\",\n                default_encoding_c);\n            goto bad;\n        }\n        Py_DECREF(ascii_chars_u);\n        Py_DECREF(ascii_chars_b);\n    }\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    Py_XDECREF(ascii_chars_u);\n    Py_XDECREF(ascii_chars_b);\n    return -1;\n}\n#endif\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) 
PyUnicode_DecodeUTF8(c_str, size, NULL)\n#else\n#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)\n#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT\nstatic char* __PYX_DEFAULT_STRING_ENCODING;\nstatic int __Pyx_init_sys_getdefaultencoding_params(void) {\n    PyObject* sys;\n    PyObject* default_encoding = NULL;\n    char* default_encoding_c;\n    sys = PyImport_ImportModule(\"sys\");\n    if (!sys) goto bad;\n    default_encoding = PyObject_CallMethod(sys, (char*) (const char*) \"getdefaultencoding\", NULL);\n    Py_DECREF(sys);\n    if (!default_encoding) goto bad;\n    default_encoding_c = PyBytes_AsString(default_encoding);\n    if (!default_encoding_c) goto bad;\n    /* allocate strlen+1 bytes so strcpy's NUL terminator fits */\n    __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);\n    if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;\n    strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);\n    Py_DECREF(default_encoding);\n    return 0;\nbad:\n    Py_XDECREF(default_encoding);\n    return -1;\n}\n#endif\n#endif\n\n\n/* Test for GCC > 2.95 */\n#if defined(__GNUC__)     && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))\n  #define likely(x)   __builtin_expect(!!(x), 1)\n  #define unlikely(x) __builtin_expect(!!(x), 0)\n#else /* !__GNUC__ or GCC < 2.95 */\n  #define likely(x)   (x)\n  #define unlikely(x) (x)\n#endif /* __GNUC__ */\n\nstatic PyObject *__pyx_m;\nstatic PyObject *__pyx_d;\nstatic PyObject *__pyx_b;\nstatic PyObject *__pyx_empty_tuple;\nstatic PyObject *__pyx_empty_bytes;\nstatic PyObject *__pyx_empty_unicode;\nstatic int __pyx_lineno;\nstatic int __pyx_clineno = 0;\nstatic const char * __pyx_cfilenm= __FILE__;\nstatic const char *__pyx_filename;\n\n/* Header.proto */\n#if !defined(CYTHON_CCOMPLEX)\n  #if defined(__cplusplus)\n    #define CYTHON_CCOMPLEX 1\n  #elif defined(_Complex_I)\n    #define CYTHON_CCOMPLEX 1\n  #else\n    #define CYTHON_CCOMPLEX 0\n  #endif\n#endif\n#if CYTHON_CCOMPLEX\n  #ifdef 
__cplusplus\n    #include <complex>\n  #else\n    #include <complex.h>\n  #endif\n#endif\n#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)\n  #undef _Complex_I\n  #define _Complex_I 1.0fj\n#endif\n\n\nstatic const char *__pyx_f[] = {\n  \"bbox.pyx\",\n  \"__init__.pxd\",\n  \"type.pxd\",\n};\n/* BufferFormatStructs.proto */\n#define IS_UNSIGNED(type) (((type) -1) > 0)\nstruct __Pyx_StructField_;\n#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)\ntypedef struct {\n  const char* name;\n  struct __Pyx_StructField_* fields;\n  size_t size;\n  size_t arraysize[8];\n  int ndim;\n  char typegroup;\n  char is_unsigned;\n  int flags;\n} __Pyx_TypeInfo;\ntypedef struct __Pyx_StructField_ {\n  __Pyx_TypeInfo* type;\n  const char* name;\n  size_t offset;\n} __Pyx_StructField;\ntypedef struct {\n  __Pyx_StructField* field;\n  size_t parent_offset;\n} __Pyx_BufFmt_StackElem;\ntypedef struct {\n  __Pyx_StructField root;\n  __Pyx_BufFmt_StackElem* head;\n  size_t fmt_offset;\n  size_t new_count, enc_count;\n  size_t struct_alignment;\n  int is_complex;\n  char enc_type;\n  char new_packmode;\n  char enc_packmode;\n  char is_valid_array;\n} __Pyx_BufFmt_Context;\n\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":725\n * # in Cython to enable them only on the right systems.\n * \n * ctypedef npy_int8       int8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n */\ntypedef npy_int8 __pyx_t_5numpy_int8_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":726\n * \n * ctypedef npy_int8       int8_t\n * ctypedef npy_int16      int16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t\n */\ntypedef npy_int16 __pyx_t_5numpy_int16_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":727\n * ctypedef npy_int8     
  int8_t\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_int64      int64_t\n * #ctypedef npy_int96      int96_t\n */\ntypedef npy_int32 __pyx_t_5numpy_int32_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":728\n * ctypedef npy_int16      int16_t\n * ctypedef npy_int32      int32_t\n * ctypedef npy_int64      int64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_int96      int96_t\n * #ctypedef npy_int128     int128_t\n */\ntypedef npy_int64 __pyx_t_5numpy_int64_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":732\n * #ctypedef npy_int128     int128_t\n * \n * ctypedef npy_uint8      uint8_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n */\ntypedef npy_uint8 __pyx_t_5numpy_uint8_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":733\n * \n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t\n */\ntypedef npy_uint16 __pyx_t_5numpy_uint16_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":734\n * ctypedef npy_uint8      uint8_t\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uint64     uint64_t\n * #ctypedef npy_uint96     uint96_t\n */\ntypedef npy_uint32 __pyx_t_5numpy_uint32_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":735\n * ctypedef npy_uint16     uint16_t\n * ctypedef npy_uint32     uint32_t\n * ctypedef npy_uint64     uint64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_uint96     uint96_t\n * #ctypedef npy_uint128    uint128_t\n */\ntypedef npy_uint64 __pyx_t_5numpy_uint64_t;\n\n/* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":739\n * #ctypedef npy_uint128    uint128_t\n * \n * ctypedef npy_float32    float32_t             # <<<<<<<<<<<<<<\n * ctypedef npy_float64    float64_t\n * #ctypedef npy_float80    float80_t\n */\ntypedef npy_float32 __pyx_t_5numpy_float32_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":740\n * \n * ctypedef npy_float32    float32_t\n * ctypedef npy_float64    float64_t             # <<<<<<<<<<<<<<\n * #ctypedef npy_float80    float80_t\n * #ctypedef npy_float128   float128_t\n */\ntypedef npy_float64 __pyx_t_5numpy_float64_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":749\n * # The int types are mapped a bit surprising --\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t\n */\ntypedef npy_long __pyx_t_5numpy_int_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":750\n * # numpy.int corresponds to 'l' and numpy.long to 'q'\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longlong   longlong_t\n * \n */\ntypedef npy_longlong __pyx_t_5numpy_long_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":751\n * ctypedef npy_long       int_t\n * ctypedef npy_longlong   long_t\n * ctypedef npy_longlong   longlong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_ulong      uint_t\n */\ntypedef npy_longlong __pyx_t_5numpy_longlong_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":753\n * ctypedef npy_longlong   longlong_t\n * \n * ctypedef npy_ulong      uint_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulong_t\n * 
ctypedef npy_ulonglong  ulonglong_t\n */\ntypedef npy_ulong __pyx_t_5numpy_uint_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":754\n * \n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t             # <<<<<<<<<<<<<<\n * ctypedef npy_ulonglong  ulonglong_t\n * \n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulong_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":755\n * ctypedef npy_ulong      uint_t\n * ctypedef npy_ulonglong  ulong_t\n * ctypedef npy_ulonglong  ulonglong_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_intp       intp_t\n */\ntypedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":757\n * ctypedef npy_ulonglong  ulonglong_t\n * \n * ctypedef npy_intp       intp_t             # <<<<<<<<<<<<<<\n * ctypedef npy_uintp      uintp_t\n * \n */\ntypedef npy_intp __pyx_t_5numpy_intp_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":758\n * \n * ctypedef npy_intp       intp_t\n * ctypedef npy_uintp      uintp_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_double     float_t\n */\ntypedef npy_uintp __pyx_t_5numpy_uintp_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":760\n * ctypedef npy_uintp      uintp_t\n * \n * ctypedef npy_double     float_t             # <<<<<<<<<<<<<<\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t\n */\ntypedef npy_double __pyx_t_5numpy_float_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":761\n * \n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t             # <<<<<<<<<<<<<<\n * ctypedef npy_longdouble longdouble_t\n * \n */\ntypedef npy_double __pyx_t_5numpy_double_t;\n\n/* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":762\n * ctypedef npy_double     float_t\n * ctypedef npy_double     double_t\n * ctypedef npy_longdouble longdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cfloat      cfloat_t\n */\ntypedef npy_longdouble __pyx_t_5numpy_longdouble_t;\n\n/* \"bbox.pyx\":13\n * \n * DTYPE = np.float\n * ctypedef np.float_t DTYPE_t             # <<<<<<<<<<<<<<\n * \n * def bbox_overlaps(boxes, query_boxes):\n */\ntypedef __pyx_t_5numpy_float_t __pyx_t_4bbox_DTYPE_t;\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< float > __pyx_t_float_complex;\n  #else\n    typedef float _Complex __pyx_t_float_complex;\n  #endif\n#else\n    typedef struct { float real, imag; } __pyx_t_float_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);\n\n/* Declarations.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    typedef ::std::complex< double > __pyx_t_double_complex;\n  #else\n    typedef double _Complex __pyx_t_double_complex;\n  #endif\n#else\n    typedef struct { double real, imag; } __pyx_t_double_complex;\n#endif\nstatic CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);\n\n\n/*--- Type declarations ---*/\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":764\n * ctypedef npy_longdouble longdouble_t\n * \n * ctypedef npy_cfloat      cfloat_t             # <<<<<<<<<<<<<<\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t\n */\ntypedef npy_cfloat __pyx_t_5numpy_cfloat_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":765\n * \n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t             # <<<<<<<<<<<<<<\n * ctypedef npy_clongdouble clongdouble_t\n * \n */\ntypedef npy_cdouble 
__pyx_t_5numpy_cdouble_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":766\n * ctypedef npy_cfloat      cfloat_t\n * ctypedef npy_cdouble     cdouble_t\n * ctypedef npy_clongdouble clongdouble_t             # <<<<<<<<<<<<<<\n * \n * ctypedef npy_cdouble     complex_t\n */\ntypedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":768\n * ctypedef npy_clongdouble clongdouble_t\n * \n * ctypedef npy_cdouble     complex_t             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n */\ntypedef npy_cdouble __pyx_t_5numpy_complex_t;\n\n/* --- Runtime support code (head) --- */\n/* Refnanny.proto */\n#ifndef CYTHON_REFNANNY\n  #define CYTHON_REFNANNY 0\n#endif\n#if CYTHON_REFNANNY\n  typedef struct {\n    void (*INCREF)(void*, PyObject*, int);\n    void (*DECREF)(void*, PyObject*, int);\n    void (*GOTREF)(void*, PyObject*, int);\n    void (*GIVEREF)(void*, PyObject*, int);\n    void* (*SetupContext)(const char*, int, const char*);\n    void (*FinishContext)(void**);\n  } __Pyx_RefNannyAPIStruct;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;\n  static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);\n  #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;\n#ifdef WITH_THREAD\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          if (acquire_gil) {\\\n              PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n              PyGILState_Release(__pyx_gilstate_save);\\\n          } else {\\\n              __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\\\n          }\n#else\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\\\n          __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, 
__FILE__)\n#endif\n  #define __Pyx_RefNannyFinishContext()\\\n          __Pyx_RefNanny->FinishContext(&__pyx_refnanny)\n  #define __Pyx_INCREF(r)  __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_DECREF(r)  __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GOTREF(r)  __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)\n  #define __Pyx_XINCREF(r)  do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)\n  #define __Pyx_XDECREF(r)  do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)\n  #define __Pyx_XGOTREF(r)  do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)\n  #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)\n#else\n  #define __Pyx_RefNannyDeclarations\n  #define __Pyx_RefNannySetupContext(name, acquire_gil)\n  #define __Pyx_RefNannyFinishContext()\n  #define __Pyx_INCREF(r) Py_INCREF(r)\n  #define __Pyx_DECREF(r) Py_DECREF(r)\n  #define __Pyx_GOTREF(r)\n  #define __Pyx_GIVEREF(r)\n  #define __Pyx_XINCREF(r) Py_XINCREF(r)\n  #define __Pyx_XDECREF(r) Py_XDECREF(r)\n  #define __Pyx_XGOTREF(r)\n  #define __Pyx_XGIVEREF(r)\n#endif\n#define __Pyx_XDECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_XDECREF(tmp);\\\n    } while (0)\n#define __Pyx_DECREF_SET(r, v) do {\\\n        PyObject *tmp = (PyObject *) r;\\\n        r = v; __Pyx_DECREF(tmp);\\\n    } while (0)\n#define __Pyx_CLEAR(r)    do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)\n#define __Pyx_XCLEAR(r)   do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)\n\n/* PyObjectGetAttrStr.proto */\n#if CYTHON_USE_TYPE_SLOTS\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {\n    PyTypeObject* tp = Py_TYPE(obj);\n    if (likely(tp->tp_getattro))\n        return 
tp->tp_getattro(obj, attr_name);\n#if PY_MAJOR_VERSION < 3\n    if (likely(tp->tp_getattr))\n        return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));\n#endif\n    return PyObject_GetAttr(obj, attr_name);\n}\n#else\n#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)\n#endif\n\n/* GetBuiltinName.proto */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name);\n\n/* RaiseArgTupleInvalid.proto */\nstatic void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,\n    Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);\n\n/* RaiseDoubleKeywords.proto */\nstatic void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);\n\n/* ParseKeywords.proto */\nstatic int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\\\n    PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\\\n    const char* function_name);\n\n/* GetModuleGlobalName.proto */\nstatic CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name);\n\n/* PyObjectCall.proto */\n#if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);\n#else\n#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)\n#endif\n\n/* ExtTypeTest.proto */\nstatic CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);\n\n/* BufferFormatCheck.proto */\nstatic CYTHON_INLINE int  __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj,\n    __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type);\n\n/* PyThreadStateGet.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyThreadState_declare 
 PyThreadState *__pyx_tstate;\n#define __Pyx_PyThreadState_assign  __pyx_tstate = PyThreadState_GET();\n#else\n#define __Pyx_PyThreadState_declare\n#define __Pyx_PyThreadState_assign\n#endif\n\n/* PyErrFetchRestore.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ErrRestoreWithState(type, value, tb)  __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)    __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)    __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\n#define __Pyx_ErrRestoreWithState(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetchWithState(type, value, tb)  PyErr_Fetch(type, value, tb)\n#define __Pyx_ErrRestore(type, value, tb)  PyErr_Restore(type, value, tb)\n#define __Pyx_ErrFetch(type, value, tb)  PyErr_Fetch(type, value, tb)\n#endif\n\n/* BufferIndexError.proto */\nstatic void __Pyx_RaiseBufferIndexError(int axis);\n\n#define __Pyx_BufPtrStrided2d(type, buf, i0, s0, i1, s1) (type)((char*)buf + i0 * s0 + i1 * s1)\n/* RaiseException.proto */\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);\n\n/* DictGetItem.proto */\n#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY\nstatic PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {\n    PyObject *value;\n    value = PyDict_GetItemWithError(d, key);\n    if (unlikely(!value)) {\n        if (!PyErr_Occurred()) {\n            PyObject* args = PyTuple_Pack(1, key);\n            if (likely(args))\n                PyErr_SetObject(PyExc_KeyError, args);\n            
Py_XDECREF(args);\n        }\n        return NULL;\n    }\n    Py_INCREF(value);\n    return value;\n}\n#else\n    #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)\n#endif\n\n/* RaiseTooManyValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);\n\n/* RaiseNeedMoreValuesToUnpack.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);\n\n/* RaiseNoneIterError.proto */\nstatic CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);\n\n/* SaveResetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_ExceptionSave(type, value, tb)  __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#define __Pyx_ExceptionReset(type, value, tb)  __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);\n#else\n#define __Pyx_ExceptionSave(type, value, tb)   PyErr_GetExcInfo(type, value, tb)\n#define __Pyx_ExceptionReset(type, value, tb)  PyErr_SetExcInfo(type, value, tb)\n#endif\n\n/* PyErrExceptionMatches.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);\n#else\n#define __Pyx_PyErr_ExceptionMatches(err)  PyErr_ExceptionMatches(err)\n#endif\n\n/* GetException.proto */\n#if CYTHON_FAST_THREAD_STATE\n#define __Pyx_GetException(type, value, tb)  __Pyx__GetException(__pyx_tstate, type, value, tb)\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);\n#endif\n\n/* Import.proto */\nstatic PyObject *__Pyx_Import(PyObject *name, PyObject 
*from_list, int level);\n\n/* CodeObjectCache.proto */\ntypedef struct {\n    PyCodeObject* code_object;\n    int code_line;\n} __Pyx_CodeObjectCacheEntry;\nstruct __Pyx_CodeObjectCache {\n    int count;\n    int max_count;\n    __Pyx_CodeObjectCacheEntry* entries;\n};\nstatic struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};\nstatic int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);\nstatic PyCodeObject *__pyx_find_code_object(int code_line);\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);\n\n/* AddTraceback.proto */\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename);\n\n/* BufferStructDeclare.proto */\ntypedef struct {\n  Py_ssize_t shape, strides, suboffsets;\n} __Pyx_Buf_DimInfo;\ntypedef struct {\n  size_t refcount;\n  Py_buffer pybuffer;\n} __Pyx_Buffer;\ntypedef struct {\n  __Pyx_Buffer *rcbuffer;\n  char *data;\n  __Pyx_Buf_DimInfo diminfo[8];\n} __Pyx_LocalBuf_ND;\n\n#if PY_MAJOR_VERSION < 3\n    static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);\n    static void __Pyx_ReleaseBuffer(Py_buffer *view);\n#else\n    #define __Pyx_GetBuffer PyObject_GetBuffer\n    #define __Pyx_ReleaseBuffer PyBuffer_Release\n#endif\n\n\n/* None.proto */\nstatic Py_ssize_t __Pyx_zeros[] = {0, 0, 0, 0, 0, 0, 0, 0};\nstatic Py_ssize_t __Pyx_minusones[] = {-1, -1, -1, -1, -1, -1, -1, -1};\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_unsigned_int(unsigned int value);\n\n/* RealImag.proto */\n#if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    #define __Pyx_CREAL(z) ((z).real())\n    #define __Pyx_CIMAG(z) ((z).imag())\n  #else\n    #define __Pyx_CREAL(z) (__real__(z))\n    #define __Pyx_CIMAG(z) (__imag__(z))\n  #endif\n#else\n    #define __Pyx_CREAL(z) ((z).real)\n    #define __Pyx_CIMAG(z) ((z).imag)\n#endif\n#if defined(__cplusplus) && CYTHON_CCOMPLEX\\\n        && 
(defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)\n    #define __Pyx_SET_CREAL(z,x) ((z).real(x))\n    #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))\n#else\n    #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)\n    #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_float(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_float(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_float(a, b) ((a)-(b))\n    #define __Pyx_c_prod_float(a, b) ((a)*(b))\n    #define __Pyx_c_quot_float(a, b) ((a)/(b))\n    #define __Pyx_c_neg_float(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_float(z) ((z)==(float)0)\n    #define __Pyx_c_conj_float(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (::std::abs(z))\n        #define __Pyx_c_pow_float(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_float(z) ((z)==0)\n    #define __Pyx_c_conj_float(z)    (conjf(z))\n    #if 1\n        #define __Pyx_c_abs_float(z)     (cabsf(z))\n        #define __Pyx_c_pow_float(a, b)  (cpowf(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);\n    static CYTHON_INLINE __pyx_t_float_complex 
__Pyx_c_conj_float(__pyx_t_float_complex);\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);\n    #endif\n#endif\n\n/* Arithmetic.proto */\n#if CYTHON_CCOMPLEX\n    #define __Pyx_c_eq_double(a, b)   ((a)==(b))\n    #define __Pyx_c_sum_double(a, b)  ((a)+(b))\n    #define __Pyx_c_diff_double(a, b) ((a)-(b))\n    #define __Pyx_c_prod_double(a, b) ((a)*(b))\n    #define __Pyx_c_quot_double(a, b) ((a)/(b))\n    #define __Pyx_c_neg_double(a)     (-(a))\n  #ifdef __cplusplus\n    #define __Pyx_c_is_zero_double(z) ((z)==(double)0)\n    #define __Pyx_c_conj_double(z)    (::std::conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (::std::abs(z))\n        #define __Pyx_c_pow_double(a, b)  (::std::pow(a, b))\n    #endif\n  #else\n    #define __Pyx_c_is_zero_double(z) ((z)==0)\n    #define __Pyx_c_conj_double(z)    (conj(z))\n    #if 1\n        #define __Pyx_c_abs_double(z)     (cabs(z))\n        #define __Pyx_c_pow_double(a, b)  (cpow(a, b))\n    #endif\n #endif\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);\n    #if 1\n        static 
CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);\n    #endif\n#endif\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE unsigned int __Pyx_PyInt_As_unsigned_int(PyObject *);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);\n\n/* CIntToPy.proto */\nstatic CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);\n\n/* CIntFromPy.proto */\nstatic CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);\n\n/* CheckBinaryVersion.proto */\nstatic int __Pyx_check_binary_version(void);\n\n/* PyIdentifierFromString.proto */\n#if !defined(__Pyx_PyIdentifier_FromString)\n#if PY_MAJOR_VERSION < 3\n  #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s)\n#else\n  #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s)\n#endif\n#endif\n\n/* ModuleImport.proto */\nstatic PyObject *__Pyx_ImportModule(const char *name);\n\n/* TypeImport.proto */\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict);\n\n/* InitStrings.proto */\nstatic int __Pyx_InitStrings(__Pyx_StringTabEntry *t);\n\n\n/* Module declarations from 'cython' */\n\n/* Module declarations from 'cpython.buffer' */\n\n/* Module declarations from 'libc.string' */\n\n/* Module declarations from 'libc.stdio' */\n\n/* Module declarations from '__builtin__' */\n\n/* Module declarations from 'cpython.type' */\nstatic PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;\n\n/* Module declarations from 'cpython' */\n\n/* Module declarations from 'cpython.object' */\n\n/* Module declarations from 'cpython.ref' */\n\n/* Module declarations from 'libc.stdlib' */\n\n/* Module declarations from 
'numpy' */\n\n/* Module declarations from 'numpy' */\nstatic PyTypeObject *__pyx_ptype_5numpy_dtype = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;\nstatic PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/\n\n/* Module declarations from 'bbox' */\nstatic PyArrayObject *__pyx_f_4bbox_bbox_overlaps_c(PyArrayObject *, PyArrayObject *); /*proto*/\nstatic PyArrayObject *__pyx_f_4bbox_bbox_intersections_c(PyArrayObject *, PyArrayObject *); /*proto*/\nstatic __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t = { \"DTYPE_t\", NULL, sizeof(__pyx_t_4bbox_DTYPE_t), { 0 }, 0, 'R', 0, 0 };\n#define __Pyx_MODULE_NAME \"bbox\"\nint __pyx_module_is_main_bbox = 0;\n\n/* Implementation of 'bbox' */\nstatic PyObject *__pyx_builtin_range;\nstatic PyObject *__pyx_builtin_ValueError;\nstatic PyObject *__pyx_builtin_RuntimeError;\nstatic PyObject *__pyx_builtin_ImportError;\nstatic const char __pyx_k_np[] = \"np\";\nstatic const char __pyx_k_bbox[] = \"bbox\";\nstatic const char __pyx_k_main[] = \"__main__\";\nstatic const char __pyx_k_test[] = \"__test__\";\nstatic const char __pyx_k_DTYPE[] = \"DTYPE\";\nstatic const char __pyx_k_boxes[] = \"boxes\";\nstatic const char __pyx_k_dtype[] = \"dtype\";\nstatic const char __pyx_k_float[] = \"float\";\nstatic const char __pyx_k_numpy[] = \"numpy\";\nstatic const char __pyx_k_range[] = \"range\";\nstatic const char __pyx_k_zeros[] = \"zeros\";\nstatic const char __pyx_k_import[] = \"__import__\";\nstatic const char __pyx_k_ValueError[] = \"ValueError\";\nstatic const char __pyx_k_ImportError[] = \"ImportError\";\nstatic const char __pyx_k_query_boxes[] = \"query_boxes\";\nstatic const char __pyx_k_RuntimeError[] = \"RuntimeError\";\nstatic const char __pyx_k_boxes_contig[] = \"boxes_contig\";\nstatic const char 
__pyx_k_query_contig[] = \"query_contig\";\nstatic const char __pyx_k_bbox_overlaps[] = \"bbox_overlaps\";\nstatic const char __pyx_k_ascontiguousarray[] = \"ascontiguousarray\";\nstatic const char __pyx_k_bbox_intersections[] = \"bbox_intersections\";\nstatic const char __pyx_k_ndarray_is_not_C_contiguous[] = \"ndarray is not C contiguous\";\nstatic const char __pyx_k_Users_rowanz_code_scene_graph_l[] = \"/Users/rowanz/code/scene-graph/lib/fpn/box_intersections_cpu/bbox.pyx\";\nstatic const char __pyx_k_numpy_core_multiarray_failed_to[] = \"numpy.core.multiarray failed to import\";\nstatic const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = \"unknown dtype code in numpy.pxd (%d)\";\nstatic const char __pyx_k_Format_string_allocated_too_shor[] = \"Format string allocated too short, see comment in numpy.pxd\";\nstatic const char __pyx_k_Non_native_byte_order_not_suppor[] = \"Non-native byte order not supported\";\nstatic const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = \"ndarray is not Fortran contiguous\";\nstatic const char __pyx_k_numpy_core_umath_failed_to_impor[] = \"numpy.core.umath failed to import\";\nstatic const char __pyx_k_Format_string_allocated_too_shor_2[] = \"Format string allocated too short.\";\nstatic PyObject *__pyx_n_s_DTYPE;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor;\nstatic PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;\nstatic PyObject *__pyx_n_s_ImportError;\nstatic PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;\nstatic PyObject *__pyx_n_s_RuntimeError;\nstatic PyObject *__pyx_kp_s_Users_rowanz_code_scene_graph_l;\nstatic PyObject *__pyx_n_s_ValueError;\nstatic PyObject *__pyx_n_s_ascontiguousarray;\nstatic PyObject *__pyx_n_s_bbox;\nstatic PyObject *__pyx_n_s_bbox_intersections;\nstatic PyObject *__pyx_n_s_bbox_overlaps;\nstatic PyObject *__pyx_n_s_boxes;\nstatic PyObject *__pyx_n_s_boxes_contig;\nstatic PyObject *__pyx_n_s_dtype;\nstatic PyObject *__pyx_n_s_float;\nstatic PyObject 
*__pyx_n_s_import;\nstatic PyObject *__pyx_n_s_main;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;\nstatic PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;\nstatic PyObject *__pyx_n_s_np;\nstatic PyObject *__pyx_n_s_numpy;\nstatic PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;\nstatic PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor;\nstatic PyObject *__pyx_n_s_query_boxes;\nstatic PyObject *__pyx_n_s_query_contig;\nstatic PyObject *__pyx_n_s_range;\nstatic PyObject *__pyx_n_s_test;\nstatic PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;\nstatic PyObject *__pyx_n_s_zeros;\nstatic PyObject *__pyx_pf_4bbox_bbox_overlaps(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_boxes, PyObject *__pyx_v_query_boxes); /* proto */\nstatic PyObject *__pyx_pf_4bbox_2bbox_intersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_boxes, PyObject *__pyx_v_query_boxes); /* proto */\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */\nstatic PyObject *__pyx_tuple_;\nstatic PyObject *__pyx_tuple__2;\nstatic PyObject *__pyx_tuple__3;\nstatic PyObject *__pyx_tuple__4;\nstatic PyObject *__pyx_tuple__5;\nstatic PyObject *__pyx_tuple__6;\nstatic PyObject *__pyx_tuple__7;\nstatic PyObject *__pyx_tuple__8;\nstatic PyObject *__pyx_tuple__9;\nstatic PyObject *__pyx_tuple__10;\nstatic PyObject *__pyx_tuple__12;\nstatic PyObject *__pyx_codeobj__11;\nstatic PyObject *__pyx_codeobj__13;\n\n/* \"bbox.pyx\":15\n * ctypedef np.float_t DTYPE_t\n * \n * def bbox_overlaps(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n\n/* Python wrapper 
*/\nstatic PyObject *__pyx_pw_4bbox_1bbox_overlaps(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_4bbox_1bbox_overlaps = {\"bbox_overlaps\", (PyCFunction)__pyx_pw_4bbox_1bbox_overlaps, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_4bbox_1bbox_overlaps(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_boxes = 0;\n  PyObject *__pyx_v_query_boxes = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"bbox_overlaps (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_boxes,&__pyx_n_s_query_boxes,0};\n    PyObject* values[2] = {0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_boxes)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_query_boxes)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"bbox_overlaps\", 1, 2, 2, 1); __PYX_ERR(0, 15, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, \"bbox_overlaps\") < 0)) __PYX_ERR(0, 15, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n    }\n    __pyx_v_boxes = 
values[0];\n    __pyx_v_query_boxes = values[1];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"bbox_overlaps\", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 15, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"bbox.bbox_overlaps\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_4bbox_bbox_overlaps(__pyx_self, __pyx_v_boxes, __pyx_v_query_boxes);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_4bbox_bbox_overlaps(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_boxes, PyObject *__pyx_v_query_boxes) {\n  PyArrayObject *__pyx_v_boxes_contig = 0;\n  PyArrayObject *__pyx_v_query_contig = 0;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_boxes_contig;\n  __Pyx_Buffer __pyx_pybuffer_boxes_contig;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_query_contig;\n  __Pyx_Buffer __pyx_pybuffer_query_contig;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyArrayObject *__pyx_t_5 = NULL;\n  PyArrayObject *__pyx_t_6 = NULL;\n  __Pyx_RefNannySetupContext(\"bbox_overlaps\", 0);\n  __pyx_pybuffer_boxes_contig.pybuffer.buf = NULL;\n  __pyx_pybuffer_boxes_contig.refcount = 0;\n  __pyx_pybuffernd_boxes_contig.data = NULL;\n  __pyx_pybuffernd_boxes_contig.rcbuffer = &__pyx_pybuffer_boxes_contig;\n  __pyx_pybuffer_query_contig.pybuffer.buf = NULL;\n  __pyx_pybuffer_query_contig.refcount = 0;\n  __pyx_pybuffernd_query_contig.data = NULL;\n  __pyx_pybuffernd_query_contig.rcbuffer = &__pyx_pybuffer_query_contig;\n\n  /* \"bbox.pyx\":16\n * \n * def bbox_overlaps(boxes, query_boxes):\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     
cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n * \n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_v_boxes);\n  __Pyx_GIVEREF(__pyx_v_boxes);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_boxes);\n  __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 16, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 16, __pyx_L1_error)\n  __pyx_t_5 = ((PyArrayObject *)__pyx_t_4);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer, (PyObject*)__pyx_t_5, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_boxes_contig = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 
16, __pyx_L1_error)\n    } else {__pyx_pybuffernd_boxes_contig.diminfo[0].strides = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_boxes_contig.diminfo[0].shape = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_boxes_contig.diminfo[1].strides = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_boxes_contig.diminfo[1].shape = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_5 = 0;\n  __pyx_v_boxes_contig = ((PyArrayObject *)__pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"bbox.pyx\":17\n * def bbox_overlaps(boxes, query_boxes):\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)             # <<<<<<<<<<<<<<\n * \n *     return bbox_overlaps_c(boxes_contig, query_contig)\n */\n  __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_INCREF(__pyx_v_query_boxes);\n  __Pyx_GIVEREF(__pyx_v_query_boxes);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_query_boxes);\n  __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_2) < 0) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_3, 
__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 17, __pyx_L1_error)\n  __pyx_t_6 = ((PyArrayObject *)__pyx_t_2);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_query_contig = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 17, __pyx_L1_error)\n    } else {__pyx_pybuffernd_query_contig.diminfo[0].strides = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_contig.diminfo[0].shape = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_query_contig.diminfo[1].strides = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_query_contig.diminfo[1].shape = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_6 = 0;\n  __pyx_v_query_contig = ((PyArrayObject *)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"bbox.pyx\":19\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n * \n *     return bbox_overlaps_c(boxes_contig, query_contig)             # <<<<<<<<<<<<<<\n * \n * cdef np.ndarray[DTYPE_t, ndim=2] bbox_overlaps_c(\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_2 = ((PyObject *)__pyx_f_4bbox_bbox_overlaps_c(((PyArrayObject *)__pyx_v_boxes_contig), ((PyArrayObject *)__pyx_v_query_contig))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 19, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_r = __pyx_t_2;\n  
__pyx_t_2 = 0;\n  goto __pyx_L0;\n\n  /* \"bbox.pyx\":15\n * ctypedef np.float_t DTYPE_t\n * \n * def bbox_overlaps(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"bbox.bbox_overlaps\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_boxes_contig);\n  __Pyx_XDECREF((PyObject *)__pyx_v_query_contig);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"bbox.pyx\":21\n *     return bbox_overlaps_c(boxes_contig, query_contig)\n * \n * cdef np.ndarray[DTYPE_t, ndim=2] bbox_overlaps_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] boxes,\n *         np.ndarray[DTYPE_t, ndim=2] query_boxes):\n */\n\nstatic PyArrayObject *__pyx_f_4bbox_bbox_overlaps_c(PyArrayObject *__pyx_v_boxes, PyArrayObject *__pyx_v_query_boxes) {\n  unsigned int __pyx_v_N;\n  unsigned int __pyx_v_K;\n  PyArrayObject *__pyx_v_overlaps = 0;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_iw;\n  __pyx_t_4bbox_DTYPE_t 
__pyx_v_ih;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_box_area;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_ua;\n  unsigned int __pyx_v_k;\n  unsigned int __pyx_v_n;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_boxes;\n  __Pyx_Buffer __pyx_pybuffer_boxes;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_overlaps;\n  __Pyx_Buffer __pyx_pybuffer_overlaps;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_query_boxes;\n  __Pyx_Buffer __pyx_pybuffer_query_boxes;\n  PyArrayObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyArrayObject *__pyx_t_5 = NULL;\n  unsigned int __pyx_t_6;\n  unsigned int __pyx_t_7;\n  size_t __pyx_t_8;\n  Py_ssize_t __pyx_t_9;\n  int __pyx_t_10;\n  size_t __pyx_t_11;\n  Py_ssize_t __pyx_t_12;\n  size_t __pyx_t_13;\n  Py_ssize_t __pyx_t_14;\n  size_t __pyx_t_15;\n  Py_ssize_t __pyx_t_16;\n  unsigned int __pyx_t_17;\n  unsigned int __pyx_t_18;\n  size_t __pyx_t_19;\n  Py_ssize_t __pyx_t_20;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_21;\n  size_t __pyx_t_22;\n  Py_ssize_t __pyx_t_23;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_24;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_25;\n  size_t __pyx_t_26;\n  Py_ssize_t __pyx_t_27;\n  size_t __pyx_t_28;\n  Py_ssize_t __pyx_t_29;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_30;\n  int __pyx_t_31;\n  size_t __pyx_t_32;\n  Py_ssize_t __pyx_t_33;\n  size_t __pyx_t_34;\n  Py_ssize_t __pyx_t_35;\n  size_t __pyx_t_36;\n  Py_ssize_t __pyx_t_37;\n  size_t __pyx_t_38;\n  Py_ssize_t __pyx_t_39;\n  size_t __pyx_t_40;\n  Py_ssize_t __pyx_t_41;\n  size_t __pyx_t_42;\n  Py_ssize_t __pyx_t_43;\n  size_t __pyx_t_44;\n  Py_ssize_t __pyx_t_45;\n  size_t __pyx_t_46;\n  Py_ssize_t __pyx_t_47;\n  size_t __pyx_t_48;\n  size_t __pyx_t_49;\n  __Pyx_RefNannySetupContext(\"bbox_overlaps_c\", 0);\n  __pyx_pybuffer_overlaps.pybuffer.buf = NULL;\n  __pyx_pybuffer_overlaps.refcount = 0;\n  __pyx_pybuffernd_overlaps.data = NULL;\n  __pyx_pybuffernd_overlaps.rcbuffer = &__pyx_pybuffer_overlaps;\n  
__pyx_pybuffer_boxes.pybuffer.buf = NULL;\n  __pyx_pybuffer_boxes.refcount = 0;\n  __pyx_pybuffernd_boxes.data = NULL;\n  __pyx_pybuffernd_boxes.rcbuffer = &__pyx_pybuffer_boxes;\n  __pyx_pybuffer_query_boxes.pybuffer.buf = NULL;\n  __pyx_pybuffer_query_boxes.refcount = 0;\n  __pyx_pybuffernd_query_boxes.data = NULL;\n  __pyx_pybuffernd_query_boxes.rcbuffer = &__pyx_pybuffer_query_boxes;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_boxes, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 21, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_boxes.diminfo[0].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_boxes.diminfo[0].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_boxes.diminfo[1].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_boxes.diminfo[1].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_boxes, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 21, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_query_boxes.diminfo[0].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_boxes.diminfo[0].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_query_boxes.diminfo[1].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_query_boxes.diminfo[1].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[1];\n\n  /* \"bbox.pyx\":33\n *     overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n *     \"\"\"\n *     cdef unsigned int N = boxes.shape[0]             # 
<<<<<<<<<<<<<<\n *     cdef unsigned int K = query_boxes.shape[0]\n *     cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)\n */\n  __pyx_v_N = (__pyx_v_boxes->dimensions[0]);\n\n  /* \"bbox.pyx\":34\n *     \"\"\"\n *     cdef unsigned int N = boxes.shape[0]\n *     cdef unsigned int K = query_boxes.shape[0]             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)\n *     cdef DTYPE_t iw, ih, box_area\n */\n  __pyx_v_K = (__pyx_v_query_boxes->dimensions[0]);\n\n  /* \"bbox.pyx\":35\n *     cdef unsigned int N = boxes.shape[0]\n *     cdef unsigned int K = query_boxes.shape[0]\n *     cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     cdef DTYPE_t iw, ih, box_area\n *     cdef DTYPE_t ua\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_unsigned_int(__pyx_v_N); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = __Pyx_PyInt_From_unsigned_int(__pyx_v_K); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3);\n  __pyx_t_1 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 
= PyDict_New(); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_1) < 0) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 35, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 35, __pyx_L1_error)\n  __pyx_t_5 = ((PyArrayObject *)__pyx_t_1);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer, (PyObject*)__pyx_t_5, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_overlaps = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 35, __pyx_L1_error)\n    } else {__pyx_pybuffernd_overlaps.diminfo[0].strides = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_overlaps.diminfo[0].shape = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_overlaps.diminfo[1].strides = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_overlaps.diminfo[1].shape = __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_5 = 0;\n  __pyx_v_overlaps = ((PyArrayObject *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"bbox.pyx\":39\n *     cdef DTYPE_t ua\n *     cdef unsigned int k, n\n *     for k in range(K):             # <<<<<<<<<<<<<<\n *         box_area = (\n 
*             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n */\n  __pyx_t_6 = __pyx_v_K;\n  for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) {\n    __pyx_v_k = __pyx_t_7;\n\n    /* \"bbox.pyx\":41\n *     for k in range(K):\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *             # <<<<<<<<<<<<<<\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n */\n    __pyx_t_8 = __pyx_v_k;\n    __pyx_t_9 = 2;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_8 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_9 < 0) {\n      __pyx_t_9 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_9 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_9 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 41, __pyx_L1_error)\n    }\n    __pyx_t_11 = __pyx_v_k;\n    __pyx_t_12 = 0;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_11 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_12 < 0) {\n      __pyx_t_12 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_12 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_12 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 41, __pyx_L1_error)\n    }\n\n    /* \"bbox.pyx\":42\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)             # <<<<<<<<<<<<<<\n *         )\n *         for n in range(N):\n */\n    __pyx_t_13 = __pyx_v_k;\n    __pyx_t_14 = 3;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_13 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_14 < 0) 
{\n      __pyx_t_14 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_14 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_14 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 42, __pyx_L1_error)\n    }\n    __pyx_t_15 = __pyx_v_k;\n    __pyx_t_16 = 1;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_15 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_16 < 0) {\n      __pyx_t_16 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_16 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_16 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 42, __pyx_L1_error)\n    }\n\n    /* \"bbox.pyx\":41\n *     for k in range(K):\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *             # <<<<<<<<<<<<<<\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n */\n    __pyx_v_box_area = ((((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_8, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_9, __pyx_pybuffernd_query_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_11, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_12, __pyx_pybuffernd_query_boxes.diminfo[1].strides))) + 1.0) * (((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_13, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_14, __pyx_pybuffernd_query_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_15, 
__pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_16, __pyx_pybuffernd_query_boxes.diminfo[1].strides))) + 1.0));\n\n    /* \"bbox.pyx\":44\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n *         for n in range(N):             # <<<<<<<<<<<<<<\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -\n */\n    __pyx_t_17 = __pyx_v_N;\n    for (__pyx_t_18 = 0; __pyx_t_18 < __pyx_t_17; __pyx_t_18+=1) {\n      __pyx_v_n = __pyx_t_18;\n\n      /* \"bbox.pyx\":46\n *         for n in range(N):\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -             # <<<<<<<<<<<<<<\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n */\n      __pyx_t_19 = __pyx_v_k;\n      __pyx_t_20 = 2;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_19 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_20 < 0) {\n        __pyx_t_20 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_20 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_20 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 46, __pyx_L1_error)\n      }\n      __pyx_t_21 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_19, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_20, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n      __pyx_t_22 = __pyx_v_n;\n      __pyx_t_23 = 2;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_22 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_23 < 0) {\n        __pyx_t_23 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_23 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_23 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 
1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 46, __pyx_L1_error)\n      }\n      __pyx_t_24 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_22, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_23, __pyx_pybuffernd_boxes.diminfo[1].strides));\n      if (((__pyx_t_21 < __pyx_t_24) != 0)) {\n        __pyx_t_25 = __pyx_t_21;\n      } else {\n        __pyx_t_25 = __pyx_t_24;\n      }\n\n      /* \"bbox.pyx\":47\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1             # <<<<<<<<<<<<<<\n *             )\n *             if iw > 0:\n */\n      __pyx_t_26 = __pyx_v_k;\n      __pyx_t_27 = 0;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_26 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_27 < 0) {\n        __pyx_t_27 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_27 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_27 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 47, __pyx_L1_error)\n      }\n      __pyx_t_21 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_26, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_27, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n      __pyx_t_28 = __pyx_v_n;\n      __pyx_t_29 = 0;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_28 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_29 < 0) {\n        __pyx_t_29 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_29 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_29 >= 
__pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 47, __pyx_L1_error)\n      }\n      __pyx_t_24 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_28, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_29, __pyx_pybuffernd_boxes.diminfo[1].strides));\n      if (((__pyx_t_21 > __pyx_t_24) != 0)) {\n        __pyx_t_30 = __pyx_t_21;\n      } else {\n        __pyx_t_30 = __pyx_t_24;\n      }\n\n      /* \"bbox.pyx\":46\n *         for n in range(N):\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -             # <<<<<<<<<<<<<<\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n */\n      __pyx_v_iw = ((__pyx_t_25 - __pyx_t_30) + 1.0);\n\n      /* \"bbox.pyx\":49\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n *             if iw > 0:             # <<<<<<<<<<<<<<\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n */\n      __pyx_t_31 = ((__pyx_v_iw > 0.0) != 0);\n      if (__pyx_t_31) {\n\n        /* \"bbox.pyx\":51\n *             if iw > 0:\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -             # <<<<<<<<<<<<<<\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n */\n        __pyx_t_32 = __pyx_v_k;\n        __pyx_t_33 = 3;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_32 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_33 < 0) {\n          __pyx_t_33 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_33 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_33 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          
__Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 51, __pyx_L1_error)\n        }\n        __pyx_t_30 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_32, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_33, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n        __pyx_t_34 = __pyx_v_n;\n        __pyx_t_35 = 3;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_34 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_35 < 0) {\n          __pyx_t_35 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_35 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_35 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 51, __pyx_L1_error)\n        }\n        __pyx_t_25 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_34, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_35, __pyx_pybuffernd_boxes.diminfo[1].strides));\n        if (((__pyx_t_30 < __pyx_t_25) != 0)) {\n          __pyx_t_21 = __pyx_t_30;\n        } else {\n          __pyx_t_21 = __pyx_t_25;\n        }\n\n        /* \"bbox.pyx\":52\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1             # <<<<<<<<<<<<<<\n *                 )\n *                 if ih > 0:\n */\n        __pyx_t_36 = __pyx_v_k;\n        __pyx_t_37 = 1;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_36 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_37 < 0) {\n          __pyx_t_37 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_37 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_37 >= 
__pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 52, __pyx_L1_error)\n        }\n        __pyx_t_30 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_36, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_37, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n        __pyx_t_38 = __pyx_v_n;\n        __pyx_t_39 = 1;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_38 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_39 < 0) {\n          __pyx_t_39 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_39 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_39 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 52, __pyx_L1_error)\n        }\n        __pyx_t_25 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_38, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_39, __pyx_pybuffernd_boxes.diminfo[1].strides));\n        if (((__pyx_t_30 > __pyx_t_25) != 0)) {\n          __pyx_t_24 = __pyx_t_30;\n        } else {\n          __pyx_t_24 = __pyx_t_25;\n        }\n\n        /* \"bbox.pyx\":51\n *             if iw > 0:\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -             # <<<<<<<<<<<<<<\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n */\n        __pyx_v_ih = ((__pyx_t_21 - __pyx_t_24) + 1.0);\n\n        /* \"bbox.pyx\":54\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n *                 if ih > 0:             # <<<<<<<<<<<<<<\n *                     ua = float(\n *                         
(boxes[n, 2] - boxes[n, 0] + 1) *\n */\n        __pyx_t_31 = ((__pyx_v_ih > 0.0) != 0);\n        if (__pyx_t_31) {\n\n          /* \"bbox.pyx\":56\n *                 if ih > 0:\n *                     ua = float(\n *                         (boxes[n, 2] - boxes[n, 0] + 1) *             # <<<<<<<<<<<<<<\n *                         (boxes[n, 3] - boxes[n, 1] + 1) +\n *                         box_area - iw * ih\n */\n          __pyx_t_40 = __pyx_v_n;\n          __pyx_t_41 = 2;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_40 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (__pyx_t_41 < 0) {\n            __pyx_t_41 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n            if (unlikely(__pyx_t_41 < 0)) __pyx_t_10 = 1;\n          } else if (unlikely(__pyx_t_41 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 56, __pyx_L1_error)\n          }\n          __pyx_t_42 = __pyx_v_n;\n          __pyx_t_43 = 0;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_42 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (__pyx_t_43 < 0) {\n            __pyx_t_43 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n            if (unlikely(__pyx_t_43 < 0)) __pyx_t_10 = 1;\n          } else if (unlikely(__pyx_t_43 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 56, __pyx_L1_error)\n          }\n\n          /* \"bbox.pyx\":57\n *                     ua = float(\n *                         (boxes[n, 2] - boxes[n, 0] + 1) *\n *                         (boxes[n, 3] - boxes[n, 1] + 1) +             # <<<<<<<<<<<<<<\n *                         box_area - iw * ih\n *                     )\n */\n          __pyx_t_44 = __pyx_v_n;\n          
__pyx_t_45 = 3;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_44 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (__pyx_t_45 < 0) {\n            __pyx_t_45 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n            if (unlikely(__pyx_t_45 < 0)) __pyx_t_10 = 1;\n          } else if (unlikely(__pyx_t_45 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 57, __pyx_L1_error)\n          }\n          __pyx_t_46 = __pyx_v_n;\n          __pyx_t_47 = 1;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_46 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (__pyx_t_47 < 0) {\n            __pyx_t_47 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n            if (unlikely(__pyx_t_47 < 0)) __pyx_t_10 = 1;\n          } else if (unlikely(__pyx_t_47 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 57, __pyx_L1_error)\n          }\n\n          /* \"bbox.pyx\":55\n *                 )\n *                 if ih > 0:\n *                     ua = float(             # <<<<<<<<<<<<<<\n *                         (boxes[n, 2] - boxes[n, 0] + 1) *\n *                         (boxes[n, 3] - boxes[n, 1] + 1) +\n */\n          __pyx_v_ua = ((double)((((((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_40, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_41, __pyx_pybuffernd_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_42, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_43, __pyx_pybuffernd_boxes.diminfo[1].strides))) + 1.0) * (((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, 
__pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_44, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_45, __pyx_pybuffernd_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_46, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_47, __pyx_pybuffernd_boxes.diminfo[1].strides))) + 1.0)) + __pyx_v_box_area) - (__pyx_v_iw * __pyx_v_ih)));\n\n          /* \"bbox.pyx\":60\n *                         box_area - iw * ih\n *                     )\n *                     overlaps[n, k] = iw * ih / ua             # <<<<<<<<<<<<<<\n *     return overlaps\n * \n */\n          __pyx_t_24 = (__pyx_v_iw * __pyx_v_ih);\n          if (unlikely(__pyx_v_ua == 0)) {\n            PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n            __PYX_ERR(0, 60, __pyx_L1_error)\n          }\n          __pyx_t_48 = __pyx_v_n;\n          __pyx_t_49 = __pyx_v_k;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_48 >= (size_t)__pyx_pybuffernd_overlaps.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (unlikely(__pyx_t_49 >= (size_t)__pyx_pybuffernd_overlaps.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 60, __pyx_L1_error)\n          }\n          *__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_overlaps.rcbuffer->pybuffer.buf, __pyx_t_48, __pyx_pybuffernd_overlaps.diminfo[0].strides, __pyx_t_49, __pyx_pybuffernd_overlaps.diminfo[1].strides) = (__pyx_t_24 / __pyx_v_ua);\n\n          /* \"bbox.pyx\":54\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n *                 if ih > 0:             # <<<<<<<<<<<<<<\n *                     ua = float(\n *                         (boxes[n, 2] - boxes[n, 0] + 1) *\n */\n        }\n\n        /* \"bbox.pyx\":49\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             
)\n *             if iw > 0:             # <<<<<<<<<<<<<<\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n */\n      }\n    }\n  }\n\n  /* \"bbox.pyx\":61\n *                     )\n *                     overlaps[n, k] = iw * ih / ua\n *     return overlaps             # <<<<<<<<<<<<<<\n * \n * \n */\n  __Pyx_XDECREF(((PyObject *)__pyx_r));\n  __Pyx_INCREF(((PyObject *)__pyx_v_overlaps));\n  __pyx_r = ((PyArrayObject *)__pyx_v_overlaps);\n  goto __pyx_L0;\n\n  /* \"bbox.pyx\":21\n *     return bbox_overlaps_c(boxes_contig, query_contig)\n * \n * cdef np.ndarray[DTYPE_t, ndim=2] bbox_overlaps_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] boxes,\n *         np.ndarray[DTYPE_t, ndim=2] query_boxes):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"bbox.bbox_overlaps_c\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_overlaps.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_overlaps);\n  __Pyx_XGIVEREF((PyObject *)__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"bbox.pyx\":64\n * \n * \n * def 
bbox_intersections(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n\n/* Python wrapper */\nstatic PyObject *__pyx_pw_4bbox_3bbox_intersections(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/\nstatic PyMethodDef __pyx_mdef_4bbox_3bbox_intersections = {\"bbox_intersections\", (PyCFunction)__pyx_pw_4bbox_3bbox_intersections, METH_VARARGS|METH_KEYWORDS, 0};\nstatic PyObject *__pyx_pw_4bbox_3bbox_intersections(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {\n  PyObject *__pyx_v_boxes = 0;\n  PyObject *__pyx_v_query_boxes = 0;\n  PyObject *__pyx_r = 0;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"bbox_intersections (wrapper)\", 0);\n  {\n    static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_boxes,&__pyx_n_s_query_boxes,0};\n    PyObject* values[2] = {0,0};\n    if (unlikely(__pyx_kwds)) {\n      Py_ssize_t kw_args;\n      const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);\n      switch (pos_args) {\n        case  2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n        case  1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n        case  0: break;\n        default: goto __pyx_L5_argtuple_error;\n      }\n      kw_args = PyDict_Size(__pyx_kwds);\n      switch (pos_args) {\n        case  0:\n        if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_boxes)) != 0)) kw_args--;\n        else goto __pyx_L5_argtuple_error;\n        case  1:\n        if (likely((values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_query_boxes)) != 0)) kw_args--;\n        else {\n          __Pyx_RaiseArgtupleInvalid(\"bbox_intersections\", 1, 2, 2, 1); __PYX_ERR(0, 64, __pyx_L3_error)\n        }\n      }\n      if (unlikely(kw_args > 0)) {\n        if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, 
__pyx_pyargnames, 0, values, pos_args, \"bbox_intersections\") < 0)) __PYX_ERR(0, 64, __pyx_L3_error)\n      }\n    } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {\n      goto __pyx_L5_argtuple_error;\n    } else {\n      values[0] = PyTuple_GET_ITEM(__pyx_args, 0);\n      values[1] = PyTuple_GET_ITEM(__pyx_args, 1);\n    }\n    __pyx_v_boxes = values[0];\n    __pyx_v_query_boxes = values[1];\n  }\n  goto __pyx_L4_argument_unpacking_done;\n  __pyx_L5_argtuple_error:;\n  __Pyx_RaiseArgtupleInvalid(\"bbox_intersections\", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 64, __pyx_L3_error)\n  __pyx_L3_error:;\n  __Pyx_AddTraceback(\"bbox.bbox_intersections\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __Pyx_RefNannyFinishContext();\n  return NULL;\n  __pyx_L4_argument_unpacking_done:;\n  __pyx_r = __pyx_pf_4bbox_2bbox_intersections(__pyx_self, __pyx_v_boxes, __pyx_v_query_boxes);\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyObject *__pyx_pf_4bbox_2bbox_intersections(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_boxes, PyObject *__pyx_v_query_boxes) {\n  PyArrayObject *__pyx_v_boxes_contig = 0;\n  PyArrayObject *__pyx_v_query_contig = 0;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_boxes_contig;\n  __Pyx_Buffer __pyx_pybuffer_boxes_contig;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_query_contig;\n  __Pyx_Buffer __pyx_pybuffer_query_contig;\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyArrayObject *__pyx_t_5 = NULL;\n  PyArrayObject *__pyx_t_6 = NULL;\n  __Pyx_RefNannySetupContext(\"bbox_intersections\", 0);\n  __pyx_pybuffer_boxes_contig.pybuffer.buf = NULL;\n  __pyx_pybuffer_boxes_contig.refcount = 0;\n  __pyx_pybuffernd_boxes_contig.data = NULL;\n  __pyx_pybuffernd_boxes_contig.rcbuffer = &__pyx_pybuffer_boxes_contig;\n  __pyx_pybuffer_query_contig.pybuffer.buf = 
NULL;\n  __pyx_pybuffer_query_contig.refcount = 0;\n  __pyx_pybuffernd_query_contig.data = NULL;\n  __pyx_pybuffernd_query_contig.rcbuffer = &__pyx_pybuffer_query_contig;\n\n  /* \"bbox.pyx\":65\n * \n * def bbox_intersections(boxes, query_boxes):\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n * \n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_INCREF(__pyx_v_boxes);\n  __Pyx_GIVEREF(__pyx_v_boxes);\n  PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_boxes);\n  __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 65, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 65, __pyx_L1_error)\n  __pyx_t_5 = ((PyArrayObject *)__pyx_t_4);\n  {\n    __Pyx_BufFmt_StackElem 
__pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer, (PyObject*)__pyx_t_5, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_boxes_contig = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 65, __pyx_L1_error)\n    } else {__pyx_pybuffernd_boxes_contig.diminfo[0].strides = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_boxes_contig.diminfo[0].shape = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_boxes_contig.diminfo[1].strides = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_boxes_contig.diminfo[1].shape = __pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_5 = 0;\n  __pyx_v_boxes_contig = ((PyArrayObject *)__pyx_t_4);\n  __pyx_t_4 = 0;\n\n  /* \"bbox.pyx\":66\n * def bbox_intersections(boxes, query_boxes):\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)             # <<<<<<<<<<<<<<\n * \n *     return bbox_intersections_c(boxes_contig, query_contig)\n */\n  __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_INCREF(__pyx_v_query_boxes);\n  __Pyx_GIVEREF(__pyx_v_query_boxes);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_query_boxes);\n  __pyx_t_1 = PyDict_New(); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_2) < 0) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __pyx_t_2 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 66, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 66, __pyx_L1_error)\n  __pyx_t_6 = ((PyArrayObject *)__pyx_t_2);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer, (PyObject*)__pyx_t_6, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_query_contig = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 66, __pyx_L1_error)\n    } else {__pyx_pybuffernd_query_contig.diminfo[0].strides = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_contig.diminfo[0].shape = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_query_contig.diminfo[1].strides = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_query_contig.diminfo[1].shape = __pyx_pybuffernd_query_contig.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_6 = 0;\n  __pyx_v_query_contig = ((PyArrayObject *)__pyx_t_2);\n  __pyx_t_2 = 0;\n\n  /* \"bbox.pyx\":68\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n * \n *     return 
bbox_intersections_c(boxes_contig, query_contig)             # <<<<<<<<<<<<<<\n * \n * \n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_2 = ((PyObject *)__pyx_f_4bbox_bbox_intersections_c(((PyArrayObject *)__pyx_v_boxes_contig), ((PyArrayObject *)__pyx_v_query_contig))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 68, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __pyx_r = __pyx_t_2;\n  __pyx_t_2 = 0;\n  goto __pyx_L0;\n\n  /* \"bbox.pyx\":64\n * \n * \n * def bbox_intersections(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"bbox.bbox_intersections\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes_contig.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_contig.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_boxes_contig);\n  __Pyx_XDECREF((PyObject *)__pyx_v_query_contig);\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"bbox.pyx\":71\n * \n * \n * cdef np.ndarray[DTYPE_t, ndim=2] bbox_intersections_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] boxes,\n *         np.ndarray[DTYPE_t, 
ndim=2] query_boxes):\n */\n\nstatic PyArrayObject *__pyx_f_4bbox_bbox_intersections_c(PyArrayObject *__pyx_v_boxes, PyArrayObject *__pyx_v_query_boxes) {\n  unsigned int __pyx_v_N;\n  unsigned int __pyx_v_K;\n  PyArrayObject *__pyx_v_intersec = 0;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_iw;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_ih;\n  __pyx_t_4bbox_DTYPE_t __pyx_v_box_area;\n  unsigned int __pyx_v_k;\n  unsigned int __pyx_v_n;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_boxes;\n  __Pyx_Buffer __pyx_pybuffer_boxes;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_intersec;\n  __Pyx_Buffer __pyx_pybuffer_intersec;\n  __Pyx_LocalBuf_ND __pyx_pybuffernd_query_boxes;\n  __Pyx_Buffer __pyx_pybuffer_query_boxes;\n  PyArrayObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  PyArrayObject *__pyx_t_5 = NULL;\n  unsigned int __pyx_t_6;\n  unsigned int __pyx_t_7;\n  size_t __pyx_t_8;\n  Py_ssize_t __pyx_t_9;\n  int __pyx_t_10;\n  size_t __pyx_t_11;\n  Py_ssize_t __pyx_t_12;\n  size_t __pyx_t_13;\n  Py_ssize_t __pyx_t_14;\n  size_t __pyx_t_15;\n  Py_ssize_t __pyx_t_16;\n  unsigned int __pyx_t_17;\n  unsigned int __pyx_t_18;\n  size_t __pyx_t_19;\n  Py_ssize_t __pyx_t_20;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_21;\n  size_t __pyx_t_22;\n  Py_ssize_t __pyx_t_23;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_24;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_25;\n  size_t __pyx_t_26;\n  Py_ssize_t __pyx_t_27;\n  size_t __pyx_t_28;\n  Py_ssize_t __pyx_t_29;\n  __pyx_t_4bbox_DTYPE_t __pyx_t_30;\n  int __pyx_t_31;\n  size_t __pyx_t_32;\n  Py_ssize_t __pyx_t_33;\n  size_t __pyx_t_34;\n  Py_ssize_t __pyx_t_35;\n  size_t __pyx_t_36;\n  Py_ssize_t __pyx_t_37;\n  size_t __pyx_t_38;\n  Py_ssize_t __pyx_t_39;\n  size_t __pyx_t_40;\n  size_t __pyx_t_41;\n  __Pyx_RefNannySetupContext(\"bbox_intersections_c\", 0);\n  __pyx_pybuffer_intersec.pybuffer.buf = NULL;\n  __pyx_pybuffer_intersec.refcount = 0;\n  
__pyx_pybuffernd_intersec.data = NULL;\n  __pyx_pybuffernd_intersec.rcbuffer = &__pyx_pybuffer_intersec;\n  __pyx_pybuffer_boxes.pybuffer.buf = NULL;\n  __pyx_pybuffer_boxes.refcount = 0;\n  __pyx_pybuffernd_boxes.data = NULL;\n  __pyx_pybuffernd_boxes.rcbuffer = &__pyx_pybuffer_boxes;\n  __pyx_pybuffer_query_boxes.pybuffer.buf = NULL;\n  __pyx_pybuffer_query_boxes.refcount = 0;\n  __pyx_pybuffernd_query_boxes.data = NULL;\n  __pyx_pybuffernd_query_boxes.rcbuffer = &__pyx_pybuffer_query_boxes;\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_boxes, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 71, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_boxes.diminfo[0].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_boxes.diminfo[0].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_boxes.diminfo[1].strides = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_boxes.diminfo[1].shape = __pyx_pybuffernd_boxes.rcbuffer->pybuffer.shape[1];\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_boxes, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) __PYX_ERR(0, 71, __pyx_L1_error)\n  }\n  __pyx_pybuffernd_query_boxes.diminfo[0].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_boxes.diminfo[0].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_query_boxes.diminfo[1].strides = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_query_boxes.diminfo[1].shape = __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.shape[1];\n\n  /* \"bbox.pyx\":85\n *     overlaps: (N, K) ndarray of 
intersec between boxes and query_boxes\n *     \"\"\"\n *     cdef unsigned int N = boxes.shape[0]             # <<<<<<<<<<<<<<\n *     cdef unsigned int K = query_boxes.shape[0]\n *     cdef np.ndarray[DTYPE_t, ndim=2] intersec = np.zeros((N, K), dtype=DTYPE)\n */\n  __pyx_v_N = (__pyx_v_boxes->dimensions[0]);\n\n  /* \"bbox.pyx\":86\n *     \"\"\"\n *     cdef unsigned int N = boxes.shape[0]\n *     cdef unsigned int K = query_boxes.shape[0]             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] intersec = np.zeros((N, K), dtype=DTYPE)\n *     cdef DTYPE_t iw, ih, box_area\n */\n  __pyx_v_K = (__pyx_v_query_boxes->dimensions[0]);\n\n  /* \"bbox.pyx\":87\n *     cdef unsigned int N = boxes.shape[0]\n *     cdef unsigned int K = query_boxes.shape[0]\n *     cdef np.ndarray[DTYPE_t, ndim=2] intersec = np.zeros((N, K), dtype=DTYPE)             # <<<<<<<<<<<<<<\n *     cdef DTYPE_t iw, ih, box_area\n *     cdef DTYPE_t ua\n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_zeros); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyInt_From_unsigned_int(__pyx_v_N); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_3 = __Pyx_PyInt_From_unsigned_int(__pyx_v_K); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_3);\n  __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __Pyx_GIVEREF(__pyx_t_1);\n  PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);\n  __Pyx_GIVEREF(__pyx_t_3);\n  PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3);\n  __pyx_t_1 = 0;\n  __pyx_t_3 = 0;\n  __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_t_3);\n  __Pyx_GIVEREF(__pyx_t_4);\n  PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4);\n  __pyx_t_4 = 0;\n  __pyx_t_4 = PyDict_New(); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_4);\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_DTYPE); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_1) < 0) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 87, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n  __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n  if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 87, __pyx_L1_error)\n  __pyx_t_5 = ((PyArrayObject *)__pyx_t_1);\n  {\n    __Pyx_BufFmt_StackElem __pyx_stack[1];\n    if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_intersec.rcbuffer->pybuffer, (PyObject*)__pyx_t_5, &__Pyx_TypeInfo_nn___pyx_t_4bbox_DTYPE_t, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 2, 0, __pyx_stack) == -1)) {\n      __pyx_v_intersec = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_intersec.rcbuffer->pybuffer.buf = NULL;\n      __PYX_ERR(0, 87, __pyx_L1_error)\n    } else {__pyx_pybuffernd_intersec.diminfo[0].strides = __pyx_pybuffernd_intersec.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_intersec.diminfo[0].shape = __pyx_pybuffernd_intersec.rcbuffer->pybuffer.shape[0]; __pyx_pybuffernd_intersec.diminfo[1].strides = __pyx_pybuffernd_intersec.rcbuffer->pybuffer.strides[1]; __pyx_pybuffernd_intersec.diminfo[1].shape = __pyx_pybuffernd_intersec.rcbuffer->pybuffer.shape[1];\n    }\n  }\n  __pyx_t_5 = 0;\n  __pyx_v_intersec = ((PyArrayObject *)__pyx_t_1);\n  __pyx_t_1 = 0;\n\n  /* \"bbox.pyx\":91\n *     
cdef DTYPE_t ua\n *     cdef unsigned int k, n\n *     for k in range(K):             # <<<<<<<<<<<<<<\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n */\n  __pyx_t_6 = __pyx_v_K;\n  for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) {\n    __pyx_v_k = __pyx_t_7;\n\n    /* \"bbox.pyx\":93\n *     for k in range(K):\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *             # <<<<<<<<<<<<<<\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n */\n    __pyx_t_8 = __pyx_v_k;\n    __pyx_t_9 = 2;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_8 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_9 < 0) {\n      __pyx_t_9 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_9 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_9 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 93, __pyx_L1_error)\n    }\n    __pyx_t_11 = __pyx_v_k;\n    __pyx_t_12 = 0;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_11 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_12 < 0) {\n      __pyx_t_12 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_12 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_12 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 93, __pyx_L1_error)\n    }\n\n    /* \"bbox.pyx\":94\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)             # <<<<<<<<<<<<<<\n *         )\n *         for n in range(N):\n */\n    __pyx_t_13 = __pyx_v_k;\n    __pyx_t_14 = 3;\n    __pyx_t_10 = -1;\n 
   if (unlikely(__pyx_t_13 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_14 < 0) {\n      __pyx_t_14 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_14 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_14 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 94, __pyx_L1_error)\n    }\n    __pyx_t_15 = __pyx_v_k;\n    __pyx_t_16 = 1;\n    __pyx_t_10 = -1;\n    if (unlikely(__pyx_t_15 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n    if (__pyx_t_16 < 0) {\n      __pyx_t_16 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n      if (unlikely(__pyx_t_16 < 0)) __pyx_t_10 = 1;\n    } else if (unlikely(__pyx_t_16 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n    if (unlikely(__pyx_t_10 != -1)) {\n      __Pyx_RaiseBufferIndexError(__pyx_t_10);\n      __PYX_ERR(0, 94, __pyx_L1_error)\n    }\n\n    /* \"bbox.pyx\":93\n *     for k in range(K):\n *         box_area = (\n *             (query_boxes[k, 2] - query_boxes[k, 0] + 1) *             # <<<<<<<<<<<<<<\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n */\n    __pyx_v_box_area = ((((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_8, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_9, __pyx_pybuffernd_query_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_11, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_12, __pyx_pybuffernd_query_boxes.diminfo[1].strides))) + 1.0) * (((*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_13, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_14, 
__pyx_pybuffernd_query_boxes.diminfo[1].strides)) - (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_15, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_16, __pyx_pybuffernd_query_boxes.diminfo[1].strides))) + 1.0));\n\n    /* \"bbox.pyx\":96\n *             (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n *         )\n *         for n in range(N):             # <<<<<<<<<<<<<<\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -\n */\n    __pyx_t_17 = __pyx_v_N;\n    for (__pyx_t_18 = 0; __pyx_t_18 < __pyx_t_17; __pyx_t_18+=1) {\n      __pyx_v_n = __pyx_t_18;\n\n      /* \"bbox.pyx\":98\n *         for n in range(N):\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -             # <<<<<<<<<<<<<<\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n */\n      __pyx_t_19 = __pyx_v_k;\n      __pyx_t_20 = 2;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_19 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_20 < 0) {\n        __pyx_t_20 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_20 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_20 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 98, __pyx_L1_error)\n      }\n      __pyx_t_21 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_19, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_20, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n      __pyx_t_22 = __pyx_v_n;\n      __pyx_t_23 = 2;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_22 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_23 < 0) {\n        __pyx_t_23 += 
__pyx_pybuffernd_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_23 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_23 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 98, __pyx_L1_error)\n      }\n      __pyx_t_24 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_22, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_23, __pyx_pybuffernd_boxes.diminfo[1].strides));\n      if (((__pyx_t_21 < __pyx_t_24) != 0)) {\n        __pyx_t_25 = __pyx_t_21;\n      } else {\n        __pyx_t_25 = __pyx_t_24;\n      }\n\n      /* \"bbox.pyx\":99\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1             # <<<<<<<<<<<<<<\n *             )\n *             if iw > 0:\n */\n      __pyx_t_26 = __pyx_v_k;\n      __pyx_t_27 = 0;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_26 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_27 < 0) {\n        __pyx_t_27 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_27 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_27 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 99, __pyx_L1_error)\n      }\n      __pyx_t_21 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_26, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_27, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n      __pyx_t_28 = __pyx_v_n;\n      __pyx_t_29 = 0;\n      __pyx_t_10 = -1;\n      if (unlikely(__pyx_t_28 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n      if (__pyx_t_29 < 0) {\n        
__pyx_t_29 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n        if (unlikely(__pyx_t_29 < 0)) __pyx_t_10 = 1;\n      } else if (unlikely(__pyx_t_29 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n      if (unlikely(__pyx_t_10 != -1)) {\n        __Pyx_RaiseBufferIndexError(__pyx_t_10);\n        __PYX_ERR(0, 99, __pyx_L1_error)\n      }\n      __pyx_t_24 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_28, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_29, __pyx_pybuffernd_boxes.diminfo[1].strides));\n      if (((__pyx_t_21 > __pyx_t_24) != 0)) {\n        __pyx_t_30 = __pyx_t_21;\n      } else {\n        __pyx_t_30 = __pyx_t_24;\n      }\n\n      /* \"bbox.pyx\":98\n *         for n in range(N):\n *             iw = (\n *                 min(boxes[n, 2], query_boxes[k, 2]) -             # <<<<<<<<<<<<<<\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n */\n      __pyx_v_iw = ((__pyx_t_25 - __pyx_t_30) + 1.0);\n\n      /* \"bbox.pyx\":101\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n *             if iw > 0:             # <<<<<<<<<<<<<<\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n */\n      __pyx_t_31 = ((__pyx_v_iw > 0.0) != 0);\n      if (__pyx_t_31) {\n\n        /* \"bbox.pyx\":103\n *             if iw > 0:\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -             # <<<<<<<<<<<<<<\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n */\n        __pyx_t_32 = __pyx_v_k;\n        __pyx_t_33 = 3;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_32 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_33 < 0) {\n          __pyx_t_33 += __pyx_pybuffernd_query_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_33 < 0)) __pyx_t_10 = 1;\n        } else if 
(unlikely(__pyx_t_33 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 103, __pyx_L1_error)\n        }\n        __pyx_t_30 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_32, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_33, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n        __pyx_t_34 = __pyx_v_n;\n        __pyx_t_35 = 3;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_34 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_35 < 0) {\n          __pyx_t_35 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_35 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_35 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 103, __pyx_L1_error)\n        }\n        __pyx_t_25 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_34, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_35, __pyx_pybuffernd_boxes.diminfo[1].strides));\n        if (((__pyx_t_30 < __pyx_t_25) != 0)) {\n          __pyx_t_21 = __pyx_t_30;\n        } else {\n          __pyx_t_21 = __pyx_t_25;\n        }\n\n        /* \"bbox.pyx\":104\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1             # <<<<<<<<<<<<<<\n *                 )\n *                 if ih > 0:\n */\n        __pyx_t_36 = __pyx_v_k;\n        __pyx_t_37 = 1;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_36 >= (size_t)__pyx_pybuffernd_query_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_37 < 0) {\n          __pyx_t_37 += 
__pyx_pybuffernd_query_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_37 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_37 >= __pyx_pybuffernd_query_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 104, __pyx_L1_error)\n        }\n        __pyx_t_30 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_query_boxes.rcbuffer->pybuffer.buf, __pyx_t_36, __pyx_pybuffernd_query_boxes.diminfo[0].strides, __pyx_t_37, __pyx_pybuffernd_query_boxes.diminfo[1].strides));\n        __pyx_t_38 = __pyx_v_n;\n        __pyx_t_39 = 1;\n        __pyx_t_10 = -1;\n        if (unlikely(__pyx_t_38 >= (size_t)__pyx_pybuffernd_boxes.diminfo[0].shape)) __pyx_t_10 = 0;\n        if (__pyx_t_39 < 0) {\n          __pyx_t_39 += __pyx_pybuffernd_boxes.diminfo[1].shape;\n          if (unlikely(__pyx_t_39 < 0)) __pyx_t_10 = 1;\n        } else if (unlikely(__pyx_t_39 >= __pyx_pybuffernd_boxes.diminfo[1].shape)) __pyx_t_10 = 1;\n        if (unlikely(__pyx_t_10 != -1)) {\n          __Pyx_RaiseBufferIndexError(__pyx_t_10);\n          __PYX_ERR(0, 104, __pyx_L1_error)\n        }\n        __pyx_t_25 = (*__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_boxes.rcbuffer->pybuffer.buf, __pyx_t_38, __pyx_pybuffernd_boxes.diminfo[0].strides, __pyx_t_39, __pyx_pybuffernd_boxes.diminfo[1].strides));\n        if (((__pyx_t_30 > __pyx_t_25) != 0)) {\n          __pyx_t_24 = __pyx_t_30;\n        } else {\n          __pyx_t_24 = __pyx_t_25;\n        }\n\n        /* \"bbox.pyx\":103\n *             if iw > 0:\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -             # <<<<<<<<<<<<<<\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n */\n        __pyx_v_ih = ((__pyx_t_21 - __pyx_t_24) + 1.0);\n\n        /* \"bbox.pyx\":106\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 
1\n *                 )\n *                 if ih > 0:             # <<<<<<<<<<<<<<\n *                     intersec[n, k] = iw * ih / box_area\n *     return intersec\n */\n        __pyx_t_31 = ((__pyx_v_ih > 0.0) != 0);\n        if (__pyx_t_31) {\n\n          /* \"bbox.pyx\":107\n *                 )\n *                 if ih > 0:\n *                     intersec[n, k] = iw * ih / box_area             # <<<<<<<<<<<<<<\n *     return intersec\n */\n          __pyx_t_24 = (__pyx_v_iw * __pyx_v_ih);\n          if (unlikely(__pyx_v_box_area == 0)) {\n            PyErr_SetString(PyExc_ZeroDivisionError, \"float division\");\n            __PYX_ERR(0, 107, __pyx_L1_error)\n          }\n          __pyx_t_40 = __pyx_v_n;\n          __pyx_t_41 = __pyx_v_k;\n          __pyx_t_10 = -1;\n          if (unlikely(__pyx_t_40 >= (size_t)__pyx_pybuffernd_intersec.diminfo[0].shape)) __pyx_t_10 = 0;\n          if (unlikely(__pyx_t_41 >= (size_t)__pyx_pybuffernd_intersec.diminfo[1].shape)) __pyx_t_10 = 1;\n          if (unlikely(__pyx_t_10 != -1)) {\n            __Pyx_RaiseBufferIndexError(__pyx_t_10);\n            __PYX_ERR(0, 107, __pyx_L1_error)\n          }\n          *__Pyx_BufPtrStrided2d(__pyx_t_4bbox_DTYPE_t *, __pyx_pybuffernd_intersec.rcbuffer->pybuffer.buf, __pyx_t_40, __pyx_pybuffernd_intersec.diminfo[0].strides, __pyx_t_41, __pyx_pybuffernd_intersec.diminfo[1].strides) = (__pyx_t_24 / __pyx_v_box_area);\n\n          /* \"bbox.pyx\":106\n *                     max(boxes[n, 1], query_boxes[k, 1]) + 1\n *                 )\n *                 if ih > 0:             # <<<<<<<<<<<<<<\n *                     intersec[n, k] = iw * ih / box_area\n *     return intersec\n */\n        }\n\n        /* \"bbox.pyx\":101\n *                 max(boxes[n, 0], query_boxes[k, 0]) + 1\n *             )\n *             if iw > 0:             # <<<<<<<<<<<<<<\n *                 ih = (\n *                     min(boxes[n, 3], query_boxes[k, 3]) -\n */\n      }\n    }\n  }\n\n  /* 
\"bbox.pyx\":108\n *                 if ih > 0:\n *                     intersec[n, k] = iw * ih / box_area\n *     return intersec             # <<<<<<<<<<<<<<\n */\n  __Pyx_XDECREF(((PyObject *)__pyx_r));\n  __Pyx_INCREF(((PyObject *)__pyx_v_intersec));\n  __pyx_r = ((PyArrayObject *)__pyx_v_intersec);\n  goto __pyx_L0;\n\n  /* \"bbox.pyx\":71\n * \n * \n * cdef np.ndarray[DTYPE_t, ndim=2] bbox_intersections_c(             # <<<<<<<<<<<<<<\n *         np.ndarray[DTYPE_t, ndim=2] boxes,\n *         np.ndarray[DTYPE_t, ndim=2] query_boxes):\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_intersec.rcbuffer->pybuffer);\n    __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer);\n  __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}\n  __Pyx_AddTraceback(\"bbox.bbox_intersections_c\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  goto __pyx_L2;\n  __pyx_L0:;\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_boxes.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_intersec.rcbuffer->pybuffer);\n  __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_boxes.rcbuffer->pybuffer);\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_intersec);\n  __Pyx_XGIVEREF((PyObject *)__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *         def __getbuffer__(ndarray self, Py_buffer* 
info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/\nstatic CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__getbuffer__ (wrapper)\", 0);\n  __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {\n  int __pyx_v_copy_shape;\n  int __pyx_v_i;\n  int __pyx_v_ndim;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  int __pyx_v_t;\n  char *__pyx_v_f;\n  PyArray_Descr *__pyx_v_descr = 0;\n  int __pyx_v_offset;\n  int __pyx_v_hasfields;\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  int __pyx_t_5;\n  PyObject *__pyx_t_6 = NULL;\n  char *__pyx_t_7;\n  __Pyx_RefNannySetupContext(\"__getbuffer__\", 0);\n  if (__pyx_v_info != NULL) {\n    __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(__pyx_v_info->obj);\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":203\n *             # of flags\n * \n *             if info == NULL: return             # <<<<<<<<<<<<<<\n * \n *             cdef int copy_shape, i, ndim\n */\n  __pyx_t_1 = ((__pyx_v_info == NULL) != 0);\n  if (__pyx_t_1) {\n    __pyx_r = 0;\n    goto __pyx_L0;\n  }\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":206\n * \n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n */\n  __pyx_v_endian_detector = 1;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":207\n *             cdef int copy_shape, i, ndim\n *             cdef int endian_detector = 1\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n * \n *             ndim = PyArray_NDIM(self)\n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":209\n *             cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n * \n *             ndim = PyArray_NDIM(self)             # <<<<<<<<<<<<<<\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":212\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 copy_shape = 1             # <<<<<<<<<<<<<<\n *             else:\n *                 copy_shape = 0\n */\n    __pyx_v_copy_shape = 1;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":211\n *             ndim = 
PyArray_NDIM(self)\n * \n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 copy_shape = 1\n *             else:\n */\n    goto __pyx_L4;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":214\n *                 copy_shape = 1\n *             else:\n *                 copy_shape = 0             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n */\n  /*else*/ {\n    __pyx_v_copy_shape = 0;\n  }\n  __pyx_L4:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L6_bool_binop_done;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":217\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L6_bool_binop_done:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, 
NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 218, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 218, __pyx_L1_error)\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":216\n *                 copy_shape = 0\n * \n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L9_bool_binop_done;\n  }\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":221\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):             # <<<<<<<<<<<<<<\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n */\n  __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L9_bool_binop_done:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":222\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n    __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 222, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __PYX_ERR(1, 222, __pyx_L1_error)\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":220\n *                 raise ValueError(u\"ndarray is not C contiguous\")\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)             # <<<<<<<<<<<<<<\n *             
    and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":224\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")\n * \n *             info.buf = PyArray_DATA(self)             # <<<<<<<<<<<<<<\n *             info.ndim = ndim\n *             if copy_shape:\n */\n  __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":225\n * \n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim             # <<<<<<<<<<<<<<\n *             if copy_shape:\n *                 # Allocate new buffer for strides and shape info.\n */\n  __pyx_v_info->ndim = __pyx_v_ndim;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n  __pyx_t_1 = (__pyx_v_copy_shape != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":229\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)             # <<<<<<<<<<<<<<\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n */\n    __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * ((size_t)__pyx_v_ndim)) * 2)));\n\n    /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":230\n *                 # This is allocated as one block, strides first.\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim             # <<<<<<<<<<<<<<\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n */\n    __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":231\n *                 info.strides = <Py_ssize_t*>stdlib.malloc(sizeof(Py_ssize_t) * <size_t>ndim * 2)\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):             # <<<<<<<<<<<<<<\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n */\n    __pyx_t_4 = __pyx_v_ndim;\n    for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {\n      __pyx_v_i = __pyx_t_5;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":232\n *                 info.shape = info.strides + ndim\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]             # <<<<<<<<<<<<<<\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n */\n      (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":233\n *                 for i in range(ndim):\n *                     info.strides[i] = PyArray_STRIDES(self)[i]\n *                     info.shape[i] = PyArray_DIMS(self)[i]             # <<<<<<<<<<<<<<\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n */\n   
   (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":226\n *             info.buf = PyArray_DATA(self)\n *             info.ndim = ndim\n *             if copy_shape:             # <<<<<<<<<<<<<<\n *                 # Allocate new buffer for strides and shape info.\n *                 # This is allocated as one block, strides first.\n */\n    goto __pyx_L11;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":235\n *                     info.shape[i] = PyArray_DIMS(self)[i]\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)             # <<<<<<<<<<<<<<\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n */\n  /*else*/ {\n    __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":236\n *             else:\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)             # <<<<<<<<<<<<<<\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n */\n    __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));\n  }\n  __pyx_L11:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":237\n *                 info.strides = <Py_ssize_t*>PyArray_STRIDES(self)\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL             # <<<<<<<<<<<<<<\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n */\n  __pyx_v_info->suboffsets = NULL;\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":238\n *                 info.shape = <Py_ssize_t*>PyArray_DIMS(self)\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)             # <<<<<<<<<<<<<<\n *             info.readonly = not PyArray_ISWRITEABLE(self)\n * \n */\n  __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":239\n *             info.suboffsets = NULL\n *             info.itemsize = PyArray_ITEMSIZE(self)\n *             info.readonly = not PyArray_ISWRITEABLE(self)             # <<<<<<<<<<<<<<\n * \n *             cdef int t\n */\n  __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":242\n * \n *             cdef int t\n *             cdef char* f = NULL             # <<<<<<<<<<<<<<\n *             cdef dtype descr = self.descr\n *             cdef int offset\n */\n  __pyx_v_f = NULL;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":243\n *             cdef int t\n *             cdef char* f = NULL\n *             cdef dtype descr = self.descr             # <<<<<<<<<<<<<<\n *             cdef int offset\n * \n */\n  __pyx_t_3 = ((PyObject *)__pyx_v_self->descr);\n  __Pyx_INCREF(__pyx_t_3);\n  __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);\n  __pyx_t_3 = 0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":246\n *             cdef int offset\n * \n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)             # <<<<<<<<<<<<<<\n * \n *             if not hasfields and not copy_shape:\n */\n  __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr);\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n  __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_2) {\n  } else {\n    __pyx_t_1 = __pyx_t_2;\n    goto __pyx_L15_bool_binop_done;\n  }\n  __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0);\n  __pyx_t_1 = __pyx_t_2;\n  __pyx_L15_bool_binop_done:;\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":250\n *             if not hasfields and not copy_shape:\n *                 # do not call releasebuffer\n *                 info.obj = None             # <<<<<<<<<<<<<<\n *             else:\n *                 # need to call releasebuffer\n */\n    __Pyx_INCREF(Py_None);\n    __Pyx_GIVEREF(Py_None);\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = Py_None;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":248\n *             cdef bint hasfields = PyDataType_HASFIELDS(descr)\n * \n *             if not hasfields and not copy_shape:             # <<<<<<<<<<<<<<\n *                 # do not call releasebuffer\n *                 info.obj = None\n */\n    goto __pyx_L14;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":253\n *             else:\n *                 # need to call releasebuffer\n *                 info.obj = self             # <<<<<<<<<<<<<<\n * \n *             if not hasfields:\n */\n  /*else*/ {\n    __Pyx_INCREF(((PyObject *)__pyx_v_self));\n    __Pyx_GIVEREF(((PyObject *)__pyx_v_self));\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj);\n    __pyx_v_info->obj = 
((PyObject *)__pyx_v_self);\n  }\n  __pyx_L14:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":256\n * \n *             if not hasfields:\n *                 t = descr.type_num             # <<<<<<<<<<<<<<\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n */\n    __pyx_t_4 = __pyx_v_descr->type_num;\n    __pyx_v_t = __pyx_t_4;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);\n    if (!__pyx_t_2) {\n      goto __pyx_L20_next_or;\n    } else {\n    }\n    __pyx_t_2 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_L20_next_or:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":258\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *                     raise 
ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n */\n    __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);\n    if (__pyx_t_2) {\n    } else {\n      __pyx_t_1 = __pyx_t_2;\n      goto __pyx_L19_bool_binop_done;\n    }\n    __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_1 = __pyx_t_2;\n    __pyx_L19_bool_binop_done:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (__pyx_t_1) {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 259, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 259, __pyx_L1_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":257\n *             if not hasfields:\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *                     (descr.byteorder == c'<' and not 
little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":260\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n */\n    switch (__pyx_v_t) {\n      case NPY_BYTE:\n      __pyx_v_f = ((char *)\"b\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":261\n *                     raise ValueError(u\"Non-native byte order not supported\")\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n */\n      case NPY_UBYTE:\n      __pyx_v_f = ((char *)\"B\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":262\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n */\n      case NPY_SHORT:\n      __pyx_v_f = ((char *)\"h\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":263\n *                 elif t == NPY_UBYTE:       f = \"B\"\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"             # 
<<<<<<<<<<<<<<\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n */\n      case NPY_USHORT:\n      __pyx_v_f = ((char *)\"H\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":264\n *                 elif t == NPY_SHORT:       f = \"h\"\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n */\n      case NPY_INT:\n      __pyx_v_f = ((char *)\"i\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":265\n *                 elif t == NPY_USHORT:      f = \"H\"\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n */\n      case NPY_UINT:\n      __pyx_v_f = ((char *)\"I\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":266\n *                 elif t == NPY_INT:         f = \"i\"\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n */\n      case NPY_LONG:\n      __pyx_v_f = ((char *)\"l\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":267\n *                 elif t == NPY_UINT:        f = \"I\"\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"             # 
<<<<<<<<<<<<<<\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n */\n      case NPY_ULONG:\n      __pyx_v_f = ((char *)\"L\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":268\n *                 elif t == NPY_LONG:        f = \"l\"\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n */\n      case NPY_LONGLONG:\n      __pyx_v_f = ((char *)\"q\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":269\n *                 elif t == NPY_ULONG:       f = \"L\"\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n */\n      case NPY_ULONGLONG:\n      __pyx_v_f = ((char *)\"Q\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":270\n *                 elif t == NPY_LONGLONG:    f = \"q\"\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n */\n      case NPY_FLOAT:\n      __pyx_v_f = ((char *)\"f\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":271\n *                 elif t == NPY_ULONGLONG:   f = \"Q\"\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"            
 # <<<<<<<<<<<<<<\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n */\n      case NPY_DOUBLE:\n      __pyx_v_f = ((char *)\"d\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":272\n *                 elif t == NPY_FLOAT:       f = \"f\"\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n */\n      case NPY_LONGDOUBLE:\n      __pyx_v_f = ((char *)\"g\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":273\n *                 elif t == NPY_DOUBLE:      f = \"d\"\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n */\n      case NPY_CFLOAT:\n      __pyx_v_f = ((char *)\"Zf\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":274\n *                 elif t == NPY_LONGDOUBLE:  f = \"g\"\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"\n */\n      case NPY_CDOUBLE:\n      __pyx_v_f = ((char *)\"Zd\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":275\n *                 elif t == NPY_CFLOAT:      f = \"Zf\"\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f 
= \"Zg\"             # <<<<<<<<<<<<<<\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n */\n      case NPY_CLONGDOUBLE:\n      __pyx_v_f = ((char *)\"Zg\");\n      break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":276\n *                 elif t == NPY_CDOUBLE:     f = \"Zd\"\n *                 elif t == NPY_CLONGDOUBLE: f = \"Zg\"\n *                 elif t == NPY_OBJECT:      f = \"O\"             # <<<<<<<<<<<<<<\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      case NPY_OBJECT:\n      __pyx_v_f = ((char *)\"O\");\n      break;\n      default:\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":278\n *                 elif t == NPY_OBJECT:      f = \"O\"\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *                 info.format = f\n *                 return\n */\n      __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_GIVEREF(__pyx_t_6);\n      PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6);\n      __pyx_t_6 = 0;\n      __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 278, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __Pyx_Raise(__pyx_t_6, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_6); 
__pyx_t_6 = 0;\n      __PYX_ERR(1, 278, __pyx_L1_error)\n      break;\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":279\n *                 else:\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f             # <<<<<<<<<<<<<<\n *                 return\n *             else:\n */\n    __pyx_v_info->format = __pyx_v_f;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":280\n *                     raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *                 info.format = f\n *                 return             # <<<<<<<<<<<<<<\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n */\n    __pyx_r = 0;\n    goto __pyx_L0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":255\n *                 info.obj = self\n * \n *             if not hasfields:             # <<<<<<<<<<<<<<\n *                 t = descr.type_num\n *                 if ((descr.byteorder == c'>' and little_endian) or\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":282\n *                 return\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)             # <<<<<<<<<<<<<<\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n */\n  /*else*/ {\n    __pyx_v_info->format = ((char *)malloc(0xFF));\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":283\n *             else:\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment             # <<<<<<<<<<<<<<\n *                 
offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,\n */\n    (__pyx_v_info->format[0]) = '^';\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":284\n *                 info.format = <char*>stdlib.malloc(_buffer_format_string_len)\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0             # <<<<<<<<<<<<<<\n *                 f = _util_dtypestring(descr, info.format + 1,\n *                                       info.format + _buffer_format_string_len,\n */\n    __pyx_v_offset = 0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":285\n *                 info.format[0] = c'^' # Native data types, manual alignment\n *                 offset = 0\n *                 f = _util_dtypestring(descr, info.format + 1,             # <<<<<<<<<<<<<<\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n */\n    __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == NULL)) __PYX_ERR(1, 285, __pyx_L1_error)\n    __pyx_v_f = __pyx_t_7;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":288\n *                                       info.format + _buffer_format_string_len,\n *                                       &offset)\n *                 f[0] = c'\\0' # Terminate format string             # <<<<<<<<<<<<<<\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n */\n    (__pyx_v_f[0]) = '\\x00';\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":197\n *         # experimental exception made for __getbuffer__ and __releasebuffer__\n *         # -- the details of this may change.\n *        
 def __getbuffer__(ndarray self, Py_buffer* info, int flags):             # <<<<<<<<<<<<<<\n *             # This implementation of getbuffer is geared towards Cython\n *             # requirements, and does not yet fullfill the PEP.\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_AddTraceback(\"numpy.ndarray.__getbuffer__\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) {\n    __Pyx_GOTREF(__pyx_v_info->obj);\n    __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL;\n  }\n  goto __pyx_L2;\n  __pyx_L0:;\n  if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) {\n    __Pyx_GOTREF(Py_None);\n    __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL;\n  }\n  __pyx_L2:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_descr);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n/* Python wrapper */\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/\nstatic CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__releasebuffer__ (wrapper)\", 0);\n  __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\nstatic void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {\n  
__Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"__releasebuffer__\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":292\n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)             # <<<<<<<<<<<<<<\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)\n */\n    free(__pyx_v_info->format);\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":291\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):\n *             if PyArray_HASFIELDS(self):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":294\n *                 stdlib.free(info.format)\n *             if 
sizeof(npy_intp) != sizeof(Py_ssize_t):\n *                 stdlib.free(info.strides)             # <<<<<<<<<<<<<<\n *                 # info.shape was stored after info.strides in the same block\n * \n */\n    free(__pyx_v_info->strides);\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":293\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n *             if sizeof(npy_intp) != sizeof(Py_ssize_t):             # <<<<<<<<<<<<<<\n *                 stdlib.free(info.strides)\n *                 # info.shape was stored after info.strides in the same block\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":290\n *                 f[0] = c'\\0' # Terminate format string\n * \n *         def __releasebuffer__(ndarray self, Py_buffer* info):             # <<<<<<<<<<<<<<\n *             if PyArray_HASFIELDS(self):\n *                 stdlib.free(info.format)\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew1\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":771\n * \n * cdef inline object PyArray_MultiIterNew1(a):\n *     return PyArray_MultiIterNew(1, <void*>a)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 771, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":770\n * ctypedef npy_cdouble     complex_t\n * \n * cdef inline object PyArray_MultiIterNew1(a):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew1\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew2\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":774\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 774, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":773\n *     return 
PyArray_MultiIterNew(1, <void*>a)\n * \n * cdef inline object PyArray_MultiIterNew2(a, b):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew2\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew3\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":777\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 777, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":776\n *     return PyArray_MultiIterNew(2, <void*>a, <void*>b)\n * \n * cdef inline object PyArray_MultiIterNew3(a, b, c):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(3, 
<void*>a, <void*>b, <void*> c)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew3\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew4\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":780\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)             # <<<<<<<<<<<<<<\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 780, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":779\n *     return PyArray_MultiIterNew(3, <void*>a, <void*>b, <void*> c)\n * \n * cdef inline object PyArray_MultiIterNew4(a, b, c, d):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n */\n\n 
 /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew4\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  __Pyx_RefNannySetupContext(\"PyArray_MultiIterNew5\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":783\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)             # <<<<<<<<<<<<<<\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:\n */\n  __Pyx_XDECREF(__pyx_r);\n  __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 783, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_r = __pyx_t_1;\n  __pyx_t_1 = 0;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":782\n *     return PyArray_MultiIterNew(4, <void*>a, <void*>b, <void*>c, <void*> d)\n * \n * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):             # <<<<<<<<<<<<<<\n *     
return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_AddTraceback(\"numpy.PyArray_MultiIterNew5\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = 0;\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\nstatic CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) {\n  PyArray_Descr *__pyx_v_child = 0;\n  int __pyx_v_endian_detector;\n  int __pyx_v_little_endian;\n  PyObject *__pyx_v_fields = 0;\n  PyObject *__pyx_v_childname = NULL;\n  PyObject *__pyx_v_new_offset = NULL;\n  PyObject *__pyx_v_t = NULL;\n  char *__pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  Py_ssize_t __pyx_t_2;\n  PyObject *__pyx_t_3 = NULL;\n  PyObject *__pyx_t_4 = NULL;\n  int __pyx_t_5;\n  int __pyx_t_6;\n  int __pyx_t_7;\n  long __pyx_t_8;\n  char *__pyx_t_9;\n  __Pyx_RefNannySetupContext(\"_util_dtypestring\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":790\n * \n *     cdef dtype child\n *     cdef int endian_detector = 1             # <<<<<<<<<<<<<<\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)\n *     cdef tuple fields\n */\n  __pyx_v_endian_detector = 1;\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":791\n *     cdef dtype child\n *     cdef int endian_detector = 1\n *     cdef bint little_endian = ((<char*>&endian_detector)[0] != 0)             # <<<<<<<<<<<<<<\n *     cdef tuple fields\n * \n */\n  __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  if (unlikely(__pyx_v_descr->names == Py_None)) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n    __PYX_ERR(1, 794, __pyx_L1_error)\n  }\n  __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0;\n  for (;;) {\n    if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;\n    #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n    __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(1, 794, __pyx_L1_error)\n    #else\n    __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 794, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    #endif\n    __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3);\n    __pyx_t_3 = 0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":795\n * \n *     for childname in descr.names:\n *         fields = descr.fields[childname]             # <<<<<<<<<<<<<<\n *         child, new_offset = fields\n * \n */\n    if (unlikely(__pyx_v_descr->fields == Py_None)) {\n      PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not subscriptable\");\n      __PYX_ERR(1, 795, __pyx_L1_error)\n    }\n    __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, \"Expected %.16s, got %.200s\", \"tuple\", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(1, 795, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3));\n    __pyx_t_3 = 0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":796\n *     for childname in descr.names:\n *         fields = descr.fields[childname]\n *         child, new_offset = fields             # <<<<<<<<<<<<<<\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n */\n    if (likely(__pyx_v_fields != Py_None)) {\n      PyObject* sequence = __pyx_v_fields;\n      #if !CYTHON_COMPILING_IN_PYPY\n      Py_ssize_t size = Py_SIZE(sequence);\n      #else\n      Py_ssize_t size = PySequence_Size(sequence);\n      #endif\n      if (unlikely(size != 2)) {\n        if (size > 2) __Pyx_RaiseTooManyValuesError(2);\n        else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);\n        __PYX_ERR(1, 796, __pyx_L1_error)\n      }\n      #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS\n      __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); \n      __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); \n      __Pyx_INCREF(__pyx_t_3);\n      __Pyx_INCREF(__pyx_t_4);\n      #else\n      __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 796, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      #endif\n    } else {\n      __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 796, __pyx_L1_error)\n    }\n    if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(1, 796, __pyx_L1_error)\n    __Pyx_XDECREF_SET(__pyx_v_child, 
((PyArray_Descr *)__pyx_t_3));\n    __pyx_t_3 = 0;\n    __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4);\n    __pyx_t_4 = 0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_4);\n    __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_GOTREF(__pyx_t_3);\n    __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n    __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 798, __pyx_L1_error)\n    __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n    __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 799, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 799, __pyx_L1_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":798\n *         child, new_offset = fields\n * \n *         if (end - f) - <int>(new_offset 
- offset[0]) < 15:             # <<<<<<<<<<<<<<\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n */\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0);\n    if (!__pyx_t_7) {\n      goto __pyx_L8_next_or;\n    } else {\n    }\n    __pyx_t_7 = (__pyx_v_little_endian != 0);\n    if (!__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_L8_next_or:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":802\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):             # <<<<<<<<<<<<<<\n *             raise ValueError(u\"Non-native byte order not supported\")\n *             # One could encode it in the format string and have Cython\n */\n    __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0);\n    if (__pyx_t_7) {\n    } else {\n      __pyx_t_6 = __pyx_t_7;\n      goto __pyx_L7_bool_binop_done;\n    }\n    __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0);\n    __pyx_t_6 = __pyx_t_7;\n    __pyx_L7_bool_binop_done:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not 
little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    if (__pyx_t_6) {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n      __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 803, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __PYX_ERR(1, 803, __pyx_L1_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":801\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")\n * \n *         if ((child.byteorder == c'>' and little_endian) or             # <<<<<<<<<<<<<<\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte order not supported\")\n */\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":813\n * \n *         # Output padding bytes\n *         while offset[0] < new_offset:             # <<<<<<<<<<<<<<\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n */\n    while (1) {\n      __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 813, 
__pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 813, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (!__pyx_t_6) break;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":814\n *         # Output padding bytes\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte             # <<<<<<<<<<<<<<\n *             f += 1\n *             offset[0] += 1\n */\n      (__pyx_v_f[0]) = 0x78;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":815\n *         while offset[0] < new_offset:\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1             # <<<<<<<<<<<<<<\n *             offset[0] += 1\n * \n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":816\n *             f[0] = 120 # \"x\"; pad byte\n *             f += 1\n *             offset[0] += 1             # <<<<<<<<<<<<<<\n * \n *         offset[0] += child.itemsize\n */\n      __pyx_t_8 = 0;\n      (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1);\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":818\n *             offset[0] += 1\n * \n *         offset[0] += child.itemsize             # <<<<<<<<<<<<<<\n * \n *         if not PyDataType_HASFIELDS(child):\n */\n    __pyx_t_8 = 0;\n    (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize);\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n    
__pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0);\n    if (__pyx_t_6) {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":821\n * \n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num             # <<<<<<<<<<<<<<\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n */\n      __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 821, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4);\n      __pyx_t_4 = 0;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:             # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0);\n      if (__pyx_t_6) {\n\n        /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n        __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 823, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_Raise(__pyx_t_4, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __PYX_ERR(1, 823, __pyx_L1_error)\n\n        /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":822\n *         if not PyDataType_HASFIELDS(child):\n *             t = child.type_num\n *             if end - f < 5:     
        # <<<<<<<<<<<<<<\n *                 raise RuntimeError(u\"Format string allocated too short.\")\n * \n */\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":826\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 826, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 98;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":827\n *             # Until ticket #99 is fixed, use integers to avoid warnings\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = 
__Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 827, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":828\n *             if   t == NPY_BYTE:        f[0] =  98 #\"b\"\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 828, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x68;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":829\n *             elif t == NPY_UBYTE:       f[0] =  66 #\"B\"\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if 
(unlikely(!__pyx_t_4)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 829, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 72;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":830\n *             elif t == NPY_SHORT:       f[0] = 104 #\"h\"\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 830, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x69;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":831\n *             elif t == NPY_USHORT:      f[0] =  72 #\"H\"\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 831, __pyx_L1_error)\n      
__Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 831, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 73;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":832\n *             elif t == NPY_INT:         f[0] = 105 #\"i\"\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 832, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x6C;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":833\n *             elif t == NPY_UINT:        f[0] =  73 #\"I\"\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n */\n      __pyx_t_3 
= __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 833, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 76;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":834\n *             elif t == NPY_LONG:        f[0] = 108 #\"l\"\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 834, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x71;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":835\n *             elif t == NPY_ULONG:       f[0] = 76  #\"L\"\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"             # <<<<<<<<<<<<<<\n *             elif t == 
NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 835, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 81;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":836\n *             elif t == NPY_LONGLONG:    f[0] = 113 #\"q\"\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 836, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x66;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":837\n *             elif t == NPY_ULONGLONG:   f[0] = 81  #\"Q\"\n *             elif t == NPY_FLOAT:       f[0] = 102 
#\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 837, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x64;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":838\n *             elif t == NPY_FLOAT:       f[0] = 102 #\"f\"\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"             # <<<<<<<<<<<<<<\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 838, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 0x67;\n        goto __pyx_L15;\n      }\n\n      /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":839\n *             elif t == NPY_DOUBLE:      f[0] = 100 #\"d\"\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf             # <<<<<<<<<<<<<<\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 839, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x66;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":840\n *             elif t == NPY_LONGDOUBLE:  f[0] = 103 #\"g\"\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd             # <<<<<<<<<<<<<<\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n */\n      __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 840, __pyx_L1_error)\n      
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 840, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x64;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":841\n *             elif t == NPY_CFLOAT:      f[0] = 90; f[1] = 102; f += 1 # Zf\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg             # <<<<<<<<<<<<<<\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n */\n      __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_3);\n      __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 841, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 90;\n        (__pyx_v_f[1]) = 0x67;\n        __pyx_v_f = (__pyx_v_f + 1);\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":842\n *             elif t == NPY_CDOUBLE:     f[0] = 90; f[1] = 100; f += 1 # Zd\n *             elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"             # <<<<<<<<<<<<<<\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n */\n      __pyx_t_4 = 
__Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_GOTREF(__pyx_t_4);\n      __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n      __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(1, 842, __pyx_L1_error)\n      __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n      if (__pyx_t_6) {\n        (__pyx_v_f[0]) = 79;\n        goto __pyx_L15;\n      }\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":844\n *             elif t == NPY_OBJECT:      f[0] = 79 #\"O\"\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)             # <<<<<<<<<<<<<<\n *             f += 1\n *         else:\n */\n      /*else*/ {\n        __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_4);\n        __Pyx_GIVEREF(__pyx_t_3);\n        PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3);\n        __pyx_t_3 = 0;\n        __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 844, __pyx_L1_error)\n        __Pyx_GOTREF(__pyx_t_3);\n        __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;\n        __Pyx_Raise(__pyx_t_3, 0, 0, 0);\n        __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;\n        __PYX_ERR(1, 844, __pyx_L1_error)\n      }\n      __pyx_L15:;\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":845\n *             else:\n *                 raise ValueError(u\"unknown dtype code in numpy.pxd (%d)\" % t)\n *      
       f += 1             # <<<<<<<<<<<<<<\n *         else:\n *             # Cython ignores struct boundary information (\"T{...}\"),\n */\n      __pyx_v_f = (__pyx_v_f + 1);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":820\n *         offset[0] += child.itemsize\n * \n *         if not PyDataType_HASFIELDS(child):             # <<<<<<<<<<<<<<\n *             t = child.type_num\n *             if end - f < 5:\n */\n      goto __pyx_L13;\n    }\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":849\n *             # Cython ignores struct boundary information (\"T{...}\"),\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)             # <<<<<<<<<<<<<<\n *     return f\n * \n */\n    /*else*/ {\n      __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == NULL)) __PYX_ERR(1, 849, __pyx_L1_error)\n      __pyx_v_f = __pyx_t_9;\n    }\n    __pyx_L13:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":794\n *     cdef tuple fields\n * \n *     for childname in descr.names:             # <<<<<<<<<<<<<<\n *         fields = descr.fields[childname]\n *         child, new_offset = fields\n */\n  }\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":850\n *             # so don't output it\n *             f = _util_dtypestring(child, f, end, offset)\n *     return f             # <<<<<<<<<<<<<<\n * \n * \n */\n  __pyx_r = __pyx_v_f;\n  goto __pyx_L0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":785\n *     return PyArray_MultiIterNew(5, <void*>a, <void*>b, <void*>c, <void*> d, <void*> e)\n * \n * cdef inline char* _util_dtypestring(dtype descr, char* f, char* 
end, int* offset) except NULL:             # <<<<<<<<<<<<<<\n *     # Recursive utility function used in __getbuffer__ to get format\n *     # string. The new location in the format string is returned.\n */\n\n  /* function exit code */\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_3);\n  __Pyx_XDECREF(__pyx_t_4);\n  __Pyx_AddTraceback(\"numpy._util_dtypestring\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = NULL;\n  __pyx_L0:;\n  __Pyx_XDECREF((PyObject *)__pyx_v_child);\n  __Pyx_XDECREF(__pyx_v_fields);\n  __Pyx_XDECREF(__pyx_v_childname);\n  __Pyx_XDECREF(__pyx_v_new_offset);\n  __Pyx_XDECREF(__pyx_v_t);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *      if base is None:\n */\n\nstatic CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) {\n  PyObject *__pyx_v_baseptr;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  int __pyx_t_2;\n  __Pyx_RefNannySetupContext(\"set_array_base\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n  __pyx_t_1 = (__pyx_v_base == Py_None);\n  __pyx_t_2 = (__pyx_t_1 != 0);\n  if (__pyx_t_2) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":969\n *      cdef PyObject* baseptr\n *      if base is None:\n *          baseptr = NULL             # <<<<<<<<<<<<<<\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n */\n    __pyx_v_baseptr = NULL;\n\n    /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":968\n * cdef inline void set_array_base(ndarray arr, object base):\n *      cdef PyObject* baseptr\n *      if base is None:             # <<<<<<<<<<<<<<\n *          baseptr = NULL\n *      else:\n */\n    goto __pyx_L3;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":971\n *          baseptr = NULL\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!             # <<<<<<<<<<<<<<\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n */\n  /*else*/ {\n    Py_INCREF(__pyx_v_base);\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":972\n *      else:\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base             # <<<<<<<<<<<<<<\n *      Py_XDECREF(arr.base)\n *      arr.base = baseptr\n */\n    __pyx_v_baseptr = ((PyObject *)__pyx_v_base);\n  }\n  __pyx_L3:;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":973\n *          Py_INCREF(base) # important to do this before decref below!\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)             # <<<<<<<<<<<<<<\n *      arr.base = baseptr\n * \n */\n  Py_XDECREF(__pyx_v_arr->base);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":974\n *          baseptr = <PyObject*>base\n *      Py_XDECREF(arr.base)\n *      arr.base = baseptr             # <<<<<<<<<<<<<<\n * \n * cdef inline object get_array_base(ndarray arr):\n */\n  __pyx_v_arr->base = __pyx_v_baseptr;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":966\n * \n * \n * cdef inline void set_array_base(ndarray arr, object base):             # <<<<<<<<<<<<<<\n *      cdef PyObject* baseptr\n *     
 if base is None:\n */\n\n  /* function exit code */\n  __Pyx_RefNannyFinishContext();\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\nstatic CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) {\n  PyObject *__pyx_r = NULL;\n  __Pyx_RefNannyDeclarations\n  int __pyx_t_1;\n  __Pyx_RefNannySetupContext(\"get_array_base\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     else:\n */\n  __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0);\n  if (__pyx_t_1) {\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":978\n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:\n *         return None             # <<<<<<<<<<<<<<\n *     else:\n *         return <object>arr.base\n */\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(Py_None);\n    __pyx_r = Py_None;\n    goto __pyx_L0;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":977\n * \n * cdef inline object get_array_base(ndarray arr):\n *     if arr.base is NULL:             # <<<<<<<<<<<<<<\n *         return None\n *     else:\n */\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":980\n *         return None\n *     else:\n *         return <object>arr.base             # <<<<<<<<<<<<<<\n * \n * \n */\n  /*else*/ {\n    __Pyx_XDECREF(__pyx_r);\n    __Pyx_INCREF(((PyObject *)__pyx_v_arr->base));\n    __pyx_r = ((PyObject *)__pyx_v_arr->base);\n    goto __pyx_L0;\n  }\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":976\n *      arr.base = baseptr\n * \n * cdef inline object get_array_base(ndarray arr):             # <<<<<<<<<<<<<<\n *     if arr.base is NULL:\n *         return None\n */\n\n  /* function exit code */\n  __pyx_L0:;\n  __Pyx_XGIVEREF(__pyx_r);\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_array(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_array\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":987\n * cdef inline int import_array() except -1:\n *     try:\n *         _import_array()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n */\n      __pyx_t_4 = _import_array(); if (unlikely(__pyx_t_4 
== -1)) __PYX_ERR(1, 987, __pyx_L3_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":988\n *     try:\n *         _import_array()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 988, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 989, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 989, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":986\n * # Cython code.\n * cdef inline int import_array() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_array()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":985\n * # Versions of the import_* functions which are more suitable for\n * # Cython code.\n * cdef inline int import_array() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_array()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_array\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_umath(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_umath\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n 
* \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":993\n * cdef inline int import_umath() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 993, __pyx_L3_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":994\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 994, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      
__Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 995, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 995, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":992\n * \n * cdef inline int import_umath() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":991\n *         raise ImportError(\"numpy.core.multiarray failed to import\")\n * \n * cdef inline int import_umath() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_umath\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\n/* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\nstatic CYTHON_INLINE int __pyx_f_5numpy_import_ufunc(void) {\n  int __pyx_r;\n  __Pyx_RefNannyDeclarations\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  PyObject *__pyx_t_3 = NULL;\n  int __pyx_t_4;\n  PyObject *__pyx_t_5 = NULL;\n  PyObject *__pyx_t_6 = NULL;\n  PyObject *__pyx_t_7 = NULL;\n  PyObject *__pyx_t_8 = NULL;\n  __Pyx_RefNannySetupContext(\"import_ufunc\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n  {\n    __Pyx_PyThreadState_declare\n    __Pyx_PyThreadState_assign\n    __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3);\n    __Pyx_XGOTREF(__pyx_t_1);\n    __Pyx_XGOTREF(__pyx_t_2);\n    __Pyx_XGOTREF(__pyx_t_3);\n    /*try:*/ {\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":999\n * cdef inline int import_ufunc() except -1:\n *     try:\n *         _import_umath()             # <<<<<<<<<<<<<<\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n      __pyx_t_4 = _import_umath(); if (unlikely(__pyx_t_4 == -1)) __PYX_ERR(1, 999, __pyx_L3_error)\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    }\n    __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;\n    __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;\n    __Pyx_XDECREF(__pyx_t_3); 
__pyx_t_3 = 0;\n    goto __pyx_L10_try_end;\n    __pyx_L3_error:;\n    __Pyx_PyThreadState_assign\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1000\n *     try:\n *         _import_umath()\n *     except Exception:             # <<<<<<<<<<<<<<\n *         raise ImportError(\"numpy.core.umath failed to import\")\n */\n    __pyx_t_4 = __Pyx_PyErr_ExceptionMatches(((PyObject *)(&((PyTypeObject*)PyExc_Exception)[0])));\n    if (__pyx_t_4) {\n      __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n      if (__Pyx_GetException(&__pyx_t_5, &__pyx_t_6, &__pyx_t_7) < 0) __PYX_ERR(1, 1000, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_5);\n      __Pyx_GOTREF(__pyx_t_6);\n      __Pyx_GOTREF(__pyx_t_7);\n\n      /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n      __pyx_t_8 = __Pyx_PyObject_Call(__pyx_builtin_ImportError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 1001, __pyx_L5_except_error)\n      __Pyx_GOTREF(__pyx_t_8);\n      __Pyx_Raise(__pyx_t_8, 0, 0, 0);\n      __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;\n      __PYX_ERR(1, 1001, __pyx_L5_except_error)\n    }\n    goto __pyx_L5_except_error;\n    __pyx_L5_except_error:;\n\n    /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":998\n * \n * cdef inline int import_ufunc() except -1:\n *     try:             # <<<<<<<<<<<<<<\n *         _import_umath()\n *     except Exception:\n */\n    __Pyx_PyThreadState_assign\n    __Pyx_XGIVEREF(__pyx_t_1);\n    __Pyx_XGIVEREF(__pyx_t_2);\n    __Pyx_XGIVEREF(__pyx_t_3);\n    __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3);\n    goto __pyx_L1_error;\n    __pyx_L10_try_end:;\n  }\n\n  /* 
\"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /* function exit code */\n  __pyx_r = 0;\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_5);\n  __Pyx_XDECREF(__pyx_t_6);\n  __Pyx_XDECREF(__pyx_t_7);\n  __Pyx_XDECREF(__pyx_t_8);\n  __Pyx_AddTraceback(\"numpy.import_ufunc\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n  __pyx_r = -1;\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  return __pyx_r;\n}\n\nstatic PyMethodDef __pyx_methods[] = {\n  {0, 0, 0, 0}\n};\n\n#if PY_MAJOR_VERSION >= 3\nstatic struct PyModuleDef __pyx_moduledef = {\n  #if PY_VERSION_HEX < 0x03020000\n    { PyObject_HEAD_INIT(NULL) NULL, 0, NULL },\n  #else\n    PyModuleDef_HEAD_INIT,\n  #endif\n    \"bbox\",\n    0, /* m_doc */\n    -1, /* m_size */\n    __pyx_methods /* m_methods */,\n    NULL, /* m_reload */\n    NULL, /* m_traverse */\n    NULL, /* m_clear */\n    NULL /* m_free */\n};\n#endif\n\nstatic __Pyx_StringTabEntry __pyx_string_tab[] = {\n  {&__pyx_n_s_DTYPE, __pyx_k_DTYPE, sizeof(__pyx_k_DTYPE), 0, 0, 1, 1},\n  {&__pyx_kp_u_Format_string_allocated_too_shor, __pyx_k_Format_string_allocated_too_shor, sizeof(__pyx_k_Format_string_allocated_too_shor), 0, 1, 0, 0},\n  {&__pyx_kp_u_Format_string_allocated_too_shor_2, __pyx_k_Format_string_allocated_too_shor_2, sizeof(__pyx_k_Format_string_allocated_too_shor_2), 0, 1, 0, 0},\n  {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1},\n  {&__pyx_kp_u_Non_native_byte_order_not_suppor, __pyx_k_Non_native_byte_order_not_suppor, sizeof(__pyx_k_Non_native_byte_order_not_suppor), 0, 1, 0, 0},\n  {&__pyx_n_s_RuntimeError, __pyx_k_RuntimeError, sizeof(__pyx_k_RuntimeError), 0, 0, 1, 1},\n  {&__pyx_kp_s_Users_rowanz_code_scene_graph_l, 
__pyx_k_Users_rowanz_code_scene_graph_l, sizeof(__pyx_k_Users_rowanz_code_scene_graph_l), 0, 0, 1, 0},\n  {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1},\n  {&__pyx_n_s_ascontiguousarray, __pyx_k_ascontiguousarray, sizeof(__pyx_k_ascontiguousarray), 0, 0, 1, 1},\n  {&__pyx_n_s_bbox, __pyx_k_bbox, sizeof(__pyx_k_bbox), 0, 0, 1, 1},\n  {&__pyx_n_s_bbox_intersections, __pyx_k_bbox_intersections, sizeof(__pyx_k_bbox_intersections), 0, 0, 1, 1},\n  {&__pyx_n_s_bbox_overlaps, __pyx_k_bbox_overlaps, sizeof(__pyx_k_bbox_overlaps), 0, 0, 1, 1},\n  {&__pyx_n_s_boxes, __pyx_k_boxes, sizeof(__pyx_k_boxes), 0, 0, 1, 1},\n  {&__pyx_n_s_boxes_contig, __pyx_k_boxes_contig, sizeof(__pyx_k_boxes_contig), 0, 0, 1, 1},\n  {&__pyx_n_s_dtype, __pyx_k_dtype, sizeof(__pyx_k_dtype), 0, 0, 1, 1},\n  {&__pyx_n_s_float, __pyx_k_float, sizeof(__pyx_k_float), 0, 0, 1, 1},\n  {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},\n  {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},\n  {&__pyx_kp_u_ndarray_is_not_C_contiguous, __pyx_k_ndarray_is_not_C_contiguous, sizeof(__pyx_k_ndarray_is_not_C_contiguous), 0, 1, 0, 0},\n  {&__pyx_kp_u_ndarray_is_not_Fortran_contiguou, __pyx_k_ndarray_is_not_Fortran_contiguou, sizeof(__pyx_k_ndarray_is_not_Fortran_contiguou), 0, 1, 0, 0},\n  {&__pyx_n_s_np, __pyx_k_np, sizeof(__pyx_k_np), 0, 0, 1, 1},\n  {&__pyx_n_s_numpy, __pyx_k_numpy, sizeof(__pyx_k_numpy), 0, 0, 1, 1},\n  {&__pyx_kp_s_numpy_core_multiarray_failed_to, __pyx_k_numpy_core_multiarray_failed_to, sizeof(__pyx_k_numpy_core_multiarray_failed_to), 0, 0, 1, 0},\n  {&__pyx_kp_s_numpy_core_umath_failed_to_impor, __pyx_k_numpy_core_umath_failed_to_impor, sizeof(__pyx_k_numpy_core_umath_failed_to_impor), 0, 0, 1, 0},\n  {&__pyx_n_s_query_boxes, __pyx_k_query_boxes, sizeof(__pyx_k_query_boxes), 0, 0, 1, 1},\n  {&__pyx_n_s_query_contig, __pyx_k_query_contig, sizeof(__pyx_k_query_contig), 0, 0, 1, 1},\n  {&__pyx_n_s_range, __pyx_k_range, 
sizeof(__pyx_k_range), 0, 0, 1, 1},\n  {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},\n  {&__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_k_unknown_dtype_code_in_numpy_pxd, sizeof(__pyx_k_unknown_dtype_code_in_numpy_pxd), 0, 1, 0, 0},\n  {&__pyx_n_s_zeros, __pyx_k_zeros, sizeof(__pyx_k_zeros), 0, 0, 1, 1},\n  {0, 0, 0, 0, 0, 0, 0}\n};\nstatic int __Pyx_InitCachedBuiltins(void) {\n  __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 39, __pyx_L1_error)\n  __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 218, __pyx_L1_error)\n  __pyx_builtin_RuntimeError = __Pyx_GetBuiltinName(__pyx_n_s_RuntimeError); if (!__pyx_builtin_RuntimeError) __PYX_ERR(1, 799, __pyx_L1_error)\n  __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(1, 989, __pyx_L1_error)\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\nstatic int __Pyx_InitCachedConstants(void) {\n  __Pyx_RefNannyDeclarations\n  __Pyx_RefNannySetupContext(\"__Pyx_InitCachedConstants\", 0);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":218\n *             if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)\n *                 and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not C contiguous\")             # <<<<<<<<<<<<<<\n * \n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n */\n  __pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_C_contiguous); if (unlikely(!__pyx_tuple_)) __PYX_ERR(1, 218, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple_);\n  __Pyx_GIVEREF(__pyx_tuple_);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":222\n *             if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)\n *             
    and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):\n *                 raise ValueError(u\"ndarray is not Fortran contiguous\")             # <<<<<<<<<<<<<<\n * \n *             info.buf = PyArray_DATA(self)\n */\n  __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_u_ndarray_is_not_Fortran_contiguou); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 222, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__2);\n  __Pyx_GIVEREF(__pyx_tuple__2);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":259\n *                 if ((descr.byteorder == c'>' and little_endian) or\n *                     (descr.byteorder == c'<' and not little_endian)):\n *                     raise ValueError(u\"Non-native byte order not supported\")             # <<<<<<<<<<<<<<\n *                 if   t == NPY_BYTE:        f = \"b\"\n *                 elif t == NPY_UBYTE:       f = \"B\"\n */\n  __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 259, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__3);\n  __Pyx_GIVEREF(__pyx_tuple__3);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":799\n * \n *         if (end - f) - <int>(new_offset - offset[0]) < 15:\n *             raise RuntimeError(u\"Format string allocated too short, see comment in numpy.pxd\")             # <<<<<<<<<<<<<<\n * \n *         if ((child.byteorder == c'>' and little_endian) or\n */\n  __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 799, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__4);\n  __Pyx_GIVEREF(__pyx_tuple__4);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":803\n *         if ((child.byteorder == c'>' and little_endian) or\n *             (child.byteorder == c'<' and not little_endian)):\n *             raise ValueError(u\"Non-native byte 
order not supported\")             # <<<<<<<<<<<<<<\n *             # One could encode it in the format string and have Cython\n *             # complain instead, BUT: < and > in format strings also imply\n */\n  __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_u_Non_native_byte_order_not_suppor); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 803, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__5);\n  __Pyx_GIVEREF(__pyx_tuple__5);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":823\n *             t = child.type_num\n *             if end - f < 5:\n *                 raise RuntimeError(u\"Format string allocated too short.\")             # <<<<<<<<<<<<<<\n * \n *             # Until ticket #99 is fixed, use integers to avoid warnings\n */\n  __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_u_Format_string_allocated_too_shor_2); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 823, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__6);\n  __Pyx_GIVEREF(__pyx_tuple__6);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":989\n *         _import_array()\n *     except Exception:\n *         raise ImportError(\"numpy.core.multiarray failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_umath() except -1:\n */\n  __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_multiarray_failed_to); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 989, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__7);\n  __Pyx_GIVEREF(__pyx_tuple__7);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":995\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n * \n * cdef inline int import_ufunc() except -1:\n */\n  __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 995, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__8);\n  __Pyx_GIVEREF(__pyx_tuple__8);\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":1001\n *         _import_umath()\n *     except Exception:\n *         raise ImportError(\"numpy.core.umath failed to import\")             # <<<<<<<<<<<<<<\n */\n  __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_numpy_core_umath_failed_to_impor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 1001, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__9);\n  __Pyx_GIVEREF(__pyx_tuple__9);\n\n  /* \"bbox.pyx\":15\n * ctypedef np.float_t DTYPE_t\n * \n * def bbox_overlaps(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n  __pyx_tuple__10 = PyTuple_Pack(4, __pyx_n_s_boxes, __pyx_n_s_query_boxes, __pyx_n_s_boxes_contig, __pyx_n_s_query_contig); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_tuple__10);\n  __Pyx_GIVEREF(__pyx_tuple__10);\n  __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(2, 0, 4, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Users_rowanz_code_scene_graph_l, __pyx_n_s_bbox_overlaps, 15, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 15, __pyx_L1_error)\n\n  /* \"bbox.pyx\":64\n * \n * \n * def bbox_intersections(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n  __pyx_tuple__12 = PyTuple_Pack(4, __pyx_n_s_boxes, __pyx_n_s_query_boxes, __pyx_n_s_boxes_contig, __pyx_n_s_query_contig); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 64, __pyx_L1_error)\n  
__Pyx_GOTREF(__pyx_tuple__12);\n  __Pyx_GIVEREF(__pyx_tuple__12);\n  __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(2, 0, 4, 0, 0, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Users_rowanz_code_scene_graph_l, __pyx_n_s_bbox_intersections, 64, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 64, __pyx_L1_error)\n  __Pyx_RefNannyFinishContext();\n  return 0;\n  __pyx_L1_error:;\n  __Pyx_RefNannyFinishContext();\n  return -1;\n}\n\nstatic int __Pyx_InitGlobals(void) {\n  if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  return 0;\n  __pyx_L1_error:;\n  return -1;\n}\n\n#if PY_MAJOR_VERSION < 3\nPyMODINIT_FUNC initbbox(void); /*proto*/\nPyMODINIT_FUNC initbbox(void)\n#else\nPyMODINIT_FUNC PyInit_bbox(void); /*proto*/\nPyMODINIT_FUNC PyInit_bbox(void)\n#endif\n{\n  PyObject *__pyx_t_1 = NULL;\n  PyObject *__pyx_t_2 = NULL;\n  __Pyx_RefNannyDeclarations\n  #if CYTHON_REFNANNY\n  __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"refnanny\");\n  if (!__Pyx_RefNanny) {\n      PyErr_Clear();\n      __Pyx_RefNanny = __Pyx_RefNannyImportAPI(\"Cython.Runtime.refnanny\");\n      if (!__Pyx_RefNanny)\n          Py_FatalError(\"failed to import 'refnanny' module\");\n  }\n  #endif\n  __Pyx_RefNannySetupContext(\"PyMODINIT_FUNC PyInit_bbox(void)\", 0);\n  if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_bytes = PyBytes_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_empty_unicode = PyUnicode_FromStringAndSize(\"\", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #ifdef __Pyx_CyFunction_USED\n  if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_FusedFunction_USED\n  if (__pyx_FusedFunction_init() < 0) 
__PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Coroutine_USED\n  if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_Generator_USED\n  if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  #ifdef __Pyx_StopAsyncIteration_USED\n  if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  /*--- Library function declarations ---*/\n  /*--- Threads initialization code ---*/\n  #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS\n  #ifdef WITH_THREAD /* Python build with threading support? */\n  PyEval_InitThreads();\n  #endif\n  #endif\n  /*--- Module creation code ---*/\n  #if PY_MAJOR_VERSION < 3\n  __pyx_m = Py_InitModule4(\"bbox\", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);\n  #else\n  __pyx_m = PyModule_Create(&__pyx_moduledef);\n  #endif\n  if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)\n  Py_INCREF(__pyx_d);\n  __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if CYTHON_COMPILING_IN_PYPY\n  Py_INCREF(__pyx_b);\n  #endif\n  if (PyObject_SetAttrString(__pyx_m, \"__builtins__\", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);\n  /*--- Initialize various global constants etc. 
---*/\n  if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n  if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n  if (__pyx_module_is_main_bbox) {\n    if (PyObject_SetAttrString(__pyx_m, \"__name__\", __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  }\n  #if PY_MAJOR_VERSION >= 3\n  {\n    PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)\n    if (!PyDict_GetItemString(modules, \"bbox\")) {\n      if (unlikely(PyDict_SetItemString(modules, \"bbox\", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)\n    }\n  }\n  #endif\n  /*--- Builtin init code ---*/\n  if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Constants init code ---*/\n  if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  /*--- Global init code ---*/\n  /*--- Variable export code ---*/\n  /*--- Function export code ---*/\n  /*--- Type init code ---*/\n  /*--- Type import code ---*/\n  __pyx_ptype_7cpython_4type_type = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, \"type\", \n  #if CYTHON_COMPILING_IN_PYPY\n  sizeof(PyTypeObject),\n  #else\n  sizeof(PyHeapTypeObject),\n  #endif\n  0); if (unlikely(!__pyx_ptype_7cpython_4type_type)) __PYX_ERR(2, 9, __pyx_L1_error)\n  __pyx_ptype_5numpy_dtype = __Pyx_ImportType(\"numpy\", \"dtype\", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) __PYX_ERR(1, 155, __pyx_L1_error)\n  __pyx_ptype_5numpy_flatiter = __Pyx_ImportType(\"numpy\", \"flatiter\", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) __PYX_ERR(1, 168, __pyx_L1_error)\n  __pyx_ptype_5numpy_broadcast = __Pyx_ImportType(\"numpy\", \"broadcast\", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) __PYX_ERR(1, 172, __pyx_L1_error)\n  __pyx_ptype_5numpy_ndarray = 
__Pyx_ImportType(\"numpy\", \"ndarray\", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) __PYX_ERR(1, 181, __pyx_L1_error)\n  __pyx_ptype_5numpy_ufunc = __Pyx_ImportType(\"numpy\", \"ufunc\", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) __PYX_ERR(1, 861, __pyx_L1_error)\n  /*--- Variable import code ---*/\n  /*--- Function import code ---*/\n  /*--- Execution code ---*/\n  #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)\n  if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  #endif\n\n  /* \"bbox.pyx\":9\n * \n * cimport cython\n * import numpy as np             # <<<<<<<<<<<<<<\n * cimport numpy as np\n * \n */\n  __pyx_t_1 = __Pyx_Import(__pyx_n_s_numpy, 0, -1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 9, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_np, __pyx_t_1) < 0) __PYX_ERR(0, 9, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n\n  /* \"bbox.pyx\":12\n * cimport numpy as np\n * \n * DTYPE = np.float             # <<<<<<<<<<<<<<\n * ctypedef np.float_t DTYPE_t\n * \n */\n  __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_1);\n  __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_DTYPE, __pyx_t_2) < 0) __PYX_ERR(0, 12, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"bbox.pyx\":15\n * ctypedef np.float_t DTYPE_t\n * \n * def bbox_overlaps(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n  __pyx_t_2 = 
PyCFunction_NewEx(&__pyx_mdef_4bbox_1bbox_overlaps, NULL, __pyx_n_s_bbox); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_bbox_overlaps, __pyx_t_2) < 0) __PYX_ERR(0, 15, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"bbox.pyx\":64\n * \n * \n * def bbox_intersections(boxes, query_boxes):             # <<<<<<<<<<<<<<\n *     cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n *     cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n */\n  __pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_4bbox_3bbox_intersections, NULL, __pyx_n_s_bbox); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 64, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_bbox_intersections, __pyx_t_2) < 0) __PYX_ERR(0, 64, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"bbox.pyx\":1\n * # --------------------------------------------------------             # <<<<<<<<<<<<<<\n * # Fast R-CNN\n * # Copyright (c) 2015 Microsoft\n */\n  __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_GOTREF(__pyx_t_2);\n  if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error)\n  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;\n\n  /* \"../../../../../anaconda/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd\":997\n *         raise ImportError(\"numpy.core.umath failed to import\")\n * \n * cdef inline int import_ufunc() except -1:             # <<<<<<<<<<<<<<\n *     try:\n *         _import_umath()\n */\n\n  /*--- Wrapped vars code ---*/\n\n  goto __pyx_L0;\n  __pyx_L1_error:;\n  __Pyx_XDECREF(__pyx_t_1);\n  __Pyx_XDECREF(__pyx_t_2);\n  if (__pyx_m) {\n    if (__pyx_d) {\n      __Pyx_AddTraceback(\"init bbox\", __pyx_clineno, __pyx_lineno, __pyx_filename);\n    }\n    Py_DECREF(__pyx_m); __pyx_m = 0;\n  } else if 
(!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_ImportError, \"init bbox\");\n  }\n  __pyx_L0:;\n  __Pyx_RefNannyFinishContext();\n  #if PY_MAJOR_VERSION < 3\n  return;\n  #else\n  return __pyx_m;\n  #endif\n}\n\n/* --- Runtime support code --- */\n/* Refnanny */\n#if CYTHON_REFNANNY\nstatic __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {\n    PyObject *m = NULL, *p = NULL;\n    void *r = NULL;\n    m = PyImport_ImportModule((char *)modname);\n    if (!m) goto end;\n    p = PyObject_GetAttrString(m, (char *)\"RefNannyAPI\");\n    if (!p) goto end;\n    r = PyLong_AsVoidPtr(p);\nend:\n    Py_XDECREF(p);\n    Py_XDECREF(m);\n    return (__Pyx_RefNannyAPIStruct *)r;\n}\n#endif\n\n/* GetBuiltinName */\nstatic PyObject *__Pyx_GetBuiltinName(PyObject *name) {\n    PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);\n    if (unlikely(!result)) {\n        PyErr_Format(PyExc_NameError,\n#if PY_MAJOR_VERSION >= 3\n            \"name '%U' is not defined\", name);\n#else\n            \"name '%.200s' is not defined\", PyString_AS_STRING(name));\n#endif\n    }\n    return result;\n}\n\n/* RaiseArgTupleInvalid */\nstatic void __Pyx_RaiseArgtupleInvalid(\n    const char* func_name,\n    int exact,\n    Py_ssize_t num_min,\n    Py_ssize_t num_max,\n    Py_ssize_t num_found)\n{\n    Py_ssize_t num_expected;\n    const char *more_or_less;\n    if (num_found < num_min) {\n        num_expected = num_min;\n        more_or_less = \"at least\";\n    } else {\n        num_expected = num_max;\n        more_or_less = \"at most\";\n    }\n    if (exact) {\n        more_or_less = \"exactly\";\n    }\n    PyErr_Format(PyExc_TypeError,\n                 \"%.200s() takes %.8s %\" CYTHON_FORMAT_SSIZE_T \"d positional argument%.1s (%\" CYTHON_FORMAT_SSIZE_T \"d given)\",\n                 func_name, more_or_less, num_expected,\n                 (num_expected == 1) ? 
\"\" : \"s\", num_found);\n}\n\n/* RaiseDoubleKeywords */\nstatic void __Pyx_RaiseDoubleKeywordsError(\n    const char* func_name,\n    PyObject* kw_name)\n{\n    PyErr_Format(PyExc_TypeError,\n        #if PY_MAJOR_VERSION >= 3\n        \"%s() got multiple values for keyword argument '%U'\", func_name, kw_name);\n        #else\n        \"%s() got multiple values for keyword argument '%s'\", func_name,\n        PyString_AsString(kw_name));\n        #endif\n}\n\n/* ParseKeywords */\nstatic int __Pyx_ParseOptionalKeywords(\n    PyObject *kwds,\n    PyObject **argnames[],\n    PyObject *kwds2,\n    PyObject *values[],\n    Py_ssize_t num_pos_args,\n    const char* function_name)\n{\n    PyObject *key = 0, *value = 0;\n    Py_ssize_t pos = 0;\n    PyObject*** name;\n    PyObject*** first_kw_arg = argnames + num_pos_args;\n    while (PyDict_Next(kwds, &pos, &key, &value)) {\n        name = first_kw_arg;\n        while (*name && (**name != key)) name++;\n        if (*name) {\n            values[name-argnames] = value;\n            continue;\n        }\n        name = first_kw_arg;\n        #if PY_MAJOR_VERSION < 3\n        if (likely(PyString_CheckExact(key)) || likely(PyString_Check(key))) {\n            while (*name) {\n                if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))\n                        && _PyString_Eq(**name, key)) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    if ((**argname == key) || (\n                            (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))\n                             && _PyString_Eq(**argname, key))) {\n                        goto arg_passed_twice;\n                    }\n            
        argname++;\n                }\n            }\n        } else\n        #endif\n        if (likely(PyUnicode_Check(key))) {\n            while (*name) {\n                int cmp = (**name == key) ? 0 :\n                #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                    (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\n                #endif\n                    PyUnicode_Compare(**name, key);\n                if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                if (cmp == 0) {\n                    values[name-argnames] = value;\n                    break;\n                }\n                name++;\n            }\n            if (*name) continue;\n            else {\n                PyObject*** argname = argnames;\n                while (argname != first_kw_arg) {\n                    int cmp = (**argname == key) ? 0 :\n                    #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3\n                        (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 :\n                    #endif\n                        PyUnicode_Compare(**argname, key);\n                    if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;\n                    if (cmp == 0) goto arg_passed_twice;\n                    argname++;\n                }\n            }\n        } else\n            goto invalid_keyword_type;\n        if (kwds2) {\n            if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;\n        } else {\n            goto invalid_keyword;\n        }\n    }\n    return 0;\narg_passed_twice:\n    __Pyx_RaiseDoubleKeywordsError(function_name, key);\n    goto bad;\ninvalid_keyword_type:\n    PyErr_Format(PyExc_TypeError,\n        \"%.200s() keywords must be strings\", function_name);\n    goto bad;\ninvalid_keyword:\n    PyErr_Format(PyExc_TypeError,\n    #if PY_MAJOR_VERSION < 3\n        \"%.200s() got an unexpected keyword argument '%.200s'\",\n        function_name, PyString_AsString(key));\n    #else\n        \"%s() got an unexpected keyword argument '%U'\",\n        function_name, key);\n    #endif\nbad:\n    return -1;\n}\n\n/* GetModuleGlobalName */\nstatic CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name) {\n    PyObject *result;\n#if !CYTHON_AVOID_BORROWED_REFS\n    result = PyDict_GetItem(__pyx_d, name);\n    if (likely(result)) {\n        Py_INCREF(result);\n    } else {\n#else\n    result = PyObject_GetItem(__pyx_d, name);\n    if (!result) {\n        PyErr_Clear();\n#endif\n        result = __Pyx_GetBuiltinName(name);\n    }\n    return result;\n}\n\n/* PyObjectCall */\n  #if CYTHON_COMPILING_IN_CPYTHON\nstatic CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {\n    PyObject *result;\n    ternaryfunc call = func->ob_type->tp_call;\n    if (unlikely(!call))\n        return PyObject_Call(func, arg, kw);\n    if (unlikely(Py_EnterRecursiveCall((char*)\" while calling a Python object\")))\n        return NULL;\n    result = (*call)(func, arg, kw);\n    
Py_LeaveRecursiveCall();\n    if (unlikely(!result) && unlikely(!PyErr_Occurred())) {\n        PyErr_SetString(\n            PyExc_SystemError,\n            \"NULL result without error in PyObject_Call\");\n    }\n    return result;\n}\n#endif\n\n/* ExtTypeTest */\n  static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) {\n    if (unlikely(!type)) {\n        PyErr_SetString(PyExc_SystemError, \"Missing type object\");\n        return 0;\n    }\n    if (likely(PyObject_TypeCheck(obj, type)))\n        return 1;\n    PyErr_Format(PyExc_TypeError, \"Cannot convert %.200s to %.200s\",\n                 Py_TYPE(obj)->tp_name, type->tp_name);\n    return 0;\n}\n\n/* BufferFormatCheck */\n  static CYTHON_INLINE int __Pyx_IsLittleEndian(void) {\n  unsigned int n = 1;\n  return *(unsigned char*)(&n) != 0;\n}\nstatic void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,\n                              __Pyx_BufFmt_StackElem* stack,\n                              __Pyx_TypeInfo* type) {\n  stack[0].field = &ctx->root;\n  stack[0].parent_offset = 0;\n  ctx->root.type = type;\n  ctx->root.name = \"buffer dtype\";\n  ctx->root.offset = 0;\n  ctx->head = stack;\n  ctx->head->field = &ctx->root;\n  ctx->fmt_offset = 0;\n  ctx->head->parent_offset = 0;\n  ctx->new_packmode = '@';\n  ctx->enc_packmode = '@';\n  ctx->new_count = 1;\n  ctx->enc_count = 0;\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  ctx->is_valid_array = 0;\n  ctx->struct_alignment = 0;\n  while (type->typegroup == 'S') {\n    ++ctx->head;\n    ctx->head->field = type->fields;\n    ctx->head->parent_offset = 0;\n    type = type->fields->type;\n  }\n}\nstatic int __Pyx_BufFmt_ParseNumber(const char** ts) {\n    int count;\n    const char* t = *ts;\n    if (*t < '0' || *t > '9') {\n      return -1;\n    } else {\n        count = *t++ - '0';\n        while (*t >= '0' && *t < '9') {\n            count *= 10;\n            count += *t++ - '0';\n        }\n    }\n    *ts = t;\n    return count;\n}\nstatic 
int __Pyx_BufFmt_ExpectNumber(const char **ts) {\n    int number = __Pyx_BufFmt_ParseNumber(ts);\n    if (number == -1)\n        PyErr_Format(PyExc_ValueError,\\\n                     \"Does not understand character buffer dtype format string ('%c')\", **ts);\n    return number;\n}\nstatic void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) {\n  PyErr_Format(PyExc_ValueError,\n               \"Unexpected format string character: '%c'\", ch);\n}\nstatic const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': return \"'char'\";\n    case 'b': return \"'signed char'\";\n    case 'B': return \"'unsigned char'\";\n    case 'h': return \"'short'\";\n    case 'H': return \"'unsigned short'\";\n    case 'i': return \"'int'\";\n    case 'I': return \"'unsigned int'\";\n    case 'l': return \"'long'\";\n    case 'L': return \"'unsigned long'\";\n    case 'q': return \"'long long'\";\n    case 'Q': return \"'unsigned long long'\";\n    case 'f': return (is_complex ? \"'complex float'\" : \"'float'\");\n    case 'd': return (is_complex ? \"'complex double'\" : \"'double'\");\n    case 'g': return (is_complex ? \"'complex long double'\" : \"'long double'\");\n    case 'T': return \"a struct\";\n    case 'O': return \"Python object\";\n    case 'P': return \"a pointer\";\n    case 's': case 'p': return \"a string\";\n    case 0: return \"end\";\n    default: return \"unparseable format string\";\n  }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return 2;\n    case 'i': case 'I': case 'l': case 'L': return 4;\n    case 'q': case 'Q': return 8;\n    case 'f': return (is_complex ? 8 : 4);\n    case 'd': return (is_complex ? 
16 : 8);\n    case 'g': {\n      PyErr_SetString(PyExc_ValueError, \"Python does not define a standard format string size for long double ('g')..\");\n      return 0;\n    }\n    case 'O': case 'P': return sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) {\n  switch (ch) {\n    case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(short);\n    case 'i': case 'I': return sizeof(int);\n    case 'l': case 'L': return sizeof(long);\n    #ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(PY_LONG_LONG);\n    #endif\n    case 'f': return sizeof(float) * (is_complex ? 2 : 1);\n    case 'd': return sizeof(double) * (is_complex ? 2 : 1);\n    case 'g': return sizeof(long double) * (is_complex ? 2 : 1);\n    case 'O': case 'P': return sizeof(void*);\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\ntypedef struct { char c; short x; } __Pyx_st_short;\ntypedef struct { char c; int x; } __Pyx_st_int;\ntypedef struct { char c; long x; } __Pyx_st_long;\ntypedef struct { char c; float x; } __Pyx_st_float;\ntypedef struct { char c; double x; } __Pyx_st_double;\ntypedef struct { char c; long double x; } __Pyx_st_longdouble;\ntypedef struct { char c; void *x; } __Pyx_st_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - 
sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_st_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_st_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\n/* These are for computing the padding at the end of the struct to align\n   on the first member of the struct. This will probably the same as above,\n   but we don't have any guarantees.\n */\ntypedef struct { short x; char c; } __Pyx_pad_short;\ntypedef struct { int x; char c; } __Pyx_pad_int;\ntypedef struct { long x; char c; } __Pyx_pad_long;\ntypedef struct { float x; char c; } __Pyx_pad_float;\ntypedef struct { double x; char c; } __Pyx_pad_double;\ntypedef struct { long double x; char c; } __Pyx_pad_longdouble;\ntypedef struct { void *x; char c; } __Pyx_pad_void_p;\n#ifdef HAVE_LONG_LONG\ntypedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong;\n#endif\nstatic size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) {\n  switch (ch) {\n    case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1;\n    case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short);\n    case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int);\n    case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long);\n#ifdef HAVE_LONG_LONG\n    case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG);\n#endif\n    case 'f': return sizeof(__Pyx_pad_float) - sizeof(float);\n    case 'd': return sizeof(__Pyx_pad_double) - sizeof(double);\n    case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double);\n    case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*);\n    default:\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n}\nstatic char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) 
{\n  switch (ch) {\n    case 'c':\n        return 'H';\n    case 'b': case 'h': case 'i':\n    case 'l': case 'q': case 's': case 'p':\n        return 'I';\n    case 'B': case 'H': case 'I': case 'L': case 'Q':\n        return 'U';\n    case 'f': case 'd': case 'g':\n        return (is_complex ? 'C' : 'R');\n    case 'O':\n        return 'O';\n    case 'P':\n        return 'P';\n    default: {\n      __Pyx_BufFmt_RaiseUnexpectedChar(ch);\n      return 0;\n    }\n  }\n}\nstatic void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) {\n  if (ctx->head == NULL || ctx->head->field == &ctx->root) {\n    const char* expected;\n    const char* quote;\n    if (ctx->head == NULL) {\n      expected = \"end\";\n      quote = \"\";\n    } else {\n      expected = ctx->head->field->type->name;\n      quote = \"'\";\n    }\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected %s%s%s but got %s\",\n                 quote, expected, quote,\n                 __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex));\n  } else {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_StructField* parent = (ctx->head - 1)->field;\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'\",\n                 field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex),\n                 parent->type->name, field->name);\n  }\n}\nstatic int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) {\n  char group;\n  size_t size, offset, arraysize = 1;\n  if (ctx->enc_type == 0) return 0;\n  if (ctx->head->field->type->arraysize[0]) {\n    int i, ndim = 0;\n    if (ctx->enc_type == 's' || ctx->enc_type == 'p') {\n        ctx->is_valid_array = ctx->head->field->type->ndim == 1;\n        ndim = 1;\n        if (ctx->enc_count != ctx->head->field->type->arraysize[0]) {\n            PyErr_Format(PyExc_ValueError,\n                         \"Expected a dimension of size 
%zu, got %zu\",\n                         ctx->head->field->type->arraysize[0], ctx->enc_count);\n            return -1;\n        }\n    }\n    if (!ctx->is_valid_array) {\n      PyErr_Format(PyExc_ValueError, \"Expected %d dimensions, got %d\",\n                   ctx->head->field->type->ndim, ndim);\n      return -1;\n    }\n    for (i = 0; i < ctx->head->field->type->ndim; i++) {\n      arraysize *= ctx->head->field->type->arraysize[i];\n    }\n    ctx->is_valid_array = 0;\n    ctx->enc_count = 1;\n  }\n  group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex);\n  do {\n    __Pyx_StructField* field = ctx->head->field;\n    __Pyx_TypeInfo* type = field->type;\n    if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') {\n      size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex);\n    } else {\n      size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex);\n    }\n    if (ctx->enc_packmode == '@') {\n      size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex);\n      size_t align_mod_offset;\n      if (align_at == 0) return -1;\n      align_mod_offset = ctx->fmt_offset % align_at;\n      if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset;\n      if (ctx->struct_alignment == 0)\n          ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type,\n                                                                 ctx->is_complex);\n    }\n    if (type->size != size || type->typegroup != group) {\n      if (type->typegroup == 'C' && type->fields != NULL) {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        ++ctx->head;\n        ctx->head->field = type->fields;\n        ctx->head->parent_offset = parent_offset;\n        continue;\n      }\n      if ((type->typegroup == 'H' || group == 'H') && type->size == size) {\n      } else {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n      }\n    }\n    offset 
= ctx->head->parent_offset + field->offset;\n    if (ctx->fmt_offset != offset) {\n      PyErr_Format(PyExc_ValueError,\n                   \"Buffer dtype mismatch; next field is at offset %\" CYTHON_FORMAT_SSIZE_T \"d but %\" CYTHON_FORMAT_SSIZE_T \"d expected\",\n                   (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset);\n      return -1;\n    }\n    ctx->fmt_offset += size;\n    if (arraysize)\n      ctx->fmt_offset += (arraysize - 1) * size;\n    --ctx->enc_count;\n    while (1) {\n      if (field == &ctx->root) {\n        ctx->head = NULL;\n        if (ctx->enc_count != 0) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return -1;\n        }\n        break;\n      }\n      ctx->head->field = ++field;\n      if (field->type == NULL) {\n        --ctx->head;\n        field = ctx->head->field;\n        continue;\n      } else if (field->type->typegroup == 'S') {\n        size_t parent_offset = ctx->head->parent_offset + field->offset;\n        if (field->type->fields->type == NULL) continue;\n        field = field->type->fields;\n        ++ctx->head;\n        ctx->head->field = field;\n        ctx->head->parent_offset = parent_offset;\n        break;\n      } else {\n        break;\n      }\n    }\n  } while (ctx->enc_count);\n  ctx->enc_type = 0;\n  ctx->is_complex = 0;\n  return 0;\n}\nstatic CYTHON_INLINE PyObject *\n__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp)\n{\n    const char *ts = *tsp;\n    int i = 0, number;\n    int ndim = ctx->head->field->type->ndim;\n    ++ts;\n    if (ctx->new_count != 1) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Cannot handle repeated arrays in format string\");\n        return NULL;\n    }\n    if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n    while (*ts && *ts != ')') {\n        switch (*ts) {\n            case ' ': case '\\f': case '\\r': case '\\n': case '\\t': case '\\v':  ++ts; continue;\n            default:  break;\n        }\n        
number = __Pyx_BufFmt_ExpectNumber(&ts);\n        if (number == -1) return NULL;\n        if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i])\n            return PyErr_Format(PyExc_ValueError,\n                        \"Expected a dimension of size %zu, got %d\",\n                        ctx->head->field->type->arraysize[i], number);\n        if (*ts != ',' && *ts != ')')\n            return PyErr_Format(PyExc_ValueError,\n                                \"Expected a comma in format string, got '%c'\", *ts);\n        if (*ts == ',') ts++;\n        i++;\n    }\n    if (i != ndim)\n        return PyErr_Format(PyExc_ValueError, \"Expected %d dimension(s), got %d\",\n                            ctx->head->field->type->ndim, i);\n    if (!*ts) {\n        PyErr_SetString(PyExc_ValueError,\n                        \"Unexpected end of format string, expected ')'\");\n        return NULL;\n    }\n    ctx->is_valid_array = 1;\n    ctx->new_count = 1;\n    *tsp = ++ts;\n    return Py_None;\n}\nstatic const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) {\n  int got_Z = 0;\n  while (1) {\n    switch(*ts) {\n      case 0:\n        if (ctx->enc_type != 0 && ctx->head == NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        if (ctx->head != NULL) {\n          __Pyx_BufFmt_RaiseExpected(ctx);\n          return NULL;\n        }\n        return ts;\n      case ' ':\n      case '\\r':\n      case '\\n':\n        ++ts;\n        break;\n      case '<':\n        if (!__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Little-endian buffer not supported on big-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '>':\n      case '!':\n        if (__Pyx_IsLittleEndian()) {\n          PyErr_SetString(PyExc_ValueError, \"Big-endian 
buffer not supported on little-endian compiler\");\n          return NULL;\n        }\n        ctx->new_packmode = '=';\n        ++ts;\n        break;\n      case '=':\n      case '@':\n      case '^':\n        ctx->new_packmode = *ts++;\n        break;\n      case 'T':\n        {\n          const char* ts_after_sub;\n          size_t i, struct_count = ctx->new_count;\n          size_t struct_alignment = ctx->struct_alignment;\n          ctx->new_count = 1;\n          ++ts;\n          if (*ts != '{') {\n            PyErr_SetString(PyExc_ValueError, \"Buffer acquisition: Expected '{' after 'T'\");\n            return NULL;\n          }\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          ctx->enc_count = 0;\n          ctx->struct_alignment = 0;\n          ++ts;\n          ts_after_sub = ts;\n          for (i = 0; i != struct_count; ++i) {\n            ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts);\n            if (!ts_after_sub) return NULL;\n          }\n          ts = ts_after_sub;\n          if (struct_alignment) ctx->struct_alignment = struct_alignment;\n        }\n        break;\n      case '}':\n        {\n          size_t alignment = ctx->struct_alignment;\n          ++ts;\n          if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n          ctx->enc_type = 0;\n          if (alignment && ctx->fmt_offset % alignment) {\n            ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment);\n          }\n        }\n        return ts;\n      case 'x':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->fmt_offset += ctx->new_count;\n        ctx->new_count = 1;\n        ctx->enc_count = 0;\n        ctx->enc_type = 0;\n        ctx->enc_packmode = ctx->new_packmode;\n        ++ts;\n        break;\n      case 'Z':\n        got_Z = 1;\n        ++ts;\n        if (*ts != 'f' && *ts != 'd' && *ts != 'g') {\n          __Pyx_BufFmt_RaiseUnexpectedChar('Z');\n          
return NULL;\n        }\n      case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I':\n      case 'l': case 'L': case 'q': case 'Q':\n      case 'f': case 'd': case 'g':\n      case 'O': case 'p':\n        if (ctx->enc_type == *ts && got_Z == ctx->is_complex &&\n            ctx->enc_packmode == ctx->new_packmode) {\n          ctx->enc_count += ctx->new_count;\n          ctx->new_count = 1;\n          got_Z = 0;\n          ++ts;\n          break;\n        }\n      case 's':\n        if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL;\n        ctx->enc_count = ctx->new_count;\n        ctx->enc_packmode = ctx->new_packmode;\n        ctx->enc_type = *ts;\n        ctx->is_complex = got_Z;\n        ++ts;\n        ctx->new_count = 1;\n        got_Z = 0;\n        break;\n      case ':':\n        ++ts;\n        while(*ts != ':') ++ts;\n        ++ts;\n        break;\n      case '(':\n        if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL;\n        break;\n      default:\n        {\n          int number = __Pyx_BufFmt_ExpectNumber(&ts);\n          if (number == -1) return NULL;\n          ctx->new_count = (size_t)number;\n        }\n    }\n  }\n}\nstatic CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) {\n  buf->buf = NULL;\n  buf->obj = NULL;\n  buf->strides = __Pyx_zeros;\n  buf->shape = __Pyx_zeros;\n  buf->suboffsets = __Pyx_minusones;\n}\nstatic CYTHON_INLINE int __Pyx_GetBufferAndValidate(\n        Py_buffer* buf, PyObject* obj,  __Pyx_TypeInfo* dtype, int flags,\n        int nd, int cast, __Pyx_BufFmt_StackElem* stack)\n{\n  if (obj == Py_None || obj == NULL) {\n    __Pyx_ZeroBuffer(buf);\n    return 0;\n  }\n  buf->buf = NULL;\n  if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail;\n  if (buf->ndim != nd) {\n    PyErr_Format(PyExc_ValueError,\n                 \"Buffer has wrong number of dimensions (expected %d, got %d)\",\n                 nd, buf->ndim);\n    goto fail;\n  }\n  if (!cast) {\n    __Pyx_BufFmt_Context ctx;\n    
__Pyx_BufFmt_Init(&ctx, stack, dtype);\n    if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail;\n  }\n  if ((unsigned)buf->itemsize != dtype->size) {\n    PyErr_Format(PyExc_ValueError,\n      \"Item size of buffer (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s) does not match size of '%s' (%\" CYTHON_FORMAT_SSIZE_T \"d byte%s)\",\n      buf->itemsize, (buf->itemsize > 1) ? \"s\" : \"\",\n      dtype->name, (Py_ssize_t)dtype->size, (dtype->size > 1) ? \"s\" : \"\");\n    goto fail;\n  }\n  if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones;\n  return 0;\nfail:;\n  __Pyx_ZeroBuffer(buf);\n  return -1;\n}\nstatic CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) {\n  if (info->buf == NULL) return;\n  if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL;\n  __Pyx_ReleaseBuffer(info);\n}\n\n/* PyErrFetchRestore */\n    #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->curexc_type;\n    tmp_value = tstate->curexc_value;\n    tmp_tb = tstate->curexc_traceback;\n    tstate->curexc_type = type;\n    tstate->curexc_value = value;\n    tstate->curexc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\nstatic CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->curexc_type;\n    *value = tstate->curexc_value;\n    *tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n}\n#endif\n\n/* BufferIndexError */\n    static void __Pyx_RaiseBufferIndexError(int axis) {\n  PyErr_Format(PyExc_IndexError,\n     \"Out of bounds on buffer access (axis %d)\", axis);\n}\n\n/* RaiseException */\n    #if PY_MAJOR_VERSION < 3\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject 
*tb,\n                        CYTHON_UNUSED PyObject *cause) {\n    __Pyx_PyThreadState_declare\n    Py_XINCREF(type);\n    if (!value || value == Py_None)\n        value = NULL;\n    else\n        Py_INCREF(value);\n    if (!tb || tb == Py_None)\n        tb = NULL;\n    else {\n        Py_INCREF(tb);\n        if (!PyTraceBack_Check(tb)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: arg 3 must be a traceback or None\");\n            goto raise_error;\n        }\n    }\n    if (PyType_Check(type)) {\n#if CYTHON_COMPILING_IN_PYPY\n        if (!value) {\n            Py_INCREF(Py_None);\n            value = Py_None;\n        }\n#endif\n        PyErr_NormalizeException(&type, &value, &tb);\n    } else {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                \"instance exception may not have a separate value\");\n            goto raise_error;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(type);\n        Py_INCREF(type);\n        if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {\n            PyErr_SetString(PyExc_TypeError,\n                \"raise: exception class must be a subclass of BaseException\");\n            goto raise_error;\n        }\n    }\n    __Pyx_PyThreadState_assign\n    __Pyx_ErrRestore(type, value, tb);\n    return;\nraise_error:\n    Py_XDECREF(value);\n    Py_XDECREF(type);\n    Py_XDECREF(tb);\n    return;\n}\n#else\nstatic void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {\n    PyObject* owned_instance = NULL;\n    if (tb == Py_None) {\n        tb = 0;\n    } else if (tb && !PyTraceBack_Check(tb)) {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: arg 3 must be a traceback or None\");\n        goto bad;\n    }\n    if (value == Py_None)\n        value = 0;\n    if (PyExceptionInstance_Check(type)) {\n        if (value) {\n            PyErr_SetString(PyExc_TypeError,\n                
\"instance exception may not have a separate value\");\n            goto bad;\n        }\n        value = type;\n        type = (PyObject*) Py_TYPE(value);\n    } else if (PyExceptionClass_Check(type)) {\n        PyObject *instance_class = NULL;\n        if (value && PyExceptionInstance_Check(value)) {\n            instance_class = (PyObject*) Py_TYPE(value);\n            if (instance_class != type) {\n                int is_subclass = PyObject_IsSubclass(instance_class, type);\n                if (!is_subclass) {\n                    instance_class = NULL;\n                } else if (unlikely(is_subclass == -1)) {\n                    goto bad;\n                } else {\n                    type = instance_class;\n                }\n            }\n        }\n        if (!instance_class) {\n            PyObject *args;\n            if (!value)\n                args = PyTuple_New(0);\n            else if (PyTuple_Check(value)) {\n                Py_INCREF(value);\n                args = value;\n            } else\n                args = PyTuple_Pack(1, value);\n            if (!args)\n                goto bad;\n            owned_instance = PyObject_Call(type, args, NULL);\n            Py_DECREF(args);\n            if (!owned_instance)\n                goto bad;\n            value = owned_instance;\n            if (!PyExceptionInstance_Check(value)) {\n                PyErr_Format(PyExc_TypeError,\n                             \"calling %R should have returned an instance of \"\n                             \"BaseException, not %R\",\n                             type, Py_TYPE(value));\n                goto bad;\n            }\n        }\n    } else {\n        PyErr_SetString(PyExc_TypeError,\n            \"raise: exception class must be a subclass of BaseException\");\n        goto bad;\n    }\n#if PY_VERSION_HEX >= 0x03030000\n    if (cause) {\n#else\n    if (cause && cause != Py_None) {\n#endif\n        PyObject *fixed_cause;\n        if (cause == Py_None) {\n      
      fixed_cause = NULL;\n        } else if (PyExceptionClass_Check(cause)) {\n            fixed_cause = PyObject_CallObject(cause, NULL);\n            if (fixed_cause == NULL)\n                goto bad;\n        } else if (PyExceptionInstance_Check(cause)) {\n            fixed_cause = cause;\n            Py_INCREF(fixed_cause);\n        } else {\n            PyErr_SetString(PyExc_TypeError,\n                            \"exception causes must derive from \"\n                            \"BaseException\");\n            goto bad;\n        }\n        PyException_SetCause(value, fixed_cause);\n    }\n    PyErr_SetObject(type, value);\n    if (tb) {\n#if CYTHON_COMPILING_IN_PYPY\n        PyObject *tmp_type, *tmp_value, *tmp_tb;\n        PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);\n        Py_INCREF(tb);\n        PyErr_Restore(tmp_type, tmp_value, tb);\n        Py_XDECREF(tmp_tb);\n#else\n        PyThreadState *tstate = PyThreadState_GET();\n        PyObject* tmp_tb = tstate->curexc_traceback;\n        if (tb != tmp_tb) {\n            Py_INCREF(tb);\n            tstate->curexc_traceback = tb;\n            Py_XDECREF(tmp_tb);\n        }\n#endif\n    }\nbad:\n    Py_XDECREF(owned_instance);\n    return;\n}\n#endif\n\n/* RaiseTooManyValuesToUnpack */\n      static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {\n    PyErr_Format(PyExc_ValueError,\n                 \"too many values to unpack (expected %\" CYTHON_FORMAT_SSIZE_T \"d)\", expected);\n}\n\n/* RaiseNeedMoreValuesToUnpack */\n      static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {\n    PyErr_Format(PyExc_ValueError,\n                 \"need more than %\" CYTHON_FORMAT_SSIZE_T \"d value%.1s to unpack\",\n                 index, (index == 1) ? 
\"\" : \"s\");\n}\n\n/* RaiseNoneIterError */\n      static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) {\n    PyErr_SetString(PyExc_TypeError, \"'NoneType' object is not iterable\");\n}\n\n/* SaveResetException */\n      #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n    *type = tstate->exc_type;\n    *value = tstate->exc_value;\n    *tb = tstate->exc_traceback;\n    Py_XINCREF(*type);\n    Py_XINCREF(*value);\n    Py_XINCREF(*tb);\n}\nstatic CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = type;\n    tstate->exc_value = value;\n    tstate->exc_traceback = tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n}\n#endif\n\n/* PyErrExceptionMatches */\n      #if CYTHON_FAST_THREAD_STATE\nstatic CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {\n    PyObject *exc_type = tstate->curexc_type;\n    if (exc_type == err) return 1;\n    if (unlikely(!exc_type)) return 0;\n    return PyErr_GivenExceptionMatches(exc_type, err);\n}\n#endif\n\n/* GetException */\n      #if CYTHON_FAST_THREAD_STATE\nstatic int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {\n#else\nstatic int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) {\n#endif\n    PyObject *local_type, *local_value, *local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    PyObject *tmp_type, *tmp_value, *tmp_tb;\n    local_type = tstate->curexc_type;\n    local_value = tstate->curexc_value;\n    local_tb = tstate->curexc_traceback;\n    tstate->curexc_type = 0;\n    tstate->curexc_value = 0;\n    tstate->curexc_traceback = 0;\n#else\n    
PyErr_Fetch(&local_type, &local_value, &local_tb);\n#endif\n    PyErr_NormalizeException(&local_type, &local_value, &local_tb);\n#if CYTHON_FAST_THREAD_STATE\n    if (unlikely(tstate->curexc_type))\n#else\n    if (unlikely(PyErr_Occurred()))\n#endif\n        goto bad;\n    #if PY_MAJOR_VERSION >= 3\n    if (local_tb) {\n        if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))\n            goto bad;\n    }\n    #endif\n    Py_XINCREF(local_tb);\n    Py_XINCREF(local_type);\n    Py_XINCREF(local_value);\n    *type = local_type;\n    *value = local_value;\n    *tb = local_tb;\n#if CYTHON_FAST_THREAD_STATE\n    tmp_type = tstate->exc_type;\n    tmp_value = tstate->exc_value;\n    tmp_tb = tstate->exc_traceback;\n    tstate->exc_type = local_type;\n    tstate->exc_value = local_value;\n    tstate->exc_traceback = local_tb;\n    Py_XDECREF(tmp_type);\n    Py_XDECREF(tmp_value);\n    Py_XDECREF(tmp_tb);\n#else\n    PyErr_SetExcInfo(local_type, local_value, local_tb);\n#endif\n    return 0;\nbad:\n    *type = 0;\n    *value = 0;\n    *tb = 0;\n    Py_XDECREF(local_type);\n    Py_XDECREF(local_value);\n    Py_XDECREF(local_tb);\n    return -1;\n}\n\n/* Import */\n        static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {\n    PyObject *empty_list = 0;\n    PyObject *module = 0;\n    PyObject *global_dict = 0;\n    PyObject *empty_dict = 0;\n    PyObject *list;\n    #if PY_VERSION_HEX < 0x03030000\n    PyObject *py_import;\n    py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);\n    if (!py_import)\n        goto bad;\n    #endif\n    if (from_list)\n        list = from_list;\n    else {\n        empty_list = PyList_New(0);\n        if (!empty_list)\n            goto bad;\n        list = empty_list;\n    }\n    global_dict = PyModule_GetDict(__pyx_m);\n    if (!global_dict)\n        goto bad;\n    empty_dict = PyDict_New();\n    if (!empty_dict)\n        goto bad;\n    {\n        #if PY_MAJOR_VERSION >= 3\n        
if (level == -1) {\n            if (strchr(__Pyx_MODULE_NAME, '.')) {\n                #if PY_VERSION_HEX < 0x03030000\n                PyObject *py_level = PyInt_FromLong(1);\n                if (!py_level)\n                    goto bad;\n                module = PyObject_CallFunctionObjArgs(py_import,\n                    name, global_dict, empty_dict, list, py_level, NULL);\n                Py_DECREF(py_level);\n                #else\n                module = PyImport_ImportModuleLevelObject(\n                    name, global_dict, empty_dict, list, 1);\n                #endif\n                if (!module) {\n                    if (!PyErr_ExceptionMatches(PyExc_ImportError))\n                        goto bad;\n                    PyErr_Clear();\n                }\n            }\n            level = 0;\n        }\n        #endif\n        if (!module) {\n            #if PY_VERSION_HEX < 0x03030000\n            PyObject *py_level = PyInt_FromLong(level);\n            if (!py_level)\n                goto bad;\n            module = PyObject_CallFunctionObjArgs(py_import,\n                name, global_dict, empty_dict, list, py_level, NULL);\n            Py_DECREF(py_level);\n            #else\n            module = PyImport_ImportModuleLevelObject(\n                name, global_dict, empty_dict, list, level);\n            #endif\n        }\n    }\nbad:\n    #if PY_VERSION_HEX < 0x03030000\n    Py_XDECREF(py_import);\n    #endif\n    Py_XDECREF(empty_list);\n    Py_XDECREF(empty_dict);\n    return module;\n}\n\n/* CodeObjectCache */\n        static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {\n    int start = 0, mid = 0, end = count - 1;\n    if (end >= 0 && code_line > entries[end].code_line) {\n        return count;\n    }\n    while (start < end) {\n        mid = start + (end - start) / 2;\n        if (code_line < entries[mid].code_line) {\n            end = mid;\n        } else if (code_line > 
entries[mid].code_line) {\n             start = mid + 1;\n        } else {\n            return mid;\n        }\n    }\n    if (code_line <= entries[mid].code_line) {\n        return mid;\n    } else {\n        return mid + 1;\n    }\n}\nstatic PyCodeObject *__pyx_find_code_object(int code_line) {\n    PyCodeObject* code_object;\n    int pos;\n    if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {\n        return NULL;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {\n        return NULL;\n    }\n    code_object = __pyx_code_cache.entries[pos].code_object;\n    Py_INCREF(code_object);\n    return code_object;\n}\nstatic void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {\n    int pos, i;\n    __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;\n    if (unlikely(!code_line)) {\n        return;\n    }\n    if (unlikely(!entries)) {\n        entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (likely(entries)) {\n            __pyx_code_cache.entries = entries;\n            __pyx_code_cache.max_count = 64;\n            __pyx_code_cache.count = 1;\n            entries[0].code_line = code_line;\n            entries[0].code_object = code_object;\n            Py_INCREF(code_object);\n        }\n        return;\n    }\n    pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);\n    if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {\n        PyCodeObject* tmp = entries[pos].code_object;\n        entries[pos].code_object = code_object;\n        Py_DECREF(tmp);\n        return;\n    }\n    if (__pyx_code_cache.count == __pyx_code_cache.max_count) {\n        int new_max = __pyx_code_cache.max_count + 64;\n        
entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(\n            __pyx_code_cache.entries, (size_t)new_max*sizeof(__Pyx_CodeObjectCacheEntry));\n        if (unlikely(!entries)) {\n            return;\n        }\n        __pyx_code_cache.entries = entries;\n        __pyx_code_cache.max_count = new_max;\n    }\n    for (i=__pyx_code_cache.count; i>pos; i--) {\n        entries[i] = entries[i-1];\n    }\n    entries[pos].code_line = code_line;\n    entries[pos].code_object = code_object;\n    __pyx_code_cache.count++;\n    Py_INCREF(code_object);\n}\n\n/* AddTraceback */\n        #include \"compile.h\"\n#include \"frameobject.h\"\n#include \"traceback.h\"\nstatic PyCodeObject* __Pyx_CreateCodeObjectForTraceback(\n            const char *funcname, int c_line,\n            int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyObject *py_srcfile = 0;\n    PyObject *py_funcname = 0;\n    #if PY_MAJOR_VERSION < 3\n    py_srcfile = PyString_FromString(filename);\n    #else\n    py_srcfile = PyUnicode_FromString(filename);\n    #endif\n    if (!py_srcfile) goto bad;\n    if (c_line) {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #else\n        py_funcname = PyUnicode_FromFormat( \"%s (%s:%d)\", funcname, __pyx_cfilenm, c_line);\n        #endif\n    }\n    else {\n        #if PY_MAJOR_VERSION < 3\n        py_funcname = PyString_FromString(funcname);\n        #else\n        py_funcname = PyUnicode_FromString(funcname);\n        #endif\n    }\n    if (!py_funcname) goto bad;\n    py_code = __Pyx_PyCode_New(\n        0,\n        0,\n        0,\n        0,\n        0,\n        __pyx_empty_bytes, /*PyObject *code,*/\n        __pyx_empty_tuple, /*PyObject *consts,*/\n        __pyx_empty_tuple, /*PyObject *names,*/\n        __pyx_empty_tuple, /*PyObject *varnames,*/\n        __pyx_empty_tuple, /*PyObject *freevars,*/\n        __pyx_empty_tuple, /*PyObject *cellvars,*/\n 
       py_srcfile,   /*PyObject *filename,*/\n        py_funcname,  /*PyObject *name,*/\n        py_line,\n        __pyx_empty_bytes  /*PyObject *lnotab*/\n    );\n    Py_DECREF(py_srcfile);\n    Py_DECREF(py_funcname);\n    return py_code;\nbad:\n    Py_XDECREF(py_srcfile);\n    Py_XDECREF(py_funcname);\n    return NULL;\n}\nstatic void __Pyx_AddTraceback(const char *funcname, int c_line,\n                               int py_line, const char *filename) {\n    PyCodeObject *py_code = 0;\n    PyFrameObject *py_frame = 0;\n    py_code = __pyx_find_code_object(c_line ? c_line : py_line);\n    if (!py_code) {\n        py_code = __Pyx_CreateCodeObjectForTraceback(\n            funcname, c_line, py_line, filename);\n        if (!py_code) goto bad;\n        __pyx_insert_code_object(c_line ? c_line : py_line, py_code);\n    }\n    py_frame = PyFrame_New(\n        PyThreadState_GET(), /*PyThreadState *tstate,*/\n        py_code,             /*PyCodeObject *code,*/\n        __pyx_d,      /*PyObject *globals,*/\n        0                    /*PyObject *locals*/\n    );\n    if (!py_frame) goto bad;\n    __Pyx_PyFrame_SetLineNumber(py_frame, py_line);\n    PyTraceBack_Here(py_frame);\nbad:\n    Py_XDECREF(py_code);\n    Py_XDECREF(py_frame);\n}\n\n#if PY_MAJOR_VERSION < 3\nstatic int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) {\n    if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags);\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pw_5numpy_7ndarray_1__getbuffer__(obj, view, flags);\n    PyErr_Format(PyExc_TypeError, \"'%.200s' does not have the buffer interface\", Py_TYPE(obj)->tp_name);\n    return -1;\n}\nstatic void __Pyx_ReleaseBuffer(Py_buffer *view) {\n    PyObject *obj = view->obj;\n    if (!obj) return;\n    if (PyObject_CheckBuffer(obj)) {\n        PyBuffer_Release(view);\n        return;\n    }\n        if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) { 
__pyx_pw_5numpy_7ndarray_3__releasebuffer__(obj, view); return; }\n    Py_DECREF(obj);\n    view->obj = NULL;\n}\n#endif\n\n\n        /* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_unsigned_int(unsigned int value) {\n    const unsigned int neg_one = (unsigned int) -1, const_zero = (unsigned int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(unsigned int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(unsigned int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(unsigned int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(unsigned int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(unsigned int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(unsigned int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPyVerify */\n        #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)\n#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\\\n    __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)\n#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\\\n    {\\\n        func_type value = func_value;\\\n        if (sizeof(target_type) < sizeof(func_type)) {\\\n            if (unlikely(value != (func_type) (target_type) value)) {\\\n                
func_type zero = 0;\\\n                if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\\\n                    return (target_type) -1;\\\n                if (is_unsigned && unlikely(value < zero))\\\n                    goto raise_neg_overflow;\\\n                else\\\n                    goto raise_overflow;\\\n            }\\\n        }\\\n        return (target_type) value;\\\n    }\n\n/* Declarations */\n        #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return ::std::complex< float >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      return x + y*(__pyx_t_float_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) {\n      __pyx_t_float_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n        #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        __pyx_t_float_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * 
b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabsf(b.real) >= fabsf(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                float r = b.imag / b.real;\n                float s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_float_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            float r = b.real / b.imag;\n            float s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_float_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_float_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            float denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_float_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex a) {\n       return (a.real == 0) && (a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex a) {\n        __pyx_t_float_complex 
z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrtf(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypotf(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex a, __pyx_t_float_complex b) {\n            __pyx_t_float_complex z;\n            float r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    float denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_float(a, a);\n                        return __Pyx_c_prod_float(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = powf(a.real, b.real);\n                    z.imag = 0;\n                    return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n     
               r = -a.real;\n                    theta = atan2f(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_float(a);\n                theta = atan2f(a.imag, a.real);\n            }\n            lnr = logf(r);\n            z_r = expf(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cosf(z_theta);\n            z.imag = z_r * sinf(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* Declarations */\n        #if CYTHON_CCOMPLEX\n  #ifdef __cplusplus\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return ::std::complex< double >(x, y);\n    }\n  #else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      return x + y*(__pyx_t_double_complex)_Complex_I;\n    }\n  #endif\n#else\n    static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) {\n      __pyx_t_double_complex z;\n      z.real = x;\n      z.imag = y;\n      return z;\n    }\n#endif\n\n/* Arithmetic */\n        #if CYTHON_CCOMPLEX\n#else\n    static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n       return (a.real == b.real) && (a.imag == b.imag);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real + b.real;\n        z.imag = a.imag + b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        __pyx_t_double_complex z;\n        z.real = a.real - b.real;\n        z.imag = a.imag - b.imag;\n        return z;\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        
__pyx_t_double_complex z;\n        z.real = a.real * b.real - a.imag * b.imag;\n        z.imag = a.real * b.imag + a.imag * b.real;\n        return z;\n    }\n    #if 1\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else if (fabs(b.real) >= fabs(b.imag)) {\n            if (b.real == 0 && b.imag == 0) {\n                return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag);\n            } else {\n                double r = b.imag / b.real;\n                double s = 1.0 / (b.real + b.imag * r);\n                return __pyx_t_double_complex_from_parts(\n                    (a.real + a.imag * r) * s, (a.imag - a.real * r) * s);\n            }\n        } else {\n            double r = b.real / b.imag;\n            double s = 1.0 / (b.imag + b.real * r);\n            return __pyx_t_double_complex_from_parts(\n                (a.real * r + a.imag) * s, (a.imag * r - a.real) * s);\n        }\n    }\n    #else\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n        if (b.imag == 0) {\n            return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real);\n        } else {\n            double denom = b.real * b.real + b.imag * b.imag;\n            return __pyx_t_double_complex_from_parts(\n                (a.real * b.real + a.imag * b.imag) / denom,\n                (a.imag * b.real - a.real * b.imag) / denom);\n        }\n    }\n    #endif\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real = -a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) {\n       return (a.real == 0) && 
(a.imag == 0);\n    }\n    static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) {\n        __pyx_t_double_complex z;\n        z.real =  a.real;\n        z.imag = -a.imag;\n        return z;\n    }\n    #if 1\n        static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) {\n          #if !defined(HAVE_HYPOT) || defined(_MSC_VER)\n            return sqrt(z.real*z.real + z.imag*z.imag);\n          #else\n            return hypot(z.real, z.imag);\n          #endif\n        }\n        static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) {\n            __pyx_t_double_complex z;\n            double r, lnr, theta, z_r, z_theta;\n            if (b.imag == 0 && b.real == (int)b.real) {\n                if (b.real < 0) {\n                    double denom = a.real * a.real + a.imag * a.imag;\n                    a.real = a.real / denom;\n                    a.imag = -a.imag / denom;\n                    b.real = -b.real;\n                }\n                switch ((int)b.real) {\n                    case 0:\n                        z.real = 1;\n                        z.imag = 0;\n                        return z;\n                    case 1:\n                        return a;\n                    case 2:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(a, a);\n                    case 3:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, a);\n                    case 4:\n                        z = __Pyx_c_prod_double(a, a);\n                        return __Pyx_c_prod_double(z, z);\n                }\n            }\n            if (a.imag == 0) {\n                if (a.real == 0) {\n                    return a;\n                } else if (b.imag == 0) {\n                    z.real = pow(a.real, b.real);\n                    z.imag = 0;\n     
               return z;\n                } else if (a.real > 0) {\n                    r = a.real;\n                    theta = 0;\n                } else {\n                    r = -a.real;\n                    theta = atan2(0, -1);\n                }\n            } else {\n                r = __Pyx_c_abs_double(a);\n                theta = atan2(a.imag, a.real);\n            }\n            lnr = log(r);\n            z_r = exp(lnr * b.real - theta * b.imag);\n            z_theta = theta * b.real + lnr * b.imag;\n            z.real = z_r * cos(z_theta);\n            z.imag = z_r * sin(z_theta);\n            return z;\n        }\n    #endif\n#endif\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) {\n    const int neg_one = (int) -1, const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(int) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(int) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(int) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(int),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value) {\n    const enum NPY_TYPES neg_one = (enum NPY_TYPES) -1, 
const_zero = (enum NPY_TYPES) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(enum NPY_TYPES) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(enum NPY_TYPES) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(enum NPY_TYPES) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(enum NPY_TYPES),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE unsigned int __Pyx_PyInt_As_unsigned_int(PyObject *x) {\n    const unsigned int neg_one = (unsigned int) -1, const_zero = (unsigned int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(unsigned int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(unsigned int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (unsigned int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n           
     case  0: return (unsigned int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(unsigned int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(unsigned int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 2 * PyLong_SHIFT) {\n                            return (unsigned int) (((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(unsigned int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 3 * PyLong_SHIFT) {\n                            return (unsigned int) (((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(unsigned int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) >= 4 * PyLong_SHIFT) {\n                            
return (unsigned int) (((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (unsigned int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(unsigned int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(unsigned int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (unsigned int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(unsigned int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(unsigned int,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(unsigned int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                            return (unsigned int) 
(((unsigned int)-1)*(((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(unsigned int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                            return (unsigned int) ((((((unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(unsigned int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 3 * PyLong_SHIFT) {\n                            return (unsigned int) (((unsigned int)-1)*(((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(unsigned int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 3 * 
PyLong_SHIFT) {\n                            return (unsigned int) ((((((((unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(unsigned int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 4 * PyLong_SHIFT) {\n                            return (unsigned int) (((unsigned int)-1)*(((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(unsigned int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(unsigned int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(unsigned int) - 1 > 4 * PyLong_SHIFT) {\n                            return (unsigned int) ((((((((((unsigned int)digits[3]) << PyLong_SHIFT) | (unsigned int)digits[2]) << PyLong_SHIFT) | (unsigned int)digits[1]) << PyLong_SHIFT) | (unsigned int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(unsigned int) <= sizeof(long)) {\n    
            __PYX_VERIFY_RETURN_INT_EXC(unsigned int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(unsigned int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(unsigned int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            unsigned int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (unsigned int) -1;\n        }\n    } else {\n        unsigned int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (unsigned int) -1;\n        val = __Pyx_PyInt_As_unsigned_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to unsigned int\");\n    return (unsigned int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to unsigned int\");\n    return (unsigned int) -1;\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE int 
__Pyx_PyInt_As_int(PyObject *x) {\n    const int neg_one = (int) -1, const_zero = (int) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(int) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (int) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {\n                            return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {\n                            return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n   
                     }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {\n                            return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (int) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(int) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (int) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(int,  digit, +digits[0])\n         
       case -2:\n                    if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(int) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                            return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(int) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                
            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                            return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(int) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {\n                            return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n        
    if (sizeof(int) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            int val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n                v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (int) -1;\n        }\n    } else {\n        int val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (int) -1;\n        val = __Pyx_PyInt_As_int(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to int\");\n    return (int) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to int\");\n    return (int) -1;\n}\n\n/* CIntToPy */\n        static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {\n    const long neg_one = (long) -1, 
const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n    if (is_unsigned) {\n        if (sizeof(long) < sizeof(long)) {\n            return PyInt_FromLong((long) value);\n        } else if (sizeof(long) <= sizeof(unsigned long)) {\n            return PyLong_FromUnsignedLong((unsigned long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n            return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);\n#endif\n        }\n    } else {\n        if (sizeof(long) <= sizeof(long)) {\n            return PyInt_FromLong((long) value);\n#ifdef HAVE_LONG_LONG\n        } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n            return PyLong_FromLongLong((PY_LONG_LONG) value);\n#endif\n        }\n    }\n    {\n        int one = 1; int little = (int)*(unsigned char *)&one;\n        unsigned char *bytes = (unsigned char *)&value;\n        return _PyLong_FromByteArray(bytes, sizeof(long),\n                                     little, !is_unsigned);\n    }\n}\n\n/* CIntFromPy */\n        static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {\n    const long neg_one = (long) -1, const_zero = (long) 0;\n    const int is_unsigned = neg_one > const_zero;\n#if PY_MAJOR_VERSION < 3\n    if (likely(PyInt_Check(x))) {\n        if (sizeof(long) < sizeof(long)) {\n            __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))\n        } else {\n            long val = PyInt_AS_LONG(x);\n            if (is_unsigned && unlikely(val < 0)) {\n                goto raise_neg_overflow;\n            }\n            return (long) val;\n        }\n    } else\n#endif\n    if (likely(PyLong_Check(x))) {\n        if (is_unsigned) {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case  1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])\n                case 2:\n   
                 if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {\n                            return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {\n                            return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {\n                            return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));\n                        }\n                    }\n                    break;\n            }\n#endif\n#if 
CYTHON_COMPILING_IN_CPYTHON\n            if (unlikely(Py_SIZE(x) < 0)) {\n                goto raise_neg_overflow;\n            }\n#else\n            {\n                int result = PyObject_RichCompareBool(x, Py_False, Py_LT);\n                if (unlikely(result < 0))\n                    return (long) -1;\n                if (unlikely(result == 1))\n                    goto raise_neg_overflow;\n            }\n#endif\n            if (sizeof(long) <= sizeof(unsigned long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))\n#endif\n            }\n        } else {\n#if CYTHON_USE_PYLONG_INTERNALS\n            const digit* digits = ((PyLongObject*)x)->ob_digit;\n            switch (Py_SIZE(x)) {\n                case  0: return (long) 0;\n                case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))\n                case  1: __PYX_VERIFY_RETURN_INT(long,  digit, +digits[0])\n                case -2:\n                    if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 2:\n                    if (8 * sizeof(long) > 1 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, 
(((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                            return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -3:\n                    if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 3:\n                    if (8 * sizeof(long) > 2 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                            return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case -4:\n                    if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned 
long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n                case 4:\n                    if (8 * sizeof(long) > 3 * PyLong_SHIFT) {\n                        if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {\n                            __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))\n                        } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {\n                            return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));\n                        }\n                    }\n                    break;\n            }\n#endif\n            if (sizeof(long) <= sizeof(long)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))\n#ifdef HAVE_LONG_LONG\n            } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {\n                __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))\n#endif\n            }\n        }\n        {\n#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)\n            PyErr_SetString(PyExc_RuntimeError,\n                            \"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers\");\n#else\n            long val;\n            PyObject *v = __Pyx_PyNumber_IntOrLong(x);\n #if PY_MAJOR_VERSION < 3\n            if (likely(v) && !PyLong_Check(v)) {\n                PyObject *tmp = v;\n         
       v = PyNumber_Long(tmp);\n                Py_DECREF(tmp);\n            }\n #endif\n            if (likely(v)) {\n                int one = 1; int is_little = (int)*(unsigned char *)&one;\n                unsigned char *bytes = (unsigned char *)&val;\n                int ret = _PyLong_AsByteArray((PyLongObject *)v,\n                                              bytes, sizeof(val),\n                                              is_little, !is_unsigned);\n                Py_DECREF(v);\n                if (likely(!ret))\n                    return val;\n            }\n#endif\n            return (long) -1;\n        }\n    } else {\n        long val;\n        PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);\n        if (!tmp) return (long) -1;\n        val = __Pyx_PyInt_As_long(tmp);\n        Py_DECREF(tmp);\n        return val;\n    }\nraise_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"value too large to convert to long\");\n    return (long) -1;\nraise_neg_overflow:\n    PyErr_SetString(PyExc_OverflowError,\n        \"can't convert negative value to long\");\n    return (long) -1;\n}\n\n/* CheckBinaryVersion */\n        static int __Pyx_check_binary_version(void) {\n    char ctversion[4], rtversion[4];\n    PyOS_snprintf(ctversion, 4, \"%d.%d\", PY_MAJOR_VERSION, PY_MINOR_VERSION);\n    PyOS_snprintf(rtversion, 4, \"%s\", Py_GetVersion());\n    if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) {\n        char message[200];\n        PyOS_snprintf(message, sizeof(message),\n                      \"compiletime version %s of module '%.100s' \"\n                      \"does not match runtime version %s\",\n                      ctversion, __Pyx_MODULE_NAME, rtversion);\n        return PyErr_WarnEx(NULL, message, 1);\n    }\n    return 0;\n}\n\n/* ModuleImport */\n        #ifndef __PYX_HAVE_RT_ImportModule\n#define __PYX_HAVE_RT_ImportModule\nstatic PyObject *__Pyx_ImportModule(const char *name) {\n    PyObject *py_name = 0;\n    PyObject 
*py_module = 0;\n    py_name = __Pyx_PyIdentifier_FromString(name);\n    if (!py_name)\n        goto bad;\n    py_module = PyImport_Import(py_name);\n    Py_DECREF(py_name);\n    return py_module;\nbad:\n    Py_XDECREF(py_name);\n    return 0;\n}\n#endif\n\n/* TypeImport */\n        #ifndef __PYX_HAVE_RT_ImportType\n#define __PYX_HAVE_RT_ImportType\nstatic PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name,\n    size_t size, int strict)\n{\n    PyObject *py_module = 0;\n    PyObject *result = 0;\n    PyObject *py_name = 0;\n    char warning[200];\n    Py_ssize_t basicsize;\n#ifdef Py_LIMITED_API\n    PyObject *py_basicsize;\n#endif\n    py_module = __Pyx_ImportModule(module_name);\n    if (!py_module)\n        goto bad;\n    py_name = __Pyx_PyIdentifier_FromString(class_name);\n    if (!py_name)\n        goto bad;\n    result = PyObject_GetAttr(py_module, py_name);\n    Py_DECREF(py_name);\n    py_name = 0;\n    Py_DECREF(py_module);\n    py_module = 0;\n    if (!result)\n        goto bad;\n    if (!PyType_Check(result)) {\n        PyErr_Format(PyExc_TypeError,\n            \"%.200s.%.200s is not a type object\",\n            module_name, class_name);\n        goto bad;\n    }\n#ifndef Py_LIMITED_API\n    basicsize = ((PyTypeObject *)result)->tp_basicsize;\n#else\n    py_basicsize = PyObject_GetAttrString(result, \"__basicsize__\");\n    if (!py_basicsize)\n        goto bad;\n    basicsize = PyLong_AsSsize_t(py_basicsize);\n    Py_DECREF(py_basicsize);\n    py_basicsize = 0;\n    if (basicsize == (Py_ssize_t)-1 && PyErr_Occurred())\n        goto bad;\n#endif\n    if (!strict && (size_t)basicsize > size) {\n        PyOS_snprintf(warning, sizeof(warning),\n            \"%s.%s size changed, may indicate binary incompatibility. 
Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        if (PyErr_WarnEx(NULL, warning, 0) < 0) goto bad;\n    }\n    else if ((size_t)basicsize != size) {\n        PyErr_Format(PyExc_ValueError,\n            \"%.200s.%.200s has the wrong size, try recompiling. Expected %zd, got %zd\",\n            module_name, class_name, basicsize, size);\n        goto bad;\n    }\n    return (PyTypeObject *)result;\nbad:\n    Py_XDECREF(py_module);\n    Py_XDECREF(result);\n    return NULL;\n}\n#endif\n\n/* InitStrings */\n        static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {\n    while (t->p) {\n        #if PY_MAJOR_VERSION < 3\n        if (t->is_unicode) {\n            *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);\n        } else if (t->intern) {\n            *t->p = PyString_InternFromString(t->s);\n        } else {\n            *t->p = PyString_FromStringAndSize(t->s, t->n - 1);\n        }\n        #else\n        if (t->is_unicode | t->is_str) {\n            if (t->intern) {\n                *t->p = PyUnicode_InternFromString(t->s);\n            } else if (t->encoding) {\n                *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);\n            } else {\n                *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);\n            }\n        } else {\n            *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);\n        }\n        #endif\n        if (!*t->p)\n            return -1;\n        ++t;\n    }\n    return 0;\n}\n\nstatic CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {\n    return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsString(PyObject* o) {\n    Py_ssize_t ignore;\n    return __Pyx_PyObject_AsStringAndSize(o, &ignore);\n}\nstatic CYTHON_INLINE char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {\n#if CYTHON_COMPILING_IN_CPYTHON && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || 
__PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)\n    if (\n#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n            __Pyx_sys_getdefaultencoding_not_ascii &&\n#endif\n            PyUnicode_Check(o)) {\n#if PY_VERSION_HEX < 0x03030000\n        char* defenc_c;\n        PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);\n        if (!defenc) return NULL;\n        defenc_c = PyBytes_AS_STRING(defenc);\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        {\n            char* end = defenc_c + PyBytes_GET_SIZE(defenc);\n            char* c;\n            for (c = defenc_c; c < end; c++) {\n                if ((unsigned char) (*c) >= 128) {\n                    PyUnicode_AsASCIIString(o);\n                    return NULL;\n                }\n            }\n        }\n#endif\n        *length = PyBytes_GET_SIZE(defenc);\n        return defenc_c;\n#else\n        if (__Pyx_PyUnicode_READY(o) == -1) return NULL;\n#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII\n        if (PyUnicode_IS_ASCII(o)) {\n            *length = PyUnicode_GET_LENGTH(o);\n            return PyUnicode_AsUTF8(o);\n        } else {\n            PyUnicode_AsASCIIString(o);\n            return NULL;\n        }\n#else\n        return PyUnicode_AsUTF8AndSize(o, length);\n#endif\n#endif\n    } else\n#endif\n#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))\n    if (PyByteArray_Check(o)) {\n        *length = PyByteArray_GET_SIZE(o);\n        return PyByteArray_AS_STRING(o);\n    } else\n#endif\n    {\n        char* result;\n        int r = PyBytes_AsStringAndSize(o, &result, length);\n        if (unlikely(r < 0)) {\n            return NULL;\n        } else {\n            return result;\n        }\n    }\n}\nstatic CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {\n   int is_true = x == Py_True;\n   if (is_true | (x == Py_False) | (x == Py_None)) return is_true;\n   else return PyObject_IsTrue(x);\n}\nstatic CYTHON_INLINE PyObject* 
__Pyx_PyNumber_IntOrLong(PyObject* x) {\n#if CYTHON_USE_TYPE_SLOTS\n  PyNumberMethods *m;\n#endif\n  const char *name = NULL;\n  PyObject *res = NULL;\n#if PY_MAJOR_VERSION < 3\n  if (PyInt_Check(x) || PyLong_Check(x))\n#else\n  if (PyLong_Check(x))\n#endif\n    return __Pyx_NewRef(x);\n#if CYTHON_USE_TYPE_SLOTS\n  m = Py_TYPE(x)->tp_as_number;\n  #if PY_MAJOR_VERSION < 3\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Int(x);\n  }\n  else if (m && m->nb_long) {\n    name = \"long\";\n    res = PyNumber_Long(x);\n  }\n  #else\n  if (m && m->nb_int) {\n    name = \"int\";\n    res = PyNumber_Long(x);\n  }\n  #endif\n#else\n  res = PyNumber_Int(x);\n#endif\n  if (res) {\n#if PY_MAJOR_VERSION < 3\n    if (!PyInt_Check(res) && !PyLong_Check(res)) {\n#else\n    if (!PyLong_Check(res)) {\n#endif\n      PyErr_Format(PyExc_TypeError,\n                   \"__%.4s__ returned non-%.4s (type %.200s)\",\n                   name, name, Py_TYPE(res)->tp_name);\n      Py_DECREF(res);\n      return NULL;\n    }\n  }\n  else if (!PyErr_Occurred()) {\n    PyErr_SetString(PyExc_TypeError,\n                    \"an integer is required\");\n  }\n  return res;\n}\nstatic CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {\n  Py_ssize_t ival;\n  PyObject *x;\n#if PY_MAJOR_VERSION < 3\n  if (likely(PyInt_CheckExact(b))) {\n    if (sizeof(Py_ssize_t) >= sizeof(long))\n        return PyInt_AS_LONG(b);\n    else\n        return PyInt_AsSsize_t(b);\n  }\n#endif\n  if (likely(PyLong_CheckExact(b))) {\n    #if CYTHON_USE_PYLONG_INTERNALS\n    const digit* digits = ((PyLongObject*)b)->ob_digit;\n    const Py_ssize_t size = Py_SIZE(b);\n    if (likely(__Pyx_sst_abs(size) <= 1)) {\n        ival = likely(size) ? 
digits[0] : 0;\n        if (size == -1) ival = -ival;\n        return ival;\n    } else {\n      switch (size) {\n         case 2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -2:\n           if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -3:\n           if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case 4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n         case -4:\n           if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {\n             return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));\n           }\n           break;\n      }\n    }\n    #endif\n    return PyLong_AsSsize_t(b);\n  }\n  x = PyNumber_Index(b);\n  if (!x) return -1;\n  ival = PyInt_AsSsize_t(x);\n  Py_DECREF(x);\n  return ival;\n}\nstatic CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {\n    return PyInt_FromSize_t(ival);\n}\n\n\n#endif /* Py_PYTHON_H */\n"
  },
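The `__Pyx_PyInt_As_*` helpers above share one fallback for values wider than every native fast path: write the Python int's two's-complement bytes in native endianness via `_PyLong_AsByteArray`, directly into the C variable's storage. A pure-Python sketch of that idea (the helper name here is ours for illustration, not part of the generated code):

```python
import sys

def int_as_fixed_width(v, n_bytes=8, signed=True):
    """Sketch of the _PyLong_AsByteArray fallback used by the generated
    __Pyx_PyInt_As_* helpers: serialize a Python int to native-endian
    two's-complement bytes of the target C type's size, then reinterpret.
    Out-of-range values raise OverflowError, like the raise_overflow paths."""
    order = sys.byteorder  # native endianness, as probed by `int one = 1; ...`
    data = v.to_bytes(n_bytes, order, signed=signed)  # OverflowError if too wide
    return int.from_bytes(data, order, signed=signed)
```

Anything that fits in 64 signed bits round-trips unchanged; `2**63` raises `OverflowError`, which is the behaviour the C helper reports for a C `long` on an LP64 platform.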
  {
    "path": "lib/fpn/box_intersections_cpu/bbox.pyx",
    "content": "# --------------------------------------------------------\n# Fast R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Sergey Karayev\n# --------------------------------------------------------\n\ncimport cython\nimport numpy as np\ncimport numpy as np\n\nDTYPE = np.float\nctypedef np.float_t DTYPE_t\n\ndef bbox_overlaps(boxes, query_boxes):\n    cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n    cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n\n    return bbox_overlaps_c(boxes_contig, query_contig)\n\ncdef np.ndarray[DTYPE_t, ndim=2] bbox_overlaps_c(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    \"\"\"\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of overlap between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = boxes.shape[0]\n    cdef unsigned int K = query_boxes.shape[0]\n    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)\n    cdef DTYPE_t iw, ih, box_area\n    cdef DTYPE_t ua\n    cdef unsigned int k, n\n    for k in range(K):\n        box_area = (\n            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n            (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n        )\n        for n in range(N):\n            iw = (\n                min(boxes[n, 2], query_boxes[k, 2]) -\n                max(boxes[n, 0], query_boxes[k, 0]) + 1\n            )\n            if iw > 0:\n                ih = (\n                    min(boxes[n, 3], query_boxes[k, 3]) -\n                    max(boxes[n, 1], query_boxes[k, 1]) + 1\n                )\n                if ih > 0:\n                    ua = float(\n                        (boxes[n, 2] - boxes[n, 0] + 1) *\n                        (boxes[n, 3] - 
boxes[n, 1] + 1) +\n                        box_area - iw * ih\n                    )\n                    overlaps[n, k] = iw * ih / ua\n    return overlaps\n\n\ndef bbox_intersections(boxes, query_boxes):\n    cdef np.ndarray[DTYPE_t, ndim=2] boxes_contig = np.ascontiguousarray(boxes, dtype=DTYPE)\n    cdef np.ndarray[DTYPE_t, ndim=2] query_contig = np.ascontiguousarray(query_boxes, dtype=DTYPE)\n\n    return bbox_intersections_c(boxes_contig, query_contig)\n\n\ncdef np.ndarray[DTYPE_t, ndim=2] bbox_intersections_c(\n        np.ndarray[DTYPE_t, ndim=2] boxes,\n        np.ndarray[DTYPE_t, ndim=2] query_boxes):\n    \"\"\"\n    For each query box compute the intersection ratio covered by boxes\n    ----------\n    Parameters\n    ----------\n    boxes: (N, 4) ndarray of float\n    query_boxes: (K, 4) ndarray of float\n    Returns\n    -------\n    overlaps: (N, K) ndarray of intersec between boxes and query_boxes\n    \"\"\"\n    cdef unsigned int N = boxes.shape[0]\n    cdef unsigned int K = query_boxes.shape[0]\n    cdef np.ndarray[DTYPE_t, ndim=2] intersec = np.zeros((N, K), dtype=DTYPE)\n    cdef DTYPE_t iw, ih, box_area\n    cdef DTYPE_t ua\n    cdef unsigned int k, n\n    for k in range(K):\n        box_area = (\n            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *\n            (query_boxes[k, 3] - query_boxes[k, 1] + 1)\n        )\n        for n in range(N):\n            iw = (\n                min(boxes[n, 2], query_boxes[k, 2]) -\n                max(boxes[n, 0], query_boxes[k, 0]) + 1\n            )\n            if iw > 0:\n                ih = (\n                    min(boxes[n, 3], query_boxes[k, 3]) -\n                    max(boxes[n, 1], query_boxes[k, 1]) + 1\n                )\n                if ih > 0:\n                    intersec[n, k] = iw * ih / box_area\n    return intersec"
  },
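The nested loops in `bbox_overlaps_c` compute pairwise IoU with an inclusive-pixel convention (`x2 - x1 + 1`). For reference, the same quantity in vectorized NumPy (a sketch written for illustration, not a function shipped in this repo):

```python
import numpy as np

def bbox_overlaps_ref(boxes, query_boxes):
    """Vectorized IoU between (N, 4) and (K, 4) boxes in (x1, y1, x2, y2)
    form, using the same inclusive width/height convention as bbox.pyx."""
    boxes = np.asarray(boxes, dtype=np.float64)
    query = np.asarray(query_boxes, dtype=np.float64)
    iw = (np.minimum(boxes[:, None, 2], query[None, :, 2]) -
          np.maximum(boxes[:, None, 0], query[None, :, 0]) + 1).clip(min=0)
    ih = (np.minimum(boxes[:, None, 3], query[None, :, 3]) -
          np.maximum(boxes[:, None, 1], query[None, :, 1]) + 1).clip(min=0)
    inter = iw * ih  # (N, K) intersection areas
    area_a = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)
    area_b = (query[:, 2] - query[:, 0] + 1) * (query[:, 3] - query[:, 1] + 1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)
```

The Cython version exists because the N×K loop dominated runtime in pure Python; the broadcasted form above trades memory for the same result and is handy for checking the compiled module.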
  {
    "path": "lib/fpn/box_intersections_cpu/setup.py",
    "content": "from distutils.core import setup\nfrom Cython.Build import cythonize\nimport numpy\n\nsetup(name=\"bbox_cython\", ext_modules=cythonize('bbox.pyx'), include_dirs=[numpy.get_include()])"
  },
  {
    "path": "lib/fpn/box_utils.py",
    "content": "import torch\nimport numpy as np\nfrom torch.nn import functional as F\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps as bbox_overlaps_np\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_intersections as bbox_intersections_np\n\n\ndef bbox_loss(prior_boxes, deltas, gt_boxes, eps=1e-4, scale_before=1):\n    \"\"\"\n    Computes the loss for predicting the GT boxes from prior boxes\n    :param prior_boxes: [num_boxes, 4] (x1, y1, x2, y2)\n    :param deltas: [num_boxes, 4]    (tx, ty, th, tw)\n    :param gt_boxes: [num_boxes, 4] (x1, y1, x2, y2)\n    :return:\n    \"\"\"\n    prior_centers = center_size(prior_boxes) #(cx, cy, w, h)\n    gt_centers = center_size(gt_boxes) #(cx, cy, w, h)\n\n    center_targets = (gt_centers[:, :2] - prior_centers[:, :2]) / prior_centers[:, 2:]\n    size_targets = torch.log(gt_centers[:, 2:]) - torch.log(prior_centers[:, 2:])\n    all_targets = torch.cat((center_targets, size_targets), 1)\n\n    loss = F.smooth_l1_loss(deltas, all_targets, size_average=False)/(eps + prior_centers.size(0))\n\n    return loss\n\n\ndef bbox_preds(boxes, deltas):\n    \"\"\"\n    Converts \"deltas\" (predicted by the network) along with prior boxes\n    into (x1, y1, x2, y2) representation.\n    :param boxes: Prior boxes, represented as (x1, y1, x2, y2)\n    :param deltas: Offsets (tx, ty, tw, th)\n    :param box_strides [num_boxes,] distance apart between boxes. anchor box can't go more than\n       \\pm box_strides/2 from its current position. 
If None then we'll use the widths\n       and heights\n    :return: Transformed boxes\n    \"\"\"\n\n    if boxes.size(0) == 0:\n        return boxes\n    prior_centers = center_size(boxes)\n\n    xys = prior_centers[:, :2] + prior_centers[:, 2:] * deltas[:, :2]\n\n    whs = torch.exp(deltas[:, 2:]) * prior_centers[:, 2:]\n\n    return point_form(torch.cat((xys, whs), 1))\n\n\ndef center_size(boxes):\n    \"\"\" Convert prior_boxes to (cx, cy, w, h)\n    representation for comparison to center-size form ground truth data.\n    Args:\n        boxes: (tensor) point_form boxes\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    wh = boxes[:, 2:] - boxes[:, :2] + 1.0\n\n    if isinstance(boxes, np.ndarray):\n        return np.column_stack((boxes[:, :2] + 0.5 * wh, wh))\n    return torch.cat((boxes[:, :2] + 0.5 * wh, wh), 1)\n\n\ndef point_form(boxes):\n    \"\"\" Convert prior_boxes to (xmin, ymin, xmax, ymax)\n    representation for comparison to point form ground truth data.\n    Args:\n        boxes: (tensor) center-size default boxes from priorbox layers.\n    Return:\n        boxes: (tensor) Converted xmin, ymin, xmax, ymax form of boxes.\n    \"\"\"\n    if isinstance(boxes, np.ndarray):\n        return np.column_stack((boxes[:, :2] - 0.5 * boxes[:, 2:],\n                                boxes[:, :2] + 0.5 * (boxes[:, 2:] - 2.0)))\n    return torch.cat((boxes[:, :2] - 0.5 * boxes[:, 2:],\n                      boxes[:, :2] + 0.5 * (boxes[:, 2:] - 2.0)), 1)  # xmax, ymax\n\n\n###########################################################################\n### Torch Utils, creds to Max de Groot\n###########################################################################\n\ndef bbox_intersections(box_a, box_b):\n    \"\"\" We resize both tensors to [A,B,2] without new malloc:\n    [A,2] -> [A,1,2] -> [A,B,2]\n    [B,2] -> [1,B,2] -> [A,B,2]\n    Then we compute the area of intersect between box_a and box_b.\n    Args:\n  
    box_a: (tensor) bounding boxes, Shape: [A,4].\n      box_b: (tensor) bounding boxes, Shape: [B,4].\n    Return:\n      (tensor) intersection area, Shape: [A,B].\n    \"\"\"\n    if isinstance(box_a, np.ndarray):\n        assert isinstance(box_b, np.ndarray)\n        return bbox_intersections_np(box_a, box_b)\n    A = box_a.size(0)\n    B = box_b.size(0)\n    max_xy = torch.min(box_a[:, 2:].unsqueeze(1).expand(A, B, 2),\n                       box_b[:, 2:].unsqueeze(0).expand(A, B, 2))\n    min_xy = torch.max(box_a[:, :2].unsqueeze(1).expand(A, B, 2),\n                       box_b[:, :2].unsqueeze(0).expand(A, B, 2))\n    inter = torch.clamp((max_xy - min_xy + 1.0), min=0)\n    return inter[:, :, 0] * inter[:, :, 1]\n\n\ndef bbox_overlaps(box_a, box_b):\n    \"\"\"Compute the jaccard overlap of two sets of boxes.  The jaccard overlap\n    is simply the intersection over union of two boxes.  Here we operate on\n    ground truth boxes and default boxes.\n    E.g.:\n        A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B)\n    Args:\n        box_a: (tensor) Ground truth bounding boxes, Shape: [num_objects,4]\n        box_b: (tensor) Prior boxes from priorbox layers, Shape: [num_priors,4]\n    Return:\n        jaccard overlap: (tensor) Shape: [box_a.size(0), box_b.size(0)]\n    \"\"\"\n    if isinstance(box_a, np.ndarray):\n        assert isinstance(box_b, np.ndarray)\n        return bbox_overlaps_np(box_a, box_b)\n\n    inter = bbox_intersections(box_a, box_b)\n    area_a = ((box_a[:, 2] - box_a[:, 0] + 1.0) *\n              (box_a[:, 3] - box_a[:, 1] + 1.0)).unsqueeze(1).expand_as(inter)  # [A,B]\n    area_b = ((box_b[:, 2] - box_b[:, 0] + 1.0) *\n              (box_b[:, 3] - box_b[:, 1] + 1.0)).unsqueeze(0).expand_as(inter)  # [A,B]\n    union = area_a + area_b - inter\n    return inter / union  # [A,B]\n\n\ndef nms_overlaps(boxes):\n    \"\"\" get overlaps for each channel\"\"\"\n    assert boxes.dim() == 3\n    N = boxes.size(0)\n    nc = boxes.size(1)\n    
max_xy = torch.min(boxes[:, None, :, 2:].expand(N, N, nc, 2),\n                       boxes[None, :, :, 2:].expand(N, N, nc, 2))\n\n    min_xy = torch.max(boxes[:, None, :, :2].expand(N, N, nc, 2),\n                       boxes[None, :, :, :2].expand(N, N, nc, 2))\n\n    inter = torch.clamp((max_xy - min_xy + 1.0), min=0)\n\n    # n, n, 151\n    inters = inter[:,:,:,0]*inter[:,:,:,1]\n    boxes_flat = boxes.view(-1, 4)\n    areas_flat = (boxes_flat[:,2]- boxes_flat[:,0]+1.0)*(\n        boxes_flat[:,3]- boxes_flat[:,1]+1.0)\n    areas = areas_flat.view(boxes.size(0), boxes.size(1))\n    union = -inters + areas[None] + areas[:, None]\n    return inters / union\n\n"
  },
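`bbox_loss` encodes a ground-truth box against a prior as `(tx, ty, tw, th)` deltas, and `bbox_preds` applies the inverse transform, so composing the two must recover the ground-truth box exactly. A standalone NumPy sketch of that round trip, mirroring the file's center-size convention (`wh = x2 - x1 + 1`; function names here are ours for illustration):

```python
import numpy as np

def center_size_ref(boxes):
    # (x1, y1, x2, y2) -> (cx, cy, w, h) with the inclusive-pixel convention
    wh = boxes[:, 2:] - boxes[:, :2] + 1.0
    return np.column_stack((boxes[:, :2] + 0.5 * wh, wh))

def encode_ref(priors, gt):
    # regression targets (tx, ty, tw, th), as built inside bbox_loss
    p, g = center_size_ref(priors), center_size_ref(gt)
    return np.column_stack(((g[:, :2] - p[:, :2]) / p[:, 2:],
                            np.log(g[:, 2:] / p[:, 2:])))

def decode_ref(priors, deltas):
    # inverse transform, as in bbox_preds followed by point_form
    p = center_size_ref(priors)
    xys = p[:, :2] + p[:, 2:] * deltas[:, :2]
    whs = np.exp(deltas[:, 2:]) * p[:, 2:]
    return np.column_stack((xys - 0.5 * whs, xys + 0.5 * (whs - 2.0)))
```

The `- 2.0` in `decode_ref` is the same correction `point_form` applies: with `w = x2 - x1 + 1`, the right edge is `cx + 0.5 * (w - 2)`, not `cx + 0.5 * w`.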
  {
    "path": "lib/fpn/generate_anchors.py",
    "content": "# --------------------------------------------------------\n# Faster R-CNN\n# Copyright (c) 2015 Microsoft\n# Licensed under The MIT License [see LICENSE for details]\n# Written by Ross Girshick and Sean Bell\n# --------------------------------------------------------\nfrom config import IM_SCALE\n\nimport numpy as np\n\n\n# Verify that we compute the same anchors as Shaoqing's matlab implementation:\n#\n#    >> load output/rpn_cachedir/faster_rcnn_VOC2007_ZF_stage1_rpn/anchors.mat\n#    >> anchors\n#\n#    anchors =\n#\n#       -83   -39   100    56\n#      -175   -87   192   104\n#      -359  -183   376   200\n#       -55   -55    72    72\n#      -119  -119   136   136\n#      -247  -247   264   264\n#       -35   -79    52    96\n#       -79  -167    96   184\n#      -167  -343   184   360\n\n# array([[ -83.,  -39.,  100.,   56.],\n#       [-175.,  -87.,  192.,  104.],\n#       [-359., -183.,  376.,  200.],\n#       [ -55.,  -55.,   72.,   72.],\n#       [-119., -119.,  136.,  136.],\n#       [-247., -247.,  264.,  264.],\n#       [ -35.,  -79.,   52.,   96.],\n#       [ -79., -167.,   96.,  184.],\n#       [-167., -343.,  184.,  360.]])\n\ndef generate_anchors(base_size=16, feat_stride=16, anchor_scales=(8,16,32), anchor_ratios=(0.5,1,2)):\n  \"\"\" A wrapper function to generate anchors given different scales\n    Also return the number of anchors in variable 'length'\n  \"\"\"\n  anchors = generate_base_anchors(base_size=base_size, \n                                  ratios=np.array(anchor_ratios),\n                                  scales=np.array(anchor_scales))\n  A = anchors.shape[0]\n  shift_x = np.arange(0, IM_SCALE // feat_stride) * feat_stride # Same as shift_x\n  shift_x, shift_y = np.meshgrid(shift_x, shift_x)\n\n  shifts = np.stack([shift_x, shift_y, shift_x, shift_y], -1)  # h, w, 4\n  all_anchors = shifts[:, :, None] + anchors[None, None]  #h, w, A, 4\n  return all_anchors\n\n  # shifts = np.vstack((shift_x.ravel(), 
shift_y.ravel(), shift_x.ravel(), shift_y.ravel())).transpose()\n  # K = shifts.shape[0]\n  # # width changes faster, so here it is H, W, C\n  # anchors = anchors.reshape((1, A, 4)) + shifts.reshape((1, K, 4)).transpose((1, 0, 2))\n  # anchors = anchors.reshape((K * A, 4)).astype(np.float32, copy=False)\n  # length = np.int32(anchors.shape[0])\n\n\ndef generate_base_anchors(base_size=16, ratios=[0.5, 1, 2], scales=2 ** np.arange(3, 6)):\n  \"\"\"\n  Generate anchor (reference) windows by enumerating aspect ratios X\n  scales wrt a reference (0, 0, 15, 15) window.\n  \"\"\"\n\n  base_anchor = np.array([1, 1, base_size, base_size]) - 1\n  ratio_anchors = _ratio_enum(base_anchor, ratios)\n  anchors = np.vstack([_scale_enum(ratio_anchors[i, :], scales)\n                       for i in range(ratio_anchors.shape[0])])\n  return anchors\n\n\ndef _whctrs(anchor):\n  \"\"\"\n  Return width, height, x center, and y center for an anchor (window).\n  \"\"\"\n\n  w = anchor[2] - anchor[0] + 1\n  h = anchor[3] - anchor[1] + 1\n  x_ctr = anchor[0] + 0.5 * (w - 1)\n  y_ctr = anchor[1] + 0.5 * (h - 1)\n  return w, h, x_ctr, y_ctr\n\n\ndef _mkanchors(ws, hs, x_ctr, y_ctr):\n  \"\"\"\n  Given a vector of widths (ws) and heights (hs) around a center\n  (x_ctr, y_ctr), output a set of anchors (windows).\n  \"\"\"\n\n  ws = ws[:, np.newaxis]\n  hs = hs[:, np.newaxis]\n  anchors = np.hstack((x_ctr - 0.5 * (ws - 1),\n                       y_ctr - 0.5 * (hs - 1),\n                       x_ctr + 0.5 * (ws - 1),\n                       y_ctr + 0.5 * (hs - 1)))\n  return anchors\n\n\ndef _ratio_enum(anchor, ratios):\n  \"\"\"\n  Enumerate a set of anchors for each aspect ratio wrt an anchor.\n  \"\"\"\n\n  w, h, x_ctr, y_ctr = _whctrs(anchor)\n  size = w * h\n  size_ratios = size / ratios\n  # NOTE: CHANGED TO NOT HAVE ROUNDING\n  ws = np.sqrt(size_ratios)\n  hs = ws * ratios\n  anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n  return anchors\n\n\ndef _scale_enum(anchor, scales):\n  \"\"\"\n  
Enumerate a set of anchors for each scale wrt an anchor.\n  \"\"\"\n\n  w, h, x_ctr, y_ctr = _whctrs(anchor)\n  ws = w * scales\n  hs = h * scales\n  anchors = _mkanchors(ws, hs, x_ctr, y_ctr)\n  return anchors\n"
  },
  {
    "path": "lib/fpn/make.sh",
    "content": "#!/usr/bin/env bash\n\ncd anchors\npython setup.py build_ext --inplace\ncd ..\n\ncd box_intersections_cpu\npython setup.py build_ext --inplace\ncd ..\n\ncd cpu_nms\npython build.py\ncd ..\n\ncd roi_align\npython build.py -C src/cuda clean\npython build.py -C src/cuda clean\ncd ..\n\necho \"Done compiling hopefully\"\n"
  },
  {
    "path": "lib/fpn/nms/Makefile",
    "content": "all: src/cuda/nms.cu.o\n\tpython build.py\n\nsrc/cuda/nms.cu.o: src/cuda/nms_kernel.cu\n\t$(MAKE) -C src/cuda\n\nclean:\n\t$(MAKE) -C src/cuda clean\n"
  },
  {
    "path": "lib/fpn/nms/build.py",
    "content": "import os\nimport torch\nfrom torch.utils.ffi import create_extension\n# Might have to export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}\n\nsources = []\nheaders = []\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/nms_cuda.c']\n    headers += ['src/nms_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/cuda/nms.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n    '_ext.nms',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n\n"
  },
  {
    "path": "lib/fpn/nms/functions/nms.py",
    "content": "# Le code for doing NMS\nimport torch\nimport numpy as np\nfrom .._ext import nms\n\n\ndef apply_nms(scores, boxes,  pre_nms_topn=12000, post_nms_topn=2000, boxes_per_im=None,\n              nms_thresh=0.7):\n    \"\"\"\n    Note - this function is non-differentiable so everything is assumed to be a tensor, not\n    a variable.\n        \"\"\"\n    just_inds = boxes_per_im is None\n    if boxes_per_im is None:\n        boxes_per_im = [boxes.size(0)]\n\n\n    s = 0\n    keep = []\n    im_per = []\n    for bpi in boxes_per_im:\n        e = s + int(bpi)\n        keep_im = _nms_single_im(scores[s:e], boxes[s:e], pre_nms_topn, post_nms_topn, nms_thresh)\n        keep.append(keep_im + s)\n        im_per.append(keep_im.size(0))\n\n        s = e\n\n    inds = torch.cat(keep, 0)\n    if just_inds:\n        return inds\n    return inds, im_per\n\n\ndef _nms_single_im(scores, boxes,  pre_nms_topn=12000, post_nms_topn=2000, nms_thresh=0.7):\n    keep = torch.IntTensor(scores.size(0))\n    vs, idx = torch.sort(scores, dim=0, descending=True)\n    if idx.size(0) > pre_nms_topn:\n        idx = idx[:pre_nms_topn]\n    boxes_sorted = boxes[idx].contiguous()\n    num_out = nms.nms_apply(keep, boxes_sorted, nms_thresh)\n    num_out = min(num_out, post_nms_topn)\n    keep = keep[:num_out].long()\n    keep = idx[keep.cuda(scores.get_device())]\n    return keep\n"
  },
  {
    "path": "lib/fpn/nms/src/cuda/Makefile",
    "content": "all: nms_kernel.cu nms_kernel.h\n\t/usr/local/cuda/bin/nvcc -c -o nms.cu.o nms_kernel.cu --compiler-options -fPIC -gencode arch=compute_61,code=sm_61\nclean:\n\trm nms.cu.o\n"
  },
  {
    "path": "lib/fpn/nms/src/cuda/nms_kernel.cu",
    "content": "// ------------------------------------------------------------------\n// Faster R-CNN\n// Copyright (c) 2015 Microsoft\n// Licensed under The MIT License [see fast-rcnn/LICENSE for details]\n// Written by Shaoqing Ren\n// ------------------------------------------------------------------\n\n#include <vector>\n#include <iostream>\n\n#define CUDA_CHECK(condition) \\\n  /* Code block avoids redefinition of cudaError_t error */ \\\n  do { \\\n    cudaError_t error = condition; \\\n    if (error != cudaSuccess) { \\\n      std::cout << cudaGetErrorString(error) << std::endl; \\\n    } \\\n  } while (0)\n\n#define DIVUP(m,n) ((m) / (n) + ((m) % (n) > 0))\nint const threadsPerBlock = sizeof(unsigned long long) * 8;\n\n__device__ inline float devIoU(float const * const a, float const * const b) {\n  float left = max(a[0], b[0]), right = min(a[2], b[2]);\n  float top = max(a[1], b[1]), bottom = min(a[3], b[3]);\n  float width = max(right - left + 1, 0.f), height = max(bottom - top + 1, 0.f);\n  float interS = width * height;\n  float Sa = (a[2] - a[0] + 1) * (a[3] - a[1] + 1);\n  float Sb = (b[2] - b[0] + 1) * (b[3] - b[1] + 1);\n  return interS / (Sa + Sb - interS);\n}\n\n__global__ void nms_kernel(const int n_boxes, const float nms_overlap_thresh,\n                           const float *dev_boxes, unsigned long long *dev_mask) {\n  const int row_start = blockIdx.y;\n  const int col_start = blockIdx.x;\n\n  // if (row_start > col_start) return;\n\n  const int row_size =\n        min(n_boxes - row_start * threadsPerBlock, threadsPerBlock);\n  const int col_size =\n        min(n_boxes - col_start * threadsPerBlock, threadsPerBlock);\n\n  __shared__ float block_boxes[threadsPerBlock * 5];\n  if (threadIdx.x < col_size) {\n    block_boxes[threadIdx.x * 4 + 0] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 4 + 0];\n    block_boxes[threadIdx.x * 4 + 1] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 4 + 1];\n    
block_boxes[threadIdx.x * 4 + 2] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 4 + 2];\n    block_boxes[threadIdx.x * 4 + 3] =\n        dev_boxes[(threadsPerBlock * col_start + threadIdx.x) * 4 + 3];\n  }\n  __syncthreads();\n\n  if (threadIdx.x < row_size) {\n    const int cur_box_idx = threadsPerBlock * row_start + threadIdx.x;\n    const float *cur_box = dev_boxes + cur_box_idx * 4;\n    int i = 0;\n    unsigned long long t = 0;\n    int start = 0;\n    if (row_start == col_start) {\n      start = threadIdx.x + 1;\n    }\n    for (i = start; i < col_size; i++) {\n      if (devIoU(cur_box, block_boxes + i * 4) > nms_overlap_thresh) {\n        t |= 1ULL << i;\n      }\n    }\n    const int col_blocks = DIVUP(n_boxes, threadsPerBlock);\n    dev_mask[cur_box_idx * col_blocks + col_start] = t;\n  }\n}\n\nvoid _set_device(int device_id) {\n  int current_device;\n  CUDA_CHECK(cudaGetDevice(&current_device));\n  if (current_device == device_id) {\n    return;\n  }\n  // The call to cudaSetDevice must come before any calls to Get, which\n  // may perform initialization using the GPU.\n  CUDA_CHECK(cudaSetDevice(device_id));\n}\n\nextern \"C\" int ApplyNMSGPU(int* keep_out, const float* boxes_dev, const int boxes_num,\n          float nms_overlap_thresh, int device_id) {\n  _set_device(device_id);\n\n  unsigned long long* mask_dev = NULL;\n\n  const int col_blocks = DIVUP(boxes_num, threadsPerBlock);\n\n  CUDA_CHECK(cudaMalloc(&mask_dev,\n                        boxes_num * col_blocks * sizeof(unsigned long long)));\n\n  dim3 blocks(DIVUP(boxes_num, threadsPerBlock),\n              DIVUP(boxes_num, threadsPerBlock));\n  dim3 threads(threadsPerBlock);\n  nms_kernel<<<blocks, threads>>>(boxes_num,\n                                  nms_overlap_thresh,\n                                  boxes_dev,\n                                  mask_dev);\n\n  std::vector<unsigned long long> mask_host(boxes_num * col_blocks);\n  CUDA_CHECK(cudaMemcpy(&mask_host[0],\n 
                       mask_dev,\n                        sizeof(unsigned long long) * boxes_num * col_blocks,\n                        cudaMemcpyDeviceToHost));\n\n  std::vector<unsigned long long> remv(col_blocks);\n  memset(&remv[0], 0, sizeof(unsigned long long) * col_blocks);\n\n  int num_to_keep = 0;\n  for (int i = 0; i < boxes_num; i++) {\n    int nblock = i / threadsPerBlock;\n    int inblock = i % threadsPerBlock;\n\n    if (!(remv[nblock] & (1ULL << inblock))) {\n      keep_out[num_to_keep++] = i;\n      unsigned long long *p = &mask_host[0] + i * col_blocks;\n      for (int j = nblock; j < col_blocks; j++) {\n        remv[j] |= p[j];\n      }\n    }\n  }\n\n  CUDA_CHECK(cudaFree(mask_dev));\n  return num_to_keep;\n}\n"
  },
  {
    "path": "lib/fpn/nms/src/cuda/nms_kernel.h",
    "content": "int ApplyNMSGPU(int* keep_out, const float* boxes_dev, const int boxes_num,\n          float nms_overlap_thresh, int device_id);\n\n"
  },
  {
    "path": "lib/fpn/nms/src/nms_cuda.c",
    "content": "#include <THC/THC.h>\n#include <math.h>\n#include \"cuda/nms_kernel.h\"\n\nextern THCState *state;\n\nint nms_apply(THIntTensor* keep, THCudaTensor* boxes_sorted, const float nms_thresh)\n{\n    int* keep_data = THIntTensor_data(keep);\n    const float* boxes_sorted_data = THCudaTensor_data(state, boxes_sorted);\n\n    const int boxes_num = THCudaTensor_size(state, boxes_sorted, 0);\n\n    const int devId = THCudaTensor_getDevice(state, boxes_sorted);\n\n    int numTotalKeep = ApplyNMSGPU(keep_data, boxes_sorted_data, boxes_num, nms_thresh, devId);\n    return numTotalKeep;\n}\n\n\n"
  },
  {
    "path": "lib/fpn/nms/src/nms_cuda.h",
    "content": "int nms_apply(THIntTensor* keep, THCudaTensor* boxes_sorted, const float nms_thresh);"
  },
  {
    "path": "lib/fpn/proposal_assignments/proposal_assignments_det.py",
    "content": "\nimport numpy as np\nimport numpy.random as npr\nfrom config import BG_THRESH_HI, BG_THRESH_LO, FG_FRACTION, ROIS_PER_IMG\nfrom lib.fpn.box_utils import bbox_overlaps\nfrom lib.pytorch_misc import to_variable\nimport torch\n\n#############################################################\n# The following is only for object detection\n@to_variable\ndef proposal_assignments_det(rpn_rois, gt_boxes, gt_classes, image_offset, fg_thresh=0.5):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    :param rpn_rois: [img_ind, x1, y1, x2, y2]\n    :param gt_boxes:   [num_boxes, 4] array of x0, y0, x1, y1\n    :param gt_classes: [num_boxes, 2] array of [img_ind, class]\n    :param Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n    :return:\n        rois: [num_rois, 5]\n        labels: [num_rois] array of labels\n        bbox_targets [num_rois, 4] array of targets for the labels.\n    \"\"\"\n    fg_rois_per_image = int(np.round(ROIS_PER_IMG * FG_FRACTION))\n\n    gt_img_inds = gt_classes[:, 0] - image_offset\n\n    all_boxes = torch.cat([rpn_rois[:, 1:], gt_boxes], 0)\n\n    ims_per_box = torch.cat([rpn_rois[:, 0].long(), gt_img_inds], 0)\n\n    im_sorted, idx = torch.sort(ims_per_box, 0)\n    all_boxes = all_boxes[idx]\n\n    # Assume that the GT boxes are already sorted in terms of image id\n    num_images = int(im_sorted[-1]) + 1\n\n    labels = []\n    rois = []\n    bbox_targets = []\n    for im_ind in range(num_images):\n        g_inds = (gt_img_inds == im_ind).nonzero()\n\n        if g_inds.dim() == 0:\n            continue\n        g_inds = g_inds.squeeze(1)\n        g_start = g_inds[0]\n        g_end = g_inds[-1] + 1\n\n        t_inds = (im_sorted == im_ind).nonzero().squeeze(1)\n        t_start = t_inds[0]\n        t_end = t_inds[-1] + 1\n\n        # Max overlaps: for each predicted box, get the max ROI\n        # 
Get the indices into the GT boxes too (must offset by the box start)\n        ious = bbox_overlaps(all_boxes[t_start:t_end], gt_boxes[g_start:g_end])\n        max_overlaps, gt_assignment = ious.max(1)\n        max_overlaps = max_overlaps.cpu().numpy()\n        # print(\"Best overlap is {}\".format(max_overlaps.max()))\n        # print(\"\\ngt assignment is {} while g_start is {} \\n ---\".format(gt_assignment, g_start))\n        gt_assignment += g_start\n\n        keep_inds_np, num_fg = _sel_inds(max_overlaps, fg_thresh, fg_rois_per_image,\n                                         ROIS_PER_IMG)\n\n        if keep_inds_np.size == 0:\n            continue\n\n        keep_inds = torch.LongTensor(keep_inds_np).cuda(rpn_rois.get_device())\n\n        labels_ = gt_classes[:, 1][gt_assignment[keep_inds]]\n        bbox_target_ = gt_boxes[gt_assignment[keep_inds]]\n\n        # Clamp labels_ for the background RoIs to 0\n        if num_fg < labels_.size(0):\n            labels_[num_fg:] = 0\n\n        rois_ = torch.cat((\n            im_sorted[t_start:t_end, None][keep_inds].float(),\n            all_boxes[t_start:t_end][keep_inds],\n        ), 1)\n\n        labels.append(labels_)\n        rois.append(rois_)\n        bbox_targets.append(bbox_target_)\n\n    rois = torch.cat(rois, 0)\n    labels = torch.cat(labels, 0)\n    bbox_targets = torch.cat(bbox_targets, 0)\n    return rois, labels, bbox_targets\n\n\ndef _sel_inds(max_overlaps, fg_thresh=0.5, fg_rois_per_image=128, rois_per_image=256):\n    # Select foreground RoIs as those with >= FG_THRESH overlap\n    fg_inds = np.where(max_overlaps >= fg_thresh)[0]\n\n    # Guard against the case when an image has fewer than fg_rois_per_image\n    # foreground RoIs\n    fg_rois_per_this_image = min(fg_rois_per_image, fg_inds.shape[0])\n    # Sample foreground regions without replacement\n    if fg_inds.size > 0:\n        fg_inds = npr.choice(fg_inds, size=fg_rois_per_this_image, replace=False)\n\n    # Select background RoIs as 
those within [BG_THRESH_LO, BG_THRESH_HI)\n    bg_inds = np.where((max_overlaps < BG_THRESH_HI) & (max_overlaps >= BG_THRESH_LO))[0]\n\n    # Compute number of background RoIs to take from this image (guarding\n    # against there being fewer than desired)\n    bg_rois_per_this_image = rois_per_image - fg_rois_per_this_image\n    bg_rois_per_this_image = min(bg_rois_per_this_image, bg_inds.size)\n    # Sample background regions without replacement\n    if bg_inds.size > 0:\n        bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image, replace=False)\n\n    return np.append(fg_inds, bg_inds), fg_rois_per_this_image\n\n"
  },
  {
    "path": "lib/fpn/proposal_assignments/proposal_assignments_gtbox.py",
    "content": "from lib.pytorch_misc import enumerate_by_image, gather_nd, random_choose\nfrom lib.fpn.box_utils import bbox_preds, center_size, bbox_overlaps\nimport torch\nfrom lib.pytorch_misc import diagonal_inds, to_variable\nfrom config import RELS_PER_IMG, REL_FG_FRACTION\n\n\n@to_variable\ndef proposal_assignments_gtbox(rois, gt_boxes, gt_classes, gt_rels, image_offset, fg_thresh=0.5):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    :param rpn_rois: [img_ind, x1, y1, x2, y2]\n    :param gt_boxes:   [num_boxes, 4] array of x0, y0, x1, y1]. Not needed it seems\n    :param gt_classes: [num_boxes, 2] array of [img_ind, class]\n        Note, the img_inds here start at image_offset\n    :param gt_rels     [num_boxes, 4] array of [img_ind, box_0, box_1, rel type].\n        Note, the img_inds here start at image_offset\n    :param Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n    :return:\n        rois: [num_rois, 5]\n        labels: [num_rois] array of labels\n        bbox_targets [num_rois, 4] array of targets for the labels.\n        rel_labels: [num_rels, 4] (img ind, box0 ind, box1ind, rel type)\n    \"\"\"\n    im_inds = rois[:,0].long()\n\n    num_im = im_inds[-1] + 1\n\n    # Offset the image indices in fg_rels to refer to absolute indices (not just within img i)\n    fg_rels = gt_rels.clone()\n    fg_rels[:,0] -= image_offset\n    offset = {}\n    for i, s, e in enumerate_by_image(im_inds):\n        offset[i] = s\n    for i, s, e in enumerate_by_image(fg_rels[:, 0]):\n        fg_rels[s:e, 1:3] += offset[i]\n\n    # Try ALL things, not just intersections.\n    is_cand = (im_inds[:, None] == im_inds[None])\n    is_cand.view(-1)[diagonal_inds(is_cand)] = 0\n\n    # # Compute salience\n    # gt_inds = fg_rels[:, 1:3].contiguous().view(-1)\n    # labels_arange = labels.data.new(labels.size(0))\n    # torch.arange(0, 
labels.size(0), out=labels_arange)\n    # salience_labels = ((gt_inds[:, None] == labels_arange[None]).long().sum(0) > 0).long()\n    # labels = torch.stack((labels, salience_labels), 1)\n\n    # Add in some BG labels\n\n    # NOW WE HAVE TO EXCLUDE THE FGs.\n    # TODO: check if this causes an error if many duplicate GTs havent been filtered out\n\n    is_cand.view(-1)[fg_rels[:,1]*im_inds.size(0) + fg_rels[:,2]] = 0\n    is_bgcand = is_cand.nonzero()\n    # TODO: make this sample on a per image case\n    # If too many then sample\n    num_fg = min(fg_rels.size(0), int(RELS_PER_IMG * REL_FG_FRACTION * num_im))\n    if num_fg < fg_rels.size(0):\n        fg_rels = random_choose(fg_rels, num_fg)\n\n    # If too many then sample\n    num_bg = min(is_bgcand.size(0) if is_bgcand.dim() > 0 else 0,\n                 int(RELS_PER_IMG * num_im) - num_fg)\n    if num_bg > 0:\n        bg_rels = torch.cat((\n            im_inds[is_bgcand[:, 0]][:, None],\n            is_bgcand,\n            (is_bgcand[:, 0, None] < -10).long(),\n        ), 1)\n\n        if num_bg < is_bgcand.size(0):\n            bg_rels = random_choose(bg_rels, num_bg)\n        rel_labels = torch.cat((fg_rels, bg_rels), 0)\n    else:\n        rel_labels = fg_rels\n\n\n    # last sort by rel.\n    _, perm = torch.sort(rel_labels[:, 0]*(gt_boxes.size(0)**2) +\n                         rel_labels[:,1]*gt_boxes.size(0) + rel_labels[:,2])\n\n    rel_labels = rel_labels[perm].contiguous()\n\n    labels = gt_classes[:,1].contiguous()\n    return rois, labels, rel_labels\n"
  },
  {
    "path": "lib/fpn/proposal_assignments/proposal_assignments_postnms.py",
    "content": "# --------------------------------------------------------\n# Goal: assign ROIs to targets\n# --------------------------------------------------------\n\n\nimport numpy as np\nimport numpy.random as npr\nfrom .proposal_assignments_rel import _sel_rels\nfrom lib.fpn.box_utils import bbox_overlaps\nfrom lib.pytorch_misc import to_variable\nimport torch\n\n\n@to_variable\ndef proposal_assignments_postnms(\n        rois, gt_boxes, gt_classes, gt_rels, nms_inds, image_offset, fg_thresh=0.5,\n        max_objs=100, max_rels=100, rand_val=0.01):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    :param rpn_rois: [img_ind, x1, y1, x2, y2]\n    :param gt_boxes:   [num_boxes, 4] array of x0, y0, x1, y1]\n    :param gt_classes: [num_boxes, 2] array of [img_ind, class]\n    :param gt_rels     [num_boxes, 4] array of [img_ind, box_0, box_1, rel type]\n    :param Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n    :return:\n        rois: [num_rois, 5]\n        labels: [num_rois] array of labels\n        rel_labels: [num_rels, 4] (img ind, box0 ind, box1ind, rel type)\n    \"\"\"\n    pred_inds_np = rois[:, 0].cpu().numpy().astype(np.int64)\n    pred_boxes_np = rois[:, 1:].cpu().numpy()\n    nms_inds_np = nms_inds.cpu().numpy()\n    sup_inds_np = np.setdiff1d(np.arange(pred_boxes_np.shape[0]), nms_inds_np)\n\n    # split into chosen and suppressed\n    chosen_inds_np = pred_inds_np[nms_inds_np]\n    chosen_boxes_np = pred_boxes_np[nms_inds_np]\n\n    suppre_inds_np = pred_inds_np[sup_inds_np]\n    suppre_boxes_np = pred_boxes_np[sup_inds_np]\n\n    gt_boxes_np = gt_boxes.cpu().numpy()\n    gt_classes_np = gt_classes.cpu().numpy()\n    gt_rels_np = gt_rels.cpu().numpy()\n\n    gt_classes_np[:, 0] -= image_offset\n    gt_rels_np[:, 0] -= image_offset\n\n    num_im = gt_classes_np[:, 0].max()+1\n\n    rois = []\n    obj_labels = 
[]\n    rel_labels = []\n    num_box_seen = 0\n\n    for im_ind in range(num_im):\n        chosen_ind = np.where(chosen_inds_np == im_ind)[0]\n        suppre_ind = np.where(suppre_inds_np == im_ind)[0]\n\n        gt_ind = np.where(gt_classes_np[:, 0] == im_ind)[0]\n        gt_boxes_i = gt_boxes_np[gt_ind]\n        gt_classes_i = gt_classes_np[gt_ind, 1]\n        gt_rels_i = gt_rels_np[gt_rels_np[:, 0] == im_ind, 1:]\n\n        # Get IOUs between chosen and GT boxes and if needed we'll add more in\n\n        chosen_boxes_i = chosen_boxes_np[chosen_ind]\n        suppre_boxes_i = suppre_boxes_np[suppre_ind]\n\n        n_chosen = chosen_boxes_i.shape[0]\n        n_suppre = suppre_boxes_i.shape[0]\n        n_gt_box = gt_boxes_i.shape[0]\n\n        # add a teensy bit of random noise because some GT boxes might be duplicated, etc.\n        pred_boxes_i = np.concatenate((chosen_boxes_i, suppre_boxes_i, gt_boxes_i), 0)\n        ious = bbox_overlaps(pred_boxes_i, gt_boxes_i) + rand_val*(\n            np.random.rand(pred_boxes_i.shape[0], gt_boxes_i.shape[0])-0.5)\n\n        # Let's say that a box can only be assigned ONCE for now because we've already done\n        # the NMS and stuff.\n        is_hit = ious > fg_thresh\n\n        obj_assignments_i = is_hit.argmax(1)\n        obj_assignments_i[~is_hit.any(1)] = -1\n\n        vals, first_occurance_ind = np.unique(obj_assignments_i, return_index=True)\n        obj_assignments_i[np.setdiff1d(\n            np.arange(obj_assignments_i.shape[0]), first_occurance_ind)] = -1\n\n        extra_to_add = np.where(obj_assignments_i[n_chosen:] != -1)[0] + n_chosen\n\n        # Add them in somewhere at random\n        num_inds_to_have = min(max_objs, n_chosen + extra_to_add.shape[0])\n        boxes_i = np.zeros((num_inds_to_have, 4), dtype=np.float32)\n        labels_i = np.zeros(num_inds_to_have, dtype=np.int64)\n\n        inds_from_nms = np.sort(np.random.choice(num_inds_to_have, size=n_chosen, replace=False))\n        
inds_from_elsewhere = np.setdiff1d(np.arange(num_inds_to_have), inds_from_nms)\n\n        boxes_i[inds_from_nms] = chosen_boxes_i\n        labels_i[inds_from_nms] = gt_classes_i[obj_assignments_i[:n_chosen]]\n\n        boxes_i[inds_from_elsewhere] = pred_boxes_i[extra_to_add]\n        labels_i[inds_from_elsewhere] = gt_classes_i[obj_assignments_i[extra_to_add]]\n\n        # Now we do the relationships, same as for rel\n        all_rels_i = _sel_rels(bbox_overlaps(boxes_i, gt_boxes_i),\n                               boxes_i,\n                               labels_i,\n                               gt_classes_i,\n                               gt_rels_i,\n                               fg_thresh=fg_thresh,\n                               fg_rels_per_image=100)\n        all_rels_i[:,0:2] += num_box_seen\n\n        rois.append(np.column_stack((\n            im_ind * np.ones(boxes_i.shape[0], dtype=np.float32),\n            boxes_i,\n        )))\n        obj_labels.append(labels_i)\n        rel_labels.append(np.column_stack((\n            im_ind*np.ones(all_rels_i.shape[0], dtype=np.int64),\n            all_rels_i,\n        )))\n        num_box_seen += boxes_i.shape[0]  # number of boxes, not total elements\n\n    rois = torch.FloatTensor(np.concatenate(rois, 0)).cuda(gt_boxes.get_device(), async=True)\n    labels = torch.LongTensor(np.concatenate(obj_labels, 0)).cuda(gt_boxes.get_device(), async=True)\n    rel_labels = torch.LongTensor(np.concatenate(rel_labels, 0)).cuda(gt_boxes.get_device(),\n                                                                      async=True)\n\n    return rois, labels, rel_labels\n"
  },
  {
    "path": "lib/fpn/proposal_assignments/proposal_assignments_rel.py",
    "content": "# --------------------------------------------------------\n# Goal: assign ROIs to targets\n# --------------------------------------------------------\n\n\nimport numpy as np\nimport numpy.random as npr\nfrom config import BG_THRESH_HI, BG_THRESH_LO, FG_FRACTION_REL, ROIS_PER_IMG_REL, REL_FG_FRACTION, \\\n    RELS_PER_IMG\nfrom lib.fpn.box_utils import bbox_overlaps\nfrom lib.pytorch_misc import to_variable, nonintersecting_2d_inds\nfrom collections import defaultdict\nimport torch\n\n\n@to_variable\ndef proposal_assignments_rel(rpn_rois, gt_boxes, gt_classes, gt_rels, image_offset, fg_thresh=0.5):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    :param rpn_rois: [img_ind, x1, y1, x2, y2]\n    :param gt_boxes:   [num_boxes, 4] array of x0, y0, x1, y1]\n    :param gt_classes: [num_boxes, 2] array of [img_ind, class]\n    :param gt_rels     [num_boxes, 4] array of [img_ind, box_0, box_1, rel type]\n    :param Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n    :return:\n        rois: [num_rois, 5]\n        labels: [num_rois] array of labels\n        bbox_targets [num_rois, 4] array of targets for the labels.\n        rel_labels: [num_rels, 4] (img ind, box0 ind, box1ind, rel type)\n    \"\"\"\n    fg_rois_per_image = int(np.round(ROIS_PER_IMG_REL * FG_FRACTION_REL))\n    fg_rels_per_image = int(np.round(REL_FG_FRACTION * RELS_PER_IMG))\n\n    pred_inds_np = rpn_rois[:, 0].cpu().numpy().astype(np.int64)\n    pred_boxes_np = rpn_rois[:, 1:].cpu().numpy()\n    gt_boxes_np = gt_boxes.cpu().numpy()\n    gt_classes_np = gt_classes.cpu().numpy()\n    gt_rels_np = gt_rels.cpu().numpy()\n\n    gt_classes_np[:, 0] -= image_offset\n    gt_rels_np[:, 0] -= image_offset\n\n    num_im = gt_classes_np[:, 0].max()+1\n\n    rois = []\n    obj_labels = []\n    rel_labels = []\n    bbox_targets = []\n\n    num_box_seen = 0\n\n    
for im_ind in range(num_im):\n        pred_ind = np.where(pred_inds_np == im_ind)[0]\n\n        gt_ind = np.where(gt_classes_np[:, 0] == im_ind)[0]\n        gt_boxes_i = gt_boxes_np[gt_ind]\n        gt_classes_i = gt_classes_np[gt_ind, 1]\n        gt_rels_i = gt_rels_np[gt_rels_np[:, 0] == im_ind, 1:]\n\n        pred_boxes_i = np.concatenate((pred_boxes_np[pred_ind], gt_boxes_i), 0)\n        ious = bbox_overlaps(pred_boxes_i, gt_boxes_i)\n \n        obj_inds_i, obj_labels_i, obj_assignments_i = _sel_inds(ious, gt_classes_i, \n            fg_thresh, fg_rois_per_image, ROIS_PER_IMG_REL)\n\n        all_rels_i = _sel_rels(ious[obj_inds_i], pred_boxes_i[obj_inds_i], obj_labels_i,\n                               gt_classes_i, gt_rels_i,\n                               fg_thresh=fg_thresh, fg_rels_per_image=fg_rels_per_image)\n        all_rels_i[:,0:2] += num_box_seen\n\n        rois.append(np.column_stack((\n            im_ind * np.ones(obj_inds_i.shape[0], dtype=np.float32),\n            pred_boxes_i[obj_inds_i],\n        )))\n        obj_labels.append(obj_labels_i)\n        rel_labels.append(np.column_stack((\n            im_ind*np.ones(all_rels_i.shape[0], dtype=np.int64),\n            all_rels_i,\n        )))\n\n        # print(\"Gtboxes i {} obj assignments i {}\".format(gt_boxes_i, obj_assignments_i))\n        bbox_targets.append(gt_boxes_i[obj_assignments_i])\n\n        num_box_seen += obj_inds_i.size\n\n    rois = torch.FloatTensor(np.concatenate(rois, 0)).cuda(rpn_rois.get_device(), async=True)\n    labels = torch.LongTensor(np.concatenate(obj_labels, 0)).cuda(rpn_rois.get_device(), async=True)\n    bbox_targets = torch.FloatTensor(np.concatenate(bbox_targets, 0)).cuda(rpn_rois.get_device(),\n                                                                           async=True)\n    rel_labels = torch.LongTensor(np.concatenate(rel_labels, 0)).cuda(rpn_rois.get_device(),\n                                                                      async=True)\n\n    
return rois, labels, bbox_targets, rel_labels\n\n\ndef _sel_rels(ious, pred_boxes, pred_labels, gt_classes, gt_rels, fg_thresh=0.5, fg_rels_per_image=128, num_sample_per_gt=1, filter_non_overlap=True):\n    \"\"\"\n    Selects the relations needed\n    :param ious: [num_pred', num_gt]\n    :param pred_boxes: [num_pred', 4]\n    :param pred_labels: [num_pred']\n    :param gt_classes: [num_gt]\n    :param gt_rels: [num_gtrel, 3]\n    :param fg_thresh: overlap threshold for a prediction to match a GT box\n    :param fg_rels_per_image: maximum number of foreground relations to keep\n    :param num_sample_per_gt: number of predicted relations to sample per GT relation\n    :param filter_non_overlap: if True, only box pairs that overlap can be background relations\n    :return: new rels, [num_predrel, 3] where each is (pred_ind1, pred_ind2, predicate)\n    \"\"\"\n    is_match = (ious >= fg_thresh) & (pred_labels[:, None] == gt_classes[None, :])\n\n    pbi_iou = bbox_overlaps(pred_boxes, pred_boxes)\n\n    # Limit ourselves to only IOUs that overlap, but are not the exact same box\n    # since we duplicated stuff earlier.\n    if filter_non_overlap:\n        rel_possibilities = (pbi_iou < 1) & (pbi_iou > 0)\n        rels_intersect = rel_possibilities\n    else:\n        rel_possibilities = np.ones((pred_labels.shape[0], pred_labels.shape[0]),\n                                    dtype=np.int64) - np.eye(pred_labels.shape[0], dtype=np.int64)\n        rels_intersect = (pbi_iou < 1) & (pbi_iou > 0)\n\n    # ONLY select relations between ground truth because otherwise we get useless data\n    rel_possibilities[pred_labels == 0] = 0\n    rel_possibilities[:,pred_labels == 0] = 0\n\n    # For each GT relationship, sample up to num_sample_per_gt relationships.\n    fg_rels = []\n    p_size = []\n    for i, (from_gtind, to_gtind, rel_id) in enumerate(gt_rels):\n        fg_rels_i = []\n        fg_scores_i = []\n\n        for from_ind in np.where(is_match[:,from_gtind])[0]:\n            for to_ind in np.where(is_match[:,to_gtind])[0]:\n                if from_ind != to_ind:\n                    fg_rels_i.append((from_ind, to_ind, rel_id))\n                    fg_scores_i.append((ious[from_ind, from_gtind]*ious[to_ind, to_gtind]))\n                    
rel_possibilities[from_ind, to_ind] = 0\n        if len(fg_rels_i) == 0:\n            continue\n        p = np.array(fg_scores_i)\n        p = p/p.sum()\n        p_size.append(p.shape[0])\n        num_to_add = min(p.shape[0], num_sample_per_gt)\n        for rel_to_add in npr.choice(p.shape[0], p=p, size=num_to_add, replace=False):\n            fg_rels.append(fg_rels_i[rel_to_add])\n\n    bg_rels = np.column_stack(np.where(rel_possibilities))\n    bg_rels = np.column_stack((bg_rels, np.zeros(bg_rels.shape[0], dtype=np.int64)))\n\n    fg_rels = np.array(fg_rels, dtype=np.int64)\n    if fg_rels.size > 0 and fg_rels.shape[0] > fg_rels_per_image:\n        fg_rels = fg_rels[npr.choice(fg_rels.shape[0], size=fg_rels_per_image, replace=False)]\n        # print(\"{} scores for {} GT. max={} min={} BG rels {}\".format(\n        #     fg_rels_scores.shape[0], gt_rels.shape[0], fg_rels_scores.max(), fg_rels_scores.min(),\n        #     bg_rels.shape))\n    elif fg_rels.size == 0:\n        fg_rels = np.zeros((0,3), dtype=np.int64)\n\n    num_bg_rel = min(RELS_PER_IMG - fg_rels.shape[0], bg_rels.shape[0])\n    if bg_rels.size > 0:\n\n        # Sample 4x as many intersecting relationships as non-intersecting.\n        bg_rels_intersect = rels_intersect[bg_rels[:,0], bg_rels[:,1]]\n        p = bg_rels_intersect.astype(np.float32)\n        p[bg_rels_intersect == 0] = 0.2\n        p[bg_rels_intersect == 1] = 0.8\n        p /= p.sum()\n        bg_rels = bg_rels[np.random.choice(bg_rels.shape[0], p=p, size=num_bg_rel, replace=False)]\n    else:\n        bg_rels = np.zeros((0,3), dtype=np.int64)\n\n    #print(\"GTR {} -> AR {} vs {}\".format(gt_rels.shape, fg_rels.shape, bg_rels.shape))\n\n    all_rels = np.concatenate((fg_rels, bg_rels), 0)\n\n    # Sort by 2nd ind and then 1st ind\n    all_rels = all_rels[np.lexsort((all_rels[:, 1], all_rels[:, 0]))]\n    return all_rels\n\ndef _sel_inds(ious, gt_classes_i, fg_thresh=0.5, fg_rois_per_image=128, rois_per_image=256, 
n_sample_per=1):\n\n    #gt_assignment = ious.argmax(1)\n    #max_overlaps = ious[np.arange(ious.shape[0]), gt_assignment]\n    #fg_inds = np.where(max_overlaps >= fg_thresh)[0]\n    \n    fg_ious = ious.T >= fg_thresh #[num_gt, num_pred]\n    #is_bg = ~fg_ious.any(0)\n\n    # Sample up to n_sample_per inds per GT box.\n    fg_inds = []\n    for i, (ious_i, cls_i) in enumerate(zip(fg_ious, gt_classes_i)):\n        n_sample_this_roi = min(n_sample_per, ious_i.sum())\n        if n_sample_this_roi > 0:\n            p = ious_i.astype(np.float64) / ious_i.sum()\n            for ind in npr.choice(ious_i.shape[0], p=p, size=n_sample_this_roi, replace=False):\n                fg_inds.append((ind, i))\n    \n    fg_inds = np.array(fg_inds, dtype=np.int64)\n    if fg_inds.size == 0:\n        fg_inds = np.zeros((0, 2), dtype=np.int64)\n    elif fg_inds.shape[0] > fg_rois_per_image:\n        #print(\"sample FG\")\n        fg_inds = fg_inds[npr.choice(fg_inds.shape[0], size=fg_rois_per_image, replace=False)]\n    \n    # Select background RoIs as those within [BG_THRESH_LO, BG_THRESH_HI)\n    max_overlaps = ious.max(1)\n    bg_inds = np.where((max_overlaps < BG_THRESH_HI) & (max_overlaps >= BG_THRESH_LO))[0]\n\n    # Compute number of background RoIs to take from this image (guarding\n    # against there being fewer than desired)\n    bg_rois_per_this_image = min(rois_per_image-fg_inds.shape[0], bg_inds.size)\n    # Sample background regions without replacement\n    if bg_inds.size > 0:\n        bg_inds = npr.choice(bg_inds, size=bg_rois_per_this_image, replace=False)\n\n\n    # Fix for format issues\n    obj_inds = np.concatenate((fg_inds[:,0], bg_inds), 0)\n    obj_assignments_i = np.concatenate((fg_inds[:,1], np.zeros(bg_inds.shape[0], dtype=np.int64)))\n    obj_labels_i = gt_classes_i[obj_assignments_i]\n    obj_labels_i[fg_inds.shape[0]:] = 0\n    #print(\"{} FG and {} BG\".format(fg_inds.shape[0], bg_inds.shape[0]))\n    return obj_inds, obj_labels_i, obj_assignments_i\n\n\n"
  },
  {
    "path": "lib/fpn/proposal_assignments/rel_assignments.py",
    "content": "# --------------------------------------------------------\n# Goal: assign ROIs to targets\n# --------------------------------------------------------\n\n\nimport numpy as np\nimport numpy.random as npr\nfrom config import BG_THRESH_HI, BG_THRESH_LO, REL_FG_FRACTION, RELS_PER_IMG_REFINE\nfrom lib.fpn.box_utils import bbox_overlaps\nfrom lib.pytorch_misc import to_variable, nonintersecting_2d_inds\nfrom collections import defaultdict\nimport torch\n\n@to_variable\ndef rel_assignments(im_inds, rpn_rois, roi_gtlabels, gt_boxes, gt_classes, gt_rels, image_offset,\n                    fg_thresh=0.5, num_sample_per_gt=4, filter_non_overlap=True):\n    \"\"\"\n    Assign object detection proposals to ground-truth targets. Produces proposal\n    classification labels and bounding-box regression targets.\n    :param rpn_rois: [img_ind, x1, y1, x2, y2]\n    :param gt_boxes:   [num_boxes, 4] array of x0, y0, x1, y1]\n    :param gt_classes: [num_boxes, 2] array of [img_ind, class]\n    :param gt_rels     [num_boxes, 4] array of [img_ind, box_0, box_1, rel type]\n    :param Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)\n    :return:\n        rois: [num_rois, 5]\n        labels: [num_rois] array of labels\n        bbox_targets [num_rois, 4] array of targets for the labels.\n        rel_labels: [num_rels, 4] (img ind, box0 ind, box1ind, rel type)\n    \"\"\"\n    fg_rels_per_image = int(np.round(REL_FG_FRACTION * 64))\n\n    pred_inds_np = im_inds.cpu().numpy()\n    pred_boxes_np = rpn_rois.cpu().numpy()\n    pred_boxlabels_np = roi_gtlabels.cpu().numpy()\n    gt_boxes_np = gt_boxes.cpu().numpy()\n    gt_classes_np = gt_classes.cpu().numpy()\n    gt_rels_np = gt_rels.cpu().numpy()\n\n    gt_classes_np[:, 0] -= image_offset\n    gt_rels_np[:, 0] -= image_offset\n\n    num_im = gt_classes_np[:, 0].max()+1\n\n    # print(\"Pred inds {} pred boxes {} pred box labels {} gt classes {} gt rels {}\".format(\n    #     pred_inds_np, 
pred_boxes_np, pred_boxlabels_np, gt_classes_np, gt_rels_np\n    # ))\n\n    rel_labels = []\n    num_box_seen = 0\n    for im_ind in range(num_im):\n        pred_ind = np.where(pred_inds_np == im_ind)[0]\n\n        gt_ind = np.where(gt_classes_np[:, 0] == im_ind)[0]\n        gt_boxes_i = gt_boxes_np[gt_ind]\n        gt_classes_i = gt_classes_np[gt_ind, 1]\n        gt_rels_i = gt_rels_np[gt_rels_np[:, 0] == im_ind, 1:]\n\n        # [num_pred, num_gt]\n        pred_boxes_i = pred_boxes_np[pred_ind]\n        pred_boxlabels_i = pred_boxlabels_np[pred_ind]\n\n        ious = bbox_overlaps(pred_boxes_i, gt_boxes_i)\n        is_match = (pred_boxlabels_i[:,None] == gt_classes_i[None]) & (ious >= fg_thresh)\n\n        # FOR BG. Limit ourselves to only IOUs that overlap, but are not the exact same box\n        pbi_iou = bbox_overlaps(pred_boxes_i, pred_boxes_i)\n        if filter_non_overlap:\n            rel_possibilities = (pbi_iou < 1) & (pbi_iou > 0)\n            rels_intersect = rel_possibilities\n        else:\n            rel_possibilities = np.ones((pred_boxes_i.shape[0], pred_boxes_i.shape[0]),\n                                        dtype=np.int64) - np.eye(pred_boxes_i.shape[0],\n                                                                 dtype=np.int64)\n            rels_intersect = (pbi_iou < 1) & (pbi_iou > 0)\n\n        # ONLY select relations between ground truth because otherwise we get useless data\n        rel_possibilities[pred_boxlabels_i == 0] = 0\n        rel_possibilities[:, pred_boxlabels_i == 0] = 0\n\n        # Sample the GT relationships.\n        fg_rels = []\n        p_size = []\n        for i, (from_gtind, to_gtind, rel_id) in enumerate(gt_rels_i):\n            fg_rels_i = []\n            fg_scores_i = []\n\n            for from_ind in np.where(is_match[:, from_gtind])[0]:\n                for to_ind in np.where(is_match[:, to_gtind])[0]:\n                    if from_ind != to_ind:\n                        fg_rels_i.append((from_ind, 
to_ind, rel_id))\n                        fg_scores_i.append((ious[from_ind, from_gtind] * ious[to_ind, to_gtind]))\n                        rel_possibilities[from_ind, to_ind] = 0\n            if len(fg_rels_i) == 0:\n                continue\n            p = np.array(fg_scores_i)\n            p = p / p.sum()\n            p_size.append(p.shape[0])\n            num_to_add = min(p.shape[0], num_sample_per_gt)\n            for rel_to_add in npr.choice(p.shape[0], p=p, size=num_to_add, replace=False):\n                fg_rels.append(fg_rels_i[rel_to_add])\n\n        fg_rels = np.array(fg_rels, dtype=np.int64)\n        if fg_rels.size > 0 and fg_rels.shape[0] > fg_rels_per_image:\n            fg_rels = fg_rels[npr.choice(fg_rels.shape[0], size=fg_rels_per_image, replace=False)]\n        elif fg_rels.size == 0:\n            fg_rels = np.zeros((0, 3), dtype=np.int64)\n\n        bg_rels = np.column_stack(np.where(rel_possibilities))\n        bg_rels = np.column_stack((bg_rels, np.zeros(bg_rels.shape[0], dtype=np.int64)))\n\n        num_bg_rel = min(64 - fg_rels.shape[0], bg_rels.shape[0])\n        if bg_rels.size > 0:\n            # Sample 4x as many intersecting relationships as non-intersecting.\n            # bg_rels_intersect = rels_intersect[bg_rels[:, 0], bg_rels[:, 1]]\n            # p = bg_rels_intersect.astype(np.float32)\n            # p[bg_rels_intersect == 0] = 0.2\n            # p[bg_rels_intersect == 1] = 0.8\n            # p /= p.sum()\n            bg_rels = bg_rels[\n                np.random.choice(bg_rels.shape[0],\n                                 #p=p,\n                                 size=num_bg_rel, replace=False)]\n        else:\n            bg_rels = np.zeros((0, 3), dtype=np.int64)\n\n        if fg_rels.size == 0 and bg_rels.size == 0:\n            # Just put something here\n            bg_rels = np.array([[0, 0, 0]], dtype=np.int64)\n\n        # print(\"GTR {} -> AR {} vs {}\".format(gt_rels.shape, fg_rels.shape, bg_rels.shape))\n        
all_rels_i = np.concatenate((fg_rels, bg_rels), 0)\n        all_rels_i[:,0:2] += num_box_seen\n\n        all_rels_i = all_rels_i[np.lexsort((all_rels_i[:,1], all_rels_i[:,0]))]\n\n        rel_labels.append(np.column_stack((\n            im_ind*np.ones(all_rels_i.shape[0], dtype=np.int64),\n            all_rels_i,\n        )))\n\n        num_box_seen += pred_boxes_i.shape[0]\n    rel_labels = torch.LongTensor(np.concatenate(rel_labels, 0)).cuda(rpn_rois.get_device(),\n                                                                      async=True)\n    return rel_labels\n"
  },
  {
    "path": "lib/fpn/roi_align/Makefile",
    "content": "all: src/cuda/roi_align.cu.o\n\tpython build.py\n\nsrc/cuda/roi_align.cu.o: src/cuda/roi_align_kernel.cu\n\t$(MAKE) -C src/cuda\n\nclean:\n\t$(MAKE) -C src/cuda clean\n"
  },
  {
    "path": "lib/fpn/roi_align/__init__.py",
    "content": ""
  },
  {
    "path": "lib/fpn/roi_align/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/fpn/roi_align/_ext/roi_align/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._roi_align import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        locals[symbol] = _wrap_function(fn, _ffi)\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/fpn/roi_align/build.py",
    "content": "import os\nimport torch\nfrom torch.utils.ffi import create_extension\n# Might have to export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}\n\n# sources = ['src/roi_align.c']\n# headers = ['src/roi_align.h']\nsources = []\nheaders = []\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n    print('Including CUDA code.')\n    sources += ['src/roi_align_cuda.c']\n    headers += ['src/roi_align_cuda.h']\n    defines += [('WITH_CUDA', None)]\n    with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nprint(this_file)\nextra_objects = ['src/cuda/roi_align.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n    '_ext.roi_align',\n    headers=headers,\n    sources=sources,\n    define_macros=defines,\n    relative_to=__file__,\n    with_cuda=with_cuda,\n    extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/fpn/roi_align/functions/__init__.py",
    "content": ""
  },
  {
    "path": "lib/fpn/roi_align/functions/roi_align.py",
    "content": "\"\"\"\nperforms ROI aligning\n\"\"\"\n\nimport torch\nfrom torch.autograd import Function\nfrom .._ext import roi_align\n\nclass RoIAlignFunction(Function):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n        self.feature_size = None\n\n    def forward(self, features, rois):\n        self.save_for_backward(rois)\n\n        rois_normalized = rois.clone()\n\n        self.feature_size = features.size()\n        batch_size, num_channels, data_height, data_width = self.feature_size\n\n        height = (data_height -1) / self.spatial_scale\n        width = (data_width - 1) / self.spatial_scale\n\n        rois_normalized[:,1] /= width\n        rois_normalized[:,2] /= height\n        rois_normalized[:,3] /= width\n        rois_normalized[:,4] /= height\n\n\n        num_rois = rois.size(0)\n\n        output = features.new(num_rois, num_channels, self.aligned_height,\n            self.aligned_width).zero_()\n\n        if features.is_cuda:\n            res = roi_align.roi_align_forward_cuda(self.aligned_height,\n                                             self.aligned_width,\n                                             self.spatial_scale, features,\n                                             rois_normalized, output)\n            assert res == 1\n        else:\n            raise ValueError\n\n        return output\n\n    def backward(self, grad_output):\n        assert(self.feature_size is not None and grad_output.is_cuda)\n\n        rois = self.saved_tensors[0]\n\n        rois_normalized = rois.clone()\n\n        batch_size, num_channels, data_height, data_width = self.feature_size\n\n        height = (data_height -1) / self.spatial_scale\n        width = (data_width - 1) / self.spatial_scale\n\n        rois_normalized[:,1] /= width\n        rois_normalized[:,2] /= height\n     
   rois_normalized[:,3] /= width\n        rois_normalized[:,4] /= height\n\n        grad_input = rois_normalized.new(batch_size, num_channels, data_height,\n                                  data_width).zero_()\n        res = roi_align.roi_align_backward_cuda(self.aligned_height,\n                                          self.aligned_width,\n                                          self.spatial_scale, grad_output,\n                                          rois_normalized, grad_input)\n        assert res == 1\n        return grad_input, None\n"
  },
  {
    "path": "lib/fpn/roi_align/modules/__init__.py",
    "content": ""
  },
  {
    "path": "lib/fpn/roi_align/modules/roi_align.py",
    "content": "from torch.nn.modules.module import Module\nfrom torch.nn.functional import avg_pool2d, max_pool2d\nfrom ..functions.roi_align import RoIAlignFunction\n\n\nclass RoIAlign(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlign, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        return RoIAlignFunction(self.aligned_height, self.aligned_width,\n                                self.spatial_scale)(features, rois)\n\nclass RoIAlignAvg(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlignAvg, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        x =  RoIAlignFunction(self.aligned_height+1, self.aligned_width+1,\n                                self.spatial_scale)(features, rois)\n        return avg_pool2d(x, kernel_size=2, stride=1)\n\nclass RoIAlignMax(Module):\n    def __init__(self, aligned_height, aligned_width, spatial_scale):\n        super(RoIAlignMax, self).__init__()\n\n        self.aligned_width = int(aligned_width)\n        self.aligned_height = int(aligned_height)\n        self.spatial_scale = float(spatial_scale)\n\n    def forward(self, features, rois):\n        x =  RoIAlignFunction(self.aligned_height+1, self.aligned_width+1,\n                                self.spatial_scale)(features, rois)\n        return max_pool2d(x, kernel_size=2, stride=1)\n"
  },
  {
    "path": "lib/fpn/roi_align/src/cuda/Makefile",
    "content": "all: roi_align_kernel.cu roi_align_kernel.h\n\t/usr/local/cuda/bin/nvcc -c -o roi_align.cu.o roi_align_kernel.cu --compiler-options -fPIC -gencode arch=compute_61,code=sm_61\nclean:\n\trm roi_align.cu.o\n"
  },
  {
    "path": "lib/fpn/roi_align/src/cuda/roi_align_kernel.cu",
    "content": "#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n#include <math.h>\n#include <float.h>\n#include \"roi_align_kernel.h\"\n\n#define CUDA_1D_KERNEL_LOOP(i, n)                            \\\n    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; \\\n            i += blockDim.x * gridDim.x)\n\n\n    __global__ void ROIAlignForward(const int nthreads, const float* image_ptr, const float* boxes_ptr,\n         int num_boxes, int batch, int image_height, int image_width, int crop_height,\n         int crop_width, int depth, float extrapolation_value, float* crops_ptr) {\n    CUDA_1D_KERNEL_LOOP(out_idx, nthreads) {\n        // (n, c, ph, pw) is an element in the aligned output\n        int idx = out_idx;\n        const int x = idx % crop_width;\n        idx /= crop_width;\n        const int y = idx % crop_height;\n        idx /= crop_height;\n        const int d = idx % depth;\n        const int b = idx / depth;\n\n        const int b_in = int(boxes_ptr[b*5]);\n        const float x1 = boxes_ptr[b * 5 + 1];\n        const float y1 = boxes_ptr[b * 5 + 2];\n        const float x2 = boxes_ptr[b * 5 + 3];\n        const float y2 = boxes_ptr[b * 5 + 4];\n        if (b_in < 0 || b_in >= batch) {\n            continue;\n        }\n\n        const float height_scale =\n            (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1)\n                              : 0;\n        const float width_scale =\n            (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) : 0;\n\n        const float in_y = (crop_height > 1)\n                               ? y1 * (image_height - 1) + y * height_scale\n                               : 0.5 * (y1 + y2) * (image_height - 1);\n        if (in_y < 0 || in_y > image_height - 1) {\n            crops_ptr[out_idx] = extrapolation_value;\n            continue;\n        }\n\n        const float in_x = (crop_width > 1)\n                               ? 
x1 * (image_width - 1) + x * width_scale\n                               : 0.5 * (x1 + x2) * (image_width - 1);\n        if (in_x < 0 || in_x > image_width - 1) {\n          crops_ptr[out_idx] = extrapolation_value;\n          continue;\n        }\n\n        const int top_y_index = floorf(in_y);\n        const int bottom_y_index = ceilf(in_y);\n        const float y_lerp = in_y - top_y_index;\n\n        const int left_x_index = floorf(in_x);\n        const int right_x_index = ceilf(in_x);\n        const float x_lerp = in_x - left_x_index;\n\n        const float top_left = image_ptr[((b_in*depth + d) * image_height\n            + top_y_index) * image_width + left_x_index];\n        const float top_right = image_ptr[((b_in*depth + d) * image_height\n            + top_y_index) * image_width + right_x_index];\n        const float bottom_left = image_ptr[((b_in*depth + d) * image_height\n            + bottom_y_index) * image_width + left_x_index];\n        const float bottom_right = image_ptr[((b_in*depth + d) * image_height\n            + bottom_y_index) * image_width + right_x_index];\n\n        const float top = top_left + (top_right - top_left) * x_lerp;\n        const float bottom = bottom_left + (bottom_right - bottom_left) * x_lerp;\n        crops_ptr[out_idx] = top + (bottom - top) * y_lerp;\n        }\n    }\n\n    int ROIAlignForwardLaucher(const float* image_ptr, const float* boxes_ptr,\n         int num_boxes,  int batch, int image_height, int image_width, int crop_height,\n         int crop_width, int depth, float extrapolation_value, float* crops_ptr, cudaStream_t stream) {\n\n        const int kThreadsPerBlock = 1024;\n        const int output_size = num_boxes * crop_height * crop_width * depth;\n        cudaError_t err;\n\n        ROIAlignForward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>\n        (output_size, image_ptr, boxes_ptr, num_boxes, batch, image_height, image_width,\n         crop_height, 
crop_width, depth, extrapolation_value, crops_ptr);\n\n        err = cudaGetLastError();\n        if(cudaSuccess != err) {\n            fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n            exit( -1 );\n        }\n\n        return 1;\n    }\n\n__global__ void ROIAlignBackward(\n    const int nthreads, const float* grads_ptr, const float* boxes_ptr,\n    int num_boxes, int batch, int image_height,\n    int image_width, int crop_height, int crop_width, int depth,\n    float* grads_image_ptr) {\n  CUDA_1D_KERNEL_LOOP(out_idx, nthreads) {\n\n    // out_idx = d + depth * (w + crop_width * (h + crop_height * b))\n    int idx = out_idx;\n    const int x = idx % crop_width;\n    idx /= crop_width;\n    const int y = idx % crop_height;\n    idx /= crop_height;\n    const int d = idx % depth;\n    const int b = idx / depth;\n\n    const int b_in = boxes_ptr[b * 5];\n    const float x1 = boxes_ptr[b * 5 + 1];\n    const float y1 = boxes_ptr[b * 5 + 2];\n    const float x2 = boxes_ptr[b * 5 + 3];\n    const float y2 = boxes_ptr[b * 5 + 4];\n    if (b_in < 0 || b_in >= batch) {\n      continue;\n    }\n\n    const float height_scale =\n        (crop_height > 1) ? (y2 - y1) * (image_height - 1) / (crop_height - 1)\n                          : 0;\n    const float width_scale =\n        (crop_width > 1) ? (x2 - x1) * (image_width - 1) / (crop_width - 1) : 0;\n\n    const float in_y = (crop_height > 1)\n                           ? y1 * (image_height - 1) + y * height_scale\n                           : 0.5 * (y1 + y2) * (image_height - 1);\n    if (in_y < 0 || in_y > image_height - 1) {\n      continue;\n    }\n\n    const float in_x = (crop_width > 1)\n                           ? 
x1 * (image_width - 1) + x * width_scale\n                           : 0.5 * (x1 + x2) * (image_width - 1);\n    if (in_x < 0 || in_x > image_width - 1) {\n      continue;\n    }\n\n    const int top_y_index = floorf(in_y);\n    const int bottom_y_index = ceilf(in_y);\n    const float y_lerp = in_y - top_y_index;\n\n    const int left_x_index = floorf(in_x);\n    const int right_x_index = ceilf(in_x);\n    const float x_lerp = in_x - left_x_index;\n\n    const float dtop = (1 - y_lerp) * grads_ptr[out_idx];\n    atomicAdd(\n        grads_image_ptr + ((b_in*depth + d)*image_height + top_y_index) * image_width + left_x_index,\n        (1 - x_lerp) * dtop);\n    atomicAdd(grads_image_ptr +\n                      ((b_in * depth + d)*image_height+top_y_index)*image_width + right_x_index,\n                       x_lerp * dtop);\n\n    const float dbottom = y_lerp * grads_ptr[out_idx];\n    atomicAdd(grads_image_ptr + ((b_in*depth+d)*image_height+bottom_y_index)*image_width+left_x_index,\n        (1 - x_lerp) * dbottom);\n    atomicAdd(grads_image_ptr + ((b_in*depth+d)*image_height+bottom_y_index)*image_width+right_x_index,\n        x_lerp * dbottom);\n  }\n}\n\nint ROIAlignBackwardLaucher(const float* grads_ptr, const float* boxes_ptr, int num_boxes,\n    int batch, int image_height, int image_width, int crop_height, int crop_width, int depth,\n    float* grads_image_ptr, cudaStream_t stream) {\n        const int kThreadsPerBlock = 1024;\n        const int output_size = num_boxes * crop_height * crop_width * depth;\n        cudaError_t err;\n\n        ROIAlignBackward<<<(output_size + kThreadsPerBlock - 1) / kThreadsPerBlock, kThreadsPerBlock, 0, stream>>>\n        (output_size, grads_ptr, boxes_ptr, num_boxes, batch, image_height, image_width, crop_height,\n        crop_width, depth, grads_image_ptr);\n\n        err = cudaGetLastError();\n        if(cudaSuccess != err) {\n            fprintf( stderr, \"cudaCheckError() failed : %s\\n\", cudaGetErrorString( err ) );\n    
        exit( -1 );\n        }\n\n        return 1;\n    }\n\n\n#ifdef __cplusplus\n}\n#endif\n\n\n"
  },
  {
    "path": "lib/fpn/roi_align/src/cuda/roi_align_kernel.h",
    "content": "#ifndef _ROI_ALIGN_KERNEL\n#define _ROI_ALIGN_KERNEL\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n__global__ void ROIAlignForward(const int nthreads, const float* image_ptr, const float* boxes_ptr, int num_boxes, int batch, int image_height, int image_width, int crop_height,\n  int crop_width, int depth, float extrapolation_value, float* crops_ptr);\n\nint ROIAlignForwardLaucher(\n    const float* image_ptr, const float* boxes_ptr,\n         int num_boxes,  int batch, int image_height, int image_width, int crop_height,\n         int crop_width, int depth, float extrapolation_value, float* crops_ptr, cudaStream_t stream);\n\n__global__ void ROIAlignBackward(const int nthreads, const float* grads_ptr,\n    const float* boxes_ptr, int num_boxes, int batch, int image_height,\n    int image_width, int crop_height, int crop_width, int depth,\n    float* grads_image_ptr);\n\nint ROIAlignBackwardLaucher(const float* grads_ptr, const float* boxes_ptr, int num_boxes,\n    int batch, int image_height, int image_width, int crop_height,\n    int crop_width, int depth, float* grads_image_ptr, cudaStream_t stream);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n\n"
  },
  {
    "path": "lib/fpn/roi_align/src/roi_align_cuda.c",
    "content": "#include <THC/THC.h>\n#include <math.h>\n#include \"cuda/roi_align_kernel.h\"\n\nextern THCState *state;\n\nint roi_align_forward_cuda(int crop_height, int crop_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output)\n{\n    // Grab the input tensor\n    float * image_ptr = THCudaTensor_data(state, features);\n    float * boxes_ptr = THCudaTensor_data(state, rois);\n\n    float * crops_ptr = THCudaTensor_data(state, output);\n\n    // Number of ROIs\n    int num_boxes = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // batch size\n    int batch = THCudaTensor_size(state, features, 0);\n    // data height\n    int image_height = THCudaTensor_size(state, features, 2);\n    // data width\n    int image_width = THCudaTensor_size(state, features, 3);\n    // Number of channels\n    int depth = THCudaTensor_size(state, features, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    float extrapolation_value = 0.0;\n\n    ROIAlignForwardLaucher(\n         image_ptr, boxes_ptr, num_boxes, batch, image_height, image_width,\n         crop_height, crop_width, depth, extrapolation_value, crops_ptr,\n         stream);\n\n    return 1;\n}\n\nint roi_align_backward_cuda(int crop_height, int crop_width, float spatial_scale,\n    THCudaTensor * top_grad, THCudaTensor * rois, THCudaTensor * bottom_grad)\n{\n    // Grab the input tensor\n    float * grads_ptr = THCudaTensor_data(state, top_grad);\n    float * boxes_ptr = THCudaTensor_data(state, rois);\n\n    float * grads_image_ptr = THCudaTensor_data(state, bottom_grad);\n\n    // Number of ROIs\n    int num_boxes = THCudaTensor_size(state, rois, 0);\n    int size_rois = THCudaTensor_size(state, rois, 1);\n    if (size_rois != 5)\n    {\n        return 0;\n    }\n\n    // batch size\n    int batch = THCudaTensor_size(state, 
bottom_grad, 0);\n    // data height\n    int image_height = THCudaTensor_size(state, bottom_grad, 2);\n    // data width\n    int image_width = THCudaTensor_size(state, bottom_grad, 3);\n    // Number of channels\n    int depth = THCudaTensor_size(state, bottom_grad, 1);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n\n    ROIAlignBackwardLaucher(\n        grads_ptr, boxes_ptr, num_boxes, batch, image_height, image_width,\n        crop_height, crop_width, depth, grads_image_ptr, stream);\n    return 1;\n}\n"
  },
  {
    "path": "lib/fpn/roi_align/src/roi_align_cuda.h",
    "content": "int roi_align_forward_cuda(int crop_height, int crop_width, float spatial_scale,\n                        THCudaTensor * features, THCudaTensor * rois, THCudaTensor * output);\n\nint roi_align_backward_cuda(int crop_height, int crop_width, float spatial_scale,\n                        THCudaTensor * top_grad, THCudaTensor * rois,\n                        THCudaTensor * bottom_grad);\n"
  },
  {
    "path": "lib/get_dataset_counts.py",
    "content": "\"\"\"\nGet counts of all of the examples in the dataset. Used for creating the baseline\ndictionary model\n\"\"\"\n\nimport numpy as np\nfrom dataloaders.visual_genome import VG\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom lib.pytorch_misc import nonintersecting_2d_inds\n\n\ndef get_counts(train_data=VG(mode='train', filter_duplicate_rels=False, num_val_im=5000), must_overlap=True):\n    \"\"\"\n    Get counts of all of the relations. Used for modeling directly P(rel | o1, o2)\n    :param train_data: \n    :param must_overlap: \n    :return: \n    \"\"\"\n    fg_matrix = np.zeros((\n        train_data.num_classes,\n        train_data.num_classes,\n        train_data.num_predicates,\n    ), dtype=np.int64)\n\n    bg_matrix = np.zeros((\n        train_data.num_classes,\n        train_data.num_classes,\n    ), dtype=np.int64)\n\n    for ex_ind in range(len(train_data)):\n        gt_classes = train_data.gt_classes[ex_ind].copy()\n        gt_relations = train_data.relationships[ex_ind].copy()\n        gt_boxes = train_data.gt_boxes[ex_ind].copy()\n\n        # For the foreground, we'll just look at everything\n        o1o2 = gt_classes[gt_relations[:, :2]]\n        for (o1, o2), gtr in zip(o1o2, gt_relations[:,2]):\n            fg_matrix[o1, o2, gtr] += 1\n\n        # For the background, get all of the things that overlap.\n        o1o2_total = gt_classes[np.array(\n            box_filter(gt_boxes, must_overlap=must_overlap), dtype=int)]\n        for (o1, o2) in o1o2_total:\n            bg_matrix[o1, o2] += 1\n\n    return fg_matrix, bg_matrix\n\n\ndef box_filter(boxes, must_overlap=False):\n    \"\"\" Only include boxes that overlap as possible relations. 
\n    If no overlapping boxes, use all of them.\"\"\"\n    n_cands = boxes.shape[0]\n\n    overlaps = bbox_overlaps(boxes.astype(np.float), boxes.astype(np.float)) > 0\n    np.fill_diagonal(overlaps, 0)\n\n    all_possib = np.ones_like(overlaps, dtype=np.bool)\n    np.fill_diagonal(all_possib, 0)\n\n    if must_overlap:\n        possible_boxes = np.column_stack(np.where(overlaps))\n\n        if possible_boxes.size == 0:\n            possible_boxes = np.column_stack(np.where(all_possib))\n    else:\n        possible_boxes = np.column_stack(np.where(all_possib))\n    return possible_boxes\n\nif __name__ == '__main__':\n    fg, bg = get_counts(must_overlap=False)\n"
  },
  {
    "path": "lib/get_union_boxes.py",
    "content": "\"\"\"\ncredits to https://github.com/ruotianluo/pytorch-faster-rcnn/blob/master/lib/nets/network.py#L91\n\"\"\"\n\nimport torch\nfrom torch.autograd import Variable\nfrom torch.nn import functional as F\nfrom lib.fpn.roi_align.functions.roi_align import RoIAlignFunction\nfrom lib.draw_rectangles.draw_rectangles import draw_union_boxes\nimport numpy as np\nfrom torch.nn.modules.module import Module\nfrom torch import nn\nfrom config import BATCHNORM_MOMENTUM\n\nclass UnionBoxesAndFeats(Module):\n    def __init__(self, pooling_size=7, stride=16, dim=256, concat=False, use_feats=True):\n        \"\"\"\n        :param pooling_size: Pool the union boxes to this dimension\n        :param stride: pixel spacing in the entire image\n        :param dim: Dimension of the feats\n        :param concat: Whether to concat (yes) or add (False) the representations\n        \"\"\"\n        super(UnionBoxesAndFeats, self).__init__()\n        \n        self.pooling_size = pooling_size\n        self.stride = stride\n\n        self.dim = dim\n        self.use_feats = use_feats\n\n        self.conv = nn.Sequential(\n            nn.Conv2d(2, dim //2, kernel_size=7, stride=2, padding=3, bias=True),\n            nn.ReLU(inplace=True),\n            nn.BatchNorm2d(dim//2, momentum=BATCHNORM_MOMENTUM),\n            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),\n            nn.Conv2d(dim // 2, dim, kernel_size=3, stride=1, padding=1, bias=True),\n            nn.ReLU(inplace=True),\n            nn.BatchNorm2d(dim, momentum=BATCHNORM_MOMENTUM),\n        )\n        self.concat = concat\n\n    def forward(self, fmap, rois, union_inds):\n        union_pools = union_boxes(fmap, rois, union_inds, pooling_size=self.pooling_size, stride=self.stride)\n        if not self.use_feats:\n            return union_pools.detach()\n\n        pair_rois = torch.cat((rois[:, 1:][union_inds[:, 0]], rois[:, 1:][union_inds[:, 1]]),1).data.cpu().numpy()\n        # rects_np = 
get_rect_features(pair_rois, self.pooling_size*2-1) - 0.5\n        rects_np = draw_union_boxes(pair_rois, self.pooling_size*4-1) - 0.5\n        rects = Variable(torch.FloatTensor(rects_np).cuda(fmap.get_device()), volatile=fmap.volatile)\n        if self.concat:\n            return torch.cat((union_pools, self.conv(rects)), 1)\n        return union_pools + self.conv(rects)\n\n# def get_rect_features(roi_pairs, pooling_size):\n#     rects_np = draw_union_boxes(roi_pairs, pooling_size)\n#     # add union + intersection\n#     stuff_to_cat = [\n#         rects_np.max(1),\n#         rects_np.min(1),\n#         np.minimum(1-rects_np[:,0], rects_np[:,1]),\n#         np.maximum(1-rects_np[:,0], rects_np[:,1]),\n#         np.minimum(rects_np[:,0], 1-rects_np[:,1]),\n#         np.maximum(rects_np[:,0], 1-rects_np[:,1]),\n#         np.minimum(1-rects_np[:,0], 1-rects_np[:,1]),\n#         np.maximum(1-rects_np[:,0], 1-rects_np[:,1]),\n#     ]\n#     rects_np = np.concatenate([rects_np] + [x[:,None] for x in stuff_to_cat], 1)\n#     return rects_np\n\n\ndef union_boxes(fmap, rois, union_inds, pooling_size=14, stride=16):\n    \"\"\"\n    :param fmap: (batch_size, d, IM_SIZE/stride, IM_SIZE/stride)\n    :param rois: (num_rois, 5) with [im_ind, x1, y1, x2, y2]\n    :param union_inds: (num_urois, 2) with [roi_ind1, roi_ind2]\n    :param pooling_size: we'll resize to this\n    :param stride:\n    :return:\n    \"\"\"\n    assert union_inds.size(1) == 2\n    im_inds = rois[:,0][union_inds[:,0]]\n    assert (im_inds.data == rois.data[:,0][union_inds[:,1]]).sum() == union_inds.size(0)\n    union_rois = torch.cat((\n        im_inds[:,None],\n        torch.min(rois[:, 1:3][union_inds[:, 0]], rois[:, 1:3][union_inds[:, 1]]),\n        torch.max(rois[:, 3:5][union_inds[:, 0]], rois[:, 3:5][union_inds[:, 1]]),\n    ),1)\n\n    # (num_rois, d, pooling_size, pooling_size)\n    union_pools = RoIAlignFunction(pooling_size, pooling_size,\n                                   
spatial_scale=1/stride)(fmap, union_rois)\n    return union_pools\n \n"
  },
  {
    "path": "lib/lstm/__init__.py",
    "content": ""
  },
  {
    "path": "lib/lstm/decoder_rnn.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.nn.utils.rnn import PackedSequence\nfrom typing import Optional, Tuple\n\nfrom lib.fpn.box_utils import nms_overlaps\nfrom lib.word_vectors import obj_edge_vectors\nfrom .highway_lstm_cuda.alternating_highway_lstm import block_orthogonal\nimport numpy as np\n\ndef get_dropout_mask(dropout_probability: float, tensor_for_masking: torch.autograd.Variable):\n    \"\"\"\n    Computes and returns an element-wise dropout mask for a given tensor, where\n    each element in the mask is dropped out with probability dropout_probability.\n    Note that the mask is NOT applied to the tensor - the tensor is passed to retain\n    the correct CUDA tensor type for the mask.\n\n    Parameters\n    ----------\n    dropout_probability : float, required.\n        Probability of dropping a dimension of the input.\n    tensor_for_masking : torch.Variable, required.\n\n\n    Returns\n    -------\n    A torch.FloatTensor consisting of the binary mask scaled by 1/ (1 - dropout_probability).\n    This scaling ensures expected values and variances of the output of applying this mask\n     and the original tensor are the same.\n    \"\"\"\n    binary_mask = tensor_for_masking.clone()\n    binary_mask.data.copy_(torch.rand(tensor_for_masking.size()) > dropout_probability)\n    # Scale mask by 1/keep_prob to preserve output statistics.\n    dropout_mask = binary_mask.float().div(1.0 - dropout_probability)\n    return dropout_mask\n\n\nclass DecoderRNN(torch.nn.Module):\n    def __init__(self, classes, embed_dim, inputs_dim, hidden_dim, recurrent_dropout_probability=0.2,\n                 use_highway=True, use_input_projection_bias=True):\n        \"\"\"\n        Initializes the RNN\n        :param embed_dim: Dimension of the embeddings\n        :param encoder_hidden_dim: Hidden dim of the encoder, for attention purposes\n        :param hidden_dim: Hidden dim of 
the decoder\n        :param vocab_size: Number of words in the vocab\n        :param bos_token: To use during decoding (non teacher forcing mode))\n        :param bos: beginning of sentence token\n        :param unk: unknown token (not used)\n        \"\"\"\n        super(DecoderRNN, self).__init__()\n\n        self.classes = classes\n        embed_vecs = obj_edge_vectors(['start'] + self.classes, wv_dim=100)\n        self.obj_embed = nn.Embedding(len(self.classes), embed_dim)\n        self.obj_embed.weight.data = embed_vecs\n        self.hidden_size = hidden_dim\n        self.inputs_dim = inputs_dim\n        self.nms_thresh = 0.3\n\n        self.recurrent_dropout_probability=recurrent_dropout_probability\n        self.use_highway=use_highway\n        # We do the projections for all the gates all at once, so if we are\n        # using highway layers, we need some extra projections, which is\n        # why the sizes of the Linear layers change here depending on this flag.\n        if use_highway:\n            self.input_linearity = torch.nn.Linear(self.input_size, 6 * self.hidden_size,\n                                                   bias=use_input_projection_bias)\n            self.state_linearity = torch.nn.Linear(self.hidden_size, 5 * self.hidden_size,\n                                                   bias=True)\n        else:\n            self.input_linearity = torch.nn.Linear(self.input_size, 4 * self.hidden_size,\n                                                   bias=use_input_projection_bias)\n            self.state_linearity = torch.nn.Linear(self.hidden_size, 4 * self.hidden_size,\n                                                   bias=True)\n\n        self.out = nn.Linear(self.hidden_size, len(self.classes))\n        self.reset_parameters()\n\n    @property\n    def input_size(self):\n        return self.inputs_dim + self.obj_embed.weight.size(1)\n\n    def reset_parameters(self):\n        # Use sensible default initializations for parameters.\n    
    block_orthogonal(self.input_linearity.weight.data, [self.hidden_size, self.input_size])\n        block_orthogonal(self.state_linearity.weight.data, [self.hidden_size, self.hidden_size])\n\n        self.state_linearity.bias.data.fill_(0.0)\n        # Initialize forget gate biases to 1.0 as per An Empirical\n        # Exploration of Recurrent Network Architectures, (Jozefowicz, 2015).\n        self.state_linearity.bias.data[self.hidden_size:2 * self.hidden_size].fill_(1.0)\n\n    def lstm_equations(self, timestep_input, previous_state, previous_memory, dropout_mask=None):\n        \"\"\"\n        Does the hairy LSTM math\n        :param timestep_input:\n        :param previous_state:\n        :param previous_memory:\n        :param dropout_mask:\n        :return:\n        \"\"\"\n        # Do the projections for all the gates all at once.\n        projected_input = self.input_linearity(timestep_input)\n        projected_state = self.state_linearity(previous_state)\n\n        # Main LSTM equations using relevant chunks of the big linear\n        # projections of the hidden state and inputs.\n        input_gate = torch.sigmoid(projected_input[:, 0 * self.hidden_size:1 * self.hidden_size] +\n                                   projected_state[:, 0 * self.hidden_size:1 * self.hidden_size])\n        forget_gate = torch.sigmoid(projected_input[:, 1 * self.hidden_size:2 * self.hidden_size] +\n                                    projected_state[:, 1 * self.hidden_size:2 * self.hidden_size])\n        memory_init = torch.tanh(projected_input[:, 2 * self.hidden_size:3 * self.hidden_size] +\n                                 projected_state[:, 2 * self.hidden_size:3 * self.hidden_size])\n        output_gate = torch.sigmoid(projected_input[:, 3 * self.hidden_size:4 * self.hidden_size] +\n                                    projected_state[:, 3 * self.hidden_size:4 * self.hidden_size])\n        memory = input_gate * memory_init + forget_gate * previous_memory\n        
timestep_output = output_gate * torch.tanh(memory)\n\n        if self.use_highway:\n            highway_gate = torch.sigmoid(projected_input[:, 4 * self.hidden_size:5 * self.hidden_size] +\n                                         projected_state[:, 4 * self.hidden_size:5 * self.hidden_size])\n            highway_input_projection = projected_input[:, 5 * self.hidden_size:6 * self.hidden_size]\n            timestep_output = highway_gate * timestep_output + (1 - highway_gate) * highway_input_projection\n\n        # Only do dropout if the dropout prob is > 0.0 and we are in training mode.\n        if dropout_mask is not None and self.training:\n            timestep_output = timestep_output * dropout_mask\n        return timestep_output, memory\n\n    def forward(self,  # pylint: disable=arguments-differ\n                inputs: PackedSequence,\n                initial_state: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,\n                labels=None, boxes_for_nms=None):\n        \"\"\"\n        Parameters\n        ----------\n        inputs : PackedSequence, required.\n            A tensor of shape (batch_size, num_timesteps, input_size)\n            to apply the LSTM over.\n\n        initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)\n            A tuple (state, memory) representing the initial hidden state and memory\n            of the LSTM. 
Each tensor has shape (1, batch_size, output_dimension).\n\n        Returns\n        -------\n        A PackedSequence containing a torch.FloatTensor of shape\n        (batch_size, num_timesteps, output_dimension) representing\n        the outputs of the LSTM per timestep and a tuple containing\n        the LSTM state, with shape (1, batch_size, hidden_size) to\n        match the Pytorch API.\n        \"\"\"\n        if not isinstance(inputs, PackedSequence):\n            raise ValueError('inputs must be PackedSequence but got %s' % (type(inputs)))\n\n        assert isinstance(inputs, PackedSequence)\n        sequence_tensor, batch_lengths = inputs\n        batch_size = batch_lengths[0]\n\n        # We're just doing an LSTM decoder here so ignore states, etc\n        if initial_state is None:\n            previous_memory = Variable(sequence_tensor.data.new()\n                                                  .resize_(batch_size, self.hidden_size).fill_(0))\n            previous_state = Variable(sequence_tensor.data.new()\n                                                 .resize_(batch_size, self.hidden_size).fill_(0))\n        else:\n            assert len(initial_state) == 2\n            previous_state = initial_state[0].squeeze(0)\n            previous_memory = initial_state[1].squeeze(0)\n\n        previous_embed = self.obj_embed.weight[0, None].expand(batch_size, 100)\n\n        if self.recurrent_dropout_probability > 0.0:\n            dropout_mask = get_dropout_mask(self.recurrent_dropout_probability, previous_memory)\n        else:\n            dropout_mask = None\n\n        # Only accumulating label predictions here, discarding everything else\n        out_dists = []\n        out_commitments = []\n\n        end_ind = 0\n        for i, l_batch in enumerate(batch_lengths):\n            start_ind = end_ind\n            end_ind = end_ind + l_batch\n\n            if previous_memory.size(0) != l_batch:\n                previous_memory = previous_memory[:l_batch]\n 
               previous_state = previous_state[:l_batch]\n                previous_embed = previous_embed[:l_batch]\n                if dropout_mask is not None:\n                    dropout_mask = dropout_mask[:l_batch]\n\n            timestep_input = torch.cat((sequence_tensor[start_ind:end_ind], previous_embed), 1)\n\n            previous_state, previous_memory = self.lstm_equations(timestep_input, previous_state,\n                                                                  previous_memory, dropout_mask=dropout_mask)\n\n            pred_dist = self.out(previous_state)\n            out_dists.append(pred_dist)\n\n            if self.training:\n                labels_to_embed = labels[start_ind:end_ind].clone()\n                # Whenever labels are 0 set input to be our max prediction\n                nonzero_pred = pred_dist[:, 1:].max(1)[1] + 1\n                is_bg = (labels_to_embed.data == 0).nonzero()\n                if is_bg.dim() > 0:\n                    labels_to_embed[is_bg.squeeze(1)] = nonzero_pred[is_bg.squeeze(1)]\n                out_commitments.append(labels_to_embed)\n                previous_embed = self.obj_embed(labels_to_embed+1)\n            else:\n                assert l_batch == 1\n                out_dist_sample = F.softmax(pred_dist, dim=1)\n                # if boxes_for_nms is not None:\n                #     out_dist_sample[domains_allowed[i] == 0] = 0.0\n\n                # Greedily take the max here amongst non-bgs\n                best_ind = out_dist_sample[:, 1:].max(1)[1] + 1\n\n                # if boxes_for_nms is not None and i < boxes_for_nms.size(0):\n                #     best_int = int(best_ind.data[0])\n                #     domains_allowed[i:, best_int] *= (1 - is_overlap[i, i:, best_int])\n                out_commitments.append(best_ind)\n                previous_embed = self.obj_embed(best_ind+1)\n\n        # Do NMS here as a post-processing step\n        if boxes_for_nms is not None and not self.training:\n   
         is_overlap = nms_overlaps(boxes_for_nms.data).view(\n                boxes_for_nms.size(0), boxes_for_nms.size(0), boxes_for_nms.size(1)\n            ).cpu().numpy() >= self.nms_thresh\n            # is_overlap[np.arange(boxes_for_nms.size(0)), np.arange(boxes_for_nms.size(0))] = False\n\n            out_dists_sampled = F.softmax(torch.cat(out_dists,0), 1).data.cpu().numpy()\n            out_dists_sampled[:,0] = 0\n\n            out_commitments = out_commitments[0].data.new(len(out_commitments)).fill_(0)\n\n            for i in range(out_commitments.size(0)):\n                box_ind, cls_ind = np.unravel_index(out_dists_sampled.argmax(), out_dists_sampled.shape)\n                out_commitments[int(box_ind)] = int(cls_ind)\n                out_dists_sampled[is_overlap[box_ind,:,cls_ind], cls_ind] = 0.0\n                out_dists_sampled[box_ind] = -1.0 # This way we won't re-sample\n\n            out_commitments = Variable(out_commitments)\n        else:\n            out_commitments = torch.cat(out_commitments, 0)\n\n        return torch.cat(out_dists, 0), out_commitments\n"
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/__init__.py",
    "content": ""
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/_ext/__init__.py",
    "content": ""
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/_ext/highway_lstm_layer/__init__.py",
    "content": "\nfrom torch.utils.ffi import _wrap_function\nfrom ._highway_lstm_layer import lib as _lib, ffi as _ffi\n\n__all__ = []\ndef _import_symbols(locals):\n    for symbol in dir(_lib):\n        fn = getattr(_lib, symbol)\n        locals[symbol] = _wrap_function(fn, _ffi)\n        __all__.append(symbol)\n\n_import_symbols(locals())\n"
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/alternating_highway_lstm.py",
    "content": "from typing import Tuple\n\nfrom overrides import overrides\nimport torch\nfrom torch.autograd import Function, Variable\nfrom torch.nn import Parameter\nfrom torch.nn.utils.rnn import PackedSequence, pad_packed_sequence, pack_padded_sequence\nimport itertools\nfrom ._ext import highway_lstm_layer\n\n\ndef block_orthogonal(tensor, split_sizes, gain=1.0):\n    \"\"\"\n    An initializer which allows initializing model parameters in \"blocks\". This is helpful\n    in the case of recurrent models which use multiple gates applied to linear projections,\n    which can be computed efficiently if they are concatenated together. However, they are\n    separate parameters which should be initialized independently.\n    Parameters\n    ----------\n    tensor : ``torch.Tensor``, required.\n        A tensor to initialize.\n    split_sizes : List[int], required.\n        A list of length ``tensor.ndim()`` specifying the size of the\n        blocks along that particular dimension. E.g. ``[10, 20]`` would\n        result in the tensor being split into chunks of size 10 along the\n        first dimension and 20 along the second.\n    gain : float, optional (default = 1.0)\n        The gain (scaling) applied to the orthogonal initialization.\n    \"\"\"\n\n    if isinstance(tensor, Variable):\n        block_orthogonal(tensor.data, split_sizes, gain)\n        return tensor\n\n    sizes = list(tensor.size())\n    if any([a % b != 0 for a, b in zip(sizes, split_sizes)]):\n        raise ValueError(\"tensor dimensions must be divisible by their respective \"\n                         \"split_sizes. 
Found size: {} and split_sizes: {}\".format(sizes, split_sizes))\n    indexes = [list(range(0, max_size, split))\n               for max_size, split in zip(sizes, split_sizes)]\n    # Iterate over all possible blocks within the tensor.\n    for block_start_indices in itertools.product(*indexes):\n        # A list of tuples containing the index to start at for this block\n        # and the appropriate step size (i.e split_size[i] for dimension i).\n        index_and_step_tuples = zip(block_start_indices, split_sizes)\n        # This is a tuple of slices corresponding to:\n        # tensor[index: index + step_size, ...]. This is\n        # required because we could have an arbitrary number\n        # of dimensions. The actual slices we need are the\n        # start_index: start_index + step for each dimension in the tensor.\n        block_slice = tuple([slice(start_index, start_index + step)\n                             for start_index, step in index_and_step_tuples])\n\n        # let's not initialize empty things to 0s because THAT SOUNDS REALLY BAD\n        assert len(block_slice) == 2\n        sizes = [x.stop - x.start for x in block_slice]\n        tensor_copy = tensor.new(max(sizes), max(sizes))\n        torch.nn.init.orthogonal(tensor_copy, gain=gain)\n        tensor[block_slice] = tensor_copy[0:sizes[0], 0:sizes[1]]\n\n\nclass _AlternatingHighwayLSTMFunction(Function):\n    def __init__(self, input_size: int, hidden_size: int, num_layers: int, train: bool) -> None:\n        super(_AlternatingHighwayLSTMFunction, self).__init__()\n        self.input_size = input_size\n        self.hidden_size = hidden_size\n        self.num_layers = num_layers\n        self.train = train\n\n    @overrides\n    def forward(self,  # pylint: disable=arguments-differ\n                inputs: torch.Tensor,\n                weight: torch.Tensor,\n                bias: torch.Tensor,\n                state_accumulator: torch.Tensor,\n                memory_accumulator: torch.Tensor,\n 
               dropout_mask: torch.Tensor,\n                lengths: torch.Tensor,\n                gates: torch.Tensor) -> Tuple[torch.Tensor, None]:\n        sequence_length, batch_size, input_size = inputs.size()\n        tmp_i = inputs.new(batch_size, 6 * self.hidden_size)\n        tmp_h = inputs.new(batch_size, 5 * self.hidden_size)\n        is_training = 1 if self.train else 0\n        highway_lstm_layer.highway_lstm_forward_cuda(input_size,  # type: ignore # pylint: disable=no-member\n                                                     self.hidden_size,\n                                                     batch_size,\n                                                     self.num_layers,\n                                                     sequence_length,\n                                                     inputs,\n                                                     lengths,\n                                                     state_accumulator,\n                                                     memory_accumulator,\n                                                     tmp_i,\n                                                     tmp_h,\n                                                     weight,\n                                                     bias,\n                                                     dropout_mask,\n                                                     gates,\n                                                     is_training)\n\n        self.save_for_backward(inputs, lengths, weight, bias, state_accumulator,\n                               memory_accumulator, dropout_mask, gates)\n\n        # The state_accumulator has shape: (num_layers, sequence_length + 1, batch_size, hidden_size)\n        # so for the output, we want the last layer and all but the first timestep, which was the\n        # initial state.\n        output = state_accumulator[-1, 1:, :, :]\n        return output, state_accumulator[:, 1:, :, :]\n\n    @overrides\n 
   def backward(self, grad_output, grad_hy):  # pylint: disable=arguments-differ\n\n        (inputs, lengths, weight, bias, state_accumulator,  # pylint: disable=unpacking-non-sequence\n         memory_accumulator, dropout_mask, gates) = self.saved_tensors\n\n        inputs = inputs.contiguous()\n        sequence_length, batch_size, input_size = inputs.size()\n        parameters_need_grad = 1 if self.needs_input_grad[1] else 0  # pylint: disable=unsubscriptable-object\n\n        grad_input = inputs.new().resize_as_(inputs).zero_()\n        grad_state_accumulator = inputs.new().resize_as_(state_accumulator).zero_()\n        grad_memory_accumulator = inputs.new().resize_as_(memory_accumulator).zero_()\n        grad_weight = inputs.new()\n        grad_bias = inputs.new()\n        grad_dropout = None\n        grad_lengths = None\n        grad_gates = None\n\n        if parameters_need_grad:\n            grad_weight.resize_as_(weight).zero_()\n            grad_bias.resize_as_(bias).zero_()\n\n        tmp_i_gates_grad = inputs.new().resize_(batch_size, 6 * self.hidden_size).zero_()\n        tmp_h_gates_grad = inputs.new().resize_(batch_size, 5 * self.hidden_size).zero_()\n\n        is_training = 1 if self.train else 0\n        highway_lstm_layer.highway_lstm_backward_cuda(input_size,  # pylint: disable=no-member\n                                                      self.hidden_size,\n                                                      batch_size,\n                                                      self.num_layers,\n                                                      sequence_length,\n                                                      grad_output,\n                                                      lengths,\n                                                      grad_state_accumulator,\n                                                      grad_memory_accumulator,\n                                                      inputs,\n                                 
                     state_accumulator,\n                                                      memory_accumulator,\n                                                      weight,\n                                                      gates,\n                                                      dropout_mask,\n                                                      tmp_h_gates_grad,\n                                                      tmp_i_gates_grad,\n                                                      grad_hy,\n                                                      grad_input,\n                                                      grad_weight,\n                                                      grad_bias,\n                                                      is_training,\n                                                      parameters_need_grad)\n\n        return (grad_input, grad_weight, grad_bias, grad_state_accumulator,\n                grad_memory_accumulator, grad_dropout, grad_lengths, grad_gates)\n\n\nclass AlternatingHighwayLSTM(torch.nn.Module):\n    \"\"\"\n    A stacked LSTM with LSTM layers which alternate between going forwards over\n    the sequence and going backwards, with highway connections between each of\n    the alternating layers. 
This implementation is based on the description in\n    `Deep Semantic Role Labelling - What works and what's next\n    <https://homes.cs.washington.edu/~luheng/files/acl2017_hllz.pdf>`_ .\n\n    Parameters\n    ----------\n    input_size : int, required\n        The dimension of the inputs to the LSTM.\n    hidden_size : int, required\n        The dimension of the outputs of the LSTM.\n    num_layers : int, required\n        The number of stacked LSTMs to use.\n    recurrent_dropout_probability: float, optional (default = 0.0)\n        The dropout probability to be used in a dropout scheme as stated in\n        `A Theoretically Grounded Application of Dropout in Recurrent Neural Networks\n        <https://arxiv.org/abs/1512.05287>`_ .\n\n    Returns\n    -------\n    output : PackedSequence\n        The outputs of the interleaved LSTMs per timestep. A tensor of shape\n        (batch_size, max_timesteps, hidden_size) where for a given batch\n        element, all outputs past the sequence length for that batch are\n        zero tensors.\n    \"\"\"\n\n    def __init__(self,\n                 input_size: int,\n                 hidden_size: int,\n                 num_layers: int = 1,\n                 recurrent_dropout_probability: float = 0) -> None:\n        super(AlternatingHighwayLSTM, self).__init__()\n        self.input_size = input_size\n        self.hidden_size = hidden_size\n        self.num_layers = num_layers\n        self.recurrent_dropout_probability = recurrent_dropout_probability\n        self.training = True\n\n        # Input dimensions consider the fact that we do\n        # all of the LSTM projections (and highway parts)\n        # in a single matrix multiplication.\n        input_projection_size = 6 * hidden_size\n        state_projection_size = 5 * hidden_size\n        bias_size = 5 * hidden_size\n\n        # Here we are creating a single weight and bias with the\n        # parameters for all layers unfolded into it. 
This is necessary\n        # because unpacking and re-packing the weights inside the\n        # kernel would be slow, as it would happen every time it is called.\n        total_weight_size = 0\n        total_bias_size = 0\n        for layer in range(num_layers):\n            layer_input_size = input_size if layer == 0 else hidden_size\n\n            input_weights = input_projection_size * layer_input_size\n            state_weights = state_projection_size * hidden_size\n            total_weight_size += input_weights + state_weights\n\n            total_bias_size += bias_size\n\n        self.weight = Parameter(torch.FloatTensor(total_weight_size))\n        self.bias = Parameter(torch.FloatTensor(total_bias_size))\n        self.reset_parameters()\n\n    def reset_parameters(self) -> None:\n        self.bias.data.zero_()\n        weight_index = 0\n        bias_index = 0\n        for i in range(self.num_layers):\n            input_size = self.input_size if i == 0 else self.hidden_size\n\n            # Create a tensor of the right size and initialize it.\n            init_tensor = self.weight.data.new(input_size, self.hidden_size * 6).zero_()\n            block_orthogonal(init_tensor, [input_size, self.hidden_size])\n            # Copy it into the flat weight.\n            self.weight.data[weight_index: weight_index + init_tensor.nelement()] \\\n                .view_as(init_tensor).copy_(init_tensor)\n            weight_index += init_tensor.nelement()\n\n            # Same for the recurrent connection weight.\n            init_tensor = self.weight.data.new(self.hidden_size, self.hidden_size * 5).zero_()\n            block_orthogonal(init_tensor, [self.hidden_size, self.hidden_size])\n            self.weight.data[weight_index: weight_index + init_tensor.nelement()] \\\n                .view_as(init_tensor).copy_(init_tensor)\n            weight_index += init_tensor.nelement()\n\n            # Set the forget bias to 1.\n            self.bias.data[bias_index + 
self.hidden_size:bias_index + 2 * self.hidden_size].fill_(1)\n            bias_index += 5 * self.hidden_size\n\n    def forward(self, inputs, initial_state=None) -> Tuple[PackedSequence, torch.Tensor]:\n        \"\"\"\n        Parameters\n        ----------\n        inputs : ``PackedSequence``, required.\n            A batch first ``PackedSequence`` to run the stacked LSTM over.\n        initial_state : Tuple[torch.Tensor, torch.Tensor], optional, (default = None)\n            Currently, this is ignored.\n\n        Returns\n        -------\n        output_sequence : ``PackedSequence``\n            The encoded sequence of shape (batch_size, sequence_length, hidden_size)\n        final_states: ``torch.Tensor``\n            The per-layer final (state, memory) states of the LSTM, each with shape\n            (num_layers, batch_size, hidden_size).\n        \"\"\"\n        inputs, lengths = pad_packed_sequence(inputs, batch_first=False)\n\n        sequence_length, batch_size, _ = inputs.size()\n        accumulator_shape = [self.num_layers, sequence_length + 1, batch_size, self.hidden_size]\n        state_accumulator = Variable(inputs.data.new(*accumulator_shape).zero_(), requires_grad=False)\n        memory_accumulator = Variable(inputs.data.new(*accumulator_shape).zero_(), requires_grad=False)\n\n        dropout_weights = inputs.data.new().resize_(self.num_layers, batch_size, self.hidden_size).fill_(1.0)\n        if self.training:\n            # Normalize by 1 - dropout_prob to preserve the output statistics of the layer.\n            dropout_weights.bernoulli_(1 - self.recurrent_dropout_probability) \\\n                .div_((1 - self.recurrent_dropout_probability))\n\n        dropout_weights = Variable(dropout_weights, requires_grad=False)\n        gates = Variable(inputs.data.new().resize_(self.num_layers,\n                                                   sequence_length,\n                                                   batch_size, 6 * self.hidden_size))\n\n     
   lengths_variable = Variable(torch.IntTensor(lengths))\n        implementation = _AlternatingHighwayLSTMFunction(self.input_size,\n                                                         self.hidden_size,\n                                                         num_layers=self.num_layers,\n                                                         train=self.training)\n        output, _ = implementation(inputs, self.weight, self.bias, state_accumulator,\n                                   memory_accumulator, dropout_weights, lengths_variable, gates)\n\n        output = pack_padded_sequence(output, lengths, batch_first=False)\n        return output, None\n"
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/build.py",
    "content": "# pylint: disable=invalid-name\nimport os\nimport torch\nfrom torch.utils.ffi import create_extension\n\nif not torch.cuda.is_available():\n    raise Exception('HighwayLSTM can only be compiled with CUDA')\n\nsources = ['src/highway_lstm_cuda.c']\nheaders = ['src/highway_lstm_cuda.h']\ndefines = [('WITH_CUDA', None)]\nwith_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\nextra_objects = ['src/highway_lstm_kernel.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n        '_ext.highway_lstm_layer',\n        headers=headers,\n        sources=sources,\n        define_macros=defines,\n        relative_to=__file__,\n        with_cuda=with_cuda,\n        extra_objects=extra_objects\n        )\n\nif __name__ == '__main__':\n    ffi.build()\n"
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/make.sh",
    "content": "#!/usr/bin/env bash\n\nCUDA_PATH=/usr/local/cuda/\n\n# Which CUDA capabilities do we want to pre-build for?\n# https://developer.nvidia.com/cuda-gpus\n#   Compute/shader model   Cards\n#   61                    P4, P40, Titan X\n#   60                    P100\n#   52                    M40\n#   37                    K80\n#   35                    K40, K20\n#   30                    K10, Grid K520 (AWS G2)\n\nCUDA_MODELS=(52 61)\n\n# Nvidia doesn't guarantee binary compatability across GPU versions.\n# However, binary compatibility within one GPU generation can be guaranteed\n# under certain conditions because they share the basic instruction set.\n# This is the case between two GPU versions that do not show functional \n# differences at all (for instance when one version is a scaled down version\n# of the other), or when one version is functionally included in the other.\n\n# To fix this problem, we can create a 'fat binary' which generates multiple\n# translations of the CUDA source. The most appropriate version is chosen at\n# runtime by the CUDA driver. See:\n# http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-compilation\n# http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#fatbinaries\nCUDA_MODEL_TARGETS=\"\"\nfor i in \"${CUDA_MODELS[@]}\"\ndo\n        CUDA_MODEL_TARGETS+=\" -gencode arch=compute_${i},code=sm_${i}\"\ndone\n\necho \"Building kernel for following target architectures: \"\necho $CUDA_MODEL_TARGETS\n\ncd src\necho \"Compiling kernel\"\n/usr/local/cuda/bin/nvcc -c -o highway_lstm_kernel.cu.o highway_lstm_kernel.cu --compiler-options -fPIC $CUDA_MODEL_TARGETS\ncd ../\npython build.py\n"
  },
  {
    "path": "lib/lstm/highway_lstm_cuda/src/highway_lstm_cuda.c",
    "content": "#include <THC/THC.h>\n#include \"highway_lstm_kernel.h\"\n\nextern THCState *state;\n\nint highway_lstm_forward_cuda(int inputSize, int hiddenSize, int miniBatch,\n        int numLayers, int seqLength,\n        THCudaTensor *x,\n        THIntTensor *lengths,\n        THCudaTensor *h_data,\n        THCudaTensor *c_data,\n        THCudaTensor *tmp_i,\n        THCudaTensor *tmp_h,\n        THCudaTensor *T,\n        THCudaTensor *bias,\n        THCudaTensor *dropout,\n        THCudaTensor *gates,\n        int isTraining) {\n\n    float * x_ptr = THCudaTensor_data(state, x);\n    int * lengths_ptr = THIntTensor_data(lengths);\n    float * h_data_ptr = THCudaTensor_data(state, h_data);\n    float * c_data_ptr = THCudaTensor_data(state, c_data);\n    float * tmp_i_ptr = THCudaTensor_data(state, tmp_i);\n    float * tmp_h_ptr = THCudaTensor_data(state, tmp_h);\n    float * T_ptr = THCudaTensor_data(state, T);\n    float * bias_ptr = THCudaTensor_data(state, bias);\n    float * dropout_ptr = THCudaTensor_data(state, dropout);\n    float * gates_ptr;\n    if (isTraining == 1) {\n        gates_ptr = THCudaTensor_data(state, gates);\n    } else {\n        gates_ptr = NULL;\n    }\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    cublasHandle_t handle = THCState_getCurrentBlasHandle(state);\n\n    highway_lstm_forward_ongpu(inputSize, hiddenSize, miniBatch, numLayers, \n            seqLength, x_ptr, lengths_ptr, h_data_ptr, c_data_ptr, tmp_i_ptr,\n            tmp_h_ptr, T_ptr, bias_ptr, dropout_ptr, gates_ptr,\n            isTraining, stream, handle);\n\n    return 1;\n\n}\n\nint highway_lstm_backward_cuda(int inputSize, int hiddenSize, int miniBatch, int numLayers, int seqLength,\n        THCudaTensor *out_grad,\n        THIntTensor *lengths,\n        THCudaTensor *h_data_grad,\n        THCudaTensor *c_data_grad,\n        THCudaTensor *x,\n        THCudaTensor *h_data,\n        THCudaTensor *c_data,\n        THCudaTensor *T,\n        
THCudaTensor *gates_out,\n        THCudaTensor *dropout_in,\n        THCudaTensor *h_gates_grad,\n        THCudaTensor *i_gates_grad,\n        THCudaTensor *h_out_grad,\n        THCudaTensor *x_grad,\n        THCudaTensor *T_grad,\n        THCudaTensor *bias_grad,\n        int isTraining,\n        int do_weight_grad) {\n\n    float * out_grad_ptr = THCudaTensor_data(state, out_grad);\n    int * lengths_ptr = THIntTensor_data(lengths);\n    float * h_data_grad_ptr = THCudaTensor_data(state, h_data_grad);\n    float * c_data_grad_ptr = THCudaTensor_data(state, c_data_grad);\n    float * x_ptr = THCudaTensor_data(state, x);\n    float * h_data_ptr = THCudaTensor_data(state, h_data);\n    float * c_data_ptr = THCudaTensor_data(state, c_data);\n    float * T_ptr = THCudaTensor_data(state, T);\n    float * gates_out_ptr = THCudaTensor_data(state, gates_out);\n    float * dropout_in_ptr = THCudaTensor_data(state, dropout_in);\n    float * h_gates_grad_ptr = THCudaTensor_data(state, h_gates_grad);\n    float * i_gates_grad_ptr = THCudaTensor_data(state, i_gates_grad);\n    float * h_out_grad_ptr = THCudaTensor_data(state, h_out_grad);\n    float * x_grad_ptr = THCudaTensor_data(state, x_grad);\n    float * T_grad_ptr = THCudaTensor_data(state, T_grad);\n    float * bias_grad_ptr = THCudaTensor_data(state, bias_grad);\n\n    cudaStream_t stream = THCState_getCurrentStream(state);\n    cublasHandle_t handle = THCState_getCurrentBlasHandle(state);\n\n    highway_lstm_backward_ongpu(inputSize, hiddenSize, miniBatch, numLayers,\n            seqLength, out_grad_ptr, lengths_ptr, h_data_grad_ptr, c_data_grad_ptr,\n            x_ptr, h_data_ptr, c_data_ptr, T_ptr, gates_out_ptr, dropout_in_ptr,\n            h_gates_grad_ptr, i_gates_grad_ptr, h_out_grad_ptr,\n            x_grad_ptr, T_grad_ptr, bias_grad_ptr, isTraining, do_weight_grad,\n            stream, handle);\n\n    return 1;\n\n}\n"
  },
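The binding above only marshals raw device pointers; the cell arithmetic itself lives in `elementWise_fp` in `highway_lstm_kernel.cu`. As a reference for what one step computes, here is a hedged NumPy sketch of a single highway-LSTM cell using the gate order the kernels assume (input, forget, candidate, output, highway carry `r`, and the un-squashed linear highway input). `highway_lstm_cell` is an illustrative name, not part of this repo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_lstm_cell(g, c_in, dropout_mask):
    """One cell step in the kernels' gate order:
    g[0]=input, g[1]=forget, g[2]=candidate, g[3]=output,
    g[4]=highway carry (r), g[5]=linear highway input (no nonlinearity)."""
    in_gate     = sigmoid(g[0])
    forget_gate = sigmoid(g[1])
    act_gate    = np.tanh(g[2])
    out_gate    = sigmoid(g[3])
    r_gate      = sigmoid(g[4])
    lin_gate    = g[5]

    c_out = forget_gate * c_in + in_gate * act_gate     # standard LSTM cell state
    h = out_gate * np.tanh(c_out)                       # standard LSTM output
    h = r_gate * h + (1.0 - r_gate) * lin_gate          # highway mix
    return h * dropout_mask, c_out
```

When the carry gate saturates low, the cell passes its linear input straight through, which is the point of the highway connection.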
  {
    "path": "lib/lstm/highway_lstm_cuda/src/highway_lstm_cuda.h",
    "content": "int highway_lstm_forward_cuda(int inputSize, int hiddenSize, int miniBatch, int numLayers, int seqLength,\n    THCudaTensor *x, THIntTensor *lengths, THCudaTensor *h_data,\n    THCudaTensor *c_data, THCudaTensor *tmp_i,\n    THCudaTensor *tmp_h, THCudaTensor *T, THCudaTensor *bias,\n    THCudaTensor *dropout, THCudaTensor *gates, int isTraining);\n\nint highway_lstm_backward_cuda(int inputSize, int hiddenSize, int miniBatch, \n        int numLayers, int seqLength, THCudaTensor *out_grad, THIntTensor *lengths,\n        THCudaTensor *h_data_grad, THCudaTensor *c_data_grad, THCudaTensor *x, \n        THCudaTensor *h_data, THCudaTensor *c_data, THCudaTensor *T,\n        THCudaTensor *gates_out, THCudaTensor *dropout_in,\n        THCudaTensor *h_gates_grad, THCudaTensor *i_gates_grad,\n        THCudaTensor *h_out_grad, THCudaTensor *x_grad,  THCudaTensor *T_grad,\n        THCudaTensor *bias_grad, int isTraining, int do_weight_grad);\n"
  },
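Both declarations take a single flat weight tensor `T`. A small sketch of how the kernels index into it, mirroring the `weightStart` arithmetic in `highway_lstm_kernel.cu`: layer 0 owns a `6H x I` input-projection block plus a `5H x H` recurrent block, and every later layer owns `6H x H + 5H x H = 11*H*H` floats. The helper names `weight_start`/`total_weights` are illustrative, not part of the repo.

```python
def weight_start(layer, input_size, hidden_size):
    """Offset of `layer`'s first weight in the flat T tensor.

    Layer 0 holds a (6H x I) input block plus a (5H x H) recurrent block;
    each subsequent layer holds (6H x H) + (5H x H) = 11*H*H floats."""
    if layer == 0:
        return 0
    return (6 * hidden_size * input_size
            + 5 * hidden_size * hidden_size
            + (layer - 1) * 11 * hidden_size * hidden_size)

def total_weights(num_layers, input_size, hidden_size):
    """Total length of T for num_layers stacked highway-LSTM layers."""
    return weight_start(num_layers, input_size, hidden_size)
```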
  {
    "path": "lib/lstm/highway_lstm_cuda/src/highway_lstm_kernel.cu",
    "content": "#include \"cuda_runtime.h\"\n#include \"curand.h\"\n#include \"cublas_v2.h\"\n#include <iostream>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <float.h>\n#include <stdio.h>\n#include \"highway_lstm_kernel.h\"\n\n#define BLOCK 256\n\n// Define some error checking macros.\n#define cudaErrCheck(stat) { cudaErrCheck_((stat), __FILE__, __LINE__); }\nvoid cudaErrCheck_(cudaError_t stat, const char *file, int line) {\n   if (stat != cudaSuccess) {\n      fprintf(stderr, \"CUDA Error: %s %s %d\\n\", cudaGetErrorString(stat), file, line);\n   }\n}\n\n#define cublasErrCheck(stat) { cublasErrCheck_((stat), __FILE__, __LINE__); }\nvoid cublasErrCheck_(cublasStatus_t stat, const char *file, int line) {\n   if (stat != CUBLAS_STATUS_SUCCESS) {\n      fprintf(stderr, \"cuBLAS Error: %d %s %d\\n\", stat, file, line);\n   }\n}\n\n// Device functions\n__forceinline__ __device__ float sigmoidf(float in) {\n   return 1.f / (1.f + expf(-in));  \n}\n\n__forceinline__ __device__ float dsigmoidf(float in) {\n   float s = sigmoidf(in);\n   return s * (1.f - s);\n}\n\n__forceinline__ __device__ float tanh2f(float in) {\n   float t = tanhf(in);\n   return t*t;\n}\n\n__global__ void elementWise_bp(int hiddenSize, int miniBatch, int numCovered,\n                               // Inputs\n                               float *out_grad,\n                               float *h_out_grad,\n                               float *c_out_grad,\n                               float *c_in,\n                               float *c_out,\n                               float *h_out,\n                               float *gates_out,\n                               float *dropout_in,\n                               // Outputs\n                               float *c_in_grad,\n                               float *i_gates_grad,\n                               float *h_gates_grad,\n                               int training) {\n   int index = blockIdx.x * blockDim.x + threadIdx.x;\n 
  \n   if (index >= numCovered * hiddenSize) return;\n    \n   int batch = index / hiddenSize;\n   int h_gateIndex = (index % hiddenSize) + 5 * batch * hiddenSize;\n   int i_gateIndex = (index % hiddenSize) + 6 * batch * hiddenSize;   \n\n   float d_h = out_grad[index] + h_out_grad[index];\n   d_h = d_h * dropout_in[index];\n\n   float in_gate = gates_out[i_gateIndex];\n   float forget_gate = gates_out[i_gateIndex + 1 * hiddenSize];\n   float act_gate = gates_out[i_gateIndex + 2 * hiddenSize];\n   float out_gate = gates_out[i_gateIndex + 3 * hiddenSize];\n   float r_gate = gates_out[i_gateIndex + 4 * hiddenSize];\n   float lin_gate = gates_out[i_gateIndex + 5 * hiddenSize];\n\n   float d_out = d_h * r_gate;\n   float d_c = d_out * out_gate * (1.f - tanh2f(c_out[index])) + c_out_grad[index];\n   float h_prime = out_gate * tanhf(c_out[index]);\n\n   float d_in_gate = d_c * act_gate * in_gate * (1.f - in_gate);\n   float d_forget_gate = d_c * c_in[index] * forget_gate * (1.f - forget_gate);\n   float d_act_gate = d_c * in_gate * (1.f - act_gate * act_gate);\n   float d_out_gate = d_out * tanhf(c_out[index]) * out_gate * (1.f - out_gate);\n   float d_r_gate = d_h * (h_prime - lin_gate) * r_gate * (1.f - r_gate);\n   float d_lin_gate = d_h * (1 - r_gate);\n\n   i_gates_grad[i_gateIndex] = d_in_gate;\n   i_gates_grad[i_gateIndex + 1 * hiddenSize] = d_forget_gate;\n   i_gates_grad[i_gateIndex + 2 * hiddenSize] = d_act_gate;\n   i_gates_grad[i_gateIndex + 3 * hiddenSize] = d_out_gate;\n   i_gates_grad[i_gateIndex + 4 * hiddenSize] = d_r_gate;\n   i_gates_grad[i_gateIndex + 5 * hiddenSize] = d_lin_gate;\n\n   h_gates_grad[h_gateIndex] = d_in_gate;\n   h_gates_grad[h_gateIndex + 1 * hiddenSize] = d_forget_gate;\n   h_gates_grad[h_gateIndex + 2 * hiddenSize] = d_act_gate;\n   h_gates_grad[h_gateIndex + 3 * hiddenSize] = d_out_gate;\n   h_gates_grad[h_gateIndex + 4 * hiddenSize] = d_r_gate;\n\n   c_in_grad[index] = forget_gate * d_c;\n}\n\n\n// Fused forward kernel\n__global__ 
void elementWise_fp(int hiddenSize, int miniBatch, int numCovered,\n                               float *tmp_h, \n                               float *tmp_i, \n                               float *bias,\n                               float *linearGates,\n                               float *h_out,\n                               float *dropout_in,\n                               float *c_in,\n                               float *c_out,\n                               int training) {\n   int index = blockIdx.x * blockDim.x + threadIdx.x;\n\n   if (index >= numCovered * hiddenSize) return;\n   \n   int batch = index / hiddenSize;\n   int h_gateIndex = (index % hiddenSize) + 5 * batch * hiddenSize;\n   int i_gateIndex = (index % hiddenSize) + 6 * batch * hiddenSize;   \n   \n   float g[6];\n\n   for (int i = 0; i < 5; i++) {\n      g[i] = tmp_i[i * hiddenSize + i_gateIndex] + tmp_h[i * hiddenSize + h_gateIndex];\n      g[i] += bias[i * hiddenSize + index % hiddenSize];\n   }   \n   // extra for highway\n   g[5] = tmp_i[5 * hiddenSize + i_gateIndex];\n   \n   float in_gate     = sigmoidf(g[0]);\n   float forget_gate = sigmoidf(g[1]);\n   float act_gate    = tanhf(g[2]);\n   float out_gate    = sigmoidf(g[3]);\n   float r_gate      = sigmoidf(g[4]);\n   float lin_gate    = g[5];\n\n   if (training == 1) {\n       linearGates[i_gateIndex] = in_gate;\n       linearGates[i_gateIndex + 1 * hiddenSize] = forget_gate;\n       linearGates[i_gateIndex + 2 * hiddenSize] = act_gate;\n       linearGates[i_gateIndex + 3 * hiddenSize] = out_gate;\n       linearGates[i_gateIndex + 4 * hiddenSize] = r_gate;\n       linearGates[i_gateIndex + 5 * hiddenSize] = lin_gate;\n   }\n   \n   float val = (forget_gate * c_in[index]) + (in_gate * act_gate);\n   \n   c_out[index] = val;\n\n   val = out_gate * tanhf(val);                                   \n   val = val * r_gate + (1. 
- r_gate) * lin_gate;\n   val = val * dropout_in[index];\n\n   h_out[index] = val;\n}\n\nvoid highway_lstm_backward_ongpu(int inputSize, int hiddenSize, int miniBatch,\n        int numLayers, int seqLength, float *out_grad, int *lengths,\n        float *h_data_grad, float * c_data_grad, float *x, float *h_data,\n        float *c_data, float *T,\n        float *gates_out, float *dropout_in, float *h_gates_grad,\n        float *i_gates_grad, float *h_out_grad, float *x_grad, float *T_grad, float *bias_grad,\n        int isTraining, int do_weight_grad, cudaStream_t stream, cublasHandle_t handle) {\n\n\n    const int numElements = hiddenSize * miniBatch;\n\n    cudaStream_t stream_i;\n    cudaStream_t stream_h;\n    cudaStream_t stream_wi;\n    cudaStream_t stream_wh;\n    cudaStream_t stream_wb;\n\n    cudaErrCheck(cudaStreamCreate(&stream_i));\n    cudaErrCheck(cudaStreamCreate(&stream_h));\n    cudaErrCheck(cudaStreamCreate(&stream_wi));\n    cudaErrCheck(cudaStreamCreate(&stream_wh));\n    cudaErrCheck(cudaStreamCreate(&stream_wb));\n\n    float one = 1.f;\n    float zero = 0.f;\n\n    float *ones_host = new float[miniBatch];\n    for (int i=0; i < miniBatch; i++) {\n        ones_host[i] = 1.f;\n    }\n    float *ones;\n    cudaErrCheck(cudaMalloc((void**)&ones, miniBatch * sizeof(float)));\n    cudaErrCheck(cudaMemcpy(ones, ones_host, miniBatch * sizeof(float), cudaMemcpyHostToDevice));\n\n    for (int layer = numLayers-1; layer >= 0; layer--) {\n        int direction;\n        int startInd;\n        int currNumCovered;\n        if (layer % 2 == 0) {\n            // forward direction\n            direction = -1;\n            startInd = seqLength-1;\n            currNumCovered = 0;\n        } else {\n            // backward direction\n            direction = 1;\n            startInd = 0;\n            currNumCovered = miniBatch;\n        }\n\n        for (int t = startInd; t < seqLength && t >= 0; t = t + direction) {\n            \n            int prevIndex;\n      
      int prevGradIndex;\n            if (direction == 1) {\n                while (lengths[currNumCovered-1] <= t) {\n                    currNumCovered--;\n                }\n                prevGradIndex = t;\n                prevIndex = (t+2)%(seqLength+1);\n            } else {\n                while ((currNumCovered < miniBatch) && (lengths[currNumCovered] > t)) {\n                    currNumCovered++;\n                }\n                prevGradIndex = (t+2)%(seqLength+1);\n                prevIndex = t;\n            }\n\n\n            float * gradPtr;\n            if (layer == numLayers-1) {\n                gradPtr = out_grad + t * numElements;\n            } else {\n                gradPtr = h_out_grad + t * numElements + layer * seqLength * numElements;\n            }\n\n            cublasErrCheck(cublasSetStream(handle, stream_i));\n\n            dim3 blockDim;\n            dim3 gridDim;\n\n            blockDim.x = BLOCK;\n            gridDim.x = ((currNumCovered * hiddenSize) + blockDim.x - 1) / blockDim.x;               \n\n            elementWise_bp <<< gridDim, blockDim , 0, stream>>> \n                (hiddenSize, miniBatch, currNumCovered,\n                 gradPtr,\n                 h_data_grad + prevGradIndex * numElements + layer * (seqLength + 1) * numElements,\n                 c_data_grad + prevGradIndex * numElements + layer * (seqLength + 1) * numElements,\n                 c_data + prevIndex * numElements + layer * (seqLength + 1) * numElements,\n                 c_data + (t+1) * numElements + layer * (seqLength + 1) * numElements,\n                 h_data + (t+1) * numElements + layer * (seqLength + 1) * numElements,\n                 gates_out + t * 6 * numElements + layer * seqLength * 6 * numElements,\n                 dropout_in + layer * numElements,\n                 c_data_grad + (t+1) * numElements + layer * (seqLength + 1) * numElements,\n                 i_gates_grad,\n                 h_gates_grad,\n                 
isTraining);\n               cudaErrCheck(cudaGetLastError());\n               // END\n\n             cudaErrCheck(cudaDeviceSynchronize());\n\n             float *out_grad_ptr;\n             int weightStart;\n             int inSize;\n             if (layer == 0) {\n                 inSize = inputSize;\n                 out_grad_ptr = x_grad + t * inputSize * miniBatch;\n                 weightStart = 0;\n             } else {\n                 inSize = hiddenSize;\n                 out_grad_ptr = h_out_grad + t * numElements + (layer-1) * seqLength * numElements;\n                weightStart = 6 * hiddenSize * inputSize + 5 * hiddenSize * hiddenSize + (layer - 1) * 11 * hiddenSize * hiddenSize;\n             }\n\n             cublasErrCheck(cublasSgemm(handle,\n                         CUBLAS_OP_T, CUBLAS_OP_N,\n                         inSize, currNumCovered, 6*hiddenSize,\n                         &one,\n                         &T[weightStart],\n                         6 * hiddenSize,\n                         i_gates_grad,\n                         6 * hiddenSize,\n                         &zero,\n                         out_grad_ptr,\n                         inSize));\n\n             cublasErrCheck(cublasSetStream(handle, stream_h));\n\n             cublasErrCheck(cublasSgemm(handle,\n                        CUBLAS_OP_T, CUBLAS_OP_N,\n                        hiddenSize, currNumCovered, 5*hiddenSize,\n                        &one,\n                        &T[weightStart + 6*hiddenSize*inSize],\n                        5 * hiddenSize,\n                        h_gates_grad,\n                        5 * hiddenSize,\n                        &zero,\n                        h_data_grad + (t+1) * numElements + layer * (seqLength+1) * numElements,\n                        hiddenSize));\n\n             if (do_weight_grad == 1) {\n                 float *inputPtr;\n                 if (layer == 0) {\n                     inputPtr = x + t * inputSize * miniBatch;\n   
              } else {\n                     inputPtr = h_data + (t+1) * numElements + (layer - 1) * (seqLength+1) * numElements;\n                 }\n\n                 cublasErrCheck(cublasSetStream(handle, stream_wi));\n\n                 // Update i_weights\n                 cublasErrCheck(cublasSgemm(handle,\n                             CUBLAS_OP_N, CUBLAS_OP_T,\n                             6 * hiddenSize, inSize, currNumCovered,\n                             &one,\n                             i_gates_grad,\n                             6 * hiddenSize,\n                             inputPtr,\n                             inSize,\n                             &one,\n                             &T_grad[weightStart],\n                             6 * hiddenSize));\n\n                 cublasErrCheck(cublasSetStream(handle, stream_wh));\n\n                 // Update h_weights\n                 cublasErrCheck(cublasSgemm(handle,\n                             CUBLAS_OP_N, CUBLAS_OP_T,\n                             5 * hiddenSize, hiddenSize, currNumCovered,\n                             &one,\n                             h_gates_grad,\n                             5 * hiddenSize,\n                             h_data + prevIndex * numElements + layer * (seqLength+1) * numElements,\n                             hiddenSize,\n                             &one,\n                             &T_grad[weightStart + 6 *hiddenSize*inSize],\n                             5 * hiddenSize));\n\n                 cublasErrCheck(cublasSetStream(handle, stream_wb));\n\n                 // Update bias_weights\n                 cublasErrCheck(cublasSgemv(handle,\n                             CUBLAS_OP_N,\n                             5 * hiddenSize, currNumCovered,\n                             &one,\n                             h_gates_grad,\n                             5 * hiddenSize,\n                             ones,\n                             1,\n                          
   &one,\n                             &bias_grad[layer * 5 * hiddenSize],\n                             1));\n             }\n\n           cudaErrCheck(cudaDeviceSynchronize());\n\n        }\n\n    }\n\n   cublasErrCheck(cublasSetStream(handle, stream));\n   cudaErrCheck(cudaStreamDestroy(stream_i));\n   cudaErrCheck(cudaStreamDestroy(stream_h));\n   cudaErrCheck(cudaStreamDestroy(stream_wi));\n   cudaErrCheck(cudaStreamDestroy(stream_wh));\n   cudaErrCheck(cudaStreamDestroy(stream_wb));\n\n   cudaErrCheck(cudaFree(ones));\n   delete [] ones_host;\n\n   cudaErrCheck(cudaDeviceSynchronize());\n}\n\nvoid highway_lstm_forward_ongpu(int inputSize, int hiddenSize, int miniBatch, \n        int numLayers, int seqLength, float *x, int *lengths, float *h_data, \n        float *c_data, float *tmp_i, float *tmp_h, float *T, float *bias,\n        float *dropout, float *gates, int is_training, cudaStream_t stream, cublasHandle_t handle) {\n\n    const int numElements = hiddenSize * miniBatch;\n\n    float zero = 0.f;\n    float one = 1.f;\n\n    cudaStream_t stream_i;\n    cudaStream_t stream_h;\n\n    cudaErrCheck(cudaStreamCreate(&stream_i));\n    cudaErrCheck(cudaStreamCreate(&stream_h));\n\n    for (int layer = 0; layer < numLayers; layer++) {\n        int direction;\n        int startInd;\n        int currNumCovered;\n        if (layer % 2 == 0) {\n            // forward direction\n            direction = 1;\n            startInd = 0;\n            currNumCovered = miniBatch;\n        } else {\n            // backward direction\n            direction = -1;\n            startInd = seqLength-1;\n            currNumCovered = 0;\n        }\n        cublasErrCheck(cublasSetStream(handle, stream));\n\n        for (int t = startInd; t < seqLength && t >= 0; t = t + direction) {\n            \n            int prevIndex;\n            if (direction == 1) {\n                while (lengths[currNumCovered-1] <= t) {\n                    currNumCovered--;\n                }\n            
    prevIndex = t;\n            } else {\n                while ((currNumCovered < miniBatch) && (lengths[currNumCovered] > t)) {\n                    currNumCovered++;\n                }\n                prevIndex = (t+2)%(seqLength+1);\n            }\n\n            int inSize;\n            int weightStart;\n            float *inputPtr;\n            if (layer == 0) {\n                inSize = inputSize;\n                weightStart = 0;\n                inputPtr = x + t * inputSize * miniBatch;\n                prevIndex = t;\n            } else {\n                inSize = hiddenSize;\n                weightStart = 6 * hiddenSize * inputSize + 5 * hiddenSize * hiddenSize + (layer - 1) * 11 * hiddenSize * hiddenSize;\n                inputPtr = h_data + (t+1) * numElements + (layer - 1) * (seqLength+1) * numElements;\n            }\n\n            cublasErrCheck(cublasSetStream(handle, stream_i));\n\n            cublasErrCheck(cublasSgemm(handle,\n                        CUBLAS_OP_N, CUBLAS_OP_N,\n                        6*hiddenSize, currNumCovered, inSize,\n                        &one,\n                        &T[weightStart],\n                        6 * hiddenSize,\n                        inputPtr,\n                        inSize,\n                        &zero,\n                        tmp_i,\n                        6 * hiddenSize));\n\n            cublasErrCheck(cublasSetStream(handle, stream_h));\n\n            cublasErrCheck(cublasSgemm(handle,\n                        CUBLAS_OP_N, CUBLAS_OP_N,\n                        5*hiddenSize, currNumCovered, hiddenSize,\n                        &one,\n                        &T[6 * hiddenSize * inSize + weightStart],\n                        5 * hiddenSize,\n                        h_data + prevIndex * numElements + layer * (seqLength + 1) * numElements,\n                        hiddenSize,\n                        &zero,\n                        tmp_h,\n                        5 * hiddenSize));\n\n            
cudaErrCheck(cudaDeviceSynchronize());\n\n            dim3 blockDim;\n            dim3 gridDim;\n\n            blockDim.x = BLOCK;\n            gridDim.x = ((currNumCovered * hiddenSize) + blockDim.x - 1) / blockDim.x;               \n            elementWise_fp <<< gridDim, blockDim , 0, stream>>> \n                (hiddenSize, miniBatch, currNumCovered,\n                 tmp_h, \n                 tmp_i, \n                 bias + 5 * layer * hiddenSize,\n                 is_training ? gates + 6 * (t * numElements + layer * seqLength * numElements) : NULL,\n                 h_data + (t + 1) * numElements + layer * (seqLength + 1) * numElements,\n                 dropout + layer * numElements,\n                 c_data + prevIndex * numElements + layer * (seqLength + 1) * numElements,\n                 c_data + (t + 1) * numElements + layer * (seqLength + 1) * numElements,\n                 is_training);\n               cudaErrCheck(cudaGetLastError());\n\n            cudaErrCheck(cudaDeviceSynchronize());\n        }\n    }\n\n   cublasErrCheck(cublasSetStream(handle, stream));\n   cudaErrCheck(cudaStreamDestroy(stream_i));\n   cudaErrCheck(cudaStreamDestroy(stream_h));\n\n   cudaErrCheck(cudaDeviceSynchronize());\n}\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
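The backward kernel `elementWise_bp` differentiates the fused cell by hand. As a sanity sketch (assumptions: scalar cell, dropout mask of 1, upstream gradient `d_h`; the function names are illustrative), the analytic input-gate gradient can be checked against a finite difference of the forward cell:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cell_forward(g, c_in):
    """Scalar highway-LSTM cell in the gate order elementWise_fp uses."""
    i, f = sigmoid(g[0]), sigmoid(g[1])
    a, o, r = np.tanh(g[2]), sigmoid(g[3]), sigmoid(g[4])
    lin = g[5]                                  # highway input, identity
    c_out = f * c_in + i * a
    h = r * (o * np.tanh(c_out)) + (1.0 - r) * lin
    return h, c_out

def d_in_gate(g, c_in, d_h=1.0):
    """dh/dg[0], mirroring elementWise_bp:
       d_out = d_h * r;  d_c = d_out * out * (1 - tanh^2(c_out));
       d_in  = d_c * act * in * (1 - in)."""
    i, o, r = sigmoid(g[0]), sigmoid(g[3]), sigmoid(g[4])
    a = np.tanh(g[2])
    _, c_out = cell_forward(g, c_in)
    d_c = d_h * r * o * (1.0 - np.tanh(c_out) ** 2)
    return d_c * a * i * (1.0 - i)
```

The same pattern extends to the other five gate gradients stored in `i_gates_grad`.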
  {
    "path": "lib/lstm/highway_lstm_cuda/src/highway_lstm_kernel.h",
    "content": "#include <cublasXt.h>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nvoid highway_lstm_forward_ongpu(int inputSize, int hiddenSize, int miniBatch, int numLayers, int seqLength, float *x, int *lengths, float*h_data, float *c_data, float *tmp_i, float *tmp_h, float *T, float *bias, float *dropout, float *gates, int is_training, cudaStream_t stream, cublasHandle_t handle);\n\nvoid highway_lstm_backward_ongpu(int inputSize, int hiddenSize, int miniBatch, int numLayers, int seqLength, float *out_grad, int *lengths, float *h_data_grad, float *c_data_grad, float *x, float *h_data, float *c_data, float *T, float *gates_out, float *dropout_in, float *h_gates_grad, float *i_gates_grad, float *h_out_grad, float *x_grad, float *T_grad, float *bias_grad, int isTraining, int do_weight_grad, cudaStream_t stream, cublasHandle_t handle);\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
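Both entry points take `lengths`, which the kernels assume is sorted in decreasing order; at each timestep the host loop shrinks (or grows, in the reverse direction) `currNumCovered` so only still-active sequences are processed. A sketch of that bookkeeping for the forward direction (`num_covered` is a hypothetical helper name):

```python
def num_covered(lengths, t):
    """Number of still-active sequences at timestep t, assuming `lengths`
    is sorted in decreasing order -- the host-side shrink loop
    `while (lengths[currNumCovered-1] <= t) currNumCovered--;`."""
    n = len(lengths)
    while n > 0 and lengths[n - 1] <= t:
        n -= 1
    return n
```

Because lengths are sorted, the active prefix only ever shrinks as `t` advances, so the kernel can carry `currNumCovered` across timesteps instead of recomputing it.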
  {
    "path": "lib/object_detector.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nfrom torch.autograd import Variable\nfrom torch.nn import functional as F\n\nfrom config import ANCHOR_SIZE, ANCHOR_RATIOS, ANCHOR_SCALES\nfrom lib.fpn.generate_anchors import generate_anchors\nfrom lib.fpn.box_utils import bbox_preds, center_size, bbox_overlaps\nfrom lib.fpn.nms.functions.nms import apply_nms\nfrom lib.fpn.proposal_assignments.proposal_assignments_gtbox import proposal_assignments_gtbox\nfrom lib.fpn.proposal_assignments.proposal_assignments_det import proposal_assignments_det\n\nfrom lib.fpn.roi_align.functions.roi_align import RoIAlignFunction\nfrom lib.pytorch_misc import enumerate_by_image, gather_nd, diagonal_inds, Flattener\nfrom torchvision.models.vgg import vgg16\nfrom torchvision.models.resnet import resnet101\nfrom torch.nn.parallel._functions import Gather\n\n\nclass Result(object):\n    \"\"\" little container class for holding the detection result\n        od: object detector, rm: rel model\"\"\"\n\n    def __init__(self, od_obj_dists=None, rm_obj_dists=None,\n                 obj_scores=None, obj_preds=None, obj_fmap=None,\n                 od_box_deltas=None, rm_box_deltas=None,\n                 od_box_targets=None, rm_box_targets=None, od_box_priors=None, rm_box_priors=None,\n                 boxes_assigned=None, boxes_all=None, od_obj_labels=None, rm_obj_labels=None,\n                 rpn_scores=None, rpn_box_deltas=None, rel_labels=None,\n                 im_inds=None, fmap=None, rel_dists=None, rel_inds=None, rel_rep=None):\n        self.__dict__.update(locals())\n        del self.__dict__['self']\n\n    def is_none(self):\n        return all([v is None for k, v in self.__dict__.items() if k != 'self'])\n\n\ndef gather_res(outputs, target_device, dim=0):\n    \"\"\"\n    Assuming the signatures are the same accross results!\n    \"\"\"\n    out = outputs[0]\n    args = {field: Gather.apply(target_device, dim, *[getattr(o, field) 
for o in outputs])\n            for field, v in out.__dict__.items() if v is not None}\n    return type(out)(**args)\n\n\nclass ObjectDetector(nn.Module):\n    \"\"\"\n    Core model for doing object detection + getting the visual features. This could be the first step in\n    a pipeline. We can provide GT rois or use the RPN (which would then be classification!)\n    \"\"\"\n    MODES = ('rpntrain', 'gtbox', 'refinerels', 'proposals')\n\n    def __init__(self, classes, mode='rpntrain', num_gpus=1, nms_filter_duplicates=True,\n                 max_per_img=64, use_resnet=False, thresh=0.05):\n        \"\"\"\n        :param classes: Object classes\n        :param rel_classes: Relationship classes. None if we're not using rel mode\n        :param num_gpus: how many GPUs to use\n        \"\"\"\n        super(ObjectDetector, self).__init__()\n\n        if mode not in self.MODES:\n            raise ValueError(\"invalid mode\")\n        self.mode = mode\n\n        self.classes = classes\n        self.num_gpus = num_gpus\n        self.pooling_size = 7\n        self.nms_filter_duplicates = nms_filter_duplicates\n        self.max_per_img = max_per_img\n        self.use_resnet = use_resnet\n        self.thresh = thresh\n\n        if not self.use_resnet:\n            vgg_model = load_vgg()\n            self.features = vgg_model.features\n            self.roi_fmap = vgg_model.classifier\n            rpn_input_dim = 512\n            output_dim = 4096\n        else:  # Deprecated\n            self.features = load_resnet()\n            self.compress = nn.Sequential(\n                nn.Conv2d(1024, 256, kernel_size=1),\n                nn.ReLU(inplace=True),\n                nn.BatchNorm2d(256),\n            )\n            self.roi_fmap = nn.Sequential(\n                nn.Linear(256 * 7 * 7, 2048),\n                nn.SELU(inplace=True),\n                nn.AlphaDropout(p=0.05),\n                nn.Linear(2048, 2048),\n                nn.SELU(inplace=True),\n                
nn.AlphaDropout(p=0.05),\n            )\n            rpn_input_dim = 1024\n            output_dim = 2048\n\n        self.score_fc = nn.Linear(output_dim, self.num_classes)\n        self.bbox_fc = nn.Linear(output_dim, self.num_classes * 4)\n        self.rpn_head = RPNHead(dim=512, input_dim=rpn_input_dim)\n\n    @property\n    def num_classes(self):\n        return len(self.classes)\n\n    def feature_map(self, x):\n        \"\"\"\n        Produces feature map from the input image\n        :param x: [batch_size, 3, size, size] float32 padded image\n        :return: Feature maps at 1/16 the original size.\n        Each one is [batch_size, dim, IM_SIZE/k, IM_SIZE/k].\n        \"\"\"\n        if not self.use_resnet:\n            return self.features(x)  # Uncomment this for \"stanford\" setting in which it's frozen:      .detach()\n        x = self.features.conv1(x)\n        x = self.features.bn1(x)\n        x = self.features.relu(x)\n        x = self.features.maxpool(x)\n\n        c2 = self.features.layer1(x)\n        c3 = self.features.layer2(c2)\n        c4 = self.features.layer3(c3)\n        return c4\n\n    def obj_feature_map(self, features, rois):\n        \"\"\"\n        Gets the ROI features\n        :param features: [batch_size, dim, IM_SIZE/4, IM_SIZE/4] (features at level p2)\n        :param rois: [num_rois, 5] array of [img_num, x0, y0, x1, y1].\n        :return: [num_rois, #dim] array\n        \"\"\"\n        feature_pool = RoIAlignFunction(self.pooling_size, self.pooling_size, spatial_scale=1 / 16)(\n            self.compress(features) if self.use_resnet else features, rois)\n        return self.roi_fmap(feature_pool.view(rois.size(0), -1))\n\n    def rpn_boxes(self, fmap, im_sizes, image_offset, gt_boxes=None, gt_classes=None, gt_rels=None,\n                  train_anchor_inds=None, proposals=None):\n        \"\"\"\n        Gets boxes from the RPN\n        :param fmap:\n        :param im_sizes:\n        :param image_offset:\n        :param gt_boxes:\n  
      :param gt_classes:\n        :param gt_rels:\n        :param train_anchor_inds:\n        :return:\n        \"\"\"\n        rpn_feats = self.rpn_head(fmap)\n        rois = self.rpn_head.roi_proposals(\n            rpn_feats, im_sizes, nms_thresh=0.7,\n            pre_nms_topn=12000 if self.training and self.mode == 'rpntrain' else 6000,\n            post_nms_topn=2000 if self.training and self.mode == 'rpntrain' else 1000,\n        )\n        if self.training:\n            if gt_boxes is None or gt_classes is None or train_anchor_inds is None:\n                raise ValueError(\n                    \"Must supply GT boxes, GT classes, train_anchor_inds when in train mode\")\n            rpn_scores, rpn_box_deltas = self.rpn_head.anchor_preds(rpn_feats, train_anchor_inds,\n                                                                    image_offset)\n\n            if gt_rels is not None and self.mode == 'rpntrain':\n                raise ValueError(\"Training the object detector and the relationship model with detection \"\n                                 \"at the same time isn't supported\")\n\n            if self.mode == 'refinerels':\n                all_rois = Variable(rois)\n                # Potentially you could add in GT rois if none match\n                # is_match = (bbox_overlaps(rois[:,1:].contiguous(), gt_boxes.data) > 0.5).long()\n                # gt_not_matched = (is_match.sum(0) == 0).nonzero()\n                #\n                # if gt_not_matched.dim() > 0:\n                #     gt_to_add = torch.cat((gt_classes[:,0,None][gt_not_matched.squeeze(1)].float(),\n                #                            gt_boxes[gt_not_matched.squeeze(1)]), 1)\n                #\n                #     all_rois = torch.cat((all_rois, gt_to_add),0)\n                #     num_gt = gt_to_add.size(0)\n                labels = None\n                bbox_targets = None\n                rel_labels = None\n            else:\n                all_rois, labels, 
bbox_targets = proposal_assignments_det(\n                    rois, gt_boxes.data, gt_classes.data, image_offset, fg_thresh=0.5)\n                rel_labels = None\n\n        else:\n            all_rois = Variable(rois, volatile=True)\n            labels = None\n            bbox_targets = None\n            rel_labels = None\n            rpn_box_deltas = None\n            rpn_scores = None\n\n        return all_rois, labels, bbox_targets, rpn_scores, rpn_box_deltas, rel_labels\n\n    def gt_boxes(self, fmap, im_sizes, image_offset, gt_boxes=None, gt_classes=None, gt_rels=None,\n                 train_anchor_inds=None, proposals=None):\n        \"\"\"\n        Gets GT boxes!\n        :param fmap:\n        :param im_sizes:\n        :param image_offset:\n        :param gt_boxes:\n        :param gt_classes:\n        :param gt_rels:\n        :param train_anchor_inds:\n        :return:\n        \"\"\"\n        assert gt_boxes is not None\n        im_inds = gt_classes[:, 0] - image_offset\n        rois = torch.cat((im_inds.float()[:, None], gt_boxes), 1)\n        if gt_rels is not None and self.training:\n            rois, labels, rel_labels = proposal_assignments_gtbox(\n                rois.data, gt_boxes.data, gt_classes.data, gt_rels.data, image_offset,\n                fg_thresh=0.5)\n        else:\n            labels = gt_classes[:, 1]\n            rel_labels = None\n\n        return rois, labels, None, None, None, rel_labels\n\n    def proposal_boxes(self, fmap, im_sizes, image_offset, gt_boxes=None, gt_classes=None, gt_rels=None,\n                       train_anchor_inds=None, proposals=None):\n        \"\"\"\n        Gets boxes from the RPN\n        :param fmap:\n        :param im_sizes:\n        :param image_offset:\n        :param gt_boxes:\n        :param gt_classes:\n        :param gt_rels:\n        :param train_anchor_inds:\n        :return:\n        \"\"\"\n        assert proposals is not None\n\n        rois = filter_roi_proposals(proposals[:, 
2:].data.contiguous(), proposals[:, 1].data.contiguous(),\n                                    np.array([2000] * len(im_sizes)),\n                                    nms_thresh=0.7,\n                                    pre_nms_topn=12000 if self.training and self.mode == 'rpntrain' else 6000,\n                                    post_nms_topn=2000 if self.training and self.mode == 'rpntrain' else 1000,\n                                    )\n        if self.training:\n            all_rois, labels, bbox_targets = proposal_assignments_det(\n                rois, gt_boxes.data, gt_classes.data, image_offset, fg_thresh=0.5)\n\n            # RETRAINING FOR DETECTION HERE.\n            all_rois = torch.cat((all_rois, Variable(rois)), 0)\n        else:\n            all_rois = Variable(rois, volatile=True)\n            labels = None\n            bbox_targets = None\n\n        rpn_scores = None\n        rpn_box_deltas = None\n        rel_labels = None\n\n        return all_rois, labels, bbox_targets, rpn_scores, rpn_box_deltas, rel_labels\n\n    def get_boxes(self, *args, **kwargs):\n        if self.mode == 'gtbox':\n            fn = self.gt_boxes\n        elif self.mode == 'proposals':\n            assert kwargs['proposals'] is not None\n            fn = self.proposal_boxes\n        else:\n            fn = self.rpn_boxes\n        return fn(*args, **kwargs)\n\n    def forward(self, x, im_sizes, image_offset,\n                gt_boxes=None, gt_classes=None, gt_rels=None, proposals=None, train_anchor_inds=None,\n                return_fmap=False):\n        \"\"\"\n        Forward pass for detection\n        :param x: Images@[batch_size, 3, IM_SIZE, IM_SIZE]\n        :param im_sizes: A numpy array of (h, w, scale) for each image.\n        :param image_offset: Offset onto what image we're on for MGPU training (if single GPU this is 0)\n        :param gt_boxes:\n\n        Training parameters:\n        :param gt_boxes: [num_gt, 4] GT boxes over the batch.\n        :param 
gt_classes: [num_gt, 2] gt classes, where each row is (img_id, class)\n        :param proposals: precomputed region proposals (used when mode == 'proposals')\n        :param train_anchor_inds: a [num_train, 2] array of indices for the anchors that will\n                                  be used to compute the training loss. Each (img_ind, fpn_idx)\n        :return: A Result object holding the detections (and, when training, the targets).\n        \"\"\"\n        fmap = self.feature_map(x)\n\n        # Get boxes from RPN\n        rois, obj_labels, bbox_targets, rpn_scores, rpn_box_deltas, rel_labels = \\\n            self.get_boxes(fmap, im_sizes, image_offset, gt_boxes,\n                           gt_classes, gt_rels, train_anchor_inds, proposals=proposals)\n\n        # Now classify them\n        obj_fmap = self.obj_feature_map(fmap, rois)\n        od_obj_dists = self.score_fc(obj_fmap)\n        od_box_deltas = self.bbox_fc(obj_fmap).view(\n            -1, len(self.classes), 4) if self.mode != 'gtbox' else None\n\n        od_box_priors = rois[:, 1:]\n\n        if (not self.training and not self.mode == 'gtbox') or self.mode in ('proposals', 'refinerels'):\n            nms_inds, nms_scores, nms_preds, nms_boxes_assign, nms_boxes, nms_imgs = self.nms_boxes(\n                od_obj_dists,\n                rois,\n                od_box_deltas, im_sizes,\n            )\n            im_inds = nms_imgs + image_offset\n            obj_dists = od_obj_dists[nms_inds]\n            obj_fmap = obj_fmap[nms_inds]\n            box_deltas = od_box_deltas[nms_inds]\n            box_priors = nms_boxes[:, 0]\n\n            if self.training and not self.mode == 'gtbox':\n                # NOTE: If we're doing this during training, we need to assign labels here.\n                pred_to_gtbox = bbox_overlaps(box_priors, gt_boxes).data\n                pred_to_gtbox[im_inds.data[:, None] != gt_classes.data[None, :, 0]] = 0.0\n\n                max_overlaps, argmax_overlaps = pred_to_gtbox.max(1)\n                rm_obj_labels = gt_classes[:, 1][argmax_overlaps]\n                rm_obj_labels[max_overlaps 
< 0.5] = 0\n            else:\n                rm_obj_labels = None\n        else:\n            im_inds = rois[:, 0].long().contiguous() + image_offset\n            nms_scores = None\n            nms_preds = None\n            nms_boxes_assign = None\n            nms_boxes = None\n            box_priors = rois[:, 1:]\n            rm_obj_labels = obj_labels\n            box_deltas = od_box_deltas\n            obj_dists = od_obj_dists\n\n        return Result(\n            od_obj_dists=od_obj_dists,\n            rm_obj_dists=obj_dists,\n            obj_scores=nms_scores,\n            obj_preds=nms_preds,\n            obj_fmap=obj_fmap,\n            od_box_deltas=od_box_deltas,\n            rm_box_deltas=box_deltas,\n            od_box_targets=bbox_targets,\n            rm_box_targets=bbox_targets,\n            od_box_priors=od_box_priors,\n            rm_box_priors=box_priors,\n            boxes_assigned=nms_boxes_assign,\n            boxes_all=nms_boxes,\n            od_obj_labels=obj_labels,\n            rm_obj_labels=rm_obj_labels,\n            rpn_scores=rpn_scores,\n            rpn_box_deltas=rpn_box_deltas,\n            rel_labels=rel_labels,\n            im_inds=im_inds,\n            fmap=fmap if return_fmap else None,\n        )\n\n    def nms_boxes(self, obj_dists, rois, box_deltas, im_sizes):\n        \"\"\"\n        Performs NMS on the boxes\n        :param obj_dists: [#rois, #classes]\n        :param rois: [#rois, 5]\n        :param box_deltas: [#rois, #classes, 4]\n        :param im_sizes: sizes of images\n        :return\n            nms_inds [#nms]\n            nms_scores [#nms]\n            nms_labels [#nms]\n            nms_boxes_assign [#nms, 4]\n            nms_boxes  [#nms, #classes, 4]. 
classid=0 is the box prior.\n        \"\"\"\n        # Now produce the boxes\n        # box deltas is (num_rois, num_classes, 4) but rois is only #(num_rois, 4)\n        boxes = bbox_preds(rois[:, None, 1:].expand_as(box_deltas).contiguous().view(-1, 4),\n                           box_deltas.view(-1, 4)).view(*box_deltas.size())\n\n        # Clip the boxes and get the best N dets per image.\n        inds = rois[:, 0].long().contiguous()\n        dets = []\n        for i, s, e in enumerate_by_image(inds.data):\n            h, w = im_sizes[i, :2]\n            boxes[s:e, :, 0].data.clamp_(min=0, max=w - 1)\n            boxes[s:e, :, 1].data.clamp_(min=0, max=h - 1)\n            boxes[s:e, :, 2].data.clamp_(min=0, max=w - 1)\n            boxes[s:e, :, 3].data.clamp_(min=0, max=h - 1)\n            d_filtered = filter_det(\n                F.softmax(obj_dists[s:e], 1), boxes[s:e], start_ind=s,\n                nms_filter_duplicates=self.nms_filter_duplicates,\n                max_per_img=self.max_per_img,\n                thresh=self.thresh,\n            )\n            if d_filtered is not None:\n                dets.append(d_filtered)\n\n        if len(dets) == 0:\n            print(\"nothing was detected\", flush=True)\n            return None\n        nms_inds, nms_scores, nms_labels = [torch.cat(x, 0) for x in zip(*dets)]\n        twod_inds = nms_inds * boxes.size(1) + nms_labels.data\n        nms_boxes_assign = boxes.view(-1, 4)[twod_inds]\n\n        nms_boxes = torch.cat((rois[:, 1:][nms_inds][:, None], boxes[nms_inds][:, 1:]), 1)\n        return nms_inds, nms_scores, nms_labels, nms_boxes_assign, nms_boxes, inds[nms_inds]\n\n    def __getitem__(self, batch):\n        \"\"\" Hack to do multi-GPU training\"\"\"\n        batch.scatter()\n        if self.num_gpus == 1:\n            return self(*batch[0])\n\n        replicas = nn.parallel.replicate(self, devices=list(range(self.num_gpus)))\n        outputs = nn.parallel.parallel_apply(replicas, [batch[i] for i in 
range(self.num_gpus)])\n\n        if any([x.is_none() for x in outputs]):\n            assert not self.training\n            return None\n        return gather_res(outputs, 0, dim=0)\n\n\ndef filter_det(scores, boxes, start_ind=0, max_per_img=100, thresh=0.001, pre_nms_topn=6000,\n               post_nms_topn=300, nms_thresh=0.3, nms_filter_duplicates=True):\n    \"\"\"\n    Filters the detections for a single image\n    :param scores: [num_rois, num_classes]\n    :param boxes: [num_rois, num_classes, 4]. Assumes the boxes have been clamped\n    :param max_per_img: Max detections per image\n    :param thresh: Threshold for calling it a good box\n    :param nms_filter_duplicates: True if we shouldn't allow multiple detections of the\n           same box (with different labels)\n    :return: a tuple of (inds, scores, labels) for the up-to-max_per_img surviving detections\n    \"\"\"\n\n    valid_cls = (scores[:, 1:].data.max(0)[0] > thresh).nonzero() + 1\n    if valid_cls.dim() == 0:\n        return None\n\n    nms_mask = scores.data.clone()\n    nms_mask.zero_()\n\n    for c_i in valid_cls.squeeze(1).cpu():\n        scores_ci = scores.data[:, c_i]\n        boxes_ci = boxes.data[:, c_i]\n\n        keep = apply_nms(scores_ci, boxes_ci,\n                         pre_nms_topn=pre_nms_topn, post_nms_topn=post_nms_topn,\n                         nms_thresh=nms_thresh)\n        nms_mask[:, c_i][keep] = 1\n\n    dists_all = Variable(nms_mask * scores.data, volatile=True)\n\n    if nms_filter_duplicates:\n        scores_pre, labels_pre = dists_all.data.max(1)\n        inds_all = scores_pre.nonzero()\n        assert inds_all.dim() != 0\n        inds_all = inds_all.squeeze(1)\n\n        labels_all = labels_pre[inds_all]\n        scores_all = scores_pre[inds_all]\n    else:\n        nz = nms_mask.nonzero()\n        assert nz.dim() != 0\n        inds_all = nz[:, 0]\n        labels_all = nz[:, 1]\n        scores_all = scores.data.view(-1)[inds_all * 
scores.data.size(1) + labels_all]\n\n    # dists_all = dists_all[inds_all]\n    # dists_all[:,0] = 1.0-dists_all.sum(1)\n\n    # # Limit to max per image detections\n    vs, idx = torch.sort(scores_all, dim=0, descending=True)\n    idx = idx[vs > thresh]\n    if max_per_img < idx.size(0):\n        idx = idx[:max_per_img]\n\n    inds_all = inds_all[idx] + start_ind\n    scores_all = Variable(scores_all[idx], volatile=True)\n    labels_all = Variable(labels_all[idx], volatile=True)\n    # dists_all = dists_all[idx]\n\n    return inds_all, scores_all, labels_all\n\n\nclass RPNHead(nn.Module):\n    \"\"\"\n    Serves as the class + box outputs for each level in the FPN.\n    \"\"\"\n\n    def __init__(self, dim=512, input_dim=1024):\n        \"\"\"\n        :param aspect_ratios: Aspect ratios for the anchors. NOTE - this can't be changed now\n               as it depends on other things in the C code...\n        \"\"\"\n        super(RPNHead, self).__init__()\n\n        self.anchor_target_dim = 6\n        self.stride = 16\n\n        self.conv = nn.Sequential(\n            nn.Conv2d(input_dim, dim, kernel_size=3, padding=1),\n            nn.ReLU6(inplace=True),  # Tensorflow docs use Relu6, so let's use it too....\n            nn.Conv2d(dim, self.anchor_target_dim * self._A,\n                      kernel_size=1)\n        )\n\n        ans_np = generate_anchors(base_size=ANCHOR_SIZE,\n                                  feat_stride=self.stride,\n                                  anchor_scales=ANCHOR_SCALES,\n                                  anchor_ratios=ANCHOR_RATIOS,\n                                  )\n        self.register_buffer('anchors', torch.FloatTensor(ans_np))\n\n    @property\n    def _A(self):\n        return len(ANCHOR_RATIOS) * len(ANCHOR_SCALES)\n\n    def forward(self, fmap):\n        \"\"\"\n        Gets the class / noclass predictions over all the scales\n\n        :param fmap: [batch_size, dim, IM_SIZE/16, IM_SIZE/16] featuremap\n        :return: 
[batch_size, IM_SIZE/16, IM_SIZE/16, A, 6]\n        \"\"\"\n        rez = self._reshape_channels(self.conv(fmap))\n        rez = rez.view(rez.size(0), rez.size(1), rez.size(2),\n                       self._A, self.anchor_target_dim)\n        return rez\n\n    def anchor_preds(self, preds, train_anchor_inds, image_offset):\n        \"\"\"\n        Get predictions for the training indices\n        :param preds: [batch_size, IM_SIZE/16, IM_SIZE/16, A, 6]\n        :param train_anchor_inds: [num_train, 4] indices into the predictions\n        :return: class_preds: [num_train, 2] array of yes/no\n                 box_preds:   [num_train, 4] array of predicted boxes\n        \"\"\"\n        assert train_anchor_inds.size(1) == 4\n        tai = train_anchor_inds.data.clone()\n        tai[:, 0] -= image_offset\n        train_regions = gather_nd(preds, tai)\n\n        class_preds = train_regions[:, :2]\n        box_preds = train_regions[:, 2:]\n        return class_preds, box_preds\n\n    @staticmethod\n    def _reshape_channels(x):\n        \"\"\" [batch_size, channels, h, w] -> [batch_size, h, w, channels] \"\"\"\n        assert x.dim() == 4\n        batch_size, nc, h, w = x.size()\n\n        x_t = x.view(batch_size, nc, -1).transpose(1, 2).contiguous()\n        x_t = x_t.view(batch_size, h, w, nc)\n        return x_t\n\n    def roi_proposals(self, fmap, im_sizes, nms_thresh=0.7, pre_nms_topn=12000, post_nms_topn=2000):\n        \"\"\"\n        :param fmap: [batch_size, IM_SIZE/16, IM_SIZE/16, A, 6]\n        :param im_sizes:        [batch_size, 3] numpy array of (h, w, scale)\n        :return: ROIS: shape [a <=post_nms_topn, 5] array of ROIS.\n        \"\"\"\n        class_fmap = fmap[:, :, :, :, :2].contiguous()\n\n        # GET THE GOOD BOXES AYY LMAO :')\n        class_preds = F.softmax(class_fmap, 4)[..., 1].data.contiguous()\n\n        box_fmap = fmap[:, :, :, :, 2:].data.contiguous()\n\n        anchor_stacked = torch.cat([self.anchors[None]] * fmap.size(0), 0)\n      
  box_preds = bbox_preds(anchor_stacked.view(-1, 4), box_fmap.view(-1, 4)).view(\n            *box_fmap.size())\n\n        for i, (h, w, scale) in enumerate(im_sizes):\n            # Zero out all the bad boxes h, w, A, 4\n            h_end = int(h) // self.stride\n            w_end = int(w) // self.stride\n            if h_end < class_preds.size(1):\n                class_preds[i, h_end:] = -0.01\n            if w_end < class_preds.size(2):\n                class_preds[i, :, w_end:] = -0.01\n\n            # and clamp the others\n            box_preds[i, :, :, :, 0].clamp_(min=0, max=w - 1)\n            box_preds[i, :, :, :, 1].clamp_(min=0, max=h - 1)\n            box_preds[i, :, :, :, 2].clamp_(min=0, max=w - 1)\n            box_preds[i, :, :, :, 3].clamp_(min=0, max=h - 1)\n\n        sizes = center_size(box_preds.view(-1, 4))\n        class_preds.view(-1)[(sizes[:, 2] < 4) | (sizes[:, 3] < 4)] = -0.01\n        return filter_roi_proposals(box_preds.view(-1, 4), class_preds.view(-1),\n                                    boxes_per_im=np.array([np.prod(box_preds.size()[1:-1])] * fmap.size(0)),\n                                    nms_thresh=nms_thresh,\n                                    pre_nms_topn=pre_nms_topn, post_nms_topn=post_nms_topn)\n\n\ndef filter_roi_proposals(box_preds, class_preds, boxes_per_im, nms_thresh=0.7, pre_nms_topn=12000, post_nms_topn=2000):\n    inds, im_per = apply_nms(\n        class_preds,\n        box_preds,\n        pre_nms_topn=pre_nms_topn,\n        post_nms_topn=post_nms_topn,\n        boxes_per_im=boxes_per_im,\n        nms_thresh=nms_thresh,\n    )\n    img_inds = torch.cat([val * torch.ones(i) for val, i in enumerate(im_per)], 0).cuda(\n        box_preds.get_device())\n    rois = torch.cat((img_inds[:, None], box_preds[inds]), 1)\n    return rois\n\n\ndef load_resnet():\n    model = resnet101(pretrained=True)\n    del model.layer4\n    del model.avgpool\n    del model.fc\n    return model\n\n\ndef load_vgg(use_dropout=True, 
use_relu=True, use_linear=True, pretrained=True):\n    model = vgg16(pretrained=pretrained)\n    del model.features._modules['30']  # Get rid of the maxpool\n    del model.classifier._modules['6']  # Get rid of class layer\n    if not use_dropout:\n        del model.classifier._modules['5']  # Get rid of dropout\n        if not use_relu:\n            del model.classifier._modules['4']  # Get rid of relu activation\n            if not use_linear:\n                del model.classifier._modules['3']  # Get rid of linear layer\n    return model\n"
  },
  {
    "path": "lib/pytorch_misc.py",
    "content": "\"\"\"\nMiscellaneous functions that might be useful for pytorch\n\"\"\"\n\nimport h5py\nimport numpy as np\nimport torch\nfrom torch.autograd import Variable\nimport os\nimport dill as pkl\nfrom itertools import tee\nfrom torch import nn\n\ndef optimistic_restore(network, state_dict):\n    mismatch = False\n    own_state = network.state_dict()\n    for name, param in state_dict.items():\n        if name not in own_state:\n            print(\"Unexpected key {} in state_dict with size {}\".format(name, param.size()))\n            mismatch = True\n        elif param.size() == own_state[name].size():\n            own_state[name].copy_(param)\n        else:\n            print(\"Network has {} with size {}, ckpt has {}\".format(name,\n                                                                    own_state[name].size(),\n                                                                    param.size()))\n            mismatch = True\n\n    missing = set(own_state.keys()) - set(state_dict.keys())\n    if len(missing) > 0:\n        print(\"We couldn't find {}\".format(','.join(missing)))\n        mismatch = True\n    return not mismatch\n\n\ndef pairwise(iterable):\n    \"s -> (s0,s1), (s1,s2), (s2, s3), ...\"\n    a, b = tee(iterable)\n    next(b, None)\n    return zip(a, b)\n\n\ndef get_ranking(predictions, labels, num_guesses=5):\n    \"\"\"\n    Given a matrix of predictions and labels for the correct ones, get the number of guesses\n    required to get the prediction right per example.\n    :param predictions: [batch_size, range_size] predictions\n    :param labels: [batch_size] array of labels\n    :param num_guesses: Number of guesses to return\n    :return:\n    \"\"\"\n    assert labels.size(0) == predictions.size(0)\n    assert labels.dim() == 1\n    assert predictions.dim() == 2\n\n    values, full_guesses = predictions.topk(predictions.size(1), dim=1)\n    _, ranking = full_guesses.topk(full_guesses.size(1), dim=1, largest=False)\n    
gt_ranks = torch.gather(ranking.data, 1, labels.data[:, None]).squeeze()\n\n    guesses = full_guesses[:, :num_guesses]\n    return gt_ranks, guesses\n\ndef cache(f):\n    \"\"\"\n    Caches a computation\n    \"\"\"\n    def cache_wrapper(fn, *args, **kwargs):\n        if os.path.exists(fn):\n            with open(fn, 'rb') as file:\n                data = pkl.load(file)\n        else:\n            print(\"file {} not found, so rebuilding\".format(fn))\n            data = f(*args, **kwargs)\n            with open(fn, 'wb') as file:\n                pkl.dump(data, file)\n        return data\n    return cache_wrapper\n\n\nclass Flattener(nn.Module):\n    def __init__(self):\n        \"\"\"\n        Flattens last 3 dimensions to make it only batch size, -1\n        \"\"\"\n        super(Flattener, self).__init__()\n    def forward(self, x):\n        return x.view(x.size(0), -1)\n\n\ndef to_variable(f):\n    \"\"\"\n    Decorator that pushes all the outputs to a variable\n    :param f: \n    :return: \n    \"\"\"\n    def variable_wrapper(*args, **kwargs):\n        rez = f(*args, **kwargs)\n        if isinstance(rez, tuple):\n            return tuple([Variable(x) for x in rez])\n        return Variable(rez)\n    return variable_wrapper\n\ndef arange(base_tensor, n=None):\n    new_size = base_tensor.size(0) if n is None else n\n    new_vec = base_tensor.new(new_size).long()\n    torch.arange(0, new_size, out=new_vec)\n    return new_vec\n\n\ndef to_onehot(vec, num_classes, fill=1000):\n    \"\"\"\n    Creates a [size, num_classes] torch FloatTensor where\n    one_hot[i, vec[i]] = fill\n    \n    :param vec: 1d torch tensor\n    :param num_classes: int\n    :param fill: value that we want + and - things to be.\n    :return: \n    \"\"\"\n    onehot_result = vec.new(vec.size(0), num_classes).float().fill_(-fill)\n    arange_inds = vec.new(vec.size(0)).long()\n    torch.arange(0, vec.size(0), out=arange_inds)\n\n    onehot_result.view(-1)[vec + num_classes*arange_inds] = 
fill\n    return onehot_result\n\ndef save_net(fname, net):\n    h5f = h5py.File(fname, mode='w')\n    for k, v in list(net.state_dict().items()):\n        h5f.create_dataset(k, data=v.cpu().numpy())\n\n\ndef load_net(fname, net):\n    h5f = h5py.File(fname, mode='r')\n    for k, v in list(net.state_dict().items()):\n        param = torch.from_numpy(np.asarray(h5f[k]))\n\n        if v.size() != param.size():\n            print(\"On k={} desired size is {} but supplied {}\".format(k, v.size(), param.size()))\n        else:\n            v.copy_(param)\n\n\ndef batch_index_iterator(len_l, batch_size, skip_end=True):\n    \"\"\"\n    Provides indices that iterate over a list\n    :param len_l: int representing size of thing that we will\n        iterate over\n    :param batch_size: size of each batch\n    :param skip_end: if true, don't iterate over the last batch\n    :return: A generator that returns (start, end) tuples\n        as it goes through all batches\n    \"\"\"\n    iterate_until = len_l\n    if skip_end:\n        iterate_until = (len_l // batch_size) * batch_size\n\n    for b_start in range(0, iterate_until, batch_size):\n        yield (b_start, min(b_start+batch_size, len_l))\n\ndef batch_map(f, a, batch_size):\n    \"\"\"\n    Maps f over the array a in chunks of batch_size.\n    :param f: function to be applied. 
Must take in a block of\n            (batch_size, dim_a) and map it to (batch_size, something).\n    :param a: Array to be applied over of shape (num_rows, dim_a).\n    :param batch_size: size of each array\n    :return: Array of size (num_rows, something).\n    \"\"\"\n    rez = []\n    for s, e in batch_index_iterator(a.size(0), batch_size, skip_end=False):\n        print(\"Calling on {}\".format(a[s:e].size()))\n        rez.append(f(a[s:e]))\n\n    return torch.cat(rez)\n\n\ndef const_row(fill, l, volatile=False):\n    input_tok = Variable(torch.LongTensor([fill] * l),volatile=volatile)\n    if torch.cuda.is_available():\n        input_tok = input_tok.cuda()\n    return input_tok\n\n\ndef print_para(model):\n    \"\"\"\n    Prints parameters of a model\n    :param opt:\n    :return:\n    \"\"\"\n    st = {}\n    strings = []\n    total_params = 0\n    for p_name, p in model.named_parameters():\n\n        if not ('bias' in p_name.split('.')[-1] or 'bn' in p_name.split('.')[-1]):\n            st[p_name] = ([str(x) for x in p.size()], np.prod(p.size()), p.requires_grad)\n        total_params += np.prod(p.size())\n    for p_name, (size, prod, p_req_grad) in sorted(st.items(), key=lambda x: -x[1][1]):\n        strings.append(\"{:<50s}: {:<16s}({:8d}) ({})\".format(\n            p_name, '[{}]'.format(','.join(size)), prod, 'grad' if p_req_grad else '    '\n        ))\n    return '\\n {:.1f}M total parameters \\n ----- \\n \\n{}'.format(total_params / 1000000.0, '\\n'.join(strings))\n\n\ndef accuracy(output, target, topk=(1,)):\n    \"\"\"Computes the precision@k for the specified values of k\"\"\"\n    maxk = max(topk)\n    batch_size = target.size(0)\n\n    _, pred = output.topk(maxk, 1, True, True)\n    pred = pred.t()\n    correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n    res = []\n    for k in topk:\n        correct_k = correct[:k].view(-1).float().sum(0)\n        res.append(correct_k.mul_(100.0 / batch_size))\n    return res\n\n\ndef 
nonintersecting_2d_inds(x):\n    \"\"\"\n    Returns np.array([(a,b) for a in range(x) for b in range(x) if a != b]) efficiently\n    :param x: Size\n    :return: a x*(x-1) array that is [(0,1), (0,2)... (0, x-1), (1,0), (1,2), ..., (x-1, x-2)]\n    \"\"\"\n    rs = 1 - np.diag(np.ones(x, dtype=np.int32))\n    relations = np.column_stack(np.where(rs))\n    return relations\n\n\ndef intersect_2d(x1, x2):\n    \"\"\"\n    Given two arrays [m1, n], [m2,n], returns a [m1, m2] array where each entry is True if those\n    rows match.\n    :param x1: [m1, n] numpy array\n    :param x2: [m2, n] numpy array\n    :return: [m1, m2] bool array of the intersections\n    \"\"\"\n    if x1.shape[1] != x2.shape[1]:\n        raise ValueError(\"Input arrays must have same #columns\")\n\n    # This performs a matrix multiplication-esque thing between the two arrays\n    # Instead of summing, we want the equality, so we reduce in that way\n    res = (x1[..., None] == x2.T[None, ...]).all(1)\n    return res\n\ndef np_to_variable(x, is_cuda=True, dtype=torch.FloatTensor):\n    v = Variable(torch.from_numpy(x).type(dtype))\n    if is_cuda:\n        v = v.cuda()\n    return v\n\ndef gather_nd(x, index):\n    \"\"\"\n\n    :param x: n dimensional tensor [x0, x1, x2, ... 
x{n-1}, dim]\n    :param index: [num, n-1] where each row contains the indices we'll use\n    :return: [num, dim]\n    \"\"\"\n    nd = x.dim() - 1\n    assert nd > 0\n    assert index.dim() == 2\n    assert index.size(1) == nd\n    dim = x.size(-1)\n\n    sel_inds = index[:,nd-1].clone()\n    mult_factor = x.size(nd-1)\n    for col in range(nd-2, -1, -1): # [n-2, n-3, ..., 1, 0]\n        sel_inds += index[:,col] * mult_factor\n        mult_factor *= x.size(col)\n\n    grouped = x.view(-1, dim)[sel_inds]\n    return grouped\n\n\ndef enumerate_by_image(im_inds):\n    im_inds_np = im_inds.cpu().numpy()\n    initial_ind = int(im_inds_np[0])\n    s = 0\n    for i, val in enumerate(im_inds_np):\n        if val != initial_ind:\n            yield initial_ind, s, i\n            initial_ind = int(val)\n            s = i\n    yield initial_ind, s, len(im_inds_np)\n    # num_im = im_inds[-1] + 1\n    # # print(\"Num im is {}\".format(num_im))\n    # for i in range(num_im):\n    #     # print(\"On i={}\".format(i))\n    #     inds_i = (im_inds == i).nonzero()\n    #     if inds_i.dim() == 0:\n    #         continue\n    #     inds_i = inds_i.squeeze(1)\n    #     s = inds_i[0]\n    #     e = inds_i[-1] + 1\n    #     # print(\"On i={} we have s={} e={}\".format(i, s, e))\n    #     yield i, s, e\n\ndef diagonal_inds(tensor):\n    \"\"\"\n    Returns the indices required to go along first 2 dims of tensor in diag fashion\n    :param tensor: thing\n    :return: \n    \"\"\"\n    assert tensor.dim() >= 2\n    assert tensor.size(0) == tensor.size(1)\n    size = tensor.size(0)\n    arange_inds = tensor.new(size).long()\n    torch.arange(0, tensor.size(0), out=arange_inds)\n    return (size+1)*arange_inds\n\ndef enumerate_imsize(im_sizes):\n    s = 0\n    for i, (h, w, scale, num_anchors) in enumerate(im_sizes):\n        na = int(num_anchors)\n        e = s + na\n        yield i, s, e, h, w, scale, na\n\n        s = e\n\ndef argsort_desc(scores):\n    \"\"\"\n    Returns the indices 
that sort scores descending in a smart way\n    :param scores: Numpy array of arbitrary size\n    :return: an array of size [numel(scores), dim(scores)] where each row is the index you'd\n             need to get the score.\n    \"\"\"\n    return np.column_stack(np.unravel_index(np.argsort(-scores.ravel()), scores.shape))\n\n\ndef unravel_index(index, dims):\n    unraveled = []\n    index_cp = index.clone()\n    for d in dims[::-1]:\n        unraveled.append(index_cp % d)\n        index_cp /= d\n    return torch.cat([x[:,None] for x in unraveled[::-1]], 1)\n\ndef de_chunkize(tensor, chunks):\n    s = 0\n    for c in chunks:\n        yield tensor[s:(s+c)]\n        s = s+c\n\ndef random_choose(tensor, num):\n    \"randomly choose indices\"\n    num_choose = min(tensor.size(0), num)\n    if num_choose == tensor.size(0):\n        return tensor\n\n    # Gotta do this in numpy because of https://github.com/pytorch/pytorch/issues/1868\n    rand_idx = np.random.choice(tensor.size(0), size=num, replace=False)\n    rand_idx = torch.LongTensor(rand_idx).cuda(tensor.get_device())\n    chosen = tensor[rand_idx].contiguous()\n\n    # rand_values = tensor.new(tensor.size(0)).float().normal_()\n    # _, idx = torch.sort(rand_values)\n    #\n    # chosen = tensor[idx[:num]].contiguous()\n    return chosen\n\n\ndef transpose_packed_sequence_inds(lengths):\n    \"\"\"\n    Goes from a TxB packed sequence to a BxT or vice versa. 
Assumes that nothing is a variable\n    :param lengths: list of sequence lengths, sorted in decreasing order\n    :return:\n    \"\"\"\n\n    new_inds = []\n    new_lens = []\n    cum_add = np.cumsum([0] + lengths)\n    max_len = lengths[0]\n    length_pointer = len(lengths) - 1\n    for i in range(max_len):\n        while length_pointer > 0 and lengths[length_pointer] <= i:\n            length_pointer -= 1\n        new_inds.append(cum_add[:(length_pointer+1)].copy())\n        cum_add[:(length_pointer+1)] += 1\n        new_lens.append(length_pointer+1)\n    new_inds = np.concatenate(new_inds, 0)\n    return new_inds, new_lens\n\n\ndef right_shift_packed_sequence_inds(lengths):\n    \"\"\"\n    :param lengths: e.g. [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1]\n    :return: perm indices for the old stuff (TxB) to shift it right 1 slot so as to accommodate\n             BOS toks\n             \n             visual example for lengths = [4,3,1,1]\n    before:\n    \n        a (0)  b (4)  c (7) d (8)\n        a (1)  b (5)\n        a (2)  b (6)\n        a (3)\n        \n    after:\n    \n        bos a (0)  b (4)  c (7)\n        bos a (1)\n        bos a (2)\n        bos              \n    \"\"\"\n    cur_ind = 0\n    inds = []\n    for (l1, l2) in zip(lengths[:-1], lengths[1:]):\n        for i in range(l2):\n            inds.append(cur_ind + i)\n        cur_ind += l1\n    return inds\n\ndef clip_grad_norm(named_parameters, max_norm, clip=False, verbose=False):\n    r\"\"\"Clips gradient norm of an iterable of parameters.\n\n    The norm is computed over all gradients together, as if they were\n    concatenated into a single vector. 
Gradients are modified in-place.\n\n    Arguments:\n        parameters (Iterable[Variable]): an iterable of Variables that will have\n            gradients normalized\n        max_norm (float or int): max norm of the gradients\n\n    Returns:\n        Total norm of the parameters (viewed as a single vector).\n    \"\"\"\n    max_norm = float(max_norm)\n\n    total_norm = 0\n    param_to_norm = {}\n    param_to_shape = {}\n    for n, p in named_parameters:\n        if p.grad is not None:\n            param_norm = p.grad.data.norm(2)\n            total_norm += param_norm ** 2\n            param_to_norm[n] = param_norm\n            param_to_shape[n] = p.size()\n\n    total_norm = total_norm ** (1. / 2)\n    clip_coef = max_norm / (total_norm + 1e-6)\n    if clip_coef < 1 and clip:\n        for _, p in named_parameters:\n            if p.grad is not None:\n                p.grad.data.mul_(clip_coef)\n\n    if verbose:\n        print('---Total norm {:.3f} clip coef {:.3f}-----------------'.format(total_norm, clip_coef))\n        for name, norm in sorted(param_to_norm.items(), key=lambda x: -x[1]):\n            print(\"{:<50s}: {:.3f}, ({})\".format(name, norm, param_to_shape[name]))\n        print('-------------------------------', flush=True)\n\n    return total_norm\n\ndef update_lr(optimizer, lr=1e-4):\n    print(\"------ Learning rate -> {}\".format(lr))\n    for param_group in optimizer.param_groups:\n        param_group['lr'] = lr"
  },
  {
    "path": "lib/rel_model.py",
    "content": "\"\"\"\nLet's get the relationships yo\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nfrom torch.autograd import Variable\nfrom torch.nn import functional as F\nfrom torch.nn.utils.rnn import PackedSequence\nfrom lib.resnet import resnet_l4\nfrom config import BATCHNORM_MOMENTUM\nfrom lib.fpn.nms.functions.nms import apply_nms\n\n# from lib.decoder_rnn import DecoderRNN, lstm_factory, LockedDropout\nfrom lib.lstm.decoder_rnn import DecoderRNN\nfrom lib.lstm.highway_lstm_cuda.alternating_highway_lstm import AlternatingHighwayLSTM\nfrom lib.fpn.box_utils import bbox_overlaps, center_size\nfrom lib.get_union_boxes import UnionBoxesAndFeats\nfrom lib.fpn.proposal_assignments.rel_assignments import rel_assignments\nfrom lib.object_detector import ObjectDetector, gather_res, load_vgg\nfrom lib.pytorch_misc import transpose_packed_sequence_inds, to_onehot, arange, enumerate_by_image, diagonal_inds, Flattener\nfrom lib.sparse_targets import FrequencyBias\nfrom lib.surgery import filter_dets\nfrom lib.word_vectors import obj_edge_vectors\nfrom lib.fpn.roi_align.functions.roi_align import RoIAlignFunction\nimport math\n\n\ndef _sort_by_score(im_inds, scores):\n    \"\"\"\n    We'll sort everything scorewise from Hi->low, BUT we need to keep images together\n    and sort LSTM from l\n    :param im_inds: Which im we're on\n    :param scores: Goodness ranging between [0, 1]. 
Higher numbers come FIRST\n    :return: Permutation to put everything in the right order for the LSTM\n             Inverse permutation\n             Lengths for the TxB packed sequence.\n    \"\"\"\n    num_im = im_inds[-1] + 1\n    rois_per_image = scores.new(num_im)\n    lengths = []\n    for i, s, e in enumerate_by_image(im_inds):\n        rois_per_image[i] = 2 * (s - e) * num_im + i\n        lengths.append(e - s)\n    lengths = sorted(lengths, reverse=True)\n    inds, ls_transposed = transpose_packed_sequence_inds(lengths)  # move it to TxB form\n    inds = torch.LongTensor(inds).cuda(im_inds.get_device())\n\n    # ~~~~~~~~~~~~~~~~\n    # HACKY CODE ALERT!!!\n    # we're sorting by confidence which is in the range (0,1), but more importantly by longest\n    # img....\n    # ~~~~~~~~~~~~~~~~\n    roi_order = scores - 2 * rois_per_image[im_inds]\n    _, perm = torch.sort(roi_order, 0, descending=True)\n    perm = perm[inds]\n    _, inv_perm = torch.sort(perm)\n\n    return perm, inv_perm, ls_transposed\n\nMODES = ('sgdet', 'sgcls', 'predcls')\n\n\nclass LinearizedContext(nn.Module):\n    \"\"\"\n    Module for computing the object contexts and edge contexts\n    \"\"\"\n    def __init__(self, classes, rel_classes, mode='sgdet',\n                 embed_dim=200, hidden_dim=256, obj_dim=2048,\n                 nl_obj=2, nl_edge=2, dropout_rate=0.2, order='confidence',\n                 pass_in_obj_feats_to_decoder=True,\n                 pass_in_obj_feats_to_edge=True):\n        super(LinearizedContext, self).__init__()\n        self.classes = classes\n        self.rel_classes = rel_classes\n        assert mode in MODES\n        self.mode = mode\n\n        self.nl_obj = nl_obj\n        self.nl_edge = nl_edge\n\n        self.embed_dim = embed_dim\n        self.hidden_dim = hidden_dim\n        self.obj_dim = obj_dim\n        self.dropout_rate = dropout_rate\n        self.pass_in_obj_feats_to_decoder = pass_in_obj_feats_to_decoder\n        
self.pass_in_obj_feats_to_edge = pass_in_obj_feats_to_edge\n\n        assert order in ('size', 'confidence', 'random', 'leftright')\n        self.order = order\n\n        # EMBEDDINGS\n        embed_vecs = obj_edge_vectors(self.classes, wv_dim=self.embed_dim)\n        self.obj_embed = nn.Embedding(self.num_classes, self.embed_dim)\n        self.obj_embed.weight.data = embed_vecs.clone()\n\n        self.obj_embed2 = nn.Embedding(self.num_classes, self.embed_dim)\n        self.obj_embed2.weight.data = embed_vecs.clone()\n\n        # This probably doesn't help it much\n        self.pos_embed = nn.Sequential(*[\n            nn.BatchNorm1d(4, momentum=BATCHNORM_MOMENTUM / 10.0),\n            nn.Linear(4, 128),\n            nn.ReLU(inplace=True),\n            nn.Dropout(0.1),\n        ])\n\n        if self.nl_obj > 0:\n            self.obj_ctx_rnn = AlternatingHighwayLSTM(\n                input_size=self.obj_dim+self.embed_dim+128,\n                hidden_size=self.hidden_dim,\n                num_layers=self.nl_obj,\n                recurrent_dropout_probability=dropout_rate)\n\n            decoder_inputs_dim = self.hidden_dim\n            if self.pass_in_obj_feats_to_decoder:\n                decoder_inputs_dim += self.obj_dim + self.embed_dim\n\n            self.decoder_rnn = DecoderRNN(self.classes, embed_dim=self.embed_dim,\n                                          inputs_dim=decoder_inputs_dim,\n                                          hidden_dim=self.hidden_dim,\n                                          recurrent_dropout_probability=dropout_rate)\n        else:\n            self.decoder_lin = nn.Linear(self.obj_dim + self.embed_dim + 128, self.num_classes)\n\n        if self.nl_edge > 0:\n            input_dim = self.embed_dim\n            if self.nl_obj > 0:\n                input_dim += self.hidden_dim\n            if self.pass_in_obj_feats_to_edge:\n                input_dim += self.obj_dim\n            self.edge_ctx_rnn = 
AlternatingHighwayLSTM(input_size=input_dim,\n                                                       hidden_size=self.hidden_dim,\n                                                       num_layers=self.nl_edge,\n                                                       recurrent_dropout_probability=dropout_rate)\n\n    def sort_rois(self, batch_idx, confidence, box_priors):\n        \"\"\"\n        :param batch_idx: tensor with what index we're on\n        :param confidence: tensor with confidences between [0,1)\n        :param boxes: tensor with (x1, y1, x2, y2)\n        :return: Permutation, inverse permutation, and the lengths transposed (same as _sort_by_score)\n        \"\"\"\n        cxcywh = center_size(box_priors)\n        if self.order == 'size':\n            sizes = cxcywh[:,2] * cxcywh[:, 3]\n            # sizes = (box_priors[:, 2] - box_priors[:, 0] + 1) * (box_priors[:, 3] - box_priors[:, 1] + 1)\n            assert sizes.min() > 0.0\n            scores = sizes / (sizes.max() + 1)\n        elif self.order == 'confidence':\n            scores = confidence\n        elif self.order == 'random':\n            scores = torch.FloatTensor(np.random.rand(batch_idx.size(0))).cuda(batch_idx.get_device())\n        elif self.order == 'leftright':\n            centers = cxcywh[:,0]\n            scores = centers / (centers.max() + 1)\n        else:\n            raise ValueError(\"invalid mode {}\".format(self.order))\n        return _sort_by_score(batch_idx, scores)\n\n    @property\n    def num_classes(self):\n        return len(self.classes)\n\n    @property\n    def num_rels(self):\n        return len(self.rel_classes)\n\n    def edge_ctx(self, obj_feats, obj_dists, im_inds, obj_preds, box_priors=None):\n        \"\"\"\n        Object context and object classification.\n        :param obj_feats: [num_obj, img_dim + object embedding0 dim]\n        :param obj_dists: [num_obj, #classes]\n        :param im_inds: [num_obj] the indices of the images\n        :return: 
edge_ctx: [num_obj, #feats] For later!\n        \"\"\"\n\n        # Only use hard embeddings\n        obj_embed2 = self.obj_embed2(obj_preds)\n        # obj_embed3 = F.softmax(obj_dists, dim=1) @ self.obj_embed3.weight\n        inp_feats = torch.cat((obj_embed2, obj_feats), 1)\n\n        # Sort by the confidence of the maximum detection.\n        confidence = F.softmax(obj_dists, dim=1).data.view(-1)[\n            obj_preds.data + arange(obj_preds.data) * self.num_classes]\n        perm, inv_perm, ls_transposed = self.sort_rois(im_inds.data, confidence, box_priors)\n\n        edge_input_packed = PackedSequence(inp_feats[perm], ls_transposed)\n        edge_reps = self.edge_ctx_rnn(edge_input_packed)[0][0]\n\n        # now we're good! unperm\n        edge_ctx = edge_reps[inv_perm]\n        return edge_ctx\n\n    def obj_ctx(self, obj_feats, obj_dists, im_inds, obj_labels=None, box_priors=None, boxes_per_cls=None):\n        \"\"\"\n        Object context and object classification.\n        :param obj_feats: [num_obj, img_dim + object embedding0 dim]\n        :param obj_dists: [num_obj, #classes]\n        :param im_inds: [num_obj] the indices of the images\n        :param obj_labels: [num_obj] the GT labels of the image\n        :param boxes: [num_obj, 4] boxes. 
We'll use this for NMS\n        :return: obj_dists: [num_obj, #classes] new probability distribution.\n                 obj_preds: argmax of that distribution.\n                 obj_final_ctx: [num_obj, #feats] For later!\n        \"\"\"\n        # Sort by the confidence of the maximum detection.\n        confidence = F.softmax(obj_dists, dim=1).data[:, 1:].max(1)[0]\n        perm, inv_perm, ls_transposed = self.sort_rois(im_inds.data, confidence, box_priors)\n        # Pass object features, sorted by score, into the encoder LSTM\n        obj_inp_rep = obj_feats[perm].contiguous()\n        input_packed = PackedSequence(obj_inp_rep, ls_transposed)\n\n        encoder_rep = self.obj_ctx_rnn(input_packed)[0][0]\n        # Decode in order\n        if self.mode != 'predcls':\n            decoder_inp = PackedSequence(torch.cat((obj_inp_rep, encoder_rep), 1) if self.pass_in_obj_feats_to_decoder else encoder_rep,\n                                         ls_transposed)\n            obj_dists, obj_preds = self.decoder_rnn(\n                decoder_inp, #obj_dists[perm],\n                labels=obj_labels[perm] if obj_labels is not None else None,\n                boxes_for_nms=boxes_per_cls[perm] if boxes_per_cls is not None else None,\n                )\n            obj_preds = obj_preds[inv_perm]\n            obj_dists = obj_dists[inv_perm]\n        else:\n            assert obj_labels is not None\n            obj_preds = obj_labels\n            obj_dists = Variable(to_onehot(obj_preds.data, self.num_classes))\n        encoder_rep = encoder_rep[inv_perm]\n\n        return obj_dists, obj_preds, encoder_rep\n\n    def forward(self, obj_fmaps, obj_logits, im_inds, obj_labels=None, box_priors=None, boxes_per_cls=None):\n        \"\"\"\n        Forward pass through the object and edge context\n        :param obj_priors:\n        :param obj_fmaps:\n        :param im_inds:\n        :param obj_labels:\n        :param boxes:\n        :return:\n        \"\"\"\n        obj_embed = 
F.softmax(obj_logits, dim=1) @ self.obj_embed.weight\n        pos_embed = self.pos_embed(Variable(center_size(box_priors)))\n        obj_pre_rep = torch.cat((obj_fmaps, obj_embed, pos_embed), 1)\n\n        if self.nl_obj > 0:\n            obj_dists2, obj_preds, obj_ctx = self.obj_ctx(\n                obj_pre_rep,\n                obj_logits,\n                im_inds,\n                obj_labels,\n                box_priors,\n                boxes_per_cls,\n            )\n        else:\n            # UNSURE WHAT TO DO HERE\n            if self.mode == 'predcls':\n                obj_dists2 = Variable(to_onehot(obj_labels.data, self.num_classes))\n            else:\n                obj_dists2 = self.decoder_lin(obj_pre_rep)\n\n            if self.mode == 'sgdet' and not self.training:\n                # NMS here for baseline\n\n                probs = F.softmax(obj_dists2, 1)\n                nms_mask = obj_dists2.data.clone()\n                nms_mask.zero_()\n                for c_i in range(1, obj_dists2.size(1)):\n                    scores_ci = probs.data[:, c_i]\n                    boxes_ci = boxes_per_cls.data[:, c_i]\n\n                    keep = apply_nms(scores_ci, boxes_ci,\n                                     pre_nms_topn=scores_ci.size(0), post_nms_topn=scores_ci.size(0),\n                                     nms_thresh=0.3)\n                    nms_mask[:, c_i][keep] = 1\n\n                obj_preds = Variable(nms_mask * probs.data, volatile=True)[:,1:].max(1)[1] + 1\n            else:\n                obj_preds = obj_labels if obj_labels is not None else obj_dists2[:,1:].max(1)[1] + 1\n            obj_ctx = obj_pre_rep\n\n        edge_ctx = None\n        if self.nl_edge > 0:\n            edge_ctx = self.edge_ctx(\n                torch.cat((obj_fmaps, obj_ctx), 1) if self.pass_in_obj_feats_to_edge else obj_ctx,\n                obj_dists=obj_dists2.detach(),  # Was previously obj_logits.\n                im_inds=im_inds,\n                
obj_preds=obj_preds,\n                box_priors=box_priors,\n            )\n\n        return obj_dists2, obj_preds, edge_ctx\n\n\nclass RelModel(nn.Module):\n    \"\"\"\n    RELATIONSHIPS\n    \"\"\"\n    def __init__(self, classes, rel_classes, mode='sgdet', num_gpus=1, use_vision=True, require_overlap_det=True,\n                 embed_dim=200, hidden_dim=256, pooling_dim=2048,\n                 nl_obj=1, nl_edge=2, use_resnet=False, order='confidence', thresh=0.01,\n                 use_proposals=False, pass_in_obj_feats_to_decoder=True,\n                 pass_in_obj_feats_to_edge=True, rec_dropout=0.0, use_bias=True, use_tanh=True,\n                 limit_vision=True):\n\n        \"\"\"\n        :param classes: Object classes\n        :param rel_classes: Relationship classes. None if we're not using rel mode\n        :param mode: (sgcls, predcls, or sgdet)\n        :param num_gpus: how many GPUs to use\n        :param use_vision: Whether to use vision in the final product\n        :param require_overlap_det: Whether two objects must intersect\n        :param embed_dim: Dimension for all embeddings\n        :param hidden_dim: LSTM hidden size\n        :param obj_dim: Object feature dimension (2048 with ResNet, 4096 with VGG)\n        \"\"\"\n        super(RelModel, self).__init__()\n        self.classes = classes\n        self.rel_classes = rel_classes\n        self.num_gpus = num_gpus\n        assert mode in MODES\n        self.mode = mode\n\n        self.pooling_size = 7\n        self.embed_dim = embed_dim\n        self.hidden_dim = hidden_dim\n        self.obj_dim = 2048 if use_resnet else 4096\n        self.pooling_dim = pooling_dim\n\n        self.use_bias = use_bias\n        self.use_vision = use_vision\n        self.use_tanh = use_tanh\n        self.limit_vision = limit_vision\n        self.require_overlap = require_overlap_det and self.mode == 'sgdet'\n\n        self.detector = ObjectDetector(\n            classes=classes,\n            mode=('proposals' if use_proposals else 'refinerels') if mode == 'sgdet' else 
'gtbox',\n            use_resnet=use_resnet,\n            thresh=thresh,\n            max_per_img=64,\n        )\n\n        self.context = LinearizedContext(self.classes, self.rel_classes, mode=self.mode,\n                                         embed_dim=self.embed_dim, hidden_dim=self.hidden_dim,\n                                         obj_dim=self.obj_dim,\n                                         nl_obj=nl_obj, nl_edge=nl_edge, dropout_rate=rec_dropout,\n                                         order=order,\n                                         pass_in_obj_feats_to_decoder=pass_in_obj_feats_to_decoder,\n                                         pass_in_obj_feats_to_edge=pass_in_obj_feats_to_edge)\n\n        # Image Feats (You'll have to disable if you want to turn off the features from here)\n        self.union_boxes = UnionBoxesAndFeats(pooling_size=self.pooling_size, stride=16,\n                                              dim=1024 if use_resnet else 512)\n\n        if use_resnet:\n            self.roi_fmap = nn.Sequential(\n                resnet_l4(relu_end=False),\n                nn.AvgPool2d(self.pooling_size),\n                Flattener(),\n            )\n        else:\n            roi_fmap = [\n                Flattener(),\n                load_vgg(use_dropout=False, use_relu=False, use_linear=pooling_dim == 4096, pretrained=False).classifier,\n            ]\n            if pooling_dim != 4096:\n                roi_fmap.append(nn.Linear(4096, pooling_dim))\n            self.roi_fmap = nn.Sequential(*roi_fmap)\n            self.roi_fmap_obj = load_vgg(pretrained=False).classifier\n\n        ###################################\n        self.post_lstm = nn.Linear(self.hidden_dim, self.pooling_dim * 2)\n\n        # Initialize to sqrt(1/2n) so that the outputs all have mean 0 and variance 1.\n        # (Half contribution comes from LSTM, half from embedding.\n\n        # In practice the pre-lstm stuff tends to have stdev 0.1 so I multiplied this by 
10.\n        self.post_lstm.weight.data.normal_(0, 10.0 * math.sqrt(1.0 / self.hidden_dim))\n        self.post_lstm.bias.data.zero_()\n\n        if nl_edge == 0:\n            self.post_emb = nn.Embedding(self.num_classes, self.pooling_dim*2)\n            self.post_emb.weight.data.normal_(0, math.sqrt(1.0))\n\n        self.rel_compress = nn.Linear(self.pooling_dim, self.num_rels, bias=True)\n        self.rel_compress.weight = torch.nn.init.xavier_normal(self.rel_compress.weight, gain=1.0)\n        if self.use_bias:\n            self.freq_bias = FrequencyBias()\n\n    @property\n    def num_classes(self):\n        return len(self.classes)\n\n    @property\n    def num_rels(self):\n        return len(self.rel_classes)\n\n    def visual_rep(self, features, rois, pair_inds):\n        \"\"\"\n        Classify the features\n        :param features: [batch_size, dim, IM_SIZE/4, IM_SIZE/4]\n        :param rois: [num_rois, 5] array of [img_num, x0, y0, x1, y1].\n        :param pair_inds inds to use when predicting\n        :return: score_pred, a [num_rois, num_classes] array\n                 box_pred, a [num_rois, num_classes, 4] array\n        \"\"\"\n        assert pair_inds.size(1) == 2\n        uboxes = self.union_boxes(features, rois, pair_inds)\n        return self.roi_fmap(uboxes)\n\n    def get_rel_inds(self, rel_labels, im_inds, box_priors):\n        # Get the relationship candidates\n        if self.training:\n            rel_inds = rel_labels[:, :3].data.clone()\n        else:\n            rel_cands = im_inds.data[:, None] == im_inds.data[None]\n            rel_cands.view(-1)[diagonal_inds(rel_cands)] = 0\n\n            # Require overlap for detection\n            if self.require_overlap:\n                rel_cands = rel_cands & (bbox_overlaps(box_priors.data,\n                                                       box_priors.data) > 0)\n\n                # if there are fewer then 100 things then we might as well add some?\n                amt_to_add = 100 - 
rel_cands.long().sum()\n\n            rel_cands = rel_cands.nonzero()\n            if rel_cands.dim() == 0:\n                rel_cands = im_inds.data.new(1, 2).fill_(0)\n\n            rel_inds = torch.cat((im_inds.data[rel_cands[:, 0]][:, None], rel_cands), 1)\n        return rel_inds\n\n    def obj_feature_map(self, features, rois):\n        \"\"\"\n        Gets the ROI features\n        :param features: [batch_size, dim, IM_SIZE/4, IM_SIZE/4] (features at level p2)\n        :param rois: [num_rois, 5] array of [img_num, x0, y0, x1, y1].\n        :return: [num_rois, #dim] array\n        \"\"\"\n        feature_pool = RoIAlignFunction(self.pooling_size, self.pooling_size, spatial_scale=1 / 16)(\n            features, rois)\n        return self.roi_fmap_obj(feature_pool.view(rois.size(0), -1))\n\n    def forward(self, x, im_sizes, image_offset,\n                gt_boxes=None, gt_classes=None, gt_rels=None, proposals=None, train_anchor_inds=None,\n                return_fmap=False):\n        \"\"\"\n        Forward pass for detection\n        :param x: Images@[batch_size, 3, IM_SIZE, IM_SIZE]\n        :param im_sizes: A numpy array of (h, w, scale) for each image.\n        :param image_offset: Offset onto what image we're on for MGPU training (if single GPU this is 0)\n        :param gt_boxes:\n\n        Training parameters:\n        :param gt_boxes: [num_gt, 4] GT boxes over the batch.\n        :param gt_classes: [num_gt, 2] gt boxes where each one is (img_id, class)\n        :param train_anchor_inds: a [num_train, 2] array of indices for the anchors that will\n                                  be used to compute the training loss. 
Each (img_ind, fpn_idx)\n        :return: If train:\n            scores, boxdeltas, labels, boxes, boxtargets, rpnscores, rpnboxes, rellabels\n            \n            if test:\n            prob dists, boxes, img inds, maxscores, classes\n            \n        \"\"\"\n        result = self.detector(x, im_sizes, image_offset, gt_boxes, gt_classes, gt_rels, proposals,\n                               train_anchor_inds, return_fmap=True)\n        if result.is_none():\n            raise ValueError(\"heck\")\n\n        im_inds = result.im_inds - image_offset\n        boxes = result.rm_box_priors\n\n        if self.training and result.rel_labels is None:\n            assert self.mode == 'sgdet'\n            result.rel_labels = rel_assignments(im_inds.data, boxes.data, result.rm_obj_labels.data,\n                                                gt_boxes.data, gt_classes.data, gt_rels.data,\n                                                image_offset, filter_non_overlap=True,\n                                                num_sample_per_gt=1)\n\n        rel_inds = self.get_rel_inds(result.rel_labels, im_inds, boxes)\n\n        rois = torch.cat((im_inds[:, None].float(), boxes), 1)\n\n        result.obj_fmap = self.obj_feature_map(result.fmap.detach(), rois)\n\n        # Prevent gradients from flowing back into score_fc from elsewhere\n        result.rm_obj_dists, result.obj_preds, edge_ctx = self.context(\n            result.obj_fmap,\n            result.rm_obj_dists.detach(),\n            im_inds, result.rm_obj_labels if self.training or self.mode == 'predcls' else None,\n            boxes.data, result.boxes_all)\n\n        if edge_ctx is None:\n            edge_rep = self.post_emb(result.obj_preds)\n        else:\n            edge_rep = self.post_lstm(edge_ctx)\n\n        # Split into subject and object representations\n        edge_rep = edge_rep.view(edge_rep.size(0), 2, self.pooling_dim)\n\n        subj_rep = edge_rep[:, 0]\n        obj_rep = edge_rep[:, 1]\n\n     
   prod_rep = subj_rep[rel_inds[:, 1]] * obj_rep[rel_inds[:, 2]]\n\n        if self.use_vision:\n            vr = self.visual_rep(result.fmap.detach(), rois, rel_inds[:, 1:])\n            if self.limit_vision:\n                # exact value TBD\n                prod_rep = torch.cat((prod_rep[:,:2048] * vr[:,:2048], prod_rep[:,2048:]), 1)\n            else:\n                prod_rep = prod_rep * vr\n\n        if self.use_tanh:\n            prod_rep = F.tanh(prod_rep)\n\n        result.rel_dists = self.rel_compress(prod_rep)\n\n        if self.use_bias:\n            result.rel_dists = result.rel_dists + self.freq_bias.index_with_labels(torch.stack((\n                result.obj_preds[rel_inds[:, 1]],\n                result.obj_preds[rel_inds[:, 2]],\n            ), 1))\n\n        if self.training:\n            return result\n\n        twod_inds = arange(result.obj_preds.data) * self.num_classes + result.obj_preds.data\n        result.obj_scores = F.softmax(result.rm_obj_dists, dim=1).view(-1)[twod_inds]\n\n        # Bbox regression\n        if self.mode == 'sgdet':\n            bboxes = result.boxes_all.view(-1, 4)[twod_inds].view(result.boxes_all.size(0), 4)\n        else:\n            # Boxes will get fixed by filter_dets function.\n            bboxes = result.rm_box_priors\n\n        rel_rep = F.softmax(result.rel_dists, dim=1)\n        return filter_dets(bboxes, result.obj_scores,\n                           result.obj_preds, rel_inds[:, 1:], rel_rep)\n\n    def __getitem__(self, batch):\n        \"\"\" Hack to do multi-GPU training\"\"\"\n        batch.scatter()\n        if self.num_gpus == 1:\n            return self(*batch[0])\n\n        replicas = nn.parallel.replicate(self, devices=list(range(self.num_gpus)))\n        outputs = nn.parallel.parallel_apply(replicas, [batch[i] for i in range(self.num_gpus)])\n\n        if self.training:\n            return gather_res(outputs, 0, dim=0)\n        return outputs\n"
  },
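`_sort_by_score` in `lib/rel_model.py` packs two sort keys into one value: a large per-image offset (so images with more ROIs come first and ROIs stay grouped by image) plus the confidence in (0, 1) as a tie-breaker within each image. A framework-free sketch of the same grouping trick, using an explicit key tuple instead of the arithmetic hack (helper names here are mine, not the repo's, and this omits the TxB packing step done by `transpose_packed_sequence_inds`):

```python
def sort_rois_by_score(im_inds, scores):
    """Return a permutation that groups ROIs by image (image with the most
    ROIs first) and sorts by descending score within each image."""
    num_im = max(im_inds) + 1
    counts = [im_inds.count(i) for i in range(num_im)]
    # Per-ROI key: image size dominates, then image id, then score as tie-breaker.
    keys = [(-counts[im], im, -s) for im, s in zip(im_inds, scores)]
    return sorted(range(len(scores)), key=lambda j: keys[j])

# Image 1 has three ROIs, image 0 has two, so image 1's ROIs come first.
perm = sort_rois_by_score([0, 0, 1, 1, 1], [0.9, 0.2, 0.5, 0.8, 0.1])
```

Keeping each image's ROIs contiguous and ordering images by length is exactly what a PackedSequence in decreasing-length order requires.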
  {
    "path": "lib/rel_model_stanford.py",
    "content": "\"\"\"\nLet's get the relationships yo\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nfrom torch.autograd import Variable\nfrom torch.nn import functional as F\nfrom lib.surgery import filter_dets\nfrom lib.fpn.proposal_assignments.rel_assignments import rel_assignments\nfrom lib.pytorch_misc import arange\nfrom lib.object_detector import filter_det\nfrom lib.rel_model import RelModel\n\nMODES = ('sgdet', 'sgcls', 'predcls')\n\nSIZE=512\n\nclass RelModelStanford(RelModel):\n    \"\"\"\n    RELATIONSHIPS\n    \"\"\"\n\n    def __init__(self, classes, rel_classes, mode='sgdet', num_gpus=1, require_overlap_det=True,\n                 use_resnet=False, use_proposals=False, **kwargs):\n        \"\"\"\n        :param classes: Object classes\n        :param rel_classes: Relationship classes. None if were not using rel mode\n        :param num_gpus: how many GPUS 2 use\n        \"\"\"\n        super(RelModelStanford, self).__init__(classes, rel_classes, mode=mode, num_gpus=num_gpus,\n                                               require_overlap_det=require_overlap_det,\n                                               use_resnet=use_resnet,\n                                               nl_obj=0, nl_edge=0, use_proposals=use_proposals, thresh=0.01,\n                                               pooling_dim=4096)\n\n        del self.context\n        del self.post_lstm\n        del self.post_emb\n\n        self.rel_fc = nn.Linear(SIZE, self.num_rels)\n        self.obj_fc = nn.Linear(SIZE, self.num_classes)\n\n        self.obj_unary = nn.Linear(self.obj_dim, SIZE)\n        self.edge_unary = nn.Linear(4096, SIZE)\n\n\n        self.edge_gru = nn.GRUCell(input_size=SIZE, hidden_size=SIZE)\n        self.node_gru = nn.GRUCell(input_size=SIZE, hidden_size=SIZE)\n\n        self.n_iter = 3\n\n        self.sub_vert_w_fc = nn.Sequential(nn.Linear(SIZE*2, 1), nn.Sigmoid())\n        self.obj_vert_w_fc = nn.Sequential(nn.Linear(SIZE*2, 1), 
nn.Sigmoid())\n        self.out_edge_w_fc = nn.Sequential(nn.Linear(SIZE*2, 1), nn.Sigmoid())\n\n        self.in_edge_w_fc = nn.Sequential(nn.Linear(SIZE*2, 1), nn.Sigmoid())\n\n    def message_pass(self, rel_rep, obj_rep, rel_inds):\n        \"\"\"\n\n        :param rel_rep: [num_rel, fc]\n        :param obj_rep: [num_obj, fc]\n        :param rel_inds: [num_rel, 2] of the valid relationships\n        :return: object prediction [num_obj, num_classes]\n                and rel prediction [num_rel, num_rels]\n        \"\"\"\n        # [num_obj, num_rel] with binary!\n        numer = torch.arange(0, rel_inds.size(0)).long().cuda(rel_inds.get_device())\n\n        objs_to_outrels = rel_rep.data.new(obj_rep.size(0), rel_rep.size(0)).zero_()\n        objs_to_outrels.view(-1)[rel_inds[:, 0] * rel_rep.size(0) + numer] = 1\n        objs_to_outrels = Variable(objs_to_outrels)\n\n        objs_to_inrels = rel_rep.data.new(obj_rep.size(0), rel_rep.size(0)).zero_()\n        objs_to_inrels.view(-1)[rel_inds[:, 1] * rel_rep.size(0) + numer] = 1\n        objs_to_inrels = Variable(objs_to_inrels)\n\n        hx_rel = Variable(rel_rep.data.new(rel_rep.size(0), SIZE).zero_(), requires_grad=False)\n        hx_obj = Variable(obj_rep.data.new(obj_rep.size(0), SIZE).zero_(), requires_grad=False)\n\n        vert_factor = [self.node_gru(obj_rep, hx_obj)]\n        edge_factor = [self.edge_gru(rel_rep, hx_rel)]\n\n        for i in range(self.n_iter):\n            # compute edge context\n            sub_vert = vert_factor[i][rel_inds[:, 0]]\n            obj_vert = vert_factor[i][rel_inds[:, 1]]\n            weighted_sub = self.sub_vert_w_fc(\n                torch.cat((sub_vert, edge_factor[i]), 1)) * sub_vert\n            weighted_obj = self.obj_vert_w_fc(\n                torch.cat((obj_vert, edge_factor[i]), 1)) * obj_vert\n\n            edge_factor.append(self.edge_gru(weighted_sub + weighted_obj, edge_factor[i]))\n\n            # Compute vertex context\n            pre_out = 
self.out_edge_w_fc(torch.cat((sub_vert, edge_factor[i]), 1)) * \\\n                      edge_factor[i]\n            pre_in = self.in_edge_w_fc(torch.cat((obj_vert, edge_factor[i]), 1)) * edge_factor[\n                i]\n\n            vert_ctx = objs_to_outrels @ pre_out + objs_to_inrels @ pre_in\n            vert_factor.append(self.node_gru(vert_ctx, vert_factor[i]))\n\n        # woohoo! done\n        return self.obj_fc(vert_factor[-1]), self.rel_fc(edge_factor[-1])\n               # self.box_fc(vert_factor[-1]).view(-1, self.num_classes, 4), \\\n               # self.rel_fc(edge_factor[-1])\n\n    def forward(self, x, im_sizes, image_offset,\n                gt_boxes=None, gt_classes=None, gt_rels=None, proposals=None, train_anchor_inds=None,\n                return_fmap=False):\n        \"\"\"\n        Forward pass for detection\n        :param x: Images@[batch_size, 3, IM_SIZE, IM_SIZE]\n        :param im_sizes: A numpy array of (h, w, scale) for each image.\n        :param image_offset: Offset onto what image we're on for MGPU training (if single GPU this is 0)\n        :param gt_boxes:\n\n        Training parameters:\n        :param gt_boxes: [num_gt, 4] GT boxes over the batch.\n        :param gt_classes: [num_gt, 2] gt boxes where each one is (img_id, class)\n        :param train_anchor_inds: a [num_train, 2] array of indices for the anchors that will\n                                  be used to compute the training loss. 
Each (img_ind, fpn_idx)\n        :return: If train:\n            scores, boxdeltas, labels, boxes, boxtargets, rpnscores, rpnboxes, rellabels\n            \n            if test:\n            prob dists, boxes, img inds, maxscores, classes\n            \n        \"\"\"\n        result = self.detector(x, im_sizes, image_offset, gt_boxes, gt_classes, gt_rels, proposals,\n                               train_anchor_inds, return_fmap=True)\n\n        if result.is_none():\n            raise ValueError(\"heck\")\n\n        im_inds = result.im_inds - image_offset\n        boxes = result.rm_box_priors\n\n        if self.training and result.rel_labels is None:\n            assert self.mode == 'sgdet'\n            result.rel_labels = rel_assignments(im_inds.data, boxes.data, result.rm_obj_labels.data,\n                                                gt_boxes.data, gt_classes.data, gt_rels.data,\n                                                image_offset, filter_non_overlap=True, num_sample_per_gt=1)\n        rel_inds = self.get_rel_inds(result.rel_labels, im_inds, boxes)\n        rois = torch.cat((im_inds[:, None].float(), boxes), 1)\n        visual_rep = self.visual_rep(result.fmap, rois, rel_inds[:, 1:])\n\n        result.obj_fmap = self.obj_feature_map(result.fmap.detach(), rois)\n\n        # Now do the approximation WHEREVER THERE'S A VALID RELATIONSHIP.\n        result.rm_obj_dists, result.rel_dists = self.message_pass(\n            F.relu(self.edge_unary(visual_rep)), self.obj_unary(result.obj_fmap), rel_inds[:, 1:])\n\n        # result.box_deltas_update = box_deltas\n\n        if self.training:\n            return result\n\n        # Decode here ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n        if self.mode == 'predcls':\n            # Hack to get the GT object labels\n            result.obj_scores = result.rm_obj_dists.data.new(gt_classes.size(0)).fill_(1)\n            result.obj_preds = gt_classes.data[:, 1]\n        elif self.mode == 'sgdet':\n            
order, obj_scores, obj_preds= filter_det(F.softmax(result.rm_obj_dists),\n                                                              result.boxes_all,\n                                                              start_ind=0,\n                                                              max_per_img=100,\n                                                              thresh=0.00,\n                                                              pre_nms_topn=6000,\n                                                              post_nms_topn=300,\n                                                              nms_thresh=0.3,\n                                                              nms_filter_duplicates=True)\n            idx, perm = torch.sort(order)\n            result.obj_preds = rel_inds.new(result.rm_obj_dists.size(0)).fill_(1)\n            result.obj_scores = result.rm_obj_dists.data.new(result.rm_obj_dists.size(0)).fill_(0)\n            result.obj_scores[idx] = obj_scores.data[perm]\n            result.obj_preds[idx] = obj_preds.data[perm]\n        else:\n            scores_nz = F.softmax(result.rm_obj_dists).data\n            scores_nz[:, 0] = 0.0\n            result.obj_scores, score_ord = scores_nz[:, 1:].sort(dim=1, descending=True)\n            result.obj_preds = score_ord[:,0] + 1\n            result.obj_scores = result.obj_scores[:,0]\n\n        result.obj_preds = Variable(result.obj_preds)\n        result.obj_scores = Variable(result.obj_scores)\n\n        # Set result's bounding boxes to be size\n        # [num_boxes, topk, 4] instead of considering every single object assignment.\n        twod_inds = arange(result.obj_preds.data) * self.num_classes + result.obj_preds.data\n\n        if self.mode == 'sgdet':\n            bboxes = result.boxes_all.view(-1, 4)[twod_inds].view(result.boxes_all.size(0), 4)\n        else:\n            # Boxes will get fixed by filter_dets function.\n            bboxes = result.rm_box_priors\n        rel_rep = 
F.softmax(result.rel_dists)\n\n        return filter_dets(bboxes, result.obj_scores,\n                           result.obj_preds, rel_inds[:, 1:], rel_rep)\n\n"
  },
  {
    "path": "lib/resnet.py",
    "content": "import torch.nn as nn\nimport math\nimport torch.utils.model_zoo as model_zoo\nfrom torchvision.models.resnet import model_urls, conv3x3, BasicBlock\nfrom torchvision.models.vgg import vgg16\nfrom config import BATCHNORM_MOMENTUM\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None, relu_end=True):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes, momentum=BATCHNORM_MOMENTUM)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes, momentum=BATCHNORM_MOMENTUM)\n        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * 4, momentum=BATCHNORM_MOMENTUM)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n        self.relu_end = relu_end\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n\n        if self.relu_end:\n            out = self.relu(out)\n        return out\n\n\nclass ResNet(nn.Module):\n\n    def __init__(self, block, layers, num_classes=1000):\n        self.inplanes = 64\n        super(ResNet, self).__init__()\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64, momentum=BATCHNORM_MOMENTUM)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = 
nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(block, 512, layers[3], stride=1)  # HACK\n        self.avgpool = nn.AvgPool2d(7)\n        self.fc = nn.Linear(512 * block.expansion, num_classes)\n\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion, momentum=BATCHNORM_MOMENTUM),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n\n        x = self.avgpool(x)\n        x = x.view(x.size(0), -1)\n        x = self.fc(x)\n\n        return x\n\ndef resnet101(pretrained=False, **kwargs):\n    \"\"\"Constructs a ResNet-101 model.\n\n    Args:\n        pretrained (bool): If 
True, returns a model pre-trained on ImageNet\n    \"\"\"\n    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)\n    if pretrained:\n        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))\n    return model\n\ndef resnet_l123():\n    model = resnet101(pretrained=True)\n    del model.layer4\n    del model.avgpool\n    del model.fc\n    return model\n\ndef resnet_l4(relu_end=True):\n    model = resnet101(pretrained=True)\n    l4 = model.layer4\n    if not relu_end:\n        l4[-1].relu_end = False\n    l4[0].conv2.stride = (1, 1)\n    l4[0].downsample[0].stride = (1, 1)\n    return l4\n\ndef vgg_fc(relu_end=True, linear_end=True):\n    model = vgg16(pretrained=True)\n    vfc = model.classifier\n    del vfc._modules['6'] # Get rid of the final linear layer\n    del vfc._modules['5'] # Get rid of the dropout layer\n    if not relu_end:\n        del vfc._modules['4'] # Get rid of the ReLU\n        if not linear_end:\n            del vfc._modules['3'] # Get rid of the second linear layer\n    return vfc\n\n\n"
  },
  {
    "path": "lib/sparse_targets.py",
    "content": "from lib.word_vectors import obj_edge_vectors\nimport torch.nn as nn\nimport torch\nfrom torch.autograd import Variable\nimport numpy as np\nfrom config import DATA_PATH\nimport os\nfrom lib.get_dataset_counts import get_counts\n\n\nclass FrequencyBias(nn.Module):\n    \"\"\"\n    The goal of this is to provide a simplified way of computing\n    P(predicate | obj1, obj2, img).\n    \"\"\"\n\n    def __init__(self, eps=1e-3):\n        super(FrequencyBias, self).__init__()\n\n        fg_matrix, bg_matrix = get_counts(must_overlap=True)\n        bg_matrix += 1\n        fg_matrix[:, :, 0] = bg_matrix\n\n        pred_dist = np.log(fg_matrix / fg_matrix.sum(2)[:, :, None] + eps)\n\n        self.num_objs = pred_dist.shape[0]\n        pred_dist = torch.FloatTensor(pred_dist).view(-1, pred_dist.shape[2])\n\n        self.obj_baseline = nn.Embedding(pred_dist.size(0), pred_dist.size(1))\n        self.obj_baseline.weight.data = pred_dist\n\n    def index_with_labels(self, labels):\n        \"\"\"\n        :param labels: [batch_size, 2] object labels for (obj1, obj2)\n        :return: [batch_size, #predicates] log-frequency baseline for each pair\n        \"\"\"\n        return self.obj_baseline(labels[:, 0] * self.num_objs + labels[:, 1])\n\n    def forward(self, obj_cands0, obj_cands1):\n        \"\"\"\n        :param obj_cands0: [batch_size, 151] prob distribution over cands.\n        :param obj_cands1: [batch_size, 151] prob distribution over cands.\n        :return: [batch_size, #predicates] array, which contains potentials for\n        each possibility\n        \"\"\"\n        # [batch_size, 151, 151] repr of the joint distribution\n        joint_cands = obj_cands0[:, :, None] * obj_cands1[:, None]\n\n        # [batch_size, #predicates] baseline, marginalized over the joint distribution\n        baseline = joint_cands.view(joint_cands.size(0), -1) @ self.obj_baseline.weight\n\n        return baseline\n"
  },
  {
    "path": "lib/surgery.py",
    "content": "# create predictions from the other stuff\n\"\"\"\nGo from proposals + scores to relationships.\n\npred-cls: No bbox regression, obj dist is exactly known\nsg-cls : No bbox regression\nsg-det : Bbox regression\n\nin all cases we'll return:\nboxes, objs, rels, pred_scores\n\n\"\"\"\n\nimport numpy as np\nimport torch\nfrom lib.pytorch_misc import unravel_index\nfrom lib.fpn.box_utils import bbox_overlaps\n# from ad3 import factor_graph as fg\nfrom time import time\n\ndef filter_dets(boxes, obj_scores, obj_classes, rel_inds, pred_scores):\n    \"\"\"\n    Filters detections and sorts relationships by their overall score.\n    :param boxes: [num_box, 4] predicted boxes\n    :param obj_scores: [num_box] probabilities for the predicted classes\n    :param obj_classes: [num_box] predicted class labels\n    :param rel_inds: [num_rel, 2] TENSOR consisting of (box_ind0, box_ind1)\n    :param pred_scores: [num_rel, num_predicates] predicate score distributions\n    :return: boxes, objs, obj_scores, rels, pred_scores\n\n    \"\"\"\n    if boxes.dim() != 2:\n        raise ValueError(\"Boxes needs to be [num_box, 4] but it's {}\".format(boxes.size()))\n\n    num_box = boxes.size(0)\n    assert obj_scores.size(0) == num_box\n\n    assert obj_classes.size() == obj_scores.size()\n    num_rel = rel_inds.size(0)\n    assert rel_inds.size(1) == 2\n    assert pred_scores.size(0) == num_rel\n\n    obj_scores0 = obj_scores.data[rel_inds[:,0]]\n    obj_scores1 = obj_scores.data[rel_inds[:,1]]\n\n    pred_scores_max, pred_classes_argmax = pred_scores.data[:,1:].max(1)\n    pred_classes_argmax = pred_classes_argmax + 1\n\n    rel_scores_argmaxed = pred_scores_max * obj_scores0 * obj_scores1\n    rel_scores_vs, rel_scores_idx = torch.sort(rel_scores_argmaxed.view(-1), dim=0, descending=True)\n\n    rels = rel_inds[rel_scores_idx].cpu().numpy()\n    pred_scores_sorted = pred_scores[rel_scores_idx].data.cpu().numpy()\n    obj_scores_np = obj_scores.data.cpu().numpy()\n    objs_np = obj_classes.data.cpu().numpy()\n    boxes_out = boxes.data.cpu().numpy()\n\n    return boxes_out, objs_np, obj_scores_np, rels, pred_scores_sorted\n\n# def _get_similar_boxes(boxes, obj_classes_topk, nms_thresh=0.3):\n#     \"\"\"\n#     Assuming bg is NOT A LABEL.\n#     :param boxes: [num_box, topk, 4] if bbox regression else [num_box, 4]\n#     :param obj_classes: [num_box, topk] class labels\n#     :return: num_box, topk, num_box, topk array containing similarities.\n#     \"\"\"\n#     topk = obj_classes_topk.size(1)\n#     num_box = boxes.size(0)\n#\n#     box_flat = boxes.view(-1, 4) if boxes.dim() == 3 else boxes[:, None].expand(\n#         num_box, topk, 4).contiguous().view(-1, 4)\n#     jax = bbox_overlaps(box_flat, box_flat).data > nms_thresh\n#     # Filter out things that are not gonna compete.\n#     classes_eq = obj_classes_topk.data.view(-1)[:, None] == obj_classes_topk.data.view(-1)[None, :]\n#     jax &= classes_eq\n#     boxes_are_similar = jax.view(num_box, topk, num_box, topk)\n#     return boxes_are_similar.cpu().numpy().astype(np.bool)\n"
  },
  {
    "path": "lib/word_vectors.py",
    "content": "\"\"\"\nAdapted from PyTorch's text library.\n\"\"\"\n\nimport array\nimport os\nimport zipfile\n\nimport six\nimport torch\nfrom six.moves.urllib.request import urlretrieve\nfrom tqdm import tqdm\n\nfrom config import DATA_PATH\nimport sys\n\ndef obj_edge_vectors(names, wv_type='glove.6B', wv_dir=DATA_PATH, wv_dim=300):\n    wv_dict, wv_arr, wv_size = load_word_vectors(wv_dir, wv_type, wv_dim)\n\n    vectors = torch.Tensor(len(names), wv_dim)\n    vectors.normal_(0,1)\n\n    for i, token in enumerate(names):\n        wv_index = wv_dict.get(token, None)\n        if wv_index is not None:\n            vectors[i] = wv_arr[wv_index]\n        else:\n            # Try the longest word (hopefully won't be a preposition)\n            lw_token = sorted(token.split(' '), key=lambda x: len(x), reverse=True)[0]\n            print(\"{} -> {} \".format(token, lw_token))\n            wv_index = wv_dict.get(lw_token, None)\n            if wv_index is not None:\n                vectors[i] = wv_arr[wv_index]\n            else:\n                print(\"fail on {}\".format(token))\n\n    return vectors\n\nURL = {\n        'glove.42B': 'http://nlp.stanford.edu/data/glove.42B.300d.zip',\n        'glove.840B': 'http://nlp.stanford.edu/data/glove.840B.300d.zip',\n        'glove.twitter.27B': 'http://nlp.stanford.edu/data/glove.twitter.27B.zip',\n        'glove.6B': 'http://nlp.stanford.edu/data/glove.6B.zip',\n        }\n\n\ndef load_word_vectors(root, wv_type, dim):\n    \"\"\"Load word vectors from a path, trying .pt, .txt, and .zip extensions.\"\"\"\n    if isinstance(dim, int):\n        dim = str(dim) + 'd'\n    fname = os.path.join(root, wv_type + '.' 
+ dim)\n    if os.path.isfile(fname + '.pt'):\n        fname_pt = fname + '.pt'\n        print('loading word vectors from', fname_pt)\n        try:\n            return torch.load(fname_pt)\n        except Exception as e:\n            print(\"\"\"\n                Error loading the model from {}\n\n                This could be because this code was previously run with one\n                PyTorch version to generate cached data and is now being\n                run with another version.\n                You can try to delete the cached files on disk (this file\n                  and others) and re-run the code\n\n                Error message:\n                ---------\n                {}\n                \"\"\".format(fname_pt, str(e)))\n            sys.exit(-1)\n    if os.path.isfile(fname + '.txt'):\n        fname_txt = fname + '.txt'\n        cm = open(fname_txt, 'rb')\n        cm = [line for line in cm]\n    elif os.path.basename(wv_type) in URL:\n        url = URL[wv_type]\n        print('downloading word vectors from {}'.format(url))\n        filename = os.path.basename(fname)\n        if not os.path.exists(root):\n            os.makedirs(root)\n        with tqdm(unit='B', unit_scale=True, miniters=1, desc=filename) as t:\n            fname, _ = urlretrieve(url, fname, reporthook=reporthook(t))\n            with zipfile.ZipFile(fname, \"r\") as zf:\n                print('extracting word vectors into {}'.format(root))\n                zf.extractall(root)\n        if not os.path.isfile(fname + '.txt'):\n            raise RuntimeError('no word vectors of requested dimension found')\n        return load_word_vectors(root, wv_type, dim)\n    else:\n        raise RuntimeError('unable to load word vectors')\n\n    wv_tokens, wv_arr, wv_size = [], array.array('d'), None\n    if cm is not None:\n        for line in tqdm(range(len(cm)), desc=\"loading word vectors from {}\".format(fname_txt)):\n            entries = cm[line].strip().split(b' ')\n            word, entries = entries[0], entries[1:]\n            if wv_size is None:\n                wv_size = len(entries)\n            try:\n                if isinstance(word, six.binary_type):\n                    word = word.decode('utf-8')\n            except UnicodeDecodeError:\n                print('non-UTF8 token', repr(word), 'ignored')\n                continue\n            wv_arr.extend(float(x) for x in entries)\n            wv_tokens.append(word)\n\n    wv_dict = {word: i for i, word in enumerate(wv_tokens)}\n    wv_arr = torch.Tensor(wv_arr).view(-1, wv_size)\n    ret = (wv_dict, wv_arr, wv_size)\n    torch.save(ret, fname + '.pt')\n    return ret\n\ndef reporthook(t):\n    \"\"\"https://github.com/tqdm/tqdm\"\"\"\n    last_b = [0]\n\n    def inner(b=1, bsize=1, tsize=None):\n        \"\"\"\n        b: int, optional\n        Number of blocks just transferred [default: 1].\n        bsize: int, optional\n        Size of each block (in tqdm units) [default: 1].\n        tsize: int, optional\n        Total size (in tqdm units). If [default: None] remains unchanged.\n        \"\"\"\n        if tsize is not None:\n            t.total = tsize\n        t.update((b - last_b[0]) * bsize)\n        last_b[0] = b\n    return inner\n"
  },
  {
    "path": "misc/__init__.py",
    "content": ""
  },
  {
    "path": "misc/motifs.py",
    "content": "\"\"\"\nSCRIPT TO MAKE MEMES. this was from an old version of the code, so it might require some fixes to get working.\n\n\"\"\"\nfrom dataloaders.visual_genome import VG\n# import matplotlib\n# # matplotlib.use('Agg')\nfrom tqdm import tqdm\nimport seaborn as sns\nimport numpy as np\nfrom lib.fpn.box_intersections_cpu.bbox import bbox_overlaps\nfrom collections import defaultdict\ntrain, val, test = VG.splits(filter_non_overlap=False, num_val_im=2000)\n\ncount_threshold = 50\npmi_threshold = 10\n\no_type = []\nf = open(\"object_types.txt\")\nfor line in f.readlines():\n  tabs = line.strip().split(\"\\t\")\n  t = tabs[1].split(\"_\")[0]\n  o_type.append(t)\n\nr_type = []\nf = open(\"relation_types.txt\")\nfor line in f.readlines():\n  tabs = line.strip().split(\"\\t\")\n  t = tabs[1].split(\"_\")[0]\n  r_type.append(t) \n\nmax_id = 0\n\nmemes_id_id = {}\n\nmemes_id = {}\nid_memes = {}\n\nid_key = {}\nkey_id = {}\n#go through and assign keys\ndataset = []\nfor i in range(0, len(train)):\n  item = []\n  _r = train.relationships[i]\n  _o = train.gt_classes[i]\n  for j in range(0, len(_r)):\n    h = _o[_r[j][0]]\n    t = _o[_r[j][1]]\n    e = _r[j][2]\n    key1 = (h,e,t)\n    if key1 not in key_id: \n      id_key[max_id] = key1\n      key_id[key1] = max_id\n      max_id += 1\n    item.append(key_id[key1])\n  dataset.append(item)\n\ncids = train.ind_to_classes\nrids = train.ind_to_predicates\nall_memes = []\n\ndef id_to_str(_id):\n  key = id_key[_id]\n  if len(key) == 2:\n    pair = key\n    l1, s1 = id_to_str(pair[0])\n    l2, s2 = id_to_str(pair[1])\n    return (l1 + l2, s1 + \" & \" + s2)\n  else:\n    return (1,\"{}--{}-->{}\".format(cids[key[0]], rids[key[1]], cids[key[2]]))\n\nnew_meme_score = {}\nfor p in range(0,25):\n  print(\"iteration : {}\".format(p)) \n  unigrams = defaultdict(float)\n  bigrams = defaultdict(float) \n  unigrams_ori = defaultdict(float)\n  T = 0\n  T2 = 0\n  for i in range(0, len(dataset)):\n    item = dataset[i]\n    for j 
in range(0, len(item)):\n      key1 = item[j] \n      unigrams_ori[key1] += 1\n      #T += 1\n      for j2 in range(j+1 , len(item)):\n        key2 = item[j2]\n        if key1 > key2 : jkey = (key1, key2)\n        else: jkey = (key2, key1)\n        unigrams[key1] += 1\n        unigrams[key2] += 1\n        bigrams[jkey] += 1\n        T2 += 1\n  \n  pmi = []\n  for (jkey,val) in bigrams.items():\n    pval = (val / T2) / ( (unigrams[jkey[0]]/ T2) * (unigrams[jkey[1]] / T2 )) \n    #print(\"{} {} {}\".format(jkey, val, pval))\n    if val > count_threshold and unigrams_ori[jkey[0]] > count_threshold and unigrams_ori[jkey[1]] > count_threshold and pval > pmi_threshold : \n      pmi.append( (pval , jkey, val) )\n  #    new_memes.add(jkey)\n\n  new_memes = set()\n  pmi = sorted(pmi, key = lambda x: -x[0])\n  new_meme_c = set()\n  for (v,k, f) in pmi:\n    #if k[0] in all_memes and k[1] in all_memes: continue \n    #if len( new_memes) > 1000: break\n    if k[0] in new_meme_c or k[1] in new_meme_c: continue\n    new_meme_c.add(k[0])\n    new_meme_c.add(k[1])\n    print(\"{} & {} \\t {} \\t {} \\t {} \\t {}\".format(id_to_str(k[0]), id_to_str(k[1]), v, unigrams[k[0]], unigrams[k[1]], bigrams[k]))\n    new_memes.add(k)\n  #assign new ids to the memes\n    new_meme_score[k] = v \n    #break\n  for meme in new_memes:\n    if meme in key_id: continue\n    all_memes.append(max_id)\n    id_key[max_id] = meme\n    key_id[meme] = max_id\n    max_id+=1\n  print(\"{} memes discovered \".format(len(new_memes)))\n  #go through and adjust the dataset\n  new_dataset = []\n  eliminated = 0\n  for i in range(0,len(dataset)):\n    item_save = dataset[i]\n    item = item_save\n    new_item = []\n    #merges = {}\n    while True:\n     best = None\n     best_score = 0\n     for j in range(0, len(item)):\n      key1 = item[j]\n      for j2 in range(j+1 , len(item)):\n        key2 = item[j2]\n        if key1 > key2 : jkey = (key1, key2)\n        else: jkey = (key2, key1)\n        if jkey in new_meme_score and new_meme_score[jkey] > best_score: \n          best = (j, j2) \n          best_score = new_meme_score[jkey]\n        #if jkey in key_id and j not in merges and j2 not in merges: \n        #  merges[j] = j2\n        #  merges[j2] = j\n     if best is not None:\n      for j in range(0, len(item)):\n        if j == best[0]: \n          key1 = item[j]\n          key2 = item[best[1]]\n          if key1 > key2 : jkey = (key1, key2)\n          else: jkey = (key2, key1)\n          new_item.append(key_id[jkey]) \n        elif j == best[1]: continue\n        else: new_item.append(item[j])\n      #break\n      item = new_item\n      new_item = [] \n     else:\n      #print(\"done\")\n      new_item = item \n      break\n    #for j in range(0, len(item)):\n    #  if j not in merges: new_item.append(item[j])\n    #  elif j < merges[j]: \n    #    key1 = item[j]\n    #    key2 = item[merges[j]]\n    #   if key1 > key2 : jkey = (key1, key2)\n    #    else: jkey = (key2, key1)\n    #    new_item.append(key_id[jkey])  \n    eliminated += len(item_save) - len(new_item)\n    new_dataset.append(new_item)\n  print(\"{} total eliminated\".format(eliminated))\n  dataset = new_dataset\n\nmeme_freq = defaultdict(float)\n\ndef increment_recursive(i):\n  #meme = id_key[i]\n  if i in all_memes:\n    meme_freq[i] += 1\n    key1 = id_key[i][0]\n    key2 = id_key[i][1]\n    increment_recursive(key1)\n    increment_recursive(key2)\n\ndef meme_length(i):\n  if i in all_memes:\n    return meme_length(id_key[i][0]) + meme_length(id_key[i][1])\n  else: \n    return 1\n\n#compute statistics of memes\nfor i in range(0,len(dataset)):\n  item = dataset[i]\n  for j in range(0, len(item)):\n    increment_recursive(item[j])\n  \nfor meme in all_memes:\n  print(\"{} {}\".format( id_to_str(meme), meme_freq[meme]))\n\nT = 0 \nT2 = 0\nn_images = defaultdict(float)\nn_edges = defaultdict(float)\nfor item in dataset:\n  meme_lengths = []\n  for j in range(0, len(item)):\n    meme_lengths.append(meme_length(item[j]))\n  n_images[max(meme_lengths)] += 1\n  #for l in meme_lengths: n_images[l] +=1\n  T += 1\n\nfor item in dataset:\n  for j in range(0, len(item)):\n    l = meme_length(item[j])\n    n_edges[l] += l\n    T2 += l\n\nfor (k,v) in n_images.items():\n  print(\"{} {}\".format(k, v/T))\nprint(\"---\")\nfor (k,v) in n_edges.items():\n  print(\"{} {}\".format(k, v/T2))\n"
  },
  {
    "path": "misc/object_types.txt",
    "content": "__background__\tbackground\nairplane\tvehicle\nanimal\tanimal\narm\tpart_animal_person\nbag\tclothes\nbanana\tfood\nbasket\tartifact\nbeach\tlocation\nbear\tanimal\nbed\tfurniture\nbench\tfurniture\nbike\tvehicle\nbird\tanimal\nboard\tartifact\nboat\tvehicle\nbook\tartifact\nboot\tclothes\nbottle\tartifact\nbowl\tartifact\nbox\tartifact\nboy\tperson\nbranch\tpart_flora\nbuilding\tbuilding\nbus\tvehicle\ncabinet\tfurniture\ncap\tclothes\ncar\tvehicle\ncat\tanimal\nchair\tfurniture\nchild\tperson\nclock\tartifact\ncoat\tclothes\ncounter\tfurniture\ncow\tanimal\ncup\tartifact\ncurtain\tartifact\ndesk\tfurniture\ndog\tanimal\ndoor\tpart_building\ndrawer\tpart_table\near\tpart_animal_person\nelephant\tanimal\nengine\tpart_vehicle\neye\tpart_animal_person\nface\tpart_animal_person\nfence\tstructure\nfinger\tpart_animal_person\nflag\tartifact\nflower\tflora\nfood\tfood\nfork\tartifact\nfruit\tfood\ngiraffe\tanimal\ngirl\tperson\nglass\tartifact\nglove\tclothes\nguy\tperson\nhair\tpart_animal_person\nhand\tpart_animal_person\nhandle\tpart_door\nhat\tclothes\nhead\tpart_animal_person\nhelmet\tclothes\nhill\tlocation\nhorse\tanimal\nhouse\tbuilding\njacket\tclothes\njean\tclothes\nkid\tperson\nkite\tartifact\nlady\tperson\nlamp\tartifact\nlaptop\tartifact\nleaf\tpart_flora\nleg\tpart_animal_person\nletter\tartifact\nlight\tartifact\nlogo\tpart_clothes\nman\tperson\nmen\tperson\nmotorcycle\tvehicle\nmountain\tlocation\nmouth\tpart_animal_person\nneck\tpart_animal_person\nnose\tpart_animal_person\nnumber\tartifact\norange\tfood\npant\tclothes\npaper\tartifact\npaw\tpart_animal\npeople\tperson\nperson\tperson\nphone\tartifact\npillow\tartifact\npizza\tfood\nplane\tvehicle\nplant\tflora\nplate\tartifact\nplayer\tperson\npole\tartifact\npost\tstructure\npot\tartifact\nracket\tartifact\nrailing\tfurniture\nrock\tlocation\nroof\tpart_building\nroom\tlocation\nscreen\tpart_laptop_phone\nseat\tpart_vehicle\nsheep\tanimal\nshelf\tartifact\nshirt\tclothes\nshoe\tclothes\nshort\tclothes\nsidewalk\tlocation\nsign\tstructure\nsink\tfurniture\nskateboard\tvehicle\nski\tartifact\nskier\tperson\nsneaker\tclothes\nsnow\tlocation\nsock\tclothes\nstand\tartifact\nstreet\tlocation\nsurfboard\tvehicle\ntable\tfurniture\ntail\tpart_animal\ntie\tclothes\ntile\tpart_building\ntire\tpart_vehicle\ntoilet\tartifact\ntowel\tartifact\ntower\tpart_building\ntrack\tartifact\ntrain\tvehicle\ntree\tflora\ntruck\tvehicle\ntrunk\tpart_flora\numbrella\tartifact\nvase\tartifact\nvegetable\tfood\nvehicle\tvehicle\nwave\tlocation\nwheel\tpart_vehicle\nwindow\tpart_building\nwindshield\tpart_vehicle\nwing\tpart_animal\nwire\tartifact\nwoman\tperson\nzebra\tanimal\n"
  },
  {
    "path": "misc/relation_types.txt",
    "content": "__background__\tbackground\nabove\tgeometric\nacross\tgeometric\nagainst\tgeometric\nalong\tgeometric\nand\tgeometric\nat\tgeometric\nattached to\tsemantic\nbehind\tgeometric\nbelonging to\tpossession\nbetween\tgeometric\ncarrying\tsemantic\ncovered in\tsemantic\ncovering\tsemantic\neating\tsemantic\nflying in\tsemantic\nfor\tmisc\nfrom\tmisc\ngrowing on\tsemantic\nhanging from\tsemantic\nhas\tpossession\nholding\tlight_semantic_possession\nin\tgeometric\nin front of\tgeometric\nlaying on\tsemantic\nlooking at\tsemantic\nlying on\tsemantic\nmade of\tmisc\nmounted on\tsemantic\nnear\tgeometric\nof\tpossession\non\tgeometric\non back of\tgeometric\nover\tgeometric\npainted on\tsemantic\nparked on\tsemantic\npart of\tpossession\nplaying\tsemantic\nriding\tsemantic\nsays\tsemantic\nsitting on\tsemantic\nstanding on\tsemantic\nto\tpossession\nunder\tgeometric\nusing\tlight_semantic_possession\nwalking in\tsemantic\nwalking on\tsemantic\nwatching\tsemantic\nwearing\twearing\nwears\twearing\nwith\tpossession\n"
  },
  {
    "path": "models/_visualize.py",
    "content": "\"\"\"\nVisualization script. I used this to create the figures in the paper.\n\nWARNING: I haven't tested this in a while. It's possible that some later features I added break things here, but hopefully there should be easy fixes. I'm uploading this in the off chance it might help someone. If you get it to work, let me know (and also send a PR with bugs/etc)\n\"\"\"\n\nfrom dataloaders.visual_genome import VGDataLoader, VG\nfrom lib.rel_model import RelModel\nimport numpy as np\nimport torch\n\nfrom config import ModelConfig\nfrom lib.pytorch_misc import optimistic_restore\nfrom lib.evaluation.sg_eval import BasicSceneGraphEvaluator\nfrom tqdm import tqdm\nfrom config import BOX_SCALE, IM_SCALE\nfrom lib.fpn.box_utils import bbox_overlaps\nfrom collections import defaultdict\nfrom PIL import Image, ImageDraw, ImageFont\nimport os\nfrom functools import reduce\n\nconf = ModelConfig()\ntrain, val, test = VG.splits(num_val_im=conf.val_size)\nif conf.test:\n    val = test\n\ntrain_loader, val_loader = VGDataLoader.splits(train, val, mode='rel',\n                                               batch_size=conf.batch_size,\n                                               num_workers=conf.num_workers,\n                                               num_gpus=conf.num_gpus)\n\ndetector = RelModel(classes=train.ind_to_classes, rel_classes=train.ind_to_predicates,\n                    num_gpus=conf.num_gpus, mode=conf.mode, require_overlap_det=True,\n                    use_resnet=conf.use_resnet, order=conf.order,\n                    nl_edge=conf.nl_edge, nl_obj=conf.nl_obj, hidden_dim=conf.hidden_dim,\n                    use_proposals=conf.use_proposals,\n                    pass_in_obj_feats_to_decoder=conf.pass_in_obj_feats_to_decoder,\n                    pass_in_obj_feats_to_edge=conf.pass_in_obj_feats_to_edge,\n                    pooling_dim=conf.pooling_dim,\n                    rec_dropout=conf.rec_dropout,\n                    
use_bias=conf.use_bias,\n                    use_tanh=conf.use_tanh,\n                    limit_vision=conf.limit_vision\n                    )\ndetector.cuda()\nckpt = torch.load(conf.ckpt)\n\noptimistic_restore(detector, ckpt['state_dict'])\n\n\n############################################ HELPER FUNCTIONS ###################################\n\ndef get_cmap(N):\n    import matplotlib.cm as cmx\n    import matplotlib.colors as colors\n    \"\"\"Returns a function that maps each index in 0, 1, ... N-1 to a distinct RGB color.\"\"\"\n    color_norm = colors.Normalize(vmin=0, vmax=N - 1)\n    scalar_map = cmx.ScalarMappable(norm=color_norm, cmap='hsv')\n\n    def map_index_to_rgb_color(index):\n        pad = 40\n        return np.round(np.array(scalar_map.to_rgba(index)) * (255 - pad) + pad)\n\n    return map_index_to_rgb_color\n\n\ncmap = get_cmap(len(train.ind_to_classes) + 1)\n\n\ndef load_unscaled(fn):\n    \"\"\" Loads and scales images so that it's 1024 max-dimension\"\"\"\n    image_unpadded = Image.open(fn).convert('RGB')\n    im_scale = 1024.0 / max(image_unpadded.size)\n\n    image = image_unpadded.resize((int(im_scale * image_unpadded.size[0]), int(im_scale * image_unpadded.size[1])),\n                                  resample=Image.BICUBIC)\n    return image\n\n\nfont = ImageFont.truetype('/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf', 32)\n\n\ndef draw_box(draw, boxx, cls_ind, text_str):\n    box = tuple([float(b) for b in boxx])\n    if '-GT' in text_str:\n        color = (255, 128, 0, 255)\n    else:\n        color = (0, 128, 0, 255)\n\n    # color = tuple([int(x) for x in cmap(cls_ind)])\n\n    # draw the fucking box\n    draw.line([(box[0], box[1]), (box[2], box[1])], fill=color, width=8)\n    draw.line([(box[2], box[1]), (box[2], box[3])], fill=color, width=8)\n    draw.line([(box[2], box[3]), (box[0], box[3])], fill=color, width=8)\n    draw.line([(box[0], box[3]), (box[0], box[1])], fill=color, width=8)\n\n    # draw.rectangle(box, 
outline=color)\n    w, h = draw.textsize(text_str, font=font)\n\n    x1text = box[0]\n    y1text = max(box[1] - h, 0)\n    x2text = min(x1text + w, draw.im.size[0])\n    y2text = y1text + h\n    print(\"drawing {}x{} rectangle at {:.1f} {:.1f} {:.1f} {:.1f}\".format(\n        h, w, x1text, y1text, x2text, y2text))\n\n    draw.rectangle((x1text, y1text, x2text, y2text), fill=color)\n    draw.text((x1text, y1text), text_str, fill='black', font=font)\n    return draw\n\n\ndef val_epoch():\n    detector.eval()\n    evaluator = BasicSceneGraphEvaluator.all_modes()\n    for val_b, batch in enumerate(tqdm(val_loader)):\n        val_batch(conf.num_gpus * val_b, batch, evaluator)\n\n    evaluator[conf.mode].print_stats()\n\n\ndef val_batch(batch_num, b, evaluator, thrs=(20, 50, 100)):\n    det_res = detector[b]\n    # if conf.num_gpus == 1:\n    #     det_res = [det_res]\n    assert conf.num_gpus == 1\n    boxes_i, objs_i, obj_scores_i, rels_i, pred_scores_i = det_res\n\n    gt_entry = {\n        'gt_classes': val.gt_classes[batch_num].copy(),\n        'gt_relations': val.relationships[batch_num].copy(),\n        'gt_boxes': val.gt_boxes[batch_num].copy(),\n    }\n    # gt_entry = {'gt_classes': gtc[i], 'gt_relations': gtr[i], 'gt_boxes': gtb[i]}\n    assert np.all(objs_i[rels_i[:, 0]] > 0) and np.all(objs_i[rels_i[:, 1]] > 0)\n    # assert np.all(rels_i[:, 2] > 0)\n\n    pred_entry = {\n        'pred_boxes': boxes_i * BOX_SCALE / IM_SCALE,\n        'pred_classes': objs_i,\n        'pred_rel_inds': rels_i,\n        'obj_scores': obj_scores_i,\n        'rel_scores': pred_scores_i,\n    }\n    pred_to_gt, pred_5ples, rel_scores = evaluator[conf.mode].evaluate_scene_graph_entry(\n        gt_entry,\n        pred_entry,\n    )\n\n    # SET RECALL THRESHOLD HERE\n    pred_to_gt = pred_to_gt[:20]\n    pred_5ples = pred_5ples[:20]\n\n    # Get a list of objects that match, and GT objects that dont\n    objs_match = (bbox_overlaps(pred_entry['pred_boxes'], gt_entry['gt_boxes']) >= 
0.5) & (\n            objs_i[:, None] == gt_entry['gt_classes'][None]\n    )\n    objs_matched = objs_match.any(1)\n\n    has_seen = defaultdict(int)\n    has_seen_gt = defaultdict(int)\n    pred_ind2name = {}\n    gt_ind2name = {}\n    edges = {}\n    missededges = {}\n    badedges = {}\n\n    if val.filenames[batch_num].startswith('2343676'):\n        import ipdb\n        ipdb.set_trace()\n\n    def query_pred(pred_ind):\n        if pred_ind not in pred_ind2name:\n            has_seen[objs_i[pred_ind]] += 1\n            pred_ind2name[pred_ind] = '{}-{}'.format(train.ind_to_classes[objs_i[pred_ind]],\n                                                     has_seen[objs_i[pred_ind]])\n        return pred_ind2name[pred_ind]\n\n    def query_gt(gt_ind):\n        gt_cls = gt_entry['gt_classes'][gt_ind]\n        if gt_ind not in gt_ind2name:\n            has_seen_gt[gt_cls] += 1\n            gt_ind2name[gt_ind] = '{}-GT{}'.format(train.ind_to_classes[gt_cls], has_seen_gt[gt_cls])\n        return gt_ind2name[gt_ind]\n\n    matching_pred5ples = pred_5ples[np.array([len(x) > 0 for x in pred_to_gt])]\n    for fiveple in matching_pred5ples:\n        head_name = query_pred(fiveple[0])\n        tail_name = query_pred(fiveple[1])\n\n        edges[(head_name, tail_name)] = train.ind_to_predicates[fiveple[4]]\n\n    gt_5ples = np.column_stack((gt_entry['gt_relations'][:, :2],\n                                gt_entry['gt_classes'][gt_entry['gt_relations'][:, 0]],\n                                gt_entry['gt_classes'][gt_entry['gt_relations'][:, 1]],\n                                gt_entry['gt_relations'][:, 2],\n                                ))\n    has_match = reduce(np.union1d, pred_to_gt)\n    for gt in gt_5ples[np.setdiff1d(np.arange(gt_5ples.shape[0]), has_match)]:\n        # Head and tail\n        namez = []\n        for i in range(2):\n            matching_obj = np.where(objs_match[:, gt[i]])[0]\n            if matching_obj.size > 0:\n                name = 
query_pred(matching_obj[0])\n            else:\n                name = query_gt(gt[i])\n            namez.append(name)\n\n        missededges[tuple(namez)] = train.ind_to_predicates[gt[4]]\n\n    for fiveple in pred_5ples[np.setdiff1d(np.arange(pred_5ples.shape[0]), matching_pred5ples)]:\n\n        if fiveple[0] in pred_ind2name:\n            if fiveple[1] in pred_ind2name:\n                badedges[(pred_ind2name[fiveple[0]], pred_ind2name[fiveple[1]])] = train.ind_to_predicates[fiveple[4]]\n\n    theimg = load_unscaled(val.filenames[batch_num])\n    theimg2 = theimg.copy()\n    draw2 = ImageDraw.Draw(theimg2)\n\n    # Fix the names\n\n    for pred_ind in pred_ind2name.keys():\n        draw2 = draw_box(draw2, pred_entry['pred_boxes'][pred_ind],\n                         cls_ind=objs_i[pred_ind],\n                         text_str=pred_ind2name[pred_ind])\n    for gt_ind in gt_ind2name.keys():\n        draw2 = draw_box(draw2, gt_entry['gt_boxes'][gt_ind],\n                         cls_ind=gt_entry['gt_classes'][gt_ind],\n                         text_str=gt_ind2name[gt_ind])\n\n    recall = int(100 * len(reduce(np.union1d, pred_to_gt)) / gt_entry['gt_relations'].shape[0])\n\n    id = '{}-{}'.format(val.filenames[batch_num].split('/')[-1][:-4], recall)\n    pathname = os.path.join('qualitative', id)\n    if not os.path.exists(pathname):\n        os.mkdir(pathname)\n    theimg.save(os.path.join(pathname, 'img.jpg'), quality=100, subsampling=0)\n    theimg2.save(os.path.join(pathname, 'imgbox.jpg'), quality=100, subsampling=0)\n\n    with open(os.path.join(pathname, 'shit.txt'), 'w') as f:\n        f.write('good:\\n')\n        for (o1, o2), p in edges.items():\n            f.write('{} - {} - {}\\n'.format(o1, p, o2))\n        f.write('fn:\\n')\n        for (o1, o2), p in missededges.items():\n            f.write('{} - {} - {}\\n'.format(o1, p, o2))\n        f.write('shit:\\n')\n        for (o1, o2), p in badedges.items():\n            f.write('{} - {} - 
{}\\n'.format(o1, p, o2))\n\n\nval_epoch()  # prints evaluator stats; no return value\n"
  },
  {
    "path": "models/eval_rel_count.py",
    "content": "\"\"\"\nBaseline model that works by simply iterating through the training set to make a dictionary.\n\nAlso, caches this (we can use this for training).\n\nThe model is quite simple, so we don't use the base train/test code\n\n\"\"\"\nfrom dataloaders.visual_genome import VGDataLoader, VG\nfrom lib.object_detector import ObjectDetector\nimport numpy as np\nimport torch\nimport os\nfrom lib.get_dataset_counts import get_counts, box_filter\n\nfrom config import ModelConfig, FG_FRACTION, RPN_FG_FRACTION, DATA_PATH, BOX_SCALE, IM_SCALE, PROPOSAL_FN\nimport torch.backends.cudnn as cudnn\nfrom lib.pytorch_misc import optimistic_restore, nonintersecting_2d_inds\nfrom lib.evaluation.sg_eval import BasicSceneGraphEvaluator\nfrom tqdm import tqdm\nfrom copy import deepcopy\nimport dill as pkl\n\ncudnn.benchmark = True\nconf = ModelConfig()\n\nMUST_OVERLAP=False\ntrain, val, test = VG.splits(num_val_im=conf.val_size, filter_non_overlap=MUST_OVERLAP,\n                             filter_duplicate_rels=True,\n                             use_proposals=conf.use_proposals)\nif conf.test:\n    print(\"test data!\")\n    val = test\ntrain_loader, val_loader = VGDataLoader.splits(train, val, mode='rel',\n                                               batch_size=conf.batch_size,\n                                               num_workers=conf.num_workers,\n                                               num_gpus=conf.num_gpus)\n\nfg_matrix, bg_matrix = get_counts(train_data=train, must_overlap=MUST_OVERLAP)\n\ndetector = ObjectDetector(classes=train.ind_to_classes, num_gpus=conf.num_gpus,\n                          mode='rpntrain' if not conf.use_proposals else 'proposals', use_resnet=conf.use_resnet,\n                          nms_filter_duplicates=True, thresh=0.01)\ndetector.eval()\ndetector.cuda()\n\nclassifier = ObjectDetector(classes=train.ind_to_classes, num_gpus=conf.num_gpus,\n                            mode='gtbox', use_resnet=conf.use_resnet,\n             
               nms_filter_duplicates=True, thresh=0.01)\nclassifier.eval()\nclassifier.cuda()\n\nckpt = torch.load(conf.ckpt)\nmismatch = optimistic_restore(detector, ckpt['state_dict'])\nmismatch = optimistic_restore(classifier, ckpt['state_dict'])\n\nMOST_COMMON_MODE = True\n\nif MOST_COMMON_MODE:\n    prob_matrix = fg_matrix.astype(np.float32)\n    prob_matrix[:,:,0] = bg_matrix\n\n    # TRYING SOMETHING NEW.\n    prob_matrix[:,:,0] += 1\n    prob_matrix /= np.sum(prob_matrix, 2)[:,:,None]\n    # prob_matrix /= float(fg_matrix.max())\n\n    np.save(os.path.join(DATA_PATH, 'pred_stats.npy'), prob_matrix)\n    prob_matrix[:,:,0] = 0 # Zero out BG\nelse:\n    prob_matrix = fg_matrix.astype(np.float64)\n    prob_matrix = prob_matrix / prob_matrix.max(2)[:,:,None]\n    np.save(os.path.join(DATA_PATH, 'pred_dist.npy'), prob_matrix)\n\n# It's test time!\ndef predict(boxes, classes):\n    relation_possibilities_ = np.array(box_filter(boxes, must_overlap=MUST_OVERLAP), dtype=int)\n    full_preds = np.zeros((boxes.shape[0], boxes.shape[0], train.num_predicates))\n    for o1, o2 in relation_possibilities_:\n        c1, c2 = classes[[o1, o2]]\n        full_preds[o1, o2] = prob_matrix[c1, c2]\n\n    full_preds[:,:,0] = 0.0 # Zero out BG.\n    return full_preds\n\n# ##########################################################################################\n# ##########################################################################################\n\n# For visualizing / exploring\n\nc_to_ind = {c: i for i, c in enumerate(train.ind_to_classes)}\ndef gimme_the_dist(c1name, c2name):\n    c1 = c_to_ind[c1name]\n    c2 = c_to_ind[c2name]\n    dist = prob_matrix[c1, c2]\n    argz = np.argsort(-dist)\n    for i, a in enumerate(argz):\n        if dist[a] > 0.0:\n            print(\"{:3d}: {:10s} ({:.4f})\".format(i, train.ind_to_predicates[a], dist[a]))\n\ncounts = np.zeros((train.num_classes, train.num_classes, train.num_predicates), dtype=np.int64)\nfor ex_ind in 
tqdm(range(len(val))):\n    gt_relations = val.relationships[ex_ind].copy()\n    gt_classes = val.gt_classes[ex_ind].copy()\n    o1o2 = gt_classes[gt_relations[:, :2]].tolist()\n    for (o1, o2), pred in zip(o1o2, gt_relations[:, 2]):\n        counts[o1, o2, pred] += 1\n\nzeroshot_case = counts[np.where(prob_matrix == 0)].sum() / float(counts.sum())\n\nmax_inds = prob_matrix.argmax(2).ravel()\nmax_counts = counts.reshape(-1, 51)[np.arange(max_inds.shape[0]), max_inds]\n\nmost_freq_port = max_counts.sum()/float(counts.sum())\n\n\nprint(\" Rel acc={:.2f}%, {:.2f}% zsl\".format(\n    most_freq_port*100, zeroshot_case*100))\n\n# ##########################################################################################\n# ##########################################################################################\nT = len(val)\nevaluator = BasicSceneGraphEvaluator.all_modes(multiple_preds=conf.multi_pred)\n\n # First do detection results\nimg_offset = 0\nall_pred_entries = {'sgdet':[], 'sgcls':[], 'predcls':[]}\nfor val_b, b in enumerate(tqdm(val_loader)):\n\n    det_result = detector[b]\n\n    img_ids = b.gt_classes_primary.data.cpu().numpy()[:,0]\n    scores_np = det_result.obj_scores.data.cpu().numpy()\n    cls_preds_np = det_result.obj_preds.data.cpu().numpy()\n    boxes_np = det_result.boxes_assigned.data.cpu().numpy()* BOX_SCALE/IM_SCALE\n    # boxpriors_np = det_result.box_priors.data.cpu().numpy()\n    im_inds_np = det_result.im_inds.data.cpu().numpy() + img_offset\n\n    for img_i in np.unique(img_ids + img_offset):\n        gt_entry = {\n            'gt_classes': val.gt_classes[img_i].copy(),\n            'gt_relations': val.relationships[img_i].copy(),\n            'gt_boxes': val.gt_boxes[img_i].copy(),\n        }\n\n        pred_boxes = boxes_np[im_inds_np == img_i]\n        pred_classes = cls_preds_np[im_inds_np == img_i]\n        obj_scores = scores_np[im_inds_np == img_i]\n\n        all_rels = nonintersecting_2d_inds(pred_boxes.shape[0])\n        fp = 
predict(pred_boxes, pred_classes)\n        fp_pred = fp[all_rels[:,0], all_rels[:,1]]\n\n        scores = np.column_stack((\n            obj_scores[all_rels[:,0]],\n            obj_scores[all_rels[:,1]],\n            fp_pred.max(1)\n        )).prod(1)\n        sorted_inds = np.argsort(-scores)\n        sorted_inds = sorted_inds[scores[sorted_inds] > 0] #[:100]\n        pred_entry = {\n            'pred_boxes': pred_boxes,\n            'pred_classes': pred_classes,\n            'obj_scores': obj_scores,\n            'pred_rel_inds': all_rels[sorted_inds],\n            'rel_scores': fp_pred[sorted_inds],\n        }\n        all_pred_entries['sgdet'].append(pred_entry)\n        evaluator['sgdet'].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n    img_offset += img_ids.max() + 1\nevaluator['sgdet'].print_stats()\n\n# -----------------------------------------------------------------------------------------\n# EVAL CLS AND SG\n\nimg_offset = 0\nfor val_b, b in enumerate(tqdm(val_loader)):\n\n    det_result = classifier[b]\n    scores, cls_preds = det_result.rm_obj_dists[:,1:].data.max(1)\n    scores_np = scores.cpu().numpy()\n    cls_preds_np = (cls_preds+1).cpu().numpy()\n\n    img_ids = b.gt_classes_primary.data.cpu().numpy()[:,0]\n    boxes_np = b.gt_boxes_primary.data.cpu().numpy()\n    im_inds_np = det_result.im_inds.data.cpu().numpy() + img_offset\n\n    for img_i in np.unique(img_ids + img_offset):\n        gt_entry = {\n            'gt_classes': val.gt_classes[img_i].copy(),\n            'gt_relations': val.relationships[img_i].copy(),\n            'gt_boxes': val.gt_boxes[img_i].copy(),\n        }\n\n        pred_boxes = boxes_np[im_inds_np == img_i]\n        pred_classes = cls_preds_np[im_inds_np == img_i]\n        obj_scores = scores_np[im_inds_np == img_i]\n\n        all_rels = nonintersecting_2d_inds(pred_boxes.shape[0])\n        fp = predict(pred_boxes, pred_classes)\n        fp_pred = fp[all_rels[:,0], 
all_rels[:,1]]\n\n        sg_cls_scores = np.column_stack((\n            obj_scores[all_rels[:,0]],\n            obj_scores[all_rels[:,1]],\n            fp_pred.max(1)\n        )).prod(1)\n        sg_cls_inds = np.argsort(-sg_cls_scores)\n        sg_cls_inds = sg_cls_inds[sg_cls_scores[sg_cls_inds] > 0] #[:100]\n\n        pred_entry = {\n            'pred_boxes': pred_boxes,\n            'pred_classes': pred_classes,\n            'obj_scores': obj_scores,\n            'pred_rel_inds': all_rels[sg_cls_inds],\n            'rel_scores': fp_pred[sg_cls_inds],\n        }\n        all_pred_entries['sgcls'].append(deepcopy(pred_entry))\n        evaluator['sgcls'].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n\n        ########################################################\n        fp = predict(gt_entry['gt_boxes'], gt_entry['gt_classes'])\n        fp_pred = fp[all_rels[:, 0], all_rels[:, 1]]\n\n        pred_cls_scores = fp_pred.max(1)\n        pred_cls_inds = np.argsort(-pred_cls_scores)\n        pred_cls_inds = pred_cls_inds[pred_cls_scores[pred_cls_inds] > 0][:100]\n\n        pred_entry['pred_rel_inds'] = all_rels[pred_cls_inds]\n        pred_entry['rel_scores'] = fp_pred[pred_cls_inds]\n        pred_entry['pred_classes'] = gt_entry['gt_classes']\n        pred_entry['obj_scores'] = np.ones(pred_entry['pred_classes'].shape[0])\n\n        all_pred_entries['predcls'].append(pred_entry)\n\n        evaluator['predcls'].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n    img_offset += img_ids.max() + 1\nevaluator['predcls'].print_stats()\nevaluator['sgcls'].print_stats()\n\nfor mode, entries in all_pred_entries.items():\n    with open('caches/freqbaseline-{}-{}.pkl'.format('overlap' if MUST_OVERLAP else 'nonoverlap', mode), 'wb') as f:\n        pkl.dump(entries, f)\n"
  },
  {
    "path": "models/eval_rels.py",
    "content": "\nfrom dataloaders.visual_genome import VGDataLoader, VG\nimport numpy as np\nimport torch\n\nfrom config import ModelConfig\nfrom lib.pytorch_misc import optimistic_restore\nfrom lib.evaluation.sg_eval import BasicSceneGraphEvaluator\nfrom tqdm import tqdm\nfrom config import BOX_SCALE, IM_SCALE\nimport dill as pkl\nimport os\n\nconf = ModelConfig()\nif conf.model == 'motifnet':\n    from lib.rel_model import RelModel\nelif conf.model == 'stanford':\n    from lib.rel_model_stanford import RelModelStanford as RelModel\nelse:\n    raise ValueError()\n\ntrain, val, test = VG.splits(num_val_im=conf.val_size, filter_duplicate_rels=True,\n                          use_proposals=conf.use_proposals,\n                          filter_non_overlap=conf.mode == 'sgdet')\nif conf.test:\n    val = test\ntrain_loader, val_loader = VGDataLoader.splits(train, val, mode='rel',\n                                               batch_size=conf.batch_size,\n                                               num_workers=conf.num_workers,\n                                               num_gpus=conf.num_gpus)\n\ndetector = RelModel(classes=train.ind_to_classes, rel_classes=train.ind_to_predicates,\n                    num_gpus=conf.num_gpus, mode=conf.mode, require_overlap_det=True,\n                    use_resnet=conf.use_resnet, order=conf.order,\n                    nl_edge=conf.nl_edge, nl_obj=conf.nl_obj, hidden_dim=conf.hidden_dim,\n                    use_proposals=conf.use_proposals,\n                    pass_in_obj_feats_to_decoder=conf.pass_in_obj_feats_to_decoder,\n                    pass_in_obj_feats_to_edge=conf.pass_in_obj_feats_to_edge,\n                    pooling_dim=conf.pooling_dim,\n                    rec_dropout=conf.rec_dropout,\n                    use_bias=conf.use_bias,\n                    use_tanh=conf.use_tanh,\n                    limit_vision=conf.limit_vision\n                    )\n\n\ndetector.cuda()\nckpt = 
torch.load(conf.ckpt)\n\noptimistic_restore(detector, ckpt['state_dict'])\n# if conf.mode == 'sgdet':\n#     det_ckpt = torch.load('checkpoints/new_vgdet/vg-19.tar')['state_dict']\n#     detector.detector.bbox_fc.weight.data.copy_(det_ckpt['bbox_fc.weight'])\n#     detector.detector.bbox_fc.bias.data.copy_(det_ckpt['bbox_fc.bias'])\n#     detector.detector.score_fc.weight.data.copy_(det_ckpt['score_fc.weight'])\n#     detector.detector.score_fc.bias.data.copy_(det_ckpt['score_fc.bias'])\n\nall_pred_entries = []\ndef val_batch(batch_num, b, evaluator, thrs=(20, 50, 100)):\n    det_res = detector[b]\n    if conf.num_gpus == 1:\n        det_res = [det_res]\n\n    for i, (boxes_i, objs_i, obj_scores_i, rels_i, pred_scores_i) in enumerate(det_res):\n        gt_entry = {\n            'gt_classes': val.gt_classes[batch_num + i].copy(),\n            'gt_relations': val.relationships[batch_num + i].copy(),\n            'gt_boxes': val.gt_boxes[batch_num + i].copy(),\n        }\n        assert np.all(objs_i[rels_i[:,0]] > 0) and np.all(objs_i[rels_i[:,1]] > 0)\n        # assert np.all(rels_i[:,2] > 0)\n\n        pred_entry = {\n            'pred_boxes': boxes_i * BOX_SCALE/IM_SCALE,\n            'pred_classes': objs_i,\n            'pred_rel_inds': rels_i,\n            'obj_scores': obj_scores_i,\n            'rel_scores': pred_scores_i,\n        }\n        all_pred_entries.append(pred_entry)\n\n        evaluator[conf.mode].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n\nevaluator = BasicSceneGraphEvaluator.all_modes(multiple_preds=conf.multi_pred)\nif conf.cache is not None and os.path.exists(conf.cache):\n    print(\"Found {}! 
Loading from it\".format(conf.cache))\n    with open(conf.cache,'rb') as f:\n        all_pred_entries = pkl.load(f)\n    for i, pred_entry in enumerate(tqdm(all_pred_entries)):\n        gt_entry = {\n            'gt_classes': val.gt_classes[i].copy(),\n            'gt_relations': val.relationships[i].copy(),\n            'gt_boxes': val.gt_boxes[i].copy(),\n        }\n        evaluator[conf.mode].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n    evaluator[conf.mode].print_stats()\nelse:\n    detector.eval()\n    for val_b, batch in enumerate(tqdm(val_loader)):\n        val_batch(conf.num_gpus*val_b, batch, evaluator)\n\n    evaluator[conf.mode].print_stats()\n\n    if conf.cache is not None:\n        with open(conf.cache,'wb') as f:\n            pkl.dump(all_pred_entries, f)\n"
  },
  {
    "path": "models/train_detector.py",
    "content": "\"\"\"\nTraining script 4 Detection\n\"\"\"\nfrom dataloaders.mscoco import CocoDetection, CocoDataLoader\nfrom dataloaders.visual_genome import VGDataLoader, VG\nfrom lib.object_detector import ObjectDetector\nimport numpy as np\nfrom torch import optim\nimport torch\nimport pandas as pd\nimport time\nimport os\nfrom config import ModelConfig, FG_FRACTION, RPN_FG_FRACTION, IM_SCALE, BOX_SCALE\nfrom torch.nn import functional as F\nfrom lib.fpn.box_utils import bbox_loss\nimport torch.backends.cudnn as cudnn\nfrom pycocotools.cocoeval import COCOeval\nfrom lib.pytorch_misc import optimistic_restore, clip_grad_norm\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\n\ncudnn.benchmark = True\nconf = ModelConfig()\n\nif conf.coco:\n    train, val = CocoDetection.splits()\n    val.ids = val.ids[:conf.val_size]\n    train.ids = train.ids\n    train_loader, val_loader = CocoDataLoader.splits(train, val, batch_size=conf.batch_size,\n                                                     num_workers=conf.num_workers,\n                                                     num_gpus=conf.num_gpus)\nelse:\n    train, val, _ = VG.splits(num_val_im=conf.val_size, filter_non_overlap=False,\n                              filter_empty_rels=False, use_proposals=conf.use_proposals)\n    train_loader, val_loader = VGDataLoader.splits(train, val, batch_size=conf.batch_size,\n                                                   num_workers=conf.num_workers,\n                                                   num_gpus=conf.num_gpus)\n\ndetector = ObjectDetector(classes=train.ind_to_classes, num_gpus=conf.num_gpus,\n                          mode='rpntrain' if not conf.use_proposals else 'proposals', use_resnet=conf.use_resnet)\ndetector.cuda()\n\n# Note: if you're doing the stanford setup, you'll need to change this to freeze the lower layers\nif conf.use_proposals:\n    for n, param in detector.named_parameters():\n        if n.startswith('features'):\n            
param.requires_grad = False\n\noptimizer = optim.SGD([p for p in detector.parameters() if p.requires_grad],\n                      weight_decay=conf.l2, lr=conf.lr * conf.num_gpus * conf.batch_size, momentum=0.9)\nscheduler = ReduceLROnPlateau(optimizer, 'max', patience=3, factor=0.1,\n                              verbose=True, threshold=0.001, threshold_mode='abs', cooldown=1)\n\nstart_epoch = -1\nif conf.ckpt is not None:\n    ckpt = torch.load(conf.ckpt)\n    if optimistic_restore(detector, ckpt['state_dict']):\n        start_epoch = ckpt['epoch']\n\n\ndef train_epoch(epoch_num):\n    detector.train()\n    tr = []\n    start = time.time()\n    for b, batch in enumerate(train_loader):\n        tr.append(train_batch(batch))\n\n        if b % conf.print_interval == 0 and b >= conf.print_interval:\n            mn = pd.concat(tr[-conf.print_interval:], axis=1).mean(1)\n            time_per_batch = (time.time() - start) / conf.print_interval\n            print(\"\\ne{:2d}b{:5d}/{:5d} {:.3f}s/batch, {:.1f}m/epoch\".format(\n                epoch_num, b, len(train_loader), time_per_batch, len(train_loader) * time_per_batch / 60))\n            print(mn)\n            print('-----------', flush=True)\n            start = time.time()\n    return pd.concat(tr, axis=1)\n\n\ndef train_batch(b):\n    \"\"\"\n    :param b: contains:\n          :param imgs: the image, [batch_size, 3, IM_SIZE, IM_SIZE]\n          :param all_anchors: [num_anchors, 4] the boxes of all anchors that we'll be using\n          :param all_anchor_inds: [num_anchors, 2] array of the indices into the concatenated\n                                  RPN feature vector that give us all_anchors,\n                                  each one (img_ind, fpn_idx)\n          :param im_sizes: a [batch_size, 4] numpy array of (h, w, scale, num_good_anchors) for each image.\n\n          :param num_anchors_per_img: int, number of anchors in total over the feature pyramid per img\n\n          Training parameters:\n        
  :param train_anchor_inds: a [num_train, 5] array of indices for the anchors that will\n                                    be used to compute the training loss (img_ind, fpn_idx)\n          :param gt_boxes: [num_gt, 4] GT boxes over the batch.\n          :param gt_classes: [num_gt, 2] gt boxes where each one is (img_id, class)\n\n    :return:\n    \"\"\"\n    result = detector[b]\n    scores = result.od_obj_dists\n    box_deltas = result.od_box_deltas\n    labels = result.od_obj_labels\n    roi_boxes = result.od_box_priors\n    bbox_targets = result.od_box_targets\n    rpn_scores = result.rpn_scores\n    rpn_box_deltas = result.rpn_box_deltas\n\n    # detector loss\n    valid_inds = (labels.data != 0).nonzero().squeeze(1)\n    fg_cnt = valid_inds.size(0)\n    bg_cnt = labels.size(0) - fg_cnt\n    class_loss = F.cross_entropy(scores, labels)\n\n    # No gather_nd in pytorch so instead convert first 2 dims of tensor to 1d\n    box_reg_mult = 2 * (1. / FG_FRACTION) * fg_cnt / (fg_cnt + bg_cnt + 1e-4)\n    twod_inds = valid_inds * box_deltas.size(1) + labels[valid_inds].data\n\n    box_loss = bbox_loss(roi_boxes[valid_inds], box_deltas.view(-1, 4)[twod_inds],\n                         bbox_targets[valid_inds]) * box_reg_mult\n\n    loss = class_loss + box_loss\n\n    # RPN loss\n    if not conf.use_proposals:\n        train_anchor_labels = b.train_anchor_labels[:, -1]\n        train_anchors = b.train_anchors[:, :4]\n        train_anchor_targets = b.train_anchors[:, 4:]\n\n        train_valid_inds = (train_anchor_labels.data == 1).nonzero().squeeze(1)\n        rpn_class_loss = F.cross_entropy(rpn_scores, train_anchor_labels)\n\n        # print(\"{} fg {} bg, ratio of {:.3f} vs {:.3f}. 
RPN {}fg {}bg ratio of {:.3f} vs {:.3f}\".format(\n        #     fg_cnt, bg_cnt, fg_cnt / (fg_cnt + bg_cnt + 1e-4), FG_FRACTION,\n        #     train_valid_inds.size(0), train_anchor_labels.size(0)-train_valid_inds.size(0),\n        #     train_valid_inds.size(0) / (train_anchor_labels.size(0) + 1e-4), RPN_FG_FRACTION), flush=True)\n        rpn_box_mult = 2 * (1. / RPN_FG_FRACTION) * train_valid_inds.size(0) / (train_anchor_labels.size(0) + 1e-4)\n        rpn_box_loss = bbox_loss(train_anchors[train_valid_inds],\n                                 rpn_box_deltas[train_valid_inds],\n                                 train_anchor_targets[train_valid_inds]) * rpn_box_mult\n\n        loss += rpn_class_loss + rpn_box_loss\n        res = pd.Series([rpn_class_loss.data[0], rpn_box_loss.data[0],\n                         class_loss.data[0], box_loss.data[0], loss.data[0]],\n                        ['rpn_class_loss', 'rpn_box_loss', 'class_loss', 'box_loss', 'total'])\n    else:\n        res = pd.Series([class_loss.data[0], box_loss.data[0], loss.data[0]],\n                        ['class_loss', 'box_loss', 'total'])\n\n    optimizer.zero_grad()\n    loss.backward()\n    clip_grad_norm(\n        [(n, p) for n, p in detector.named_parameters() if p.grad is not None],\n        max_norm=conf.clip, clip=True)\n    optimizer.step()\n\n    return res\n\n\ndef val_epoch():\n    detector.eval()\n    # all_boxes is a list of length number-of-classes.\n    # Each list element is a list of length number-of-images.\n    # Each of those list elements is either an empty list []\n    # or a numpy array of detection.\n    vr = []\n    for val_b, batch in enumerate(val_loader):\n        vr.append(val_batch(val_b, batch))\n    vr = np.concatenate(vr, 0)\n    if vr.shape[0] == 0:\n        print(\"No detections anywhere\")\n        return 0.0\n\n    val_coco = val.coco\n    coco_dt = val_coco.loadRes(vr)\n    coco_eval = COCOeval(val_coco, coco_dt, 'bbox')\n    coco_eval.params.imgIds = val.ids 
if conf.coco else [x for x in range(len(val))]\n\n    coco_eval.evaluate()\n    coco_eval.accumulate()\n    coco_eval.summarize()\n    mAp = coco_eval.stats[1]\n    return mAp\n\n\ndef val_batch(batch_num, b):\n    result = detector[b]\n    if result is None:\n        return np.zeros((0, 7))\n    scores_np = result.obj_scores.data.cpu().numpy()\n    cls_preds_np = result.obj_preds.data.cpu().numpy()\n    boxes_np = result.boxes_assigned.data.cpu().numpy()\n    im_inds_np = result.im_inds.data.cpu().numpy()\n    im_scales = b.im_sizes.reshape((-1, 3))[:, 2]\n    if conf.coco:\n        boxes_np /= im_scales[im_inds_np][:, None]\n        boxes_np[:, 2:4] = boxes_np[:, 2:4] - boxes_np[:, 0:2] + 1\n        cls_preds_np[:] = [val.ind_to_id[c_ind] for c_ind in cls_preds_np]\n        im_inds_np[:] = [val.ids[im_ind + batch_num * conf.batch_size * conf.num_gpus]\n                         for im_ind in im_inds_np]\n    else:\n        boxes_np *= BOX_SCALE / IM_SCALE\n        boxes_np[:, 2:4] = boxes_np[:, 2:4] - boxes_np[:, 0:2] + 1\n        im_inds_np += batch_num * conf.batch_size * conf.num_gpus\n\n    return np.column_stack((im_inds_np, boxes_np, scores_np, cls_preds_np))\n\n\nprint(\"Training starts now!\")\nfor epoch in range(start_epoch + 1, start_epoch + 1 + conf.num_epochs):\n    rez = train_epoch(epoch)\n    print(\"overall{:2d}: ({:.3f})\\n{}\".format(epoch, rez.mean(1)['total'], rez.mean(1)), flush=True)\n    mAp = val_epoch()\n    scheduler.step(mAp)\n\n    torch.save({\n        'epoch': epoch,\n        'state_dict': detector.state_dict(),\n        'optimizer': optimizer.state_dict(),\n    }, os.path.join(conf.save_dir, '{}-{}.tar'.format('coco' if conf.coco else 'vg', epoch)))\n"
  },
  {
    "path": "models/train_rels.py",
    "content": "\"\"\"\nTraining script for scene graph detection. Integrated with my faster rcnn setup\n\"\"\"\n\nfrom dataloaders.visual_genome import VGDataLoader, VG\nimport numpy as np\nfrom torch import optim\nimport torch\nimport pandas as pd\nimport time\nimport os\n\nfrom config import ModelConfig, BOX_SCALE, IM_SCALE\nfrom torch.nn import functional as F\nfrom lib.pytorch_misc import optimistic_restore, de_chunkize, clip_grad_norm\nfrom lib.evaluation.sg_eval import BasicSceneGraphEvaluator\nfrom lib.pytorch_misc import print_para\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\n\nconf = ModelConfig()\nif conf.model == 'motifnet':\n    from lib.rel_model import RelModel\nelif conf.model == 'stanford':\n    from lib.rel_model_stanford import RelModelStanford as RelModel\nelse:\n    raise ValueError()\n\ntrain, val, _ = VG.splits(num_val_im=conf.val_size, filter_duplicate_rels=True,\n                          use_proposals=conf.use_proposals,\n                          filter_non_overlap=conf.mode == 'sgdet')\ntrain_loader, val_loader = VGDataLoader.splits(train, val, mode='rel',\n                                               batch_size=conf.batch_size,\n                                               num_workers=conf.num_workers,\n                                               num_gpus=conf.num_gpus)\n\ndetector = RelModel(classes=train.ind_to_classes, rel_classes=train.ind_to_predicates,\n                    num_gpus=conf.num_gpus, mode=conf.mode, require_overlap_det=True,\n                    use_resnet=conf.use_resnet, order=conf.order,\n                    nl_edge=conf.nl_edge, nl_obj=conf.nl_obj, hidden_dim=conf.hidden_dim,\n                    use_proposals=conf.use_proposals,\n                    pass_in_obj_feats_to_decoder=conf.pass_in_obj_feats_to_decoder,\n                    pass_in_obj_feats_to_edge=conf.pass_in_obj_feats_to_edge,\n                    pooling_dim=conf.pooling_dim,\n                    rec_dropout=conf.rec_dropout,\n    
                use_bias=conf.use_bias,\n                    use_tanh=conf.use_tanh,\n                    limit_vision=conf.limit_vision\n                    )\n\n# Freeze the detector\nfor n, param in detector.detector.named_parameters():\n    param.requires_grad = False\n\nprint(print_para(detector), flush=True)\n\n\ndef get_optim(lr):\n    # Lower the learning rate on the VGG fully connected layers by 1/10th. It's a hack, but it helps\n    # stabilize the models.\n    fc_params = [p for n,p in detector.named_parameters() if n.startswith('roi_fmap') and p.requires_grad]\n    non_fc_params = [p for n,p in detector.named_parameters() if not n.startswith('roi_fmap') and p.requires_grad]\n    params = [{'params': fc_params, 'lr': lr / 10.0}, {'params': non_fc_params}]\n    # params = [p for n,p in detector.named_parameters() if p.requires_grad]\n\n    if conf.adam:\n        optimizer = optim.Adam(params, weight_decay=conf.l2, lr=lr, eps=1e-3)\n    else:\n        optimizer = optim.SGD(params, weight_decay=conf.l2, lr=lr, momentum=0.9)\n\n    scheduler = ReduceLROnPlateau(optimizer, 'max', patience=3, factor=0.1,\n                                  verbose=True, threshold=0.0001, threshold_mode='abs', cooldown=1)\n    return optimizer, scheduler\n\n\nckpt = torch.load(conf.ckpt)\nif conf.ckpt.split('-')[-2].split('/')[-1] == 'vgrel':\n    print(\"Loading EVERYTHING\")\n    start_epoch = ckpt['epoch']\n\n    if not optimistic_restore(detector, ckpt['state_dict']):\n        start_epoch = -1\n        # optimistic_restore(detector.detector, torch.load('checkpoints/vgdet/vg-28.tar')['state_dict'])\nelse:\n    start_epoch = -1\n    optimistic_restore(detector.detector, ckpt['state_dict'])\n\n    detector.roi_fmap[1][0].weight.data.copy_(ckpt['state_dict']['roi_fmap.0.weight'])\n    detector.roi_fmap[1][3].weight.data.copy_(ckpt['state_dict']['roi_fmap.3.weight'])\n    detector.roi_fmap[1][0].bias.data.copy_(ckpt['state_dict']['roi_fmap.0.bias'])\n    
detector.roi_fmap[1][3].bias.data.copy_(ckpt['state_dict']['roi_fmap.3.bias'])\n\n    detector.roi_fmap_obj[0].weight.data.copy_(ckpt['state_dict']['roi_fmap.0.weight'])\n    detector.roi_fmap_obj[3].weight.data.copy_(ckpt['state_dict']['roi_fmap.3.weight'])\n    detector.roi_fmap_obj[0].bias.data.copy_(ckpt['state_dict']['roi_fmap.0.bias'])\n    detector.roi_fmap_obj[3].bias.data.copy_(ckpt['state_dict']['roi_fmap.3.bias'])\n\ndetector.cuda()\n\n\ndef train_epoch(epoch_num):\n    detector.train()\n    tr = []\n    start = time.time()\n    for b, batch in enumerate(train_loader):\n        tr.append(train_batch(batch, verbose=b % (conf.print_interval*10) == 0)) #b == 0))\n\n        if b % conf.print_interval == 0 and b >= conf.print_interval:\n            mn = pd.concat(tr[-conf.print_interval:], axis=1).mean(1)\n            time_per_batch = (time.time() - start) / conf.print_interval\n            print(\"\\ne{:2d}b{:5d}/{:5d} {:.3f}s/batch, {:.1f}m/epoch\".format(\n                epoch_num, b, len(train_loader), time_per_batch, len(train_loader) * time_per_batch / 60))\n            print(mn)\n            print('-----------', flush=True)\n            start = time.time()\n    return pd.concat(tr, axis=1)\n\n\ndef train_batch(b, verbose=False):\n    \"\"\"\n    :param b: contains:\n          :param imgs: the image, [batch_size, 3, IM_SIZE, IM_SIZE]\n          :param all_anchors: [num_anchors, 4] the boxes of all anchors that we'll be using\n          :param all_anchor_inds: [num_anchors, 2] array of the indices into the concatenated\n                                  RPN feature vector that give us all_anchors,\n                                  each one (img_ind, fpn_idx)\n          :param im_sizes: a [batch_size, 4] numpy array of (h, w, scale, num_good_anchors) for each image.\n\n          :param num_anchors_per_img: int, number of anchors in total over the feature pyramid per img\n\n          Training parameters:\n          :param train_anchor_inds: a [num_train, 
5] array of indices for the anchors that will\n                                    be used to compute the training loss (img_ind, fpn_idx)\n          :param gt_boxes: [num_gt, 4] GT boxes over the batch.\n          :param gt_classes: [num_gt, 2] gt boxes where each one is (img_id, class)\n    :return:\n    \"\"\"\n    result = detector[b]\n\n    losses = {}\n    losses['class_loss'] = F.cross_entropy(result.rm_obj_dists, result.rm_obj_labels)\n    losses['rel_loss'] = F.cross_entropy(result.rel_dists, result.rel_labels[:, -1])\n    loss = sum(losses.values())\n\n    optimizer.zero_grad()\n    loss.backward()\n    clip_grad_norm(\n        [(n, p) for n, p in detector.named_parameters() if p.grad is not None],\n        max_norm=conf.clip, verbose=verbose, clip=True)\n    losses['total'] = loss\n    optimizer.step()\n    res = pd.Series({x: y.data[0] for x, y in losses.items()})\n    return res\n\n\ndef val_epoch():\n    detector.eval()\n    evaluator = BasicSceneGraphEvaluator.all_modes()\n    for val_b, batch in enumerate(val_loader):\n        val_batch(conf.num_gpus * val_b, batch, evaluator)\n    evaluator[conf.mode].print_stats()\n    return np.mean(evaluator[conf.mode].result_dict[conf.mode + '_recall'][100])\n\n\ndef val_batch(batch_num, b, evaluator):\n    det_res = detector[b]\n    if conf.num_gpus == 1:\n        det_res = [det_res]\n\n    for i, (boxes_i, objs_i, obj_scores_i, rels_i, pred_scores_i) in enumerate(det_res):\n        gt_entry = {\n            'gt_classes': val.gt_classes[batch_num + i].copy(),\n            'gt_relations': val.relationships[batch_num + i].copy(),\n            'gt_boxes': val.gt_boxes[batch_num + i].copy(),\n        }\n        assert np.all(objs_i[rels_i[:, 0]] > 0) and np.all(objs_i[rels_i[:, 1]] > 0)\n\n        pred_entry = {\n            'pred_boxes': boxes_i * BOX_SCALE/IM_SCALE,\n            'pred_classes': objs_i,\n            'pred_rel_inds': rels_i,\n            'obj_scores': obj_scores_i,\n            'rel_scores': 
pred_scores_i,  # hack for now.\n        }\n\n        evaluator[conf.mode].evaluate_scene_graph_entry(\n            gt_entry,\n            pred_entry,\n        )\n\n\nprint(\"Training starts now!\")\noptimizer, scheduler = get_optim(conf.lr * conf.num_gpus * conf.batch_size)\nfor epoch in range(start_epoch + 1, start_epoch + 1 + conf.num_epochs):\n    rez = train_epoch(epoch)\n    print(\"overall{:2d}: ({:.3f})\\n{}\".format(epoch, rez.mean(1)['total'], rez.mean(1)), flush=True)\n    if conf.save_dir is not None:\n        torch.save({\n            'epoch': epoch,\n            'state_dict': detector.state_dict(), #{k:v for k,v in detector.state_dict().items() if not k.startswith('detector.')},\n            # 'optimizer': optimizer.state_dict(),\n        }, os.path.join(conf.save_dir, '{}-{}.tar'.format('vgrel', epoch)))\n\n    mAp = val_epoch()\n    scheduler.step(mAp)\n    if any([pg['lr'] <= (conf.lr * conf.num_gpus * conf.batch_size)/99.0 for pg in optimizer.param_groups]):\n        print(\"exiting training early\", flush=True)\n        break\n"
  },
  {
    "path": "scripts/eval_models_sgcls.sh",
    "content": "#!/usr/bin/env bash\n\n# This is a script that will evaluate all models for SGCLS\nexport CUDA_VISIBLE_DEVICES=$1\n\nif [ \"$1\" == \"0\" ]; then\n    echo \"EVALING THE BASELINE\"\n    python models/eval_rels.py -m sgcls -model motifnet -nl_obj 0 -nl_edge 0 -b 6 \\\n    -clip 5 -p 100 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/baseline-sgcls/vgrel-11.tar \\\n    -nepoch 50 -use_bias -test -cache baseline_sgcls\n    python models/eval_rels.py -m predcls -model motifnet -nl_obj 0 -nl_edge 0 -b 6 \\\n    -clip 5 -p 100 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/baseline-sgcls/vgrel-11.tar \\\n    -nepoch 50 -use_bias -test -cache baseline_predcls\nelif [ \"$1\" == \"1\" ]; then\n    echo \"EVALING MESSAGE PASSING\"\n    python models/eval_rels.py -m sgcls -model stanford -b 6 -p 100 -lr 1e-3 -ngpu 1 -clip 5 \\\n    -ckpt checkpoints/stanford-sgcls/vgrel-11.tar -test -cache stanford_sgcls\n    python models/eval_rels.py -m predcls -model stanford -b 6 -p 100 -lr 1e-3 -ngpu 1 -clip 5 \\\n    -ckpt checkpoints/stanford-sgcls/vgrel-11.tar -test -cache stanford_predcls\nelif [ \"$1\" == \"2\" ]; then\n    echo \"EVALING MOTIFNET\"\n    python models/eval_rels.py -m sgcls -model motifnet -order leftright -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -test -ckpt checkpoints/motifnet-sgcls/vgrel-7.tar -nepoch 50 -use_bias -cache motifnet_sgcls\n    python models/eval_rels.py -m predcls -model motifnet -order leftright -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -test -ckpt checkpoints/motifnet-sgcls/vgrel-7.tar -nepoch 50 -use_bias -cache motifnet_predcls\nfi\n"
  },
  {
    "path": "scripts/eval_models_sgdet.sh",
    "content": "#!/usr/bin/env bash\n\n# This is a script that will evaluate all the models for SGDET\nexport CUDA_VISIBLE_DEVICES=$1\n\nif [ \"$1\" == \"0\" ]; then\n    echo \"EVALING THE BASELINE\"\n    python models/eval_rels.py -m sgdet -model motifnet -nl_obj 0 -nl_edge 0 -b 6 \\\n    -clip 5 -p 100 -pooling_dim 4096 -ngpu 1 -ckpt checkpoints/baseline-sgdet/vgrel-17.tar \\\n    -nepoch 50 -use_bias -cache baseline_sgdet.pkl -test\nelif [ \"$1\" == \"1\" ]; then\n    echo \"EVALING MESSAGE PASSING\"\n    python models/eval_rels.py -m sgdet -model stanford -b 6 -p 100 -lr 1e-3 -ngpu 1 -clip 5 \\\n    -ckpt checkpoints/stanford-sgdet/vgrel-18.tar -cache stanford_sgdet.pkl -test\nelif [ \"$1\" == \"2\" ]; then\n    echo \"EVALING MOTIFNET\"\n    python models/eval_rels.py -m sgdet -model motifnet -order leftright -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -test -ckpt checkpoints/motifnet-sgdet/vgrel-14.tar -nepoch 50 -cache motifnet_sgdet.pkl -use_bias\nfi\n"
  },
  {
    "path": "scripts/pretrain_detector.sh",
    "content": "#!/usr/bin/env bash\n# Train the model without COCO pretraining\npython models/train_detector.py -b 6 -lr 1e-3 -save_dir checkpoints/vgdet -nepoch 50 -ngpu 3 -nwork 3 -p 100 -clip 5\n\n# If you want to evaluate on the frequency baseline now, run these commands (replace the checkpoint with the\n# best checkpoint you found).\n#export CUDA_VISIBLE_DEVICES=0\n#python models/eval_rel_count.py -ngpu 1 -b 6 -ckpt checkpoints/vgdet/vg-24.tar -nwork 1 -p 100 -test\n#export CUDA_VISIBLE_DEVICES=1\n#python models/eval_rel_count.py -ngpu 1 -b 6 -ckpt checkpoints/vgdet/vg-28.tar -nwork 1 -p 100 -test\n#export CUDA_VISIBLE_DEVICES=2\n#python models/eval_rel_count.py -ngpu 1 -b 6 -ckpt checkpoints/vgdet/vg-28.tar -nwork 1 -p 100 -test\n"
  },
  {
    "path": "scripts/refine_for_detection.sh",
    "content": "#!/usr/bin/env bash\n\n# Refine Motifnet for detection\n\nexport CUDA_VISIBLE_DEVICES=$1\n\nif [ \"$1\" == \"0\" ]; then\n    echo \"TRAINING THE BASELINE\"\n    python models/train_rels.py -m sgdet -model motifnet -nl_obj 0 -nl_edge 0 -b 6 \\\n    -clip 5 -p 100 -pooling_dim 4096 -lr 1e-4 -ngpu 1 -ckpt checkpoints/baseline-sgcls/vgrel-11.tar -save_dir checkpoints/baseline-sgdet \\\n    -nepoch 50 -use_bias\nelif [ \"$1\" == \"1\" ]; then\n    echo \"TRAINING STANFORD\"\n    python models/train_rels.py -m sgdet -model stanford -b 6 -p 100 -lr 1e-4 -ngpu 1 -clip 5 \\\n    -ckpt checkpoints/stanford-sgcls/vgrel-11.tar -save_dir checkpoints/stanford-sgdet\nelif [ \"$1\" == \"2\" ]; then\n    echo \"Refining Motifnet for detection!\"\n    python models/train_rels.py -m sgdet -model motifnet -order leftright -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-4 -ngpu 1 -ckpt checkpoints/motifnet-sgcls/vgrel-7.tar \\\n        -save_dir checkpoints/motifnet-sgdet -nepoch 10 -use_bias\nfi\n"
  },
  {
    "path": "scripts/train_models_sgcls.sh",
    "content": "#!/usr/bin/env bash\n\n# This is a script that will train all of the models for scene graph classification and then evaluate them.\nexport CUDA_VISIBLE_DEVICES=$1\n\nif [ \"$1\" == \"0\" ]; then\n    echo \"TRAINING THE BASELINE\"\n    python models/train_rels.py -m sgcls -model motifnet -nl_obj 0 -nl_edge 0 -b 6 \\\n    -clip 5 -p 100 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar -save_dir checkpoints/baseline2 \\\n    -nepoch 50 -use_bias\nelif [ \"$1\" == \"1\" ]; then\n    echo \"TRAINING MESSAGE PASSING\"\n    python models/train_rels.py -m sgcls -model stanford -b 6 -p 100 -lr 1e-3 -ngpu 1 -clip 5 \\\n    -ckpt checkpoints/vgdet/vg-24.tar -save_dir checkpoints/stanford2\nelif [ \"$1\" == \"2\" ]; then\n    echo \"TRAINING MOTIFNET\"\n    python models/train_rels.py -m sgcls -model motifnet -order leftright -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar \\\n        -save_dir checkpoints/motifnet2 -nepoch 50 -use_bias\nfi\n"
  },
  {
    "path": "scripts/train_motifnet.sh",
    "content": "#!/usr/bin/env bash\n\n# Train Motifnet using different orderings\n\nexport CUDA_VISIBLE_DEVICES=$1\n\nif [ \"$1\" == \"0\" ]; then\n    echo \"TRAINING MOTIFNET V1\"\n    python models/train_rels.py -m sgcls -model motifnet -order size -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar \\\n        -save_dir checkpoints/motifnet-size-sgcls -nepoch 50 -use_bias\nelif [ \"$1\" == \"1\" ]; then\n    echo \"TRAINING MOTIFNET V2\"\n    python models/train_rels.py -m sgcls -model motifnet -order random -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar \\\n        -save_dir checkpoints/motifnet-random-sgcls -nepoch 50 -use_bias\nelif [ \"$1\" == \"2\" ]; then\n    echo \"TRAINING MOTIFNET V3\"\n    python models/train_rels.py -m sgcls -model motifnet -order confidence -nl_obj 2 -nl_edge 4 -b 6 -clip 5 \\\n        -p 100 -hidden_dim 512 -pooling_dim 4096 -lr 1e-3 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar \\\n        -save_dir checkpoints/motifnet-conf-sgcls -nepoch 50 -use_bias\nfi\n"
  },
  {
    "path": "scripts/train_stanford.sh",
    "content": "#!/usr/bin/env bash\n\npython models/train_rels.py -m sgcls -model stanford -b 4 -p 400 -lr 1e-4 -ngpu 1 -ckpt checkpoints/vgdet/vg-24.tar -save_dir checkpoints/stanford -adam\n\n# To test you can run this command\n# python models/eval_rels.py -m sgcls -model stanford -ngpu 1 -ckpt checkpoints/stanford/vgrel-28.tar -test\n"
  }
]