[
  {
    "path": ".gitignore",
    "content": ".idea/\nexp/\ntmp/\n*.pyc"
  },
  {
    "path": "README.md",
    "content": "# Beyond Part Models: Person Retrieval with Refined Part Pooling\n\n**Related Projects:** [Strong Triplet Loss Baseline](https://github.com/huanghoujing/person-reid-triplet-loss-baseline)\n\nThis project implements PCB (Part-based Convolutional Baseline) of paper [Beyond Part Models: Person Retrieval with Refined Part Pooling](https://arxiv.org/abs/1711.09349) using [pytorch](https://github.com/pytorch/pytorch).\n\n\n# Current Results\n\nThe reproduced PCB is as follows. \n- `(Shared 1x1 Conv)` and `(Independent 1x1 Conv)` means the last 1x1 conv layers for stripes are shared or independent, respectively;\n- `(Paper)` means the scores reported in the paper; \n- `R.R.` means using re-ranking.\n\n\n|                                   | Rank-1 (%) | mAP (%) | R.R. Rank-1 (%) | R.R. mAP (%) |\n| ---                               | :---: | :---: | :---: | :---: |\n| Market1501 (Shared 1x1 Conv)      | 90.86 | 73.25 | 92.58 | 88.02 |\n| Market1501 (Independent 1x1 Conv) | 92.87 | 78.54 | 93.94 | 90.17 |\n| Market1501 (Paper)                | 92.40 | 77.30 | -     | -     |\n| | | | | |\n| Duke (Shared 1x1 Conv)            | 82.00 | 64.88 | 86.40 | 81.77 |\n| Duke (Independent 1x1 Conv)       | 84.47 | 69.94 | 88.78 | 84.73 |\n| Duke (Paper)                      | 81.90 | 65.30 | -     | -     |\n| | | | | |\n| CUHK03 (Shared 1x1 Conv)          | 47.29 | 42.05 | 56.50 | 57.91 |\n| CUHK03 (Independent 1x1 Conv)     | 59.14 | 53.93 | 69.07 | 70.17 |\n| CUHK03 (Paper)                    | 61.30 | 54.20 | -     | -     |\n\nWe can see that independent 1x1 conv layers for different stripes are critical for the performance. 
The performance on CUHK03 is still worse than in the paper, while the results on Market1501 and Duke are better.\n\n\n# Resources\n\nThis repository contains the following resources:\n\n- A beginner-level dataset interface independent of PyTorch, TensorFlow, etc., supporting multi-thread prefetching (README file is under way)\n- The three most widely used ReID datasets: Market1501, CUHK03 (new protocol) and DukeMTMC-reID\n- A Python version of the ReID evaluation code (originally from [open-reid](https://github.com/Cysu/open-reid))\n- A Python version of re-ranking (originally from [re_ranking](https://github.com/zhunzhong07/person-re-ranking/blob/master/python-version/re_ranking))\n- PCB (Part-based Convolutional Baseline; stay tuned for performance updates)\n\n\n# Installation\n\nIt's recommended that you create and enter a Python virtual environment if the versions of the packages required here conflict with yours.\n\nI use Python 2.7 and PyTorch 0.3. To install PyTorch, follow the [official guide](http://pytorch.org/). Other packages are specified in `requirements.txt`.\n\n```bash\npip install -r requirements.txt\n```\n\nThen clone the repository:\n\n```bash\ngit clone https://github.com/huanghoujing/beyond-part-models.git\ncd beyond-part-models\n```\n\n\n# Dataset Preparation\n\nInspired by Tong Xiao's [open-reid](https://github.com/Cysu/open-reid) project, the dataset directories are refactored to support a unified dataset interface.\n\nThe transformed dataset has the following features:\n- All used images, including training and testing images, are inside the same folder named `images`\n- Images are renamed, with the name mapping from original images to new ones provided in a file named `ori_to_new_im_name.pkl`. 
The mapping may be needed in some cases.\n- The train/val/test partitions are recorded in a file named `partitions.pkl`, which is a dict with the following keys:\n  - `'trainval_im_names'`\n  - `'trainval_ids2labels'`\n  - `'train_im_names'`\n  - `'train_ids2labels'`\n  - `'val_im_names'`\n  - `'val_marks'`\n  - `'test_im_names'`\n  - `'test_marks'`\n- The validation set consists of 100 persons (configurable when transforming the dataset) unseen in the training set, and validation follows the same ranking protocol as testing.\n- Each val or test image is accompanied by a mark denoting whether it is from the\n  - query (`mark == 0`), or\n  - gallery (`mark == 1`), or\n  - multi query (`mark == 2`) set\n\n## Market1501\n\nYou can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1CaWH7_csm9aDyTVgjs7_3dlZIWqoBlv4) or [BaiduYun](https://pan.baidu.com/s/1nvOhpot). Otherwise, you can download the original dataset and transform it using my script, described below.\n\nDownload the Market1501 dataset from [here](http://www.liangzheng.org/Project/project_reid.html). Run the following script to transform the dataset, replacing the paths with yours.\n\n```bash\npython script/dataset/transform_market1501.py \\\n--zip_file ~/Dataset/market1501/Market-1501-v15.09.15.zip \\\n--save_dir ~/Dataset/market1501\n```\n\n## CUHK03\n\nWe follow the new training/testing protocol proposed in the paper\n```\n@inproceedings{zhong2017re,\n  title={Re-ranking Person Re-identification with k-reciprocal Encoding},\n  author={Zhong, Zhun and Zheng, Liang and Cao, Donglin and Li, Shaozi},\n  booktitle={CVPR},\n  year={2017}\n}\n```\nDetails of the new protocol can be found [here](https://github.com/zhunzhong07/person-re-ranking).\n\nYou can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1Ssp9r4g8UbGveX-9JvHmjpcesvw90xIF) or [BaiduYun](https://pan.baidu.com/s/1hsB0pIc). 
Otherwise, you can download the original dataset and transform it using my script, described below.\n\nDownload the CUHK03 dataset from [here](http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html). Then download the training/testing partition file from [Google Drive](https://drive.google.com/open?id=14lEiUlQDdsoroo8XJvQ3nLZDIDeEizlP) or [BaiduYun](https://pan.baidu.com/s/1miuxl3q). This partition file specifies which images are in training, query or gallery set. Finally run the following script to transform the dataset, replacing the paths with yours.\n\n```bash\npython script/dataset/transform_cuhk03.py \\\n--zip_file ~/Dataset/cuhk03/cuhk03_release.zip \\\n--train_test_partition_file ~/Dataset/cuhk03/re_ranking_train_test_split.pkl \\\n--save_dir ~/Dataset/cuhk03\n```\n\n\n## DukeMTMC-reID\n\nYou can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1P9Jr0en0HBu_cZ7txrb2ZA_dI36wzXbS) or [BaiduYun](https://pan.baidu.com/s/1miIdEek). Otherwise, you can download the original dataset and transform it using my script, described below.\n\nDownload the DukeMTMC-reID dataset from [here](https://github.com/layumi/DukeMTMC-reID_evaluation). Run the following script to transform the dataset, replacing the paths with yours.\n\n```bash\npython script/dataset/transform_duke.py \\\n--zip_file ~/Dataset/duke/DukeMTMC-reID.zip \\\n--save_dir ~/Dataset/duke\n```\n\n\n## Combining Trainval Set of Market1501, CUHK03, DukeMTMC-reID\n\nLarger training set tends to benefit deep learning models, so I combine trainval set of three datasets Market1501, CUHK03 and DukeMTMC-reID. 
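One detail of combining: the three datasets number their person ids independently, so the ids would collide when merged, and each (dataset, id) pair must receive a fresh training label. A minimal sketch of this relabeling idea, with hypothetical ids (the actual combining script may differ):

```python
# Hypothetical trainval ids per dataset; the real ids are dataset-specific
# integers that overlap across Market1501, CUHK03 and Duke.
ids_per_dataset = [
  ['m_0002', 'm_0007'],  # Market1501
  ['c_0001', 'c_0002'],  # CUHK03
  ['d_0001', 'd_0007'],  # DukeMTMC-reID
]

# Give each distinct (dataset, id) pair a fresh zero-based training label.
ids2labels = {}
for ds_ids in ids_per_dataset:
  for pid in ds_ids:
    if pid not in ids2labels:
      ids2labels[pid] = len(ids2labels)

print(ids2labels['d_0007'])  # -> 5
```

Each distinct person across the combined set then maps to its own zero-based label, which is what a softmax classification head trained on the combined trainval set needs.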
After training on the combined trainval set, the model can be tested on three test sets as usual.\n\nTransform three separate datasets as introduced above if you have not done it.\n\nFor the trainval set, you can download what I have transformed from [Google Drive](https://drive.google.com/open?id=1hmZIRkaLvLb_lA1CcC4uGxmA4ppxPinj) or [BaiduYun](https://pan.baidu.com/s/1jIvNYPg). Otherwise, you can run the following script to combine the trainval sets, replacing the paths with yours.\n\n```bash\npython script/dataset/combine_trainval_sets.py \\\n--market1501_im_dir ~/Dataset/market1501/images \\\n--market1501_partition_file ~/Dataset/market1501/partitions.pkl \\\n--cuhk03_im_dir ~/Dataset/cuhk03/detected/images \\\n--cuhk03_partition_file ~/Dataset/cuhk03/detected/partitions.pkl \\\n--duke_im_dir ~/Dataset/duke/images \\\n--duke_partition_file ~/Dataset/duke/partitions.pkl \\\n--save_dir ~/Dataset/market1501_cuhk03_duke\n```\n\n## Configure Dataset Path\n\nThe project requires you to configure the dataset paths. 
In `bpm/dataset/__init__.py`, modify the following snippet according to your saving paths used in preparing datasets.\n\n```python\n# In file bpm/dataset/__init__.py\n\n########################################\n# Specify Directory and Partition File #\n########################################\n\nif name == 'market1501':\n  im_dir = ospeu('~/Dataset/market1501/images')\n  partition_file = ospeu('~/Dataset/market1501/partitions.pkl')\n\nelif name == 'cuhk03':\n  im_type = ['detected', 'labeled'][0]\n  im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))\n  partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))\n\nelif name == 'duke':\n  im_dir = ospeu('~/Dataset/duke/images')\n  partition_file = ospeu('~/Dataset/duke/partitions.pkl')\n\nelif name == 'combined':\n  assert part in ['trainval'], \\\n    \"Only trainval part of the combined dataset is available now.\"\n  im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')\n  partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')\n```\n\n## Evaluation Protocol\n\nDatasets used in this project all follow the standard evaluation protocol of Market1501, using CMC and mAP metric. 
According to [open-reid](https://github.com/Cysu/open-reid), the setting of CMC is as follows\n\n```python\n# In file bpm/dataset/__init__.py\n\ncmc_kwargs = dict(separate_camera_set=False,\n                  single_gallery_shot=False,\n                  first_match_break=True)\n```\n\nTo play with [different CMC options](https://cysu.github.io/open-reid/notes/evaluation_metrics.html), you can [modify it accordingly](https://github.com/Cysu/open-reid/blob/3293ca79a07ebee7f995ce647aafa7df755207b8/reid/evaluators.py#L85-L95).\n\n```python\n# In open-reid's reid/evaluators.py\n\n# Compute all kinds of CMC scores\ncmc_configs = {\n  'allshots': dict(separate_camera_set=False,\n                   single_gallery_shot=False,\n                   first_match_break=False),\n  'cuhk03': dict(separate_camera_set=True,\n                 single_gallery_shot=True,\n                 first_match_break=False),\n  'market1501': dict(separate_camera_set=False,\n                     single_gallery_shot=False,\n                     first_match_break=True)}\n```\n\n\n# Examples\n\n\n## Test PCB\n\nMy training log and saved model weights (trained with independent 1x1 conv) for three datasets can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1G3mLsI1g8ZZkHyol6d3yHpygZeFsENqO?usp=sharing) or [BaiduYun](https://pan.baidu.com/s/1zfjeiePvr1TlBtu7yGovlQ).\n\nSpecify\n- a dataset name (one of `market1501`, `cuhk03`, `duke`)\n- an experiment directory for saving testing log\n- the path of the downloaded `model_weight.pth`\n\nin the following command and run it.\n\n```bash\npython script/experiment/train_pcb.py \\\n-d '(0,)' \\\n--only_test true \\\n--dataset DATASET_NAME \\\n--exp_dir EXPERIMENT_DIRECTORY \\\n--model_weight_file THE_DOWNLOADED_MODEL_WEIGHT_FILE\n```\n\n## Train PCB\n\nYou can also train it by yourself. 
The following command performs training, validation and finally testing automatically.\n\nSpecify\n- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)\n- training on `trainval` set or `train` set (for tuning parameters)\n- an experiment directory for saving training log\n\nin the following command and run it.\n\n```bash\npython script/experiment/train_pcb.py \\\n-d '(0,)' \\\n--only_test false \\\n--dataset DATASET_NAME \\\n--trainset_part TRAINVAL_OR_TRAIN \\\n--exp_dir EXPERIMENT_DIRECTORY \\\n--steps_per_log 20 \\\n--epochs_per_val 1\n```\n\n### Log\n\nDuring training, you can run the [TensorBoard](https://github.com/lanpa/tensorboard-pytorch) and access port `6006` to watch the loss curves etc. E.g.\n\n```bash\n# Modify the path for `--logdir` accordingly.\ntensorboard --logdir YOUR_EXPERIMENT_DIRECTORY/tensorboard\n```\n\nFor more usage of TensorBoard, see the website and the help:\n\n```bash\ntensorboard --help\n```\n\n\n## Visualize Ranking List\n\nSpecify\n- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)\n- either `model_weight_file` (the downloaded `model_weight.pth`) OR `ckpt_file` (saved `ckpt.pth` during training)\n- an experiment directory for saving images and log\n\nin the following command and run it.\n\n```bash\npython script/experiment/visualize_rank_list.py \\\n-d '(0,)' \\\n--num_queries 16 \\\n--rank_list_size 10 \\\n--dataset DATASET_NAME \\\n--exp_dir EXPERIMENT_DIRECTORY \\\n--model_weight_file '' \\\n--ckpt_file ''\n```\n\nEach query image and its ranking list would be saved to an image in directory `EXPERIMENT_DIRECTORY/rank_lists`. 
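The colored boundaries are plain solid borders around each thumbnail. A minimal NumPy sketch of the idea (an illustration only, not the project's actual drawing code):

```python
import numpy as np

def add_border(im, color, width=3):
  """Frame `im` (an H x W x 3 uint8 array) with a solid `width`-pixel border."""
  h, w, c = im.shape
  out = np.empty((h + 2 * width, w + 2 * width, c), dtype=im.dtype)
  out[...] = np.asarray(color, dtype=im.dtype)  # fill with the border color
  out[width:width + h, width:width + w] = im    # paste the image in the center
  return out

GREEN, RED = (0, 255, 0), (255, 0, 0)
im = np.zeros((128, 64, 3), dtype=np.uint8)  # dummy black image
framed = add_border(im, GREEN)               # green frame = true positive
```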
As shown in the following examples, a green boundary is added to true positives, and a red one to false positives.\n\n![](example_rank_lists_on_Market1501/00000156_0003_00000009.jpg)\n\n![](example_rank_lists_on_Market1501/00000305_0001_00000001.jpg)\n\n![](example_rank_lists_on_Market1501/00000492_0005_00000001.jpg)\n\n![](example_rank_lists_on_Market1501/00000881_0002_00000006.jpg)\n\n\n# Time and Space Consumption\n\n\nTested with CentOS 7, Intel(R) Xeon(R) CPU E5-2618L v3 @ 2.30GHz, GeForce GTX TITAN X.\n\n**Note that the following time consumption is not guaranteed across machines, especially when the system is busy.**\n\n### GPU Consumption in Training\n\nWith the following settings\n\n- ResNet-50 `stride=1` in the last block\n- `batch_size = 64`\n- image size `h x w = 384 x 128`\n\ntraining occupies ~11000 MB of GPU memory.\n\nIf you do not have a 12 GB GPU, you can decrease `batch_size` or use multiple GPUs.\n\n\n### Training Time\n\nTaking Market1501 as an example, it contains `31969` training images; each epoch takes ~205s, so training for 60 epochs takes ~3.5 hours.\n\n### Testing Time\n\nTaking Market1501 as an example\n- With `images_per_batch = 32`, extracting features for the whole test set (12936 images) takes ~160s.\n- Computing the query-gallery global distance (a `3368 x 15913` matrix) takes ~2s.\n- Computing CMC and mAP scores takes ~15s.\n- Re-ranking requires computing the query-query distance (a `3368 x 3368` matrix) and the gallery-gallery distance (a `15913 x 15913` matrix, the most time-consuming part), taking ~90s.\n\n\n# References & Credits\n\n- [Beyond Part Models: Person Retrieval with Refined Part Pooling](https://arxiv.org/abs/1711.09349)\n- [open-reid](https://github.com/Cysu/open-reid)\n- [Re-ranking Person Re-identification with k-reciprocal Encoding](https://github.com/zhunzhong07/person-re-ranking)\n- [Market1501](http://www.liangzheng.org/Project/project_reid.html)\n- [CUHK03](http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html)\n- 
[DukeMTMC-reID](https://github.com/layumi/DukeMTMC-reID_evaluation)\n"
  },
  {
    "path": "bpm/__init__.py",
    "content": ""
  },
  {
    "path": "bpm/dataset/Dataset.py",
    "content": "from .PreProcessImage import PreProcessIm\nfrom .Prefetcher import Prefetcher\nimport numpy as np\n\n\nclass Dataset(object):\n  \"\"\"The core elements of a dataset.    \n  Args:\n    final_batch: bool. The last batch may not be complete, if to abandon this \n      batch, set 'final_batch' to False.\n  \"\"\"\n\n  def __init__(\n      self,\n      dataset_size=None,\n      batch_size=None,\n      final_batch=True,\n      shuffle=True,\n      num_prefetch_threads=1,\n      prng=np.random,\n      **pre_process_im_kwargs):\n\n    self.pre_process_im = PreProcessIm(\n      prng=prng,\n      **pre_process_im_kwargs)\n\n    self.prefetcher = Prefetcher(\n      self.get_sample,\n      dataset_size,\n      batch_size,\n      final_batch=final_batch,\n      num_threads=num_prefetch_threads)\n\n    self.shuffle = shuffle\n    self.epoch_done = True\n    self.prng = prng\n\n  def set_mirror_type(self, mirror_type):\n    self.pre_process_im.set_mirror_type(mirror_type)\n\n  def get_sample(self, ptr):\n    \"\"\"Get one sample to put to queue.\"\"\"\n    raise NotImplementedError\n\n  def next_batch(self):\n    \"\"\"Get a batch from the queue.\"\"\"\n    raise NotImplementedError\n\n  def set_batch_size(self, batch_size):\n    \"\"\"You can change batch size, had better at the beginning of a new epoch.\n    \"\"\"\n    self.prefetcher.set_batch_size(batch_size)\n    self.epoch_done = True\n\n  def stop_prefetching_threads(self):\n    \"\"\"This can be called to stop threads, e.g. after finishing using the \n    dataset, or when existing the python main program.\"\"\"\n    self.prefetcher.stop()\n"
  },
  {
    "path": "bpm/dataset/PreProcessImage.py",
    "content": "import numpy as np\nimport cv2\n\n\nclass PreProcessIm(object):\n  def __init__(\n      self,\n      crop_prob=0,\n      crop_ratio=1.0,\n      resize_h_w=None,\n      scale=True,\n      im_mean=None,\n      im_std=None,\n      mirror_type=None,\n      batch_dims='NCHW',\n      prng=np.random):\n    \"\"\"\n    Args:\n      crop_prob: the probability of each image to go through cropping\n      crop_ratio: a float. If == 1.0, no cropping.\n      resize_h_w: (height, width) after resizing. If `None`, no resizing.\n      scale: whether to scale the pixel value by 1/255\n      im_mean: (Optionally) subtracting image mean; `None` or a tuple or list or\n        numpy array with shape [3]\n      im_std: (Optionally) divided by image std; `None` or a tuple or list or\n        numpy array with shape [3]. Dividing is applied only when subtracting\n        mean is applied.\n      mirror_type: How image should be mirrored; one of\n        [None, 'random', 'always']\n      batch_dims: either 'NCHW' or 'NHWC'. 'N': batch size, 'C': num channels,\n        'H': im height, 'W': im width. 
PyTorch uses 'NCHW', while TensorFlow\n        uses 'NHWC'.\n      prng: can be set to a numpy.random.RandomState object, in order to have\n        random seed independent from the global one\n    \"\"\"\n    self.crop_prob = crop_prob\n    self.crop_ratio = crop_ratio\n    self.resize_h_w = resize_h_w\n    self.scale = scale\n    self.im_mean = im_mean\n    self.im_std = im_std\n    self.check_mirror_type(mirror_type)\n    self.mirror_type = mirror_type\n    self.check_batch_dims(batch_dims)\n    self.batch_dims = batch_dims\n    self.prng = prng\n\n  def __call__(self, im):\n    return self.pre_process_im(im)\n\n  @staticmethod\n  def check_mirror_type(mirror_type):\n    assert mirror_type in [None, 'random', 'always']\n\n  @staticmethod\n  def check_batch_dims(batch_dims):\n    # 'N': batch size, 'C': num channels, 'H': im height, 'W': im width\n    # PyTorch uses 'NCHW', while TensorFlow uses 'NHWC'.\n    assert batch_dims in ['NCHW', 'NHWC']\n\n  def set_mirror_type(self, mirror_type):\n    self.check_mirror_type(mirror_type)\n    self.mirror_type = mirror_type\n\n  @staticmethod\n  def rand_crop_im(im, new_size, prng=np.random):\n    \"\"\"Crop `im` to `new_size`: [new_w, new_h].\"\"\"\n    if (new_size[0] == im.shape[1]) and (new_size[1] == im.shape[0]):\n      return im\n    h_start = prng.randint(0, im.shape[0] - new_size[1])\n    w_start = prng.randint(0, im.shape[1] - new_size[0])\n    im = np.copy(\n      im[h_start: h_start + new_size[1], w_start: w_start + new_size[0], :])\n    return im\n\n  def pre_process_im(self, im):\n    \"\"\"Pre-process image.\n    `im` is a numpy array with shape [H, W, 3], e.g. 
the result of\n    matplotlib.pyplot.imread(some_im_path), or\n    numpy.asarray(PIL.Image.open(some_im_path)).\"\"\"\n\n    # Randomly crop a sub-image.\n    if ((self.crop_ratio < 1)\n        and (self.crop_prob > 0)\n        and (self.prng.uniform() < self.crop_prob)):\n      h_ratio = self.prng.uniform(self.crop_ratio, 1)\n      w_ratio = self.prng.uniform(self.crop_ratio, 1)\n      crop_h = int(im.shape[0] * h_ratio)\n      crop_w = int(im.shape[1] * w_ratio)\n      im = self.rand_crop_im(im, (crop_w, crop_h), prng=self.prng)\n\n    # Resize.\n    if (self.resize_h_w is not None) \\\n        and (self.resize_h_w != (im.shape[0], im.shape[1])):\n      im = cv2.resize(im, self.resize_h_w[::-1], interpolation=cv2.INTER_LINEAR)\n\n    # scaled by 1/255.\n    if self.scale:\n      im = im / 255.\n\n    # Subtract mean and scaled by std\n    # im -= np.array(self.im_mean) # This causes an error:\n    # Cannot cast ufunc subtract output from dtype('float64') to\n    # dtype('uint8') with casting rule 'same_kind'\n    if self.im_mean is not None:\n      im = im - np.array(self.im_mean)\n    if self.im_mean is not None and self.im_std is not None:\n      im = im / np.array(self.im_std).astype(float)\n\n    # May mirror image.\n    mirrored = False\n    if self.mirror_type == 'always' \\\n        or (self.mirror_type == 'random' and self.prng.uniform() > 0.5):\n      im = im[:, ::-1, :]\n      mirrored = True\n\n    # The original image has dims 'HWC', transform it to 'CHW'.\n    if self.batch_dims == 'NCHW':\n      im = im.transpose(2, 0, 1)\n\n    return im, mirrored"
  },
  {
    "path": "bpm/dataset/Prefetcher.py",
    "content": "import threading\nimport Queue\nimport time\n\n\nclass Counter(object):\n  \"\"\"A thread safe counter.\"\"\"\n\n  def __init__(self, val=0, max_val=0):\n    self._value = val\n    self.max_value = max_val\n    self._lock = threading.Lock()\n\n  def reset(self):\n    with self._lock:\n      self._value = 0\n\n  def set_max_value(self, max_val):\n    self.max_value = max_val\n\n  def increment(self):\n    with self._lock:\n      if self._value < self.max_value:\n        self._value += 1\n        incremented = True\n      else:\n        incremented = False\n      return incremented, self._value\n\n  def get_value(self):\n    with self._lock:\n      return self._value\n\n\nclass Enqueuer(object):\n  def __init__(self, get_element, num_elements, num_threads=1, queue_size=20):\n    \"\"\"\n    Args:\n      get_element: a function that takes a pointer and returns an element\n      num_elements: total number of elements to put into the queue\n      num_threads: num of parallel threads, >= 1\n      queue_size: the maximum size of the queue. 
Set to some positive integer\n        to save memory, otherwise, set to 0.\n    \"\"\"\n    self.get_element = get_element\n    assert num_threads > 0\n    self.num_threads = num_threads\n    self.queue_size = queue_size\n    self.queue = Queue.Queue(maxsize=queue_size)\n    # The pointer shared by threads.\n    self.ptr = Counter(max_val=num_elements)\n    # The event to wake up threads, it's set at the beginning of an epoch.\n    # It's cleared after an epoch is enqueued or when the states are reset.\n    self.event = threading.Event()\n    # To reset states.\n    self.reset_event = threading.Event()\n    # The event to terminate the threads.\n    self.stop_event = threading.Event()\n    self.threads = []\n    for _ in range(num_threads):\n      thread = threading.Thread(target=self.enqueue)\n      # Set the thread in daemon mode, so that the main program ends normally.\n      thread.daemon = True\n      thread.start()\n      self.threads.append(thread)\n\n  def start_ep(self):\n    \"\"\"Start enqueuing an epoch.\"\"\"\n    self.event.set()\n\n  def end_ep(self):\n    \"\"\"When all elements are enqueued, let threads sleep to save resources.\"\"\"\n    self.event.clear()\n    self.ptr.reset()\n\n  def reset(self):\n    \"\"\"Reset the threads, pointer and the queue to initial states. In common\n    case, this will not be called.\"\"\"\n    self.reset_event.set()\n    self.event.clear()\n    # wait for threads to pause. This is not an absolutely safe way. 
The safer\n    # way is to check some flag inside a thread, not implemented yet.\n    time.sleep(5)\n    self.reset_event.clear()\n    self.ptr.reset()\n    self.queue = Queue.Queue(maxsize=self.queue_size)\n\n  def set_num_elements(self, num_elements):\n    \"\"\"Reset the max number of elements.\"\"\"\n    self.reset()\n    self.ptr.set_max_value(num_elements)\n\n  def stop(self):\n    \"\"\"Wait for threads to terminate.\"\"\"\n    self.stop_event.set()\n    for thread in self.threads:\n      thread.join()\n\n  def enqueue(self):\n    while not self.stop_event.isSet():\n      # If the enqueuing event is not set, the thread just waits.\n      if not self.event.wait(0.5): continue\n      # Increment the counter to claim that this element has been enqueued by\n      # this thread.\n      incremented, ptr = self.ptr.increment()\n      if incremented:\n        element = self.get_element(ptr - 1)\n        # When enqueuing, keep an eye on the stop and reset signals.\n        while not self.stop_event.isSet() and not self.reset_event.isSet():\n          try:\n            # This operation waits at most `timeout` seconds for a free slot\n            # in the queue to become available.\n            self.queue.put(element, timeout=0.5)\n            break\n          except Queue.Full:\n            # The queue is still full; retry while watching the stop and\n            # reset signals.\n            pass\n      else:\n        self.end_ep()\n    print('Exiting thread {}'.format(threading.current_thread().name))\n\n\nclass Prefetcher(object):\n  \"\"\"This helper class enables sample enqueuing and batch dequeuing, to speed\n  up batch fetching. 
It abstracts away the enqueuing and dequeuing logic.\"\"\"\n\n  def __init__(self, get_sample, dataset_size, batch_size, final_batch=True,\n               num_threads=1, prefetch_size=200):\n    \"\"\"\n    Args:\n      get_sample: a function that takes a pointer (index) and returns a sample\n      dataset_size: total number of samples in the dataset\n      final_batch: True or False, whether to keep or drop the final incomplete\n        batch\n      num_threads: num of parallel threads, >= 1\n      prefetch_size: the maximum size of the queue. Set to some positive integer\n        to save memory, otherwise, set to 0.\n    \"\"\"\n    self.full_dataset_size = dataset_size\n    self.final_batch = final_batch\n    final_sz = self.full_dataset_size % batch_size\n    if not final_batch:\n      dataset_size = self.full_dataset_size - final_sz\n    self.dataset_size = dataset_size\n    self.batch_size = batch_size\n    self.enqueuer = Enqueuer(get_element=get_sample, num_elements=dataset_size,\n                             num_threads=num_threads, queue_size=prefetch_size)\n    # The pointer indicating whether an epoch has been fetched from the queue\n    self.ptr = 0\n    self.ep_done = True\n\n  def set_batch_size(self, batch_size):\n    \"\"\"You had better change batch size at the beginning of a new epoch.\"\"\"\n    final_sz = self.full_dataset_size % batch_size\n    if not self.final_batch:\n      self.dataset_size = self.full_dataset_size - final_sz\n    self.enqueuer.set_num_elements(self.dataset_size)\n    self.batch_size = batch_size\n    self.ep_done = True\n\n  def next_batch(self):\n    \"\"\"Return a batch of samples, meanwhile indicate whether the epoch is\n    done. 
The purpose of this function is mainly to abstract away the loop and the\n    boundary-checking logic.\n    Returns:\n      samples: a list of samples\n      done: bool, whether the epoch is done\n    \"\"\"\n    # Start enqueuing and other preparation at the beginning of an epoch.\n    if self.ep_done:\n      self.start_ep_prefetching()\n    # Whether an epoch is done.\n    self.ep_done = False\n    samples = []\n    for _ in range(self.batch_size):\n      # Indeed, `>` will not occur.\n      if self.ptr >= self.dataset_size:\n        self.ep_done = True\n        break\n      else:\n        self.ptr += 1\n        sample = self.enqueuer.queue.get()\n        # print('queue size {}'.format(self.enqueuer.queue.qsize()))\n        samples.append(sample)\n    # print 'queue size: {}'.format(self.enqueuer.queue.qsize())\n    # Indeed, `>` will not occur.\n    if self.ptr >= self.dataset_size:\n      self.ep_done = True\n    return samples, self.ep_done\n\n  def start_ep_prefetching(self):\n    \"\"\"\n    NOTE: Has to be called at the start of every epoch.\n    \"\"\"\n    self.enqueuer.start_ep()\n    self.ptr = 0\n\n  def stop(self):\n    \"\"\"This can be called to stop threads, e.g. after finishing using the\n    dataset, or when exiting the Python main program.\"\"\"\n    self.enqueuer.stop()"
  },
  {
    "path": "bpm/dataset/TestSet.py",
    "content": "from __future__ import print_function\nimport sys\nimport time\nimport os.path as osp\nfrom PIL import Image\nimport numpy as np\nfrom collections import defaultdict\n\nfrom .Dataset import Dataset\n\nfrom ..utils.utils import measure_time\nfrom ..utils.re_ranking import re_ranking\nfrom ..utils.metric import cmc, mean_ap\nfrom ..utils.dataset_utils import parse_im_name\nfrom ..utils.distance import normalize\nfrom ..utils.distance import compute_dist\n\n\nclass TestSet(Dataset):\n  \"\"\"\n  Args:\n    extract_feat_func: a function to extract features. It takes a batch of\n      images and returns a batch of features.\n    marks: a list, each element e denoting whether the image is from \n      query (e == 0), or\n      gallery (e == 1), or \n      multi query (e == 2) set\n  \"\"\"\n\n  def __init__(\n      self,\n      im_dir=None,\n      im_names=None,\n      marks=None,\n      extract_feat_func=None,\n      separate_camera_set=None,\n      single_gallery_shot=None,\n      first_match_break=None,\n      **kwargs):\n\n    super(TestSet, self).__init__(dataset_size=len(im_names), **kwargs)\n\n    # The im dir of all images\n    self.im_dir = im_dir\n    self.im_names = im_names\n    self.marks = marks\n    self.extract_feat_func = extract_feat_func\n    self.separate_camera_set = separate_camera_set\n    self.single_gallery_shot = single_gallery_shot\n    self.first_match_break = first_match_break\n\n  def set_feat_func(self, extract_feat_func):\n    self.extract_feat_func = extract_feat_func\n\n  def get_sample(self, ptr):\n    im_name = self.im_names[ptr]\n    im_path = osp.join(self.im_dir, im_name)\n    im = np.asarray(Image.open(im_path))\n    im, _ = self.pre_process_im(im)\n    id = parse_im_name(self.im_names[ptr], 'id')\n    cam = parse_im_name(self.im_names[ptr], 'cam')\n    # denoting whether the im is from query, gallery, or multi query set\n    mark = self.marks[ptr]\n    return im, id, cam, im_name, mark\n\n  def next_batch(self):\n  
  if self.epoch_done and self.shuffle:\n      # Shuffle im_names and marks with the same permutation, so that\n      # get_sample still sees aligned names and marks.\n      perm = self.prng.permutation(len(self.im_names))\n      self.im_names = [self.im_names[i] for i in perm]\n      self.marks = [self.marks[i] for i in perm]\n    samples, self.epoch_done = self.prefetcher.next_batch()\n    im_list, ids, cams, im_names, marks = zip(*samples)\n    # Transform the list into a numpy array with shape [N, ...]\n    ims = np.stack(im_list, axis=0)\n    ids = np.array(ids)\n    cams = np.array(cams)\n    im_names = np.array(im_names)\n    marks = np.array(marks)\n    return ims, ids, cams, im_names, marks, self.epoch_done\n\n  def extract_feat(self, normalize_feat, verbose=True):\n    \"\"\"Extract the features of the whole image set.\n    Args:\n      normalize_feat: True or False, whether to normalize features to unit length\n      verbose: whether to print the progress of feature extraction\n    Returns:\n      feat: numpy array with shape [N, C]\n      ids: numpy array with shape [N]\n      cams: numpy array with shape [N]\n      im_names: numpy array with shape [N]\n      marks: numpy array with shape [N]\n    \"\"\"\n    feat, ids, cams, im_names, marks = [], [], [], [], []\n    done = False\n    step = 0\n    printed = False\n    st = time.time()\n    last_time = time.time()\n    while not done:\n      ims_, ids_, cams_, im_names_, marks_, done = self.next_batch()\n      feat_ = self.extract_feat_func(ims_)\n      feat.append(feat_)\n      ids.append(ids_)\n      cams.append(cams_)\n      im_names.append(im_names_)\n      marks.append(marks_)\n\n      if verbose:\n        # Print the progress of feature extraction\n        total_batches = (self.prefetcher.dataset_size\n                         // self.prefetcher.batch_size + 1)\n        step += 1\n        if step % 20 == 0:\n          if not printed:\n            printed = True\n          else:\n            # Clean the current line\n            sys.stdout.write(\"\\033[F\\033[K\")\n          print('{}/{} batches done, +{:.2f}s, total {:.2f}s'\n                .format(step, total_batches,\n                        time.time() - last_time, time.time() - st))\n          last_time = time.time()\n\n    feat = np.vstack(feat)\n    ids = np.hstack(ids)\n    cams = np.hstack(cams)\n    im_names = np.hstack(im_names)\n    marks = np.hstack(marks)\n    if normalize_feat:\n      feat = normalize(feat, axis=1)\n    return feat, ids, cams, im_names, marks\n\n  def eval(\n      self,\n      normalize_feat=True,\n      to_re_rank=True,\n      pool_type='average',\n      verbose=True):\n\n    \"\"\"Evaluate using the CMC and mAP metrics.\n    Args:\n      normalize_feat: whether to normalize features before computing distance\n      to_re_rank: whether to also report re-ranking scores\n      pool_type: 'average' or 'max', only for the multi-query case\n      verbose: whether to print the intermediate information\n    \"\"\"\n\n    with measure_time('Extracting feature...', verbose=verbose):\n      feat, ids, cams, im_names, marks = self.extract_feat(\n        normalize_feat, verbose)\n\n    # query, gallery, multi-query indices\n    q_inds = marks == 0\n    g_inds = marks == 1\n    mq_inds = marks == 2\n\n    # A helper function just for avoiding code duplication.\n    def compute_score(\n        dist_mat,\n        query_ids=ids[q_inds],\n        gallery_ids=ids[g_inds],\n        query_cams=cams[q_inds],\n        gallery_cams=cams[g_inds]):\n      # Compute mean AP\n      mAP = mean_ap(\n        distmat=dist_mat,\n        query_ids=query_ids, gallery_ids=gallery_ids,\n        query_cams=query_cams, gallery_cams=gallery_cams)\n      # Compute CMC scores\n      cmc_scores = cmc(\n        distmat=dist_mat,\n        query_ids=query_ids, gallery_ids=gallery_ids,\n        query_cams=query_cams, gallery_cams=gallery_cams,\n        separate_camera_set=self.separate_camera_set,\n        single_gallery_shot=self.single_gallery_shot,\n        first_match_break=self.first_match_break,\n        topk=10)\n      return mAP, cmc_scores\n\n    def print_scores(mAP, cmc_scores):\n      print('[mAP: {:5.2%}], [cmc1: {:5.2%}], [cmc5: {:5.2%}], [cmc10: {:5.2%}]'\n    
        .format(mAP, *cmc_scores[[0, 4, 9]]))\n\n    ################\n    # Single Query #\n    ################\n\n    with measure_time('Computing distance...', verbose=verbose):\n      # query-gallery distance\n      q_g_dist = compute_dist(feat[q_inds], feat[g_inds], type='euclidean')\n\n    with measure_time('Computing scores...', verbose=verbose):\n      mAP, cmc_scores = compute_score(q_g_dist)\n\n    print('{:<30}'.format('Single Query:'), end='')\n    print_scores(mAP, cmc_scores)\n\n    ###############\n    # Multi Query #\n    ###############\n\n    mq_mAP, mq_cmc_scores = None, None\n    if any(mq_inds):\n      mq_ids = ids[mq_inds]\n      mq_cams = cams[mq_inds]\n      mq_feat = feat[mq_inds]\n      unique_mq_ids_cams = defaultdict(list)\n      for ind, (id, cam) in enumerate(zip(mq_ids, mq_cams)):\n        unique_mq_ids_cams[(id, cam)].append(ind)\n      keys = unique_mq_ids_cams.keys()\n      assert pool_type in ['average', 'max']\n      pool = np.mean if pool_type == 'average' else np.max\n      mq_feat = np.stack([pool(mq_feat[unique_mq_ids_cams[k]], axis=0)\n                          for k in keys])\n\n      with measure_time('Multi Query, Computing distance...', verbose=verbose):\n        # multi_query-gallery distance\n        mq_g_dist = compute_dist(mq_feat, feat[g_inds], type='euclidean')\n\n      with measure_time('Multi Query, Computing scores...', verbose=verbose):\n        mq_mAP, mq_cmc_scores = compute_score(\n          mq_g_dist,\n          query_ids=np.array(zip(*keys)[0]),\n          gallery_ids=ids[g_inds],\n          query_cams=np.array(zip(*keys)[1]),\n          gallery_cams=cams[g_inds]\n        )\n\n      print('{:<30}'.format('Multi Query:'), end='')\n      print_scores(mq_mAP, mq_cmc_scores)\n\n    if to_re_rank:\n\n      ##########################\n      # Re-ranked Single Query #\n      ##########################\n\n      with measure_time('Re-ranking distance...', verbose=verbose):\n        # query-query distance\n        
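# Together with the query-gallery distance, the query-query and\n        # gallery-gallery distances below are the inputs required by\n        # k-reciprocal re-ranking (Zhong et al., CVPR 2017).\n        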
q_q_dist = compute_dist(feat[q_inds], feat[q_inds], type='euclidean')\n        # gallery-gallery distance\n        g_g_dist = compute_dist(feat[g_inds], feat[g_inds], type='euclidean')\n        # re-ranked query-gallery distance\n        re_r_q_g_dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)\n\n      with measure_time('Computing scores for re-ranked distance...',\n                        verbose=verbose):\n        mAP, cmc_scores = compute_score(re_r_q_g_dist)\n\n      print('{:<30}'.format('Re-ranked Single Query:'), end='')\n      print_scores(mAP, cmc_scores)\n\n      #########################\n      # Re-ranked Multi Query #\n      #########################\n\n      if any(mq_inds):\n        with measure_time('Multi Query, Re-ranking distance...',\n                          verbose=verbose):\n          # multi_query-multi_query distance\n          mq_mq_dist = compute_dist(mq_feat, mq_feat, type='euclidean')\n          # re-ranked multi_query-gallery distance\n          re_r_mq_g_dist = re_ranking(mq_g_dist, mq_mq_dist, g_g_dist)\n\n        with measure_time(\n            'Multi Query, Computing scores for re-ranked distance...',\n            verbose=verbose):\n          mq_mAP, mq_cmc_scores = compute_score(\n            re_r_mq_g_dist,\n            query_ids=np.array(zip(*keys)[0]),\n            gallery_ids=ids[g_inds],\n            query_cams=np.array(zip(*keys)[1]),\n            gallery_cams=cams[g_inds]\n          )\n\n        print('{:<30}'.format('Re-ranked Multi Query:'), end='')\n        print_scores(mq_mAP, mq_cmc_scores)\n\n    return mAP, cmc_scores, mq_mAP, mq_cmc_scores\n"
  },
  {
    "path": "bpm/dataset/TrainSet.py",
    "content": "from .Dataset import Dataset\nfrom ..utils.dataset_utils import parse_im_name\n\nimport os.path as osp\nfrom PIL import Image\nimport numpy as np\n\n\nclass TrainSet(Dataset):\n  \"\"\"Training set for identification loss.\n  Args:\n    ids2labels: a dict mapping ids to labels\n  \"\"\"\n  def __init__(self,\n               im_dir=None,\n               im_names=None,\n               ids2labels=None,\n               **kwargs):\n    super(TrainSet, self).__init__(dataset_size=len(im_names), **kwargs)\n    # The im dir of all images\n    self.im_dir = im_dir\n    self.im_names = im_names\n    self.ids2labels = ids2labels\n\n  def get_sample(self, ptr):\n    \"\"\"Get one sample to put to queue.\"\"\"\n    im_name = self.im_names[ptr]\n    im_path = osp.join(self.im_dir, im_name)\n    im = np.asarray(Image.open(im_path))\n    im, mirrored = self.pre_process_im(im)\n    id = parse_im_name(im_name, 'id')\n    label = self.ids2labels[id]\n    return im, im_name, label, mirrored\n\n  def next_batch(self):\n    \"\"\"Next batch of images and labels.\n    Returns:\n      ims: numpy array with shape [N, H, W, C] or [N, C, H, W], N >= 1\n      im_names: a numpy array of image names, len(im_names) >= 1\n      labels: a numpy array of image labels, len(labels) >= 1\n      mirrored: a numpy array of booleans, whether the images are mirrored\n      self.epoch_done: whether the epoch is over\n    \"\"\"\n    if self.epoch_done and self.shuffle:\n      self.prng.shuffle(self.im_names)\n    samples, self.epoch_done = self.prefetcher.next_batch()\n    im_list, im_names, labels, mirrored = zip(*samples)\n    # Transform the list into a numpy array with shape [N, ...]\n    ims = np.stack(im_list, axis=0)\n    im_names = np.array(im_names)\n    labels = np.array(labels)\n    mirrored = np.array(mirrored)\n    return ims, im_names, labels, mirrored, self.epoch_done\n"
  },
  {
    "path": "bpm/dataset/__init__.py",
    "content": "import numpy as np\nimport os.path as osp\nospj = osp.join\nospeu = osp.expanduser\n\nfrom ..utils.utils import load_pickle\nfrom ..utils.dataset_utils import parse_im_name\nfrom .TrainSet import TrainSet\nfrom .TestSet import TestSet\n\n\ndef create_dataset(\n    name='market1501',\n    part='trainval',\n    **kwargs):\n  assert name in ['market1501', 'cuhk03', 'duke', 'combined'], \\\n    \"Unsupported Dataset {}\".format(name)\n\n  assert part in ['trainval', 'train', 'val', 'test'], \\\n    \"Unsupported Dataset Part {}\".format(part)\n\n  ########################################\n  # Specify Directory and Partition File #\n  ########################################\n\n  if name == 'market1501':\n    im_dir = ospeu('~/Dataset/market1501/images')\n    partition_file = ospeu('~/Dataset/market1501/partitions.pkl')\n\n  elif name == 'cuhk03':\n    im_type = ['detected', 'labeled'][0]\n    im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))\n    partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))\n\n  elif name == 'duke':\n    im_dir = ospeu('~/Dataset/duke/images')\n    partition_file = ospeu('~/Dataset/duke/partitions.pkl')\n\n  elif name == 'combined':\n    assert part in ['trainval'], \\\n      \"Only trainval part of the combined dataset is available now.\"\n    im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')\n    partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')\n\n  ##################\n  # Create Dataset #\n  ##################\n\n  # Use standard Market1501 CMC settings for all datasets here.\n  cmc_kwargs = dict(separate_camera_set=False,\n                    single_gallery_shot=False,\n                    first_match_break=True)\n\n  partitions = load_pickle(partition_file)\n  im_names = partitions['{}_im_names'.format(part)]\n\n  if part == 'trainval':\n    ids2labels = partitions['trainval_ids2labels']\n\n    ret_set = TrainSet(\n      im_dir=im_dir,\n      
im_names=im_names,\n      ids2labels=ids2labels,\n      **kwargs)\n\n  elif part == 'train':\n    ids2labels = partitions['train_ids2labels']\n\n    ret_set = TrainSet(\n      im_dir=im_dir,\n      im_names=im_names,\n      ids2labels=ids2labels,\n      **kwargs)\n\n  elif part == 'val':\n    marks = partitions['val_marks']\n    kwargs.update(cmc_kwargs)\n\n    ret_set = TestSet(\n      im_dir=im_dir,\n      im_names=im_names,\n      marks=marks,\n      **kwargs)\n\n  elif part == 'test':\n    marks = partitions['test_marks']\n    kwargs.update(cmc_kwargs)\n\n    ret_set = TestSet(\n      im_dir=im_dir,\n      im_names=im_names,\n      marks=marks,\n      **kwargs)\n\n  if part in ['trainval', 'train']:\n    num_ids = len(ids2labels)\n  elif part in ['val', 'test']:\n    ids = [parse_im_name(n, 'id') for n in im_names]\n    num_ids = len(set(ids))\n    num_query = np.sum(np.array(marks) == 0)\n    num_gallery = np.sum(np.array(marks) == 1)\n    num_multi_query = np.sum(np.array(marks) == 2)\n\n  # Print dataset information\n  print('-' * 40)\n  print('{} {} set'.format(name, part))\n  print('-' * 40)\n  print('NO. Images: {}'.format(len(im_names)))\n  print('NO. IDs: {}'.format(num_ids))\n\n  # num_query etc. are only defined for the val/test parts.\n  try:\n    print('NO. Query Images: {}'.format(num_query))\n    print('NO. Gallery Images: {}'.format(num_gallery))\n    print('NO. Multi-query Images: {}'.format(num_multi_query))\n  except NameError:\n    pass\n\n  print('-' * 40)\n\n  return ret_set\n"
  },
  {
    "path": "bpm/model/PCBModel.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.init as init\nimport torch.nn.functional as F\n\nfrom .resnet import resnet50\n\n\nclass PCBModel(nn.Module):\n  def __init__(\n      self,\n      last_conv_stride=1,\n      last_conv_dilation=1,\n      num_stripes=6,\n      local_conv_out_channels=256,\n      num_classes=0\n  ):\n    super(PCBModel, self).__init__()\n\n    self.base = resnet50(\n      pretrained=True,\n      last_conv_stride=last_conv_stride,\n      last_conv_dilation=last_conv_dilation)\n    self.num_stripes = num_stripes\n\n    self.local_conv_list = nn.ModuleList()\n    for _ in range(num_stripes):\n      self.local_conv_list.append(nn.Sequential(\n        nn.Conv2d(2048, local_conv_out_channels, 1),\n        nn.BatchNorm2d(local_conv_out_channels),\n        nn.ReLU(inplace=True)\n      ))\n\n    if num_classes > 0:\n      self.fc_list = nn.ModuleList()\n      for _ in range(num_stripes):\n        fc = nn.Linear(local_conv_out_channels, num_classes)\n        init.normal(fc.weight, std=0.001)\n        init.constant(fc.bias, 0)\n        self.fc_list.append(fc)\n\n  def forward(self, x):\n    \"\"\"\n    Returns:\n      local_feat_list: each member with shape [N, c]\n      logits_list: each member with shape [N, num_classes]\n    \"\"\"\n    # shape [N, C, H, W]\n    feat = self.base(x)\n    assert feat.size(2) % self.num_stripes == 0\n    stripe_h = int(feat.size(2) / self.num_stripes)\n    local_feat_list = []\n    logits_list = []\n    for i in range(self.num_stripes):\n      # shape [N, C, 1, 1]\n      local_feat = F.avg_pool2d(\n        feat[:, :, i * stripe_h: (i + 1) * stripe_h, :],\n        (stripe_h, feat.size(-1)))\n      # shape [N, c, 1, 1]\n      local_feat = self.local_conv_list[i](local_feat)\n      # shape [N, c]\n      local_feat = local_feat.view(local_feat.size(0), -1)\n      local_feat_list.append(local_feat)\n      if hasattr(self, 'fc_list'):\n        logits_list.append(self.fc_list[i](local_feat))\n\n    if 
hasattr(self, 'fc_list'):\n      return local_feat_list, logits_list\n\n    return local_feat_list\n"
  },
  {
    "path": "bpm/model/__init__.py",
    "content": ""
  },
  {
    "path": "bpm/model/resnet.py",
    "content": "import torch.nn as nn\nimport math\nimport torch.utils.model_zoo as model_zoo\n\n__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',\n           'resnet152']\n\nmodel_urls = {\n  'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',\n  'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',\n  'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',\n  'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',\n  'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',\n}\n\n\ndef conv3x3(in_planes, out_planes, stride=1, dilation=1):\n  \"\"\"3x3 convolution with padding\"\"\"\n  # original padding is 1; original dilation is 1\n  return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n                   padding=dilation, bias=False, dilation=dilation)\n\n\nclass BasicBlock(nn.Module):\n  expansion = 1\n\n  def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):\n    super(BasicBlock, self).__init__()\n    self.conv1 = conv3x3(inplanes, planes, stride, dilation)\n    self.bn1 = nn.BatchNorm2d(planes)\n    self.relu = nn.ReLU(inplace=True)\n    self.conv2 = conv3x3(planes, planes)\n    self.bn2 = nn.BatchNorm2d(planes)\n    self.downsample = downsample\n    self.stride = stride\n\n  def forward(self, x):\n    residual = x\n\n    out = self.conv1(x)\n    out = self.bn1(out)\n    out = self.relu(out)\n\n    out = self.conv2(out)\n    out = self.bn2(out)\n\n    if self.downsample is not None:\n      residual = self.downsample(x)\n\n    out += residual\n    out = self.relu(out)\n\n    return out\n\n\nclass Bottleneck(nn.Module):\n  expansion = 4\n\n  def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):\n    super(Bottleneck, self).__init__()\n    self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n    self.bn1 = nn.BatchNorm2d(planes)\n    # original padding is 1; original 
dilation is 1\n    self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=dilation, bias=False, dilation=dilation)\n    self.bn2 = nn.BatchNorm2d(planes)\n    self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n    self.bn3 = nn.BatchNorm2d(planes * 4)\n    self.relu = nn.ReLU(inplace=True)\n    self.downsample = downsample\n    self.stride = stride\n\n  def forward(self, x):\n    residual = x\n\n    out = self.conv1(x)\n    out = self.bn1(out)\n    out = self.relu(out)\n\n    out = self.conv2(out)\n    out = self.bn2(out)\n    out = self.relu(out)\n\n    out = self.conv3(out)\n    out = self.bn3(out)\n\n    if self.downsample is not None:\n      residual = self.downsample(x)\n\n    out += residual\n    out = self.relu(out)\n\n    return out\n\n\nclass ResNet(nn.Module):\n\n  def __init__(self, block, layers, last_conv_stride=2, last_conv_dilation=1):\n\n    self.inplanes = 64\n    super(ResNet, self).__init__()\n    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                           bias=False)\n    self.bn1 = nn.BatchNorm2d(64)\n    self.relu = nn.ReLU(inplace=True)\n    self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n    self.layer1 = self._make_layer(block, 64, layers[0])\n    self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n    self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n    self.layer4 = self._make_layer(block, 512, layers[3], stride=last_conv_stride, dilation=last_conv_dilation)\n\n    for m in self.modules():\n      if isinstance(m, nn.Conv2d):\n        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n        m.weight.data.normal_(0, math.sqrt(2. 
/ n))\n      elif isinstance(m, nn.BatchNorm2d):\n        m.weight.data.fill_(1)\n        m.bias.data.zero_()\n\n  def _make_layer(self, block, planes, blocks, stride=1, dilation=1):\n    downsample = None\n    if stride != 1 or self.inplanes != planes * block.expansion:\n      downsample = nn.Sequential(\n        nn.Conv2d(self.inplanes, planes * block.expansion,\n                  kernel_size=1, stride=stride, bias=False),\n        nn.BatchNorm2d(planes * block.expansion),\n      )\n\n    layers = []\n    layers.append(block(self.inplanes, planes, stride, downsample, dilation))\n    self.inplanes = planes * block.expansion\n    for i in range(1, blocks):\n      layers.append(block(self.inplanes, planes))\n\n    return nn.Sequential(*layers)\n\n  def forward(self, x):\n    x = self.conv1(x)\n    x = self.bn1(x)\n    x = self.relu(x)\n    x = self.maxpool(x)\n\n    x = self.layer1(x)\n    x = self.layer2(x)\n    x = self.layer3(x)\n    x = self.layer4(x)\n\n    return x\n\n\ndef remove_fc(state_dict):\n  \"\"\"Remove the fc layer parameters from state_dict.\"\"\"\n  for key in list(state_dict):\n    if key.startswith('fc.'):\n      del state_dict[key]\n  return state_dict\n\n\ndef resnet18(pretrained=False, **kwargs):\n  \"\"\"Constructs a ResNet-18 model.\n\n  Args:\n      pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)\n  if pretrained:\n    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet18'])))\n  return model\n\n\ndef resnet34(pretrained=False, **kwargs):\n  \"\"\"Constructs a ResNet-34 model.\n\n  Args:\n      pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)\n  if pretrained:\n    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet34'])))\n  return model\n\n\ndef resnet50(pretrained=False, **kwargs):\n  \"\"\"Constructs a ResNet-50 model.\n\n  Args:\n      
pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)\n  if pretrained:\n    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet50'])))\n  return model\n\n\ndef resnet101(pretrained=False, **kwargs):\n  \"\"\"Constructs a ResNet-101 model.\n\n  Args:\n      pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)\n  if pretrained:\n    model.load_state_dict(\n      remove_fc(model_zoo.load_url(model_urls['resnet101'])))\n  return model\n\n\ndef resnet152(pretrained=False, **kwargs):\n  \"\"\"Constructs a ResNet-152 model.\n\n  Args:\n      pretrained (bool): If True, returns a model pre-trained on ImageNet\n  \"\"\"\n  model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)\n  if pretrained:\n    model.load_state_dict(\n      remove_fc(model_zoo.load_url(model_urls['resnet152'])))\n  return model\n"
  },
  {
    "path": "bpm/utils/__init__.py",
    "content": ""
  },
  {
    "path": "bpm/utils/dataset_utils.py",
    "content": "from __future__ import print_function\nimport os.path as osp\nimport numpy as np\nimport glob\nfrom collections import defaultdict\nimport shutil\n\nnew_im_name_tmpl = '{:08d}_{:04d}_{:08d}.jpg'\n\ndef parse_im_name(im_name, parse_type='id'):\n  \"\"\"Get the person id or cam from an image name.\"\"\"\n  assert parse_type in ('id', 'cam')\n  if parse_type == 'id':\n    parsed = int(im_name[:8])\n  else:\n    parsed = int(im_name[9:13])\n  return parsed\n\n\ndef get_im_names(im_dir, pattern='*.jpg', return_np=True, return_path=False):\n  \"\"\"Get the image names in a dir. Optional to return numpy array, paths.\"\"\"\n  im_paths = glob.glob(osp.join(im_dir, pattern))\n  im_names = [osp.basename(path) for path in im_paths]\n  ret = im_paths if return_path else im_names\n  if return_np:\n    ret = np.array(ret)\n  return ret\n\n\ndef move_ims(ori_im_paths, new_im_dir, parse_im_name, new_im_name_tmpl):\n  \"\"\"Rename and move images to new directory.\"\"\"\n  cnt = defaultdict(int)\n  new_im_names = []\n  for im_path in ori_im_paths:\n    im_name = osp.basename(im_path)\n    id = parse_im_name(im_name, 'id')\n    cam = parse_im_name(im_name, 'cam')\n    cnt[(id, cam)] += 1\n    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)\n    shutil.copy(im_path, osp.join(new_im_dir, new_im_name))\n    new_im_names.append(new_im_name)\n  return new_im_names\n\n\ndef partition_train_val_set(im_names, parse_im_name,\n                            num_val_ids=None, val_prop=None, seed=1):\n  \"\"\"Partition the trainval set into train and val set. \n  Args:\n    im_names: trainval image names\n    parse_im_name: a function to parse id and camera from image name\n    num_val_ids: number of ids for val set. If not set, val_prob is used.\n    val_prop: the proportion of validation ids\n    seed: the random seed to reproduce the partition results. 
If not to use, \n      then set to `None`.\n  Returns:\n    a dict with keys (`train_im_names`, \n                      `val_query_im_names`, \n                      `val_gallery_im_names`)\n  \"\"\"\n  np.random.seed(seed)\n  # Transform to numpy array for slicing.\n  if not isinstance(im_names, np.ndarray):\n    im_names = np.array(im_names)\n  np.random.shuffle(im_names)\n  ids = np.array([parse_im_name(n, 'id') for n in im_names])\n  cams = np.array([parse_im_name(n, 'cam') for n in im_names])\n  unique_ids = np.unique(ids)\n  np.random.shuffle(unique_ids)\n\n  # Query indices and gallery indices\n  query_inds = []\n  gallery_inds = []\n\n  if num_val_ids is None:\n    assert 0 < val_prop < 1\n    num_val_ids = int(len(unique_ids) * val_prop)\n  num_selected_ids = 0\n  for unique_id in unique_ids:\n    query_inds_ = []\n    # The indices of this id in trainval set.\n    inds = np.argwhere(unique_id == ids).flatten()\n    # The cams that this id has.\n    unique_cams = np.unique(cams[inds])\n    # For each cam, select one image for query set.\n    for unique_cam in unique_cams:\n      query_inds_.append(\n        inds[np.argwhere(cams[inds] == unique_cam).flatten()[0]])\n    gallery_inds_ = list(set(inds) - set(query_inds_))\n    # For each query image, if there is no same-id different-cam images in\n    # gallery, put it in gallery.\n    for query_ind in query_inds_:\n      if len(gallery_inds_) == 0 \\\n          or len(np.argwhere(cams[gallery_inds_] != cams[query_ind])\n                     .flatten()) == 0:\n        query_inds_.remove(query_ind)\n        gallery_inds_.append(query_ind)\n    # If no query image is left, leave this id in train set.\n    if len(query_inds_) == 0:\n      continue\n    query_inds.append(query_inds_)\n    gallery_inds.append(gallery_inds_)\n    num_selected_ids += 1\n    if num_selected_ids >= num_val_ids:\n      break\n\n  query_inds = np.hstack(query_inds)\n  gallery_inds = np.hstack(gallery_inds)\n  val_inds = 
np.hstack([query_inds, gallery_inds])\n  trainval_inds = np.arange(len(im_names))\n  train_inds = np.setdiff1d(trainval_inds, val_inds)\n\n  train_inds = np.sort(train_inds)\n  query_inds = np.sort(query_inds)\n  gallery_inds = np.sort(gallery_inds)\n\n  partitions = dict(train_im_names=im_names[train_inds],\n                    val_query_im_names=im_names[query_inds],\n                    val_gallery_im_names=im_names[gallery_inds])\n\n  return partitions\n"
  },
  {
    "path": "bpm/utils/distance.py",
    "content": "\"\"\"Numpy version of euclidean distance, etc.\nNotice the input/output shape of methods, so that you can better understand\nthe meaning of these methods.\"\"\"\nimport numpy as np\n\n\ndef normalize(nparray, order=2, axis=0):\n  \"\"\"Normalize a N-D numpy array along the specified axis.\"\"\"\n  norm = np.linalg.norm(nparray, ord=order, axis=axis, keepdims=True)\n  return nparray / (norm + np.finfo(np.float32).eps)\n\n\ndef compute_dist(array1, array2, type='euclidean'):\n  \"\"\"Compute the euclidean or cosine distance of all pairs.\n  Args:\n    array1: numpy array with shape [m1, n]\n    array2: numpy array with shape [m2, n]\n    type: one of ['cosine', 'euclidean']\n  Returns:\n    numpy array with shape [m1, m2]\n  \"\"\"\n  assert type in ['cosine', 'euclidean']\n  if type == 'cosine':\n    array1 = normalize(array1, axis=1)\n    array2 = normalize(array2, axis=1)\n    dist = np.matmul(array1, array2.T)\n    return dist\n  else:\n    # shape [m1, 1]\n    square1 = np.sum(np.square(array1), axis=1)[..., np.newaxis]\n    # shape [1, m2]\n    square2 = np.sum(np.square(array2), axis=1)[np.newaxis, ...]\n    squared_dist = - 2 * np.matmul(array1, array2.T) + square1 + square2\n    squared_dist[squared_dist < 0] = 0\n    dist = np.sqrt(squared_dist)\n    return dist\n"
  },
  {
    "path": "bpm/utils/metric.py",
    "content": "\"\"\"Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid) \nreid/evaluation_metrics/ranking.py. Modifications: \n1) Only accepts numpy data input, no torch is involved.\n1) Here results of each query can be returned.\n2) In the single-gallery-shot evaluation case, the time of repeats is changed \n   from 10 to 100.\n\"\"\"\nfrom __future__ import absolute_import\nfrom collections import defaultdict\n\nimport numpy as np\nfrom sklearn.metrics import average_precision_score\n\n\ndef _unique_sample(ids_dict, num):\n  mask = np.zeros(num, dtype=np.bool)\n  for _, indices in ids_dict.items():\n    i = np.random.choice(indices)\n    mask[i] = True\n  return mask\n\n\ndef cmc(\n    distmat,\n    query_ids=None,\n    gallery_ids=None,\n    query_cams=None,\n    gallery_cams=None,\n    topk=100,\n    separate_camera_set=False,\n    single_gallery_shot=False,\n    first_match_break=False,\n    average=True):\n  \"\"\"\n  Args:\n    distmat: numpy array with shape [num_query, num_gallery], the \n      pairwise distance between query and gallery samples\n    query_ids: numpy array with shape [num_query]\n    gallery_ids: numpy array with shape [num_gallery]\n    query_cams: numpy array with shape [num_query]\n    gallery_cams: numpy array with shape [num_gallery]\n    average: whether to average the results across queries\n  Returns:\n    If `average` is `False`:\n      ret: numpy array with shape [num_query, topk]\n      is_valid_query: numpy array with shape [num_query], containing 0's and \n        1's, whether each query is valid or not\n    If `average` is `True`:\n      numpy array with shape [topk]\n  \"\"\"\n  # Ensure numpy array\n  assert isinstance(distmat, np.ndarray)\n  assert isinstance(query_ids, np.ndarray)\n  assert isinstance(gallery_ids, np.ndarray)\n  assert isinstance(query_cams, np.ndarray)\n  assert isinstance(gallery_cams, np.ndarray)\n\n  m, n = distmat.shape\n  # Sort and find correct matches\n  indices = 
np.argsort(distmat, axis=1)\n  matches = (gallery_ids[indices] == query_ids[:, np.newaxis])\n  # Compute CMC for each query\n  ret = np.zeros([m, topk])\n  is_valid_query = np.zeros(m)\n  num_valid_queries = 0\n  for i in range(m):\n    # Filter out the same id and same camera\n    valid = ((gallery_ids[indices[i]] != query_ids[i]) |\n             (gallery_cams[indices[i]] != query_cams[i]))\n    if separate_camera_set:\n      # Filter out samples from same camera\n      valid &= (gallery_cams[indices[i]] != query_cams[i])\n    if not np.any(matches[i, valid]): continue\n    is_valid_query[i] = 1\n    if single_gallery_shot:\n      repeat = 100\n      gids = gallery_ids[indices[i][valid]]\n      inds = np.where(valid)[0]\n      ids_dict = defaultdict(list)\n      for j, x in zip(inds, gids):\n        ids_dict[x].append(j)\n    else:\n      repeat = 1\n    for _ in range(repeat):\n      if single_gallery_shot:\n        # Randomly choose one instance for each id\n        sampled = (valid & _unique_sample(ids_dict, len(valid)))\n        index = np.nonzero(matches[i, sampled])[0]\n      else:\n        index = np.nonzero(matches[i, valid])[0]\n      delta = 1. 
/ (len(index) * repeat)\n      for j, k in enumerate(index):\n        if k - j >= topk: break\n        if first_match_break:\n          ret[i, k - j] += 1\n          break\n        ret[i, k - j] += delta\n    num_valid_queries += 1\n  if num_valid_queries == 0:\n    raise RuntimeError(\"No valid query\")\n  ret = ret.cumsum(axis=1)\n  if average:\n    return np.sum(ret, axis=0) / num_valid_queries\n  return ret, is_valid_query\n\n\ndef mean_ap(\n    distmat,\n    query_ids=None,\n    gallery_ids=None,\n    query_cams=None,\n    gallery_cams=None,\n    average=True):\n  \"\"\"\n  Args:\n    distmat: numpy array with shape [num_query, num_gallery], the \n      pairwise distance between query and gallery samples\n    query_ids: numpy array with shape [num_query]\n    gallery_ids: numpy array with shape [num_gallery]\n    query_cams: numpy array with shape [num_query]\n    gallery_cams: numpy array with shape [num_gallery]\n    average: whether to average the results across queries\n  Returns:\n    If `average` is `False`:\n      ret: numpy array with shape [num_query]\n      is_valid_query: numpy array with shape [num_query], containing 0's and \n        1's, whether each query is valid or not\n    If `average` is `True`:\n      a scalar\n  \"\"\"\n\n  # -------------------------------------------------------------------------\n  # The behavior of method `sklearn.average_precision` has changed since version\n  # 0.19.\n  # Version 0.18.1 has same results as Matlab evaluation code by Zhun Zhong\n  # (https://github.com/zhunzhong07/person-re-ranking/\n  # blob/master/evaluation/utils/evaluation.m) and by Liang Zheng\n  # (http://www.liangzheng.org/Project/project_reid.html).\n  # My current awkward solution is sticking to this older version.\n  import sklearn\n  cur_version = sklearn.__version__\n  required_version = '0.18.1'\n  if cur_version != required_version:\n    print('User Warning: Version {} is required for package scikit-learn, '\n          'your current 
version is {}. '\n          'As a result, the mAP score may not be totally correct. '\n          'You can try `pip uninstall scikit-learn` '\n          'and then `pip install scikit-learn=={}`'.format(\n      required_version, cur_version, required_version))\n  # -------------------------------------------------------------------------\n\n  # Ensure numpy array\n  assert isinstance(distmat, np.ndarray)\n  assert isinstance(query_ids, np.ndarray)\n  assert isinstance(gallery_ids, np.ndarray)\n  assert isinstance(query_cams, np.ndarray)\n  assert isinstance(gallery_cams, np.ndarray)\n\n  m, n = distmat.shape\n\n  # Sort and find correct matches\n  indices = np.argsort(distmat, axis=1)\n  matches = (gallery_ids[indices] == query_ids[:, np.newaxis])\n  # Compute AP for each query\n  aps = np.zeros(m)\n  is_valid_query = np.zeros(m)\n  for i in range(m):\n    # Filter out the same id and same camera\n    valid = ((gallery_ids[indices[i]] != query_ids[i]) |\n             (gallery_cams[indices[i]] != query_cams[i]))\n    y_true = matches[i, valid]\n    y_score = -distmat[i][indices[i]][valid]\n    if not np.any(y_true): continue\n    is_valid_query[i] = 1\n    aps[i] = average_precision_score(y_true, y_score)\n  # `aps` always has length m, so check the number of valid queries instead.\n  if np.sum(is_valid_query) == 0:\n    raise RuntimeError(\"No valid query\")\n  if average:\n    return float(np.sum(aps)) / np.sum(is_valid_query)\n  return aps, is_valid_query\n"
  },
  {
    "path": "bpm/utils/re_ranking.py",
    "content": "\"\"\"\nCreated on Mon Jun 26 14:46:56 2017\n\n@author: luohao\n\nModified by Houjing Huang, 2017-12-22.\n- This version accepts distance matrix instead of raw features.\n- The difference of `/` division between python 2 and 3 is handled.\n- numpy.float16 is replaced by numpy.float32 for numerical precision.\n\"\"\"\n\n\"\"\"\nCVPR2017 paper:Zhong Z, Zheng L, Cao D, et al. Re-ranking Person Re-identification with k-reciprocal Encoding[J]. 2017.\nurl:http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhong_Re-Ranking_Person_Re-Identification_CVPR_2017_paper.pdf\nMatlab version: https://github.com/zhunzhong07/person-re-ranking\n\"\"\"\n\n\"\"\"\nAPI\n\nq_g_dist: query-gallery distance matrix, numpy array, shape [num_query, num_gallery]\nq_q_dist: query-query distance matrix, numpy array, shape [num_query, num_query]\ng_g_dist: gallery-gallery distance matrix, numpy array, shape [num_gallery, num_gallery]\n\nk1, k2, lambda_value: parameters, the original paper is (k1=20, k2=6, lambda_value=0.3)\n\nReturns:\n  final_dist: re-ranked distance, numpy array, shape [num_query, num_gallery]\n\"\"\"\n\n\nimport numpy as np\n\n\ndef re_ranking(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0.3):\n\n    # The following naming, e.g. gallery_num, is different from outer scope.\n    # Don't care about it.\n\n    original_dist = np.concatenate(\n      [np.concatenate([q_q_dist, q_g_dist], axis=1),\n       np.concatenate([q_g_dist.T, g_g_dist], axis=1)],\n      axis=0)\n    original_dist = np.power(original_dist, 2).astype(np.float32)\n    original_dist = np.transpose(1. 
* original_dist/np.max(original_dist,axis = 0))\n    V = np.zeros_like(original_dist).astype(np.float32)\n    initial_rank = np.argsort(original_dist).astype(np.int32)\n\n    query_num = q_g_dist.shape[0]\n    gallery_num = q_g_dist.shape[0] + q_g_dist.shape[1]\n    all_num = gallery_num\n\n    for i in range(all_num):\n        # k-reciprocal neighbors\n        forward_k_neigh_index = initial_rank[i,:k1+1]\n        backward_k_neigh_index = initial_rank[forward_k_neigh_index,:k1+1]\n        fi = np.where(backward_k_neigh_index==i)[0]\n        k_reciprocal_index = forward_k_neigh_index[fi]\n        k_reciprocal_expansion_index = k_reciprocal_index\n        for j in range(len(k_reciprocal_index)):\n            candidate = k_reciprocal_index[j]\n            candidate_forward_k_neigh_index = initial_rank[candidate,:int(np.around(k1/2.))+1]\n            candidate_backward_k_neigh_index = initial_rank[candidate_forward_k_neigh_index,:int(np.around(k1/2.))+1]\n            fi_candidate = np.where(candidate_backward_k_neigh_index == candidate)[0]\n            candidate_k_reciprocal_index = candidate_forward_k_neigh_index[fi_candidate]\n            if len(np.intersect1d(candidate_k_reciprocal_index,k_reciprocal_index))> 2./3*len(candidate_k_reciprocal_index):\n                k_reciprocal_expansion_index = np.append(k_reciprocal_expansion_index,candidate_k_reciprocal_index)\n\n        k_reciprocal_expansion_index = np.unique(k_reciprocal_expansion_index)\n        weight = np.exp(-original_dist[i,k_reciprocal_expansion_index])\n        V[i,k_reciprocal_expansion_index] = 1.*weight/np.sum(weight)\n    original_dist = original_dist[:query_num,]\n    if k2 != 1:\n        V_qe = np.zeros_like(V,dtype=np.float32)\n        for i in range(all_num):\n            V_qe[i,:] = np.mean(V[initial_rank[i,:k2],:],axis=0)\n        V = V_qe\n        del V_qe\n    del initial_rank\n    invIndex = []\n    for i in range(gallery_num):\n        invIndex.append(np.where(V[:,i] != 0)[0])\n\n    
jaccard_dist = np.zeros_like(original_dist,dtype = np.float32)\n\n\n    for i in range(query_num):\n        temp_min = np.zeros(shape=[1,gallery_num],dtype=np.float32)\n        indNonZero = np.where(V[i,:] != 0)[0]\n        indImages = []\n        indImages = [invIndex[ind] for ind in indNonZero]\n        for j in range(len(indNonZero)):\n            temp_min[0,indImages[j]] = temp_min[0,indImages[j]]+ np.minimum(V[i,indNonZero[j]],V[indImages[j],indNonZero[j]])\n        jaccard_dist[i] = 1-temp_min/(2.-temp_min)\n\n    final_dist = jaccard_dist*(1-lambda_value) + original_dist*lambda_value\n    del original_dist\n    del V\n    del jaccard_dist\n    final_dist = final_dist[:query_num,query_num:]\n    return final_dist"
  },
  {
    "path": "bpm/utils/utils.py",
    "content": "from __future__ import print_function\nimport os\nimport os.path as osp\nimport cPickle as pickle\nfrom scipy import io\nimport datetime\nimport time\nfrom contextlib import contextmanager\n\nimport torch\nfrom torch.autograd import Variable\n\n\ndef time_str(fmt=None):\n  if fmt is None:\n    fmt = '%Y-%m-%d_%H:%M:%S'\n  return datetime.datetime.today().strftime(fmt)\n\n\ndef load_pickle(path):\n  \"\"\"Check and load pickle object.\n  According to this post: https://stackoverflow.com/a/41733927, cPickle and \n  disabling garbage collector helps with loading speed.\"\"\"\n  assert osp.exists(path)\n  # gc.disable()\n  with open(path, 'rb') as f:\n    ret = pickle.load(f)\n  # gc.enable()\n  return ret\n\n\ndef save_pickle(obj, path):\n  \"\"\"Create dir and save file.\"\"\"\n  may_make_dir(osp.dirname(osp.abspath(path)))\n  with open(path, 'wb') as f:\n    pickle.dump(obj, f, protocol=2)\n\n\ndef save_mat(ndarray, path):\n  \"\"\"Save a numpy ndarray as .mat file.\"\"\"\n  io.savemat(path, dict(ndarray=ndarray))\n\n\ndef to_scalar(vt):\n  \"\"\"Transform a length-1 pytorch Variable or Tensor to scalar. \n  Suppose tx is a torch Tensor with shape tx.size() = torch.Size([1]), \n  then npx = tx.cpu().numpy() has shape (1,), not 1.\"\"\"\n  if isinstance(vt, Variable):\n    return vt.data.cpu().numpy().flatten()[0]\n  if torch.is_tensor(vt):\n    return vt.cpu().numpy().flatten()[0]\n  raise TypeError('Input should be a variable or tensor')\n\n\ndef transfer_optim_state(state, device_id=-1):\n  \"\"\"Transfer an optimizer.state to cpu or specified gpu, which means \n  transferring tensors of the optimizer.state to specified device. 
\n  The modification is in place for the state.\n  Args:\n    state: An torch.optim.Optimizer.state\n    device_id: gpu id, or -1 which means transferring to cpu\n  \"\"\"\n  for key, val in state.items():\n    if isinstance(val, dict):\n      transfer_optim_state(val, device_id=device_id)\n    elif isinstance(val, Variable):\n      raise RuntimeError(\"Oops, state[{}] is a Variable!\".format(key))\n    elif isinstance(val, torch.nn.Parameter):\n      raise RuntimeError(\"Oops, state[{}] is a Parameter!\".format(key))\n    else:\n      try:\n        if device_id == -1:\n          state[key] = val.cpu()\n        else:\n          state[key] = val.cuda(device=device_id)\n      except:\n        pass\n\n\ndef may_transfer_optims(optims, device_id=-1):\n  \"\"\"Transfer optimizers to cpu or specified gpu, which means transferring \n  tensors of the optimizer to specified device. The modification is in place \n  for the optimizers.\n  Args:\n    optims: A list, which members are either torch.nn.optimizer or None.\n    device_id: gpu id, or -1 which means transferring to cpu\n  \"\"\"\n  for optim in optims:\n    if isinstance(optim, torch.optim.Optimizer):\n      transfer_optim_state(optim.state, device_id=device_id)\n\n\ndef may_transfer_modules_optims(modules_and_or_optims, device_id=-1):\n  \"\"\"Transfer optimizers/modules to cpu or specified gpu.\n  Args:\n    modules_and_or_optims: A list, which members are either torch.nn.optimizer \n      or torch.nn.Module or None.\n    device_id: gpu id, or -1 which means transferring to cpu\n  \"\"\"\n  for item in modules_and_or_optims:\n    if isinstance(item, torch.optim.Optimizer):\n      transfer_optim_state(item.state, device_id=device_id)\n    elif isinstance(item, torch.nn.Module):\n      if device_id == -1:\n        item.cpu()\n      else:\n        item.cuda(device=device_id)\n    elif item is not None:\n      print('[Warning] Invalid type {}'.format(item.__class__.__name__))\n\n\nclass TransferVarTensor(object):\n  
\"\"\"Return a copy of the input Variable or Tensor on specified device.\"\"\"\n\n  def __init__(self, device_id=-1):\n    self.device_id = device_id\n\n  def __call__(self, var_or_tensor):\n    return var_or_tensor.cpu() if self.device_id == -1 \\\n      else var_or_tensor.cuda(self.device_id)\n\n\nclass TransferModulesOptims(object):\n  \"\"\"Transfer optimizers/modules to cpu or specified gpu.\"\"\"\n\n  def __init__(self, device_id=-1):\n    self.device_id = device_id\n\n  def __call__(self, modules_and_or_optims):\n    may_transfer_modules_optims(modules_and_or_optims, self.device_id)\n\n\ndef set_devices(sys_device_ids):\n  \"\"\"\n  It sets some GPUs to be visible and returns some wrappers to transferring \n  Variables/Tensors and Modules/Optimizers.\n  Args:\n    sys_device_ids: a tuple; which GPUs to use\n      e.g.  sys_device_ids = (), only use cpu\n            sys_device_ids = (3,), use the 4th gpu\n            sys_device_ids = (0, 1, 2, 3,), use first 4 gpus\n            sys_device_ids = (0, 2, 4,), use the 1st, 3rd and 5th gpus\n  Returns:\n    TVT: a `TransferVarTensor` callable\n    TMO: a `TransferModulesOptims` callable\n  \"\"\"\n  # Set the CUDA_VISIBLE_DEVICES environment variable\n  import os\n  visible_devices = ''\n  for i in sys_device_ids:\n    visible_devices += '{}, '.format(i)\n  os.environ['CUDA_VISIBLE_DEVICES'] = visible_devices\n  # Return wrappers.\n  # Models and user defined Variables/Tensors would be transferred to the\n  # first device.\n  device_id = 0 if len(sys_device_ids) > 0 else -1\n  TVT = TransferVarTensor(device_id)\n  TMO = TransferModulesOptims(device_id)\n  return TVT, TMO\n\n\ndef set_devices_for_ml(sys_device_ids):\n  \"\"\"This version is for mutual learning.\n  \n  It sets some GPUs to be visible and returns some wrappers to transferring \n  Variables/Tensors and Modules/Optimizers.\n  \n  Args:\n    sys_device_ids: a tuple of tuples; which devices to use for each model, \n      len(sys_device_ids) should be 
equal to number of models. Examples:\n        \n      sys_device_ids = ((-1,), (-1,))\n        the two models both on CPU\n      sys_device_ids = ((-1,), (2,))\n        the 1st model on CPU, the 2nd model on GPU 2\n      sys_device_ids = ((3,),)\n        the only one model on the 4th gpu \n      sys_device_ids = ((0, 1), (2, 3))\n        the 1st model on GPU 0 and 1, the 2nd model on GPU 2 and 3\n      sys_device_ids = ((0,), (0,))\n        the two models both on GPU 0\n      sys_device_ids = ((0,), (0,), (1,), (1,))\n        the 1st and 2nd model on GPU 0, the 3rd and 4th model on GPU 1\n  \n  Returns:\n    TVTs: a list of `TransferVarTensor` callables, one for one model.\n    TMOs: a list of `TransferModulesOptims` callables, one for one model.\n    relative_device_ids: a list of lists; `sys_device_ids` transformed to \n      relative ids; to be used in `DataParallel`\n  \"\"\"\n  import os\n\n  all_ids = []\n  for ids in sys_device_ids:\n    all_ids += ids\n  unique_sys_device_ids = list(set(all_ids))\n  unique_sys_device_ids.sort()\n  if -1 in unique_sys_device_ids:\n    unique_sys_device_ids.remove(-1)\n\n  # Set the CUDA_VISIBLE_DEVICES environment variable\n\n  visible_devices = ''\n  for i in unique_sys_device_ids:\n    visible_devices += '{}, '.format(i)\n  os.environ['CUDA_VISIBLE_DEVICES'] = visible_devices\n\n  # Return wrappers\n\n  relative_device_ids = []\n  TVTs, TMOs = [], []\n  for ids in sys_device_ids:\n    relative_ids = []\n    for id in ids:\n      if id != -1:\n        id = find_index(unique_sys_device_ids, id)\n      relative_ids.append(id)\n    relative_device_ids.append(relative_ids)\n\n    # Models and user defined Variables/Tensors would be transferred to the\n    # first device.\n    TVTs.append(TransferVarTensor(relative_ids[0]))\n    TMOs.append(TransferModulesOptims(relative_ids[0]))\n  return TVTs, TMOs, relative_device_ids\n\n\ndef load_ckpt(modules_optims, ckpt_file, load_to_cpu=True, verbose=True):\n  \"\"\"Load state_dict's of 
modules/optimizers from file.\n  Args:\n    modules_optims: A list, which members are either torch.nn.optimizer \n      or torch.nn.Module.\n    ckpt_file: The file path.\n    load_to_cpu: Boolean. Whether to transform tensors in modules/optimizers \n      to cpu type.\n  \"\"\"\n  map_location = (lambda storage, loc: storage) if load_to_cpu else None\n  ckpt = torch.load(ckpt_file, map_location=map_location)\n  for m, sd in zip(modules_optims, ckpt['state_dicts']):\n    m.load_state_dict(sd)\n  if verbose:\n    print('Resume from ckpt {}, \\nepoch {}, \\nscores {}'.format(\n      ckpt_file, ckpt['ep'], ckpt['scores']))\n  return ckpt['ep'], ckpt['scores']\n\n\ndef save_ckpt(modules_optims, ep, scores, ckpt_file):\n  \"\"\"Save state_dict's of modules/optimizers to file. \n  Args:\n    modules_optims: A list, which members are either torch.nn.optimizer \n      or torch.nn.Module.\n    ep: the current epoch number\n    scores: the performance of current model\n    ckpt_file: The file path.\n  Note:\n    torch.save() reserves device type and id of tensors to save, so when \n    loading ckpt, you have to inform torch.load() to load these tensors to \n    cpu or your desired gpu, if you change devices.\n  \"\"\"\n  state_dicts = [m.state_dict() for m in modules_optims]\n  ckpt = dict(state_dicts=state_dicts,\n              ep=ep,\n              scores=scores)\n  may_make_dir(osp.dirname(osp.abspath(ckpt_file)))\n  torch.save(ckpt, ckpt_file)\n\n\ndef load_state_dict(model, src_state_dict):\n  \"\"\"Copy parameters and buffers from `src_state_dict` into `model` and its \n  descendants. The `src_state_dict.keys()` NEED NOT exactly match \n  `model.state_dict().keys()`. For dict key mismatch, just\n  skip it; for copying error, just output warnings and proceed.\n\n  Arguments:\n    model: A torch.nn.Module object. 
\n    src_state_dict (dict): A dict containing parameters and persistent buffers.\n  Note:\n    This is modified from torch.nn.modules.module.load_state_dict(), to make\n    the warnings and errors more detailed.\n  \"\"\"\n  from torch.nn import Parameter\n\n  dest_state_dict = model.state_dict()\n  for name, param in src_state_dict.items():\n    if name not in dest_state_dict:\n      continue\n    if isinstance(param, Parameter):\n      # backwards compatibility for serialized parameters\n      param = param.data\n    try:\n      dest_state_dict[name].copy_(param)\n    except Exception, msg:\n      print(\"Warning: Error occurs when copying '{}': {}\"\n            .format(name, str(msg)))\n\n  src_missing = set(dest_state_dict.keys()) - set(src_state_dict.keys())\n  if len(src_missing) > 0:\n    print(\"Keys not found in source state_dict: \")\n    for n in src_missing:\n      print('\\t', n)\n\n  dest_missing = set(src_state_dict.keys()) - set(dest_state_dict.keys())\n  if len(dest_missing) > 0:\n    print(\"Keys not found in destination state_dict: \")\n    for n in dest_missing:\n      print('\\t', n)\n\n\ndef is_iterable(obj):\n  return hasattr(obj, '__len__')\n\n\ndef may_set_mode(maybe_modules, mode):\n  \"\"\"maybe_modules: an object or a list of objects.\"\"\"\n  assert mode in ['train', 'eval']\n  if not is_iterable(maybe_modules):\n    maybe_modules = [maybe_modules]\n  for m in maybe_modules:\n    if isinstance(m, torch.nn.Module):\n      if mode == 'train':\n        m.train()\n      else:\n        m.eval()\n\n\ndef may_make_dir(path):\n  \"\"\"\n  Args:\n    path: a dir, or result of `osp.dirname(osp.abspath(file_path))`\n  Note:\n    `osp.exists('')` returns `False`, while `osp.exists('.')` returns `True`!\n  \"\"\"\n  # This clause has mistakes:\n  # if path is None or '':\n\n  if path in [None, '']:\n    return\n  if not osp.exists(path):\n    os.makedirs(path)\n\n\nclass AverageMeter(object):\n  \"\"\"Modified from Tong Xiao's open-reid. 
\n  Computes and stores the average and current value\"\"\"\n\n  def __init__(self):\n    self.val = 0\n    self.avg = 0\n    self.sum = 0\n    self.count = 0\n\n  def reset(self):\n    self.val = 0\n    self.avg = 0\n    self.sum = 0\n    self.count = 0\n\n  def update(self, val, n=1):\n    self.val = val\n    self.sum += val * n\n    self.count += n\n    self.avg = float(self.sum) / (self.count + 1e-20)\n\n\nclass RunningAverageMeter(object):\n  \"\"\"Computes and stores the running average and current value\"\"\"\n\n  def __init__(self, hist=0.99):\n    self.val = None\n    self.avg = None\n    self.hist = hist\n\n  def reset(self):\n    self.val = None\n    self.avg = None\n\n  def update(self, val):\n    if self.avg is None:\n      self.avg = val\n    else:\n      self.avg = self.avg * self.hist + val * (1 - self.hist)\n    self.val = val\n\n\nclass RecentAverageMeter(object):\n  \"\"\"Stores and computes the average of recent values.\"\"\"\n\n  def __init__(self, hist_size=100):\n    self.hist_size = hist_size\n    self.fifo = []\n    self.val = 0\n\n  def reset(self):\n    self.fifo = []\n    self.val = 0\n\n  def update(self, val):\n    self.val = val\n    self.fifo.append(val)\n    if len(self.fifo) > self.hist_size:\n      del self.fifo[0]\n\n  @property\n  def avg(self):\n    assert len(self.fifo) > 0\n    return float(sum(self.fifo)) / len(self.fifo)\n\n\ndef get_model_wrapper(model, multi_gpu):\n  from torch.nn.parallel import DataParallel\n  if multi_gpu:\n    return DataParallel(model)\n  else:\n    return model\n\n\nclass ReDirectSTD(object):\n  \"\"\"Modified from Tong Xiao's `Logger` in open-reid.\n  This class overwrites sys.stdout or sys.stderr, so that console logs can\n  also be written to file.\n  Args:\n    fpath: file path\n    console: one of ['stdout', 'stderr']\n    immediately_visible: If `False`, the file is opened only once and closed\n      after exiting. 
In this case, the message written to file may not be\n      immediately visible (Because the file handle is occupied by the\n      program?). If `True`, each writing operation of the console will\n      open, write to, and close the file. If your program has tons of writing\n      operations, the cost of opening and closing file may be obvious. (?)\n  Usage example:\n    `ReDirectSTD('stdout.txt', 'stdout', False)`\n    `ReDirectSTD('stderr.txt', 'stderr', False)`\n  NOTE: File will be deleted if already existing. Log dir and file is created\n    lazily -- if no message is written, the dir and file will not be created.\n  \"\"\"\n\n  def __init__(self, fpath=None, console='stdout', immediately_visible=False):\n    import sys\n    import os\n    import os.path as osp\n\n    assert console in ['stdout', 'stderr']\n    self.console = sys.stdout if console == 'stdout' else sys.stderr\n    self.file = fpath\n    self.f = None\n    self.immediately_visible = immediately_visible\n    if fpath is not None:\n      # Remove existing log file.\n      if osp.exists(fpath):\n        os.remove(fpath)\n\n    # Overwrite\n    if console == 'stdout':\n      sys.stdout = self\n    else:\n      sys.stderr = self\n\n  def __del__(self):\n    self.close()\n\n  def __enter__(self):\n    pass\n\n  def __exit__(self, *args):\n    self.close()\n\n  def write(self, msg):\n    self.console.write(msg)\n    if self.file is not None:\n      may_make_dir(os.path.dirname(osp.abspath(self.file)))\n      if self.immediately_visible:\n        with open(self.file, 'a') as f:\n          f.write(msg)\n      else:\n        if self.f is None:\n          self.f = open(self.file, 'w')\n        self.f.write(msg)\n\n  def flush(self):\n    self.console.flush()\n    if self.f is not None:\n      self.f.flush()\n      import os\n      os.fsync(self.f.fileno())\n\n  def close(self):\n    self.console.close()\n    if self.f is not None:\n      self.f.close()\n\n\ndef set_seed(seed):\n  import random\n  
random.seed(seed)\n  print('setting random-seed to {}'.format(seed))\n\n  import numpy as np\n  np.random.seed(seed)\n  print('setting np-random-seed to {}'.format(seed))\n\n  import torch\n  torch.backends.cudnn.enabled = False\n  print('cudnn.enabled set to {}'.format(torch.backends.cudnn.enabled))\n  # set seed for CPU\n  torch.manual_seed(seed)\n  print('setting torch-seed to {}'.format(seed))\n\n\ndef print_array(array, fmt='{:.2f}', end=' '):\n  \"\"\"Print a 1-D tuple, list, or numpy array containing digits.\"\"\"\n  s = ''\n  for x in array:\n    s += fmt.format(float(x)) + end\n  s += '\\n'\n  print(s)\n  return s\n\n\n# Great idea from https://github.com/amdegroot/ssd.pytorch\ndef str2bool(v):\n  return v.lower() in (\"yes\", \"true\", \"t\", \"1\")\n\n\ndef tight_float_str(x, fmt='{:.4f}'):\n  return fmt.format(x).rstrip('0').rstrip('.')\n\n\ndef find_index(seq, item):\n  for i, x in enumerate(seq):\n    if item == x:\n      return i\n  return -1\n\n\ndef adjust_lr_staircase(param_groups, base_lrs, ep, decay_at_epochs, factor):\n  \"\"\"Multiplied by a factor at the BEGINNING of specified epochs. 
Different\n  param groups specify their own base learning rates.\n  \n  Args:\n    param_groups: a list of params\n    base_lrs: starting learning rates, len(base_lrs) = len(param_groups)\n    ep: current epoch, ep >= 1\n    decay_at_epochs: a list or tuple; learning rates are multiplied by a factor\n      at the BEGINNING of these epochs\n    factor: a number in range (0, 1)\n  \n  Example:\n    base_lrs = [0.1, 0.01]\n    decay_at_epochs = [51, 101]\n    factor = 0.1\n    It means the learning rate starts at 0.1 for 1st param group\n    (0.01 for 2nd param group) and is multiplied by 0.1 at the\n    BEGINNING of the 51'st epoch, and then further multiplied by 0.1 at the\n    BEGINNING of the 101'st epoch, then stays unchanged till the end of \n    training.\n  \n  NOTE: \n    It is meant to be called at the BEGINNING of an epoch.\n  \"\"\"\n  assert len(base_lrs) == len(param_groups), \\\n    \"You should specify base lr for each param group.\"\n  assert ep >= 1, \"Current epoch number should be >= 1\"\n\n  if ep not in decay_at_epochs:\n    return\n\n  ind = find_index(decay_at_epochs, ep)\n  for i, (g, base_lr) in enumerate(zip(param_groups, base_lrs)):\n    g['lr'] = base_lr * factor ** (ind + 1)\n    print('=====> Param group {}: lr adjusted to {:.10f}'\n          .format(i, g['lr']).rstrip('0'))\n\n\n@contextmanager\ndef measure_time(enter_msg, verbose=True):\n  if verbose:\n    st = time.time()\n    print(enter_msg)\n  yield\n  if verbose:\n    print('Done, {:.2f}s'.format(time.time() - st))"
  },
  {
    "path": "bpm/utils/visualization.py",
    "content": "import numpy as np\nfrom PIL import Image\nimport cv2\nfrom os.path import dirname as ospdn\n\nfrom bpm.utils.utils import may_make_dir\n\n\ndef add_border(im, border_width, value):\n  \"\"\"Add color border around an image. The resulting image size is not changed.\n  Args:\n    im: numpy array with shape [3, im_h, im_w]\n    border_width: scalar, measured in pixel\n    value: scalar, or numpy array with shape [3]; the color of the border\n  Returns:\n    im: numpy array with shape [3, im_h, im_w]\n  \"\"\"\n  assert (im.ndim == 3) and (im.shape[0] == 3)\n  im = np.copy(im)\n\n  if isinstance(value, np.ndarray):\n    # reshape to [3, 1, 1]\n    value = value.flatten()[:, np.newaxis, np.newaxis]\n  im[:, :border_width, :] = value\n  im[:, -border_width:, :] = value\n  im[:, :, :border_width] = value\n  im[:, :, -border_width:] = value\n\n  return im\n\ndef make_im_grid(ims, n_rows, n_cols, space, pad_val):\n  \"\"\"Make a grid of images with space in between.\n  Args:\n    ims: a list of [3, im_h, im_w] images\n    n_rows: num of rows\n    n_cols: num of columns\n    space: the num of pixels between two images\n    pad_val: scalar, or numpy array with shape [3]; the color of the space\n  Returns:\n    ret_im: a numpy array with shape [3, H, W]\n  \"\"\"\n  assert (ims[0].ndim == 3) and (ims[0].shape[0] == 3)\n  assert len(ims) <= n_rows * n_cols\n  h, w = ims[0].shape[1:]\n  H = h * n_rows + space * (n_rows - 1)\n  W = w * n_cols + space * (n_cols - 1)\n  if isinstance(pad_val, np.ndarray):\n    # reshape to [3, 1, 1]\n    pad_val = pad_val.flatten()[:, np.newaxis, np.newaxis]\n  ret_im = (np.ones([3, H, W]) * pad_val).astype(ims[0].dtype)\n  for n, im in enumerate(ims):\n    r = n // n_cols\n    c = n % n_cols\n    h1 = r * (h + space)\n    h2 = r * (h + space) + h\n    w1 = c * (w + space)\n    w2 = c * (w + space) + w\n    ret_im[:, h1:h2, w1:w2] = im\n  return ret_im\n\n\ndef get_rank_list(dist_vec, q_id, q_cam, g_ids, g_cams, rank_list_size):\n  
\"\"\"Get the ranking list of a query image\n  Args:\n    dist_vec: a numpy array with shape [num_gallery_images], the distance\n      between the query image and all gallery images\n    q_id: a scalar, query id\n    q_cam: a scalar, query camera\n    g_ids: a numpy array with shape [num_gallery_images], gallery ids\n    g_cams: a numpy array with shape [num_gallery_images], gallery cameras\n    rank_list_size: a scalar, the number of images to show in a rank list\n  Returns:\n    rank_list: a list, the indices of gallery images to show\n    same_id: a list, len(same_id) = len(rank_list), whether each ranked image\n      has the same id as the query\n  \"\"\"\n  sort_inds = np.argsort(dist_vec)\n  rank_list = []\n  same_id = []\n  i = 0\n  for ind, g_id, g_cam in zip(sort_inds, g_ids[sort_inds], g_cams[sort_inds]):\n    # Skip gallery images with same id and same camera as query\n    if (q_id == g_id) and (q_cam == g_cam):\n      continue\n    same_id.append(q_id == g_id)\n    rank_list.append(ind)\n    i += 1\n    if i >= rank_list_size:\n      break\n  return rank_list, same_id\n\n\ndef read_im(im_path):\n  # shape [H, W, 3]\n  im = np.asarray(Image.open(im_path))\n  # Resize to (im_h, im_w) = (128, 64)\n  resize_h_w = (128, 64)\n  if (im.shape[0], im.shape[1]) != resize_h_w:\n    im = cv2.resize(im, resize_h_w[::-1], interpolation=cv2.INTER_LINEAR)\n  # shape [3, H, W]\n  im = im.transpose(2, 0, 1)\n  return im\n\n\ndef save_im(im, save_path):\n  \"\"\"im: shape [3, H, W]\"\"\"\n  may_make_dir(ospdn(save_path))\n  im = im.transpose(1, 2, 0)\n  Image.fromarray(im).save(save_path)\n\n\ndef save_rank_list_to_im(rank_list, same_id, q_im_path, g_im_paths, save_path):\n  \"\"\"Save a query and its rank list as an image.\n  Args:\n    rank_list: a list, the indices of gallery images to show\n    same_id: a list, len(same_id) = len(rank_list), whether each ranked image\n      has the same id as the query\n    q_im_path: query image path\n    g_im_paths: ALL gallery image paths\n    save_path: path to save the query and its rank list as an image\n  \"\"\"\n  ims = [read_im(q_im_path)]\n  for ind, sid in zip(rank_list, same_id):\n    im = read_im(g_im_paths[ind])\n    # Add green boundary to true positive, red to false positive\n    color = np.array([0, 255, 0]) if sid else np.array([255, 0, 0])\n    im = add_border(im, 3, color)\n    ims.append(im)\n  im = make_im_grid(ims, 1, len(rank_list) + 1, 8, 255)\n  save_im(im, save_path)\n"
  },
  {
    "path": "requirements.txt",
    "content": "opencv_python==3.2.0.7\nnumpy==1.11.3\nscipy==0.18.1\n# this exact version is required for mAP to match the Matlab evaluation code\n# (see the version check in mean_ap)\nscikit-learn==0.18.1\nh5py==2.6.0\ntensorboardX==0.8\n# for tensorboard web server\ntensorflow==1.2.0"
  },
  {
    "path": "script/dataset/combine_trainval_sets.py",
    "content": "from __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nimport os.path as osp\n\nospeu = osp.expanduser\nospj = osp.join\nospap = osp.abspath\n\nfrom collections import defaultdict\nimport shutil\n\nfrom bpm.utils.utils import may_make_dir\nfrom bpm.utils.utils import save_pickle\nfrom bpm.utils.utils import load_pickle\n\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\nfrom bpm.utils.dataset_utils import parse_im_name\n\n\ndef move_ims(\n    ori_im_paths,\n    new_im_dir,\n    parse_im_name,\n    new_im_name_tmpl,\n    new_start_id):\n  \"\"\"Rename and move images to new directory.\"\"\"\n  ids = [parse_im_name(osp.basename(p), 'id') for p in ori_im_paths]\n  cams = [parse_im_name(osp.basename(p), 'cam') for p in ori_im_paths]\n\n  unique_ids = list(set(ids))\n  unique_ids.sort()\n  id_mapping = dict(\n    zip(unique_ids, range(new_start_id, new_start_id + len(unique_ids))))\n\n  new_im_names = []\n  cnt = defaultdict(int)\n  for im_path, id, cam in zip(ori_im_paths, ids, cams):\n    new_id = id_mapping[id]\n    cnt[(new_id, cam)] += 1\n    new_im_name = new_im_name_tmpl.format(new_id, cam, cnt[(new_id, cam)] - 1)\n    shutil.copy(im_path, ospj(new_im_dir, new_im_name))\n    new_im_names.append(new_im_name)\n  return new_im_names, id_mapping\n\n\ndef combine_trainval_sets(\n    im_dirs,\n    partition_files,\n    save_dir):\n  new_im_dir = ospj(save_dir, 'trainval_images')\n  may_make_dir(new_im_dir)\n  new_im_names = []\n  new_start_id = 0\n  for im_dir, partition_file in zip(im_dirs, partition_files):\n    partitions = load_pickle(partition_file)\n    im_paths = [ospj(im_dir, n) for n in partitions['trainval_im_names']]\n    im_paths.sort()\n    new_im_names_, id_mapping = move_ims(\n      im_paths, new_im_dir, parse_im_name, new_im_name_tmpl, new_start_id)\n    new_start_id += len(id_mapping)\n    new_im_names += new_im_names_\n\n  new_ids = range(new_start_id)\n  partitions = {'trainval_im_names': new_im_names,\n  
              'trainval_ids2labels': dict(zip(new_ids, new_ids)),\n                }\n  partition_file = ospj(save_dir, 'partitions.pkl')\n  save_pickle(partitions, partition_file)\n  print('Partition file saved to {}'.format(partition_file))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(\n    description=\"Combine Trainval Set of Market1501, CUHK03, DukeMTMC-reID\")\n\n  # Image directory and partition file of transformed datasets\n\n  parser.add_argument(\n    '--market1501_im_dir',\n    type=str,\n    default=ospeu('~/Dataset/market1501/images')\n  )\n  parser.add_argument(\n    '--market1501_partition_file',\n    type=str,\n    default=ospeu('~/Dataset/market1501/partitions.pkl')\n  )\n\n  cuhk03_im_type = ['detected', 'labeled'][0]\n  parser.add_argument(\n    '--cuhk03_im_dir',\n    type=str,\n    # Remember to select the detected or labeled set.\n    default=ospeu('~/Dataset/cuhk03/{}/images'.format(cuhk03_im_type))\n  )\n  parser.add_argument(\n    '--cuhk03_partition_file',\n    type=str,\n    # Remember to select the detected or labeled set.\n    default=ospeu('~/Dataset/cuhk03/{}/partitions.pkl'.format(cuhk03_im_type))\n  )\n\n  parser.add_argument(\n    '--duke_im_dir',\n    type=str,\n    default=ospeu('~/Dataset/duke/images'))\n  parser.add_argument(\n    '--duke_partition_file',\n    type=str,\n    default=ospeu('~/Dataset/duke/partitions.pkl')\n  )\n\n  parser.add_argument(\n    '--save_dir',\n    type=str,\n    default=ospeu('~/Dataset/market1501_cuhk03_duke')\n  )\n\n  args = parser.parse_args()\n\n  im_dirs = [\n    ospap(ospeu(args.market1501_im_dir)),\n    ospap(ospeu(args.cuhk03_im_dir)),\n    ospap(ospeu(args.duke_im_dir))\n  ]\n  partition_files = [\n    ospap(ospeu(args.market1501_partition_file)),\n    ospap(ospeu(args.cuhk03_partition_file)),\n    ospap(ospeu(args.duke_partition_file))\n  ]\n\n  save_dir = ospap(ospeu(args.save_dir))\n  may_make_dir(save_dir)\n\n  combine_trainval_sets(im_dirs, 
partition_files, save_dir)\n"
  },
  {
    "path": "script/dataset/mapping_im_names_duke.py",
    "content": "\"\"\"Mapping original image name (relative image path) -> my new image name.\nThe mapping is corresponding to transform_duke.py.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nimport os.path as osp\nfrom collections import defaultdict\n\nfrom bpm.utils.utils import save_pickle\nfrom bpm.utils.dataset_utils import get_im_names\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\n\n\ndef parse_original_im_name(img_name, parse_type='id'):\n  \"\"\"Get the person id or cam from an image name.\"\"\"\n  assert parse_type in ('id', 'cam')\n  if parse_type == 'id':\n    parsed = int(img_name[:4])\n  else:\n    parsed = int(img_name[6])\n  return parsed\n\n\ndef map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):\n  \"\"\"Map original im names to new im names.\"\"\"\n  cnt = defaultdict(int)\n  new_im_names = []\n  for im_name in ori_im_names:\n    im_name = osp.basename(im_name)\n    id = parse_im_name(im_name, 'id')\n    cam = parse_im_name(im_name, 'cam')\n    cnt[(id, cam)] += 1\n    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)\n    new_im_names.append(new_im_name)\n  return new_im_names\n\n\ndef save_im_name_mapping(raw_dir, ori_to_new_im_name_file):\n  im_names = []\n  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query']:\n    im_names_ = get_im_names(osp.join(raw_dir, dir_name), return_path=False, return_np=False)\n    im_names_.sort()\n    # Images in different original directories may have same names,\n    # so here we use relative paths as original image names.\n    im_names_ = [osp.join(dir_name, n) for n in im_names_]\n    im_names += im_names_\n  new_im_names = map_im_names(im_names, parse_original_im_name, new_im_name_tmpl)\n  ori_to_new_im_name = dict(zip(im_names, new_im_names))\n  save_pickle(ori_to_new_im_name, ori_to_new_im_name_file)\n  print('File saved to {}'.format(ori_to_new_im_name_file))\n\n  ##################\n  # Just Some Info #\n  
##################\n\n  print('len(im_names)', len(im_names))\n  print('len(set(im_names))', len(set(im_names)))\n  print('len(set(new_im_names))', len(set(new_im_names)))\n  print('len(ori_to_new_im_name)', len(ori_to_new_im_name))\n\n  bounding_box_train_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_train'), return_path=False, return_np=False)\n  bounding_box_test_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_test'), return_path=False, return_np=False)\n  query_im_names = get_im_names(osp.join(raw_dir, 'query'), return_path=False, return_np=False)\n\n  print('set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names))',\n        set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names)))\n  print('set(bounding_box_train_im_names).isdisjoint(set(query_im_names))',\n        set(bounding_box_train_im_names).isdisjoint(set(query_im_names)))\n\n  print('set(bounding_box_test_im_names).isdisjoint(set(query_im_names))',\n        set(bounding_box_test_im_names).isdisjoint(set(query_im_names)))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(description=\"Mapping DukeMTMC-reID Image Names\")\n  parser.add_argument('--raw_dir', type=str, default=osp.expanduser('~/Dataset/duke/DukeMTMC-reID'))\n  parser.add_argument('--ori_to_new_im_name_file', type=str, default=osp.expanduser('~/Dataset/duke/ori_to_new_im_name.pkl'))\n  args = parser.parse_args()\n  save_im_name_mapping(args.raw_dir, args.ori_to_new_im_name_file)"
  },
  {
    "path": "script/dataset/mapping_im_names_market1501.py",
    "content": "\"\"\"Mapping original image name (relative image path) -> my new image name.\nThe mapping is corresponding to transform_market1501.py.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nimport os.path as osp\nfrom collections import defaultdict\n\nfrom bpm.utils.utils import save_pickle\nfrom bpm.utils.dataset_utils import get_im_names\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\n\n\ndef parse_original_im_name(im_name, parse_type='id'):\n  \"\"\"Get the person id or cam from an image name.\"\"\"\n  assert parse_type in ('id', 'cam')\n  if parse_type == 'id':\n    parsed = -1 if im_name.startswith('-1') else int(im_name[:4])\n  else:\n    parsed = int(im_name[4]) if im_name.startswith('-1') \\\n      else int(im_name[6])\n  return parsed\n\n\ndef map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):\n  \"\"\"Map original im names to new im names.\"\"\"\n  cnt = defaultdict(int)\n  new_im_names = []\n  for im_name in ori_im_names:\n    im_name = osp.basename(im_name)\n    id = parse_im_name(im_name, 'id')\n    cam = parse_im_name(im_name, 'cam')\n    cnt[(id, cam)] += 1\n    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)\n    new_im_names.append(new_im_name)\n  return new_im_names\n\n\ndef save_im_name_mapping(raw_dir, ori_to_new_im_name_file):\n  im_names = []\n  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query', 'gt_bbox']:\n    im_names_ = get_im_names(osp.join(raw_dir, dir_name), return_path=False, return_np=False)\n    im_names_.sort()\n    # Filter out id -1\n    if dir_name == 'bounding_box_test':\n      im_names_ = [n for n in im_names_ if not n.startswith('-1')]\n    # Get (id, cam) in query set\n    if dir_name == 'query':\n      q_ids_cams = set([(parse_original_im_name(n, 'id'), parse_original_im_name(n, 'cam')) for n in im_names_])\n    # Filter out images that are not corresponding to query (id, cam)\n    if dir_name == 'gt_bbox':\n      
im_names_ = [n for n in im_names_ if (parse_original_im_name(n, 'id'), parse_original_im_name(n, 'cam')) in q_ids_cams]\n    # Images in different original directories may have same names,\n    # so here we use relative paths as original image names.\n    im_names_ = [osp.join(dir_name, n) for n in im_names_]\n    im_names += im_names_\n  new_im_names = map_im_names(im_names, parse_original_im_name, new_im_name_tmpl)\n  ori_to_new_im_name = dict(zip(im_names, new_im_names))\n  save_pickle(ori_to_new_im_name, ori_to_new_im_name_file)\n  print('File saved to {}'.format(ori_to_new_im_name_file))\n\n  ##################\n  # Just Some Info #\n  ##################\n\n  print('len(im_names)', len(im_names))\n  print('len(set(im_names))', len(set(im_names)))\n  print('len(set(new_im_names))', len(set(new_im_names)))\n  print('len(ori_to_new_im_name)', len(ori_to_new_im_name))\n\n  bounding_box_train_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_train'), return_path=False, return_np=False)\n  bounding_box_test_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_test'), return_path=False, return_np=False)\n  query_im_names = get_im_names(osp.join(raw_dir, 'query'), return_path=False, return_np=False)\n  gt_bbox_im_names = get_im_names(osp.join(raw_dir, 'gt_bbox'), return_path=False, return_np=False)\n\n  print('set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names))',\n        set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names)))\n  print('set(bounding_box_train_im_names).isdisjoint(set(query_im_names))',\n        set(bounding_box_train_im_names).isdisjoint(set(query_im_names)))\n  print('set(bounding_box_train_im_names).isdisjoint(set(gt_bbox_im_names))',\n        set(bounding_box_train_im_names).isdisjoint(set(gt_bbox_im_names)))\n\n  print('set(bounding_box_test_im_names).isdisjoint(set(query_im_names))',\n        set(bounding_box_test_im_names).isdisjoint(set(query_im_names)))\n  
print('set(bounding_box_test_im_names).isdisjoint(set(gt_bbox_im_names))',\n        set(bounding_box_test_im_names).isdisjoint(set(gt_bbox_im_names)))\n\n  print('set(query_im_names).isdisjoint(set(gt_bbox_im_names))',\n        set(query_im_names).isdisjoint(set(gt_bbox_im_names)))\n\n  print('len(query_im_names)', len(query_im_names))\n  print('len(gt_bbox_im_names)', len(gt_bbox_im_names))\n  print('len(set(query_im_names) & set(gt_bbox_im_names))', len(set(query_im_names) & set(gt_bbox_im_names)))\n  print('len(set(query_im_names) | set(gt_bbox_im_names))', len(set(query_im_names) | set(gt_bbox_im_names)))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(description=\"Mapping Market-1501 Image Names\")\n  parser.add_argument('--raw_dir', type=str, default=osp.expanduser('~/Dataset/market1501/Market-1501-v15.09.15'))\n  parser.add_argument('--ori_to_new_im_name_file', type=str, default=osp.expanduser('~/Dataset/market1501/ori_to_new_im_name.pkl'))\n  args = parser.parse_args()\n  save_im_name_mapping(args.raw_dir, args.ori_to_new_im_name_file)"
  },
  {
    "path": "script/dataset/transform_cuhk03.py",
    "content": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified dataset interface.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nfrom zipfile import ZipFile\nimport os.path as osp\nimport sys\nimport h5py\nfrom scipy.misc import imsave\nfrom itertools import chain\n\nfrom bpm.utils.utils import may_make_dir\nfrom bpm.utils.utils import load_pickle\nfrom bpm.utils.utils import save_pickle\n\nfrom bpm.utils.dataset_utils import partition_train_val_set\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\nfrom bpm.utils.dataset_utils import parse_im_name\n\n\ndef save_images(mat_file, save_dir, new_im_name_tmpl):\n  def deref(mat, ref):\n    return mat[ref][:].T\n\n  def dump(mat, refs, pid, cam, im_dir):\n    \"\"\"Save the images of a person under one camera.\"\"\"\n    for i, ref in enumerate(refs):\n      im = deref(mat, ref)\n      if im.size == 0 or im.ndim < 2: break\n      fname = new_im_name_tmpl.format(pid, cam, i)\n      imsave(osp.join(im_dir, fname), im)\n\n  mat = h5py.File(mat_file, 'r')\n  labeled_im_dir = osp.join(save_dir, 'labeled/images')\n  detected_im_dir = osp.join(save_dir, 'detected/images')\n  all_im_dir = osp.join(save_dir, 'all/images')\n\n  may_make_dir(labeled_im_dir)\n  may_make_dir(detected_im_dir)\n  may_make_dir(all_im_dir)\n\n  # loop through camera pairs\n  pid = 0\n  for labeled, detected in zip(mat['labeled'][0], mat['detected'][0]):\n    labeled, detected = deref(mat, labeled), deref(mat, detected)\n    assert labeled.shape == detected.shape\n    # loop through ids in a camera pair\n    for i in range(labeled.shape[0]):\n      # We don't care about whether different persons are under same cameras,\n      # we only care about the same person being under different cameras or not.\n      dump(mat, labeled[i, :5], pid, 0, labeled_im_dir)\n      dump(mat, labeled[i, 5:], pid, 1, labeled_im_dir)\n      dump(mat, 
detected[i, :5], pid, 0, detected_im_dir)\n      dump(mat, detected[i, 5:], pid, 1, detected_im_dir)\n      dump(mat, chain(detected[i, :5], labeled[i, :5]), pid, 0, all_im_dir)\n      dump(mat, chain(detected[i, 5:], labeled[i, 5:]), pid, 1, all_im_dir)\n      pid += 1\n      if pid % 100 == 0:\n        sys.stdout.write('\\033[F\\033[K')\n        print('Saving images {}/{}'.format(pid, 1467))\n\n\ndef transform(zip_file, train_test_partition_file, save_dir=None):\n  \"\"\"Save images and partition the train/val/test set.\n  \"\"\"\n  print(\"Extracting zip file\")\n  root = osp.dirname(osp.abspath(zip_file))\n  if save_dir is None:\n    save_dir = root\n  may_make_dir(save_dir)\n  with ZipFile(zip_file) as z:\n    z.extractall(path=save_dir)\n  print(\"Extracting zip file done\")\n  mat_file = osp.join(save_dir, osp.basename(zip_file)[:-4], 'cuhk-03.mat')\n\n  save_images(mat_file, save_dir, new_im_name_tmpl)\n\n  if osp.exists(train_test_partition_file):\n    train_test_partition = load_pickle(train_test_partition_file)\n  else:\n    raise RuntimeError('Train/test partition file should be provided.')\n\n  for im_type in ['detected', 'labeled']:\n    trainval_im_names = train_test_partition[im_type]['train_im_names']\n    trainval_ids = list(set([parse_im_name(n, 'id')\n                             for n in trainval_im_names]))\n    # Sort ids, so that id-to-label mapping remains the same when running\n    # the code on different machines.\n    trainval_ids.sort()\n    trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))\n    train_val_partition = \\\n      partition_train_val_set(trainval_im_names, parse_im_name, num_val_ids=100)\n    train_im_names = train_val_partition['train_im_names']\n    train_ids = list(set([parse_im_name(n, 'id')\n                          for n in train_val_partition['train_im_names']]))\n    # Sort ids, so that id-to-label mapping remains the same when running\n    # the code on different machines.\n    
train_ids.sort()\n    train_ids2labels = dict(zip(train_ids, range(len(train_ids))))\n\n    # A mark is used to denote whether the image is from\n    #   query (mark == 0), or\n    #   gallery (mark == 1), or\n    #   multi query (mark == 2) set\n\n    val_marks = [0, ] * len(train_val_partition['val_query_im_names']) \\\n                + [1, ] * len(train_val_partition['val_gallery_im_names'])\n    val_im_names = list(train_val_partition['val_query_im_names']) \\\n                   + list(train_val_partition['val_gallery_im_names'])\n    test_im_names = list(train_test_partition[im_type]['query_im_names']) \\\n                    + list(train_test_partition[im_type]['gallery_im_names'])\n    test_marks = [0, ] * len(train_test_partition[im_type]['query_im_names']) \\\n                 + [1, ] * len(\n      train_test_partition[im_type]['gallery_im_names'])\n    partitions = {'trainval_im_names': trainval_im_names,\n                  'trainval_ids2labels': trainval_ids2labels,\n                  'train_im_names': train_im_names,\n                  'train_ids2labels': train_ids2labels,\n                  'val_im_names': val_im_names,\n                  'val_marks': val_marks,\n                  'test_im_names': test_im_names,\n                  'test_marks': test_marks}\n    partition_file = osp.join(save_dir, im_type, 'partitions.pkl')\n    save_pickle(partitions, partition_file)\n    print('Partition file for \"{}\" saved to {}'.format(im_type, partition_file))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(description=\"Transform CUHK03 Dataset\")\n  parser.add_argument(\n    '--zip_file',\n    type=str,\n    default='~/Dataset/cuhk03/cuhk03_release.zip')\n  parser.add_argument(\n    '--save_dir',\n    type=str,\n    default='~/Dataset/cuhk03')\n  parser.add_argument(\n    '--train_test_partition_file',\n    type=str,\n    default='~/Dataset/cuhk03/re_ranking_train_test_split.pkl')\n  args = parser.parse_args()\n  
zip_file = osp.abspath(osp.expanduser(args.zip_file))\n  train_test_partition_file = osp.abspath(osp.expanduser(\n    args.train_test_partition_file))\n  save_dir = osp.abspath(osp.expanduser(args.save_dir))\n  transform(zip_file, train_test_partition_file, save_dir)\n"
  },
  {
    "path": "script/dataset/transform_duke.py",
    "content": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified dataset interface.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nfrom zipfile import ZipFile\nimport os.path as osp\nimport numpy as np\n\nfrom bpm.utils.utils import may_make_dir\nfrom bpm.utils.utils import save_pickle\n\nfrom bpm.utils.dataset_utils import get_im_names\nfrom bpm.utils.dataset_utils import partition_train_val_set\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\nfrom bpm.utils.dataset_utils import parse_im_name as parse_new_im_name\nfrom bpm.utils.dataset_utils import move_ims\n\n\ndef parse_original_im_name(img_name, parse_type='id'):\n  \"\"\"Get the person id or cam from an image name.\"\"\"\n  assert parse_type in ('id', 'cam')\n  if parse_type == 'id':\n    parsed = int(img_name[:4])\n  else:\n    parsed = int(img_name[6])\n  return parsed\n\n\ndef save_images(zip_file, save_dir=None, train_test_split_file=None):\n  \"\"\"Rename and move all used images to a directory.\"\"\"\n\n  print(\"Extracting zip file\")\n  root = osp.dirname(osp.abspath(zip_file))\n  if save_dir is None:\n    save_dir = root\n  may_make_dir(save_dir)\n  with ZipFile(zip_file) as z:\n    z.extractall(path=save_dir)\n  print(\"Extracting zip file done\")\n\n  new_im_dir = osp.join(save_dir, 'images')\n  may_make_dir(new_im_dir)\n  raw_dir = osp.join(save_dir, osp.basename(zip_file)[:-4])\n\n  im_paths = []\n  nums = []\n\n  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query']:\n    im_paths_ = get_im_names(osp.join(raw_dir, dir_name),\n                             return_path=True, return_np=False)\n    im_paths_.sort()\n    im_paths += list(im_paths_)\n    nums.append(len(im_paths_))\n\n  im_names = move_ims(\n    im_paths, new_im_dir, parse_original_im_name, new_im_name_tmpl)\n\n  split = dict()\n  keys = ['trainval_im_names', 'gallery_im_names', 'q_im_names']\n  
inds = [0] + nums\n  inds = np.cumsum(inds)\n  for i, k in enumerate(keys):\n    split[k] = im_names[inds[i]:inds[i + 1]]\n\n  save_pickle(split, train_test_split_file)\n  print('Saving images done.')\n  return split\n\n\ndef transform(zip_file, save_dir=None):\n  \"\"\"Refactor file directories, rename images and partition the train/val/test \n  set.\n  \"\"\"\n\n  train_test_split_file = osp.join(save_dir, 'train_test_split.pkl')\n  train_test_split = save_images(zip_file, save_dir, train_test_split_file)\n  # train_test_split = load_pickle(train_test_split_file)\n\n  # partition train/val/test set\n\n  trainval_ids = list(set([parse_new_im_name(n, 'id')\n                           for n in train_test_split['trainval_im_names']]))\n  # Sort ids, so that id-to-label mapping remains the same when running\n  # the code on different machines.\n  trainval_ids.sort()\n  trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))\n  partitions = partition_train_val_set(\n    train_test_split['trainval_im_names'], parse_new_im_name, num_val_ids=100)\n  train_im_names = partitions['train_im_names']\n  train_ids = list(set([parse_new_im_name(n, 'id')\n                        for n in partitions['train_im_names']]))\n  # Sort ids, so that id-to-label mapping remains the same when running\n  # the code on different machines.\n  train_ids.sort()\n  train_ids2labels = dict(zip(train_ids, range(len(train_ids))))\n\n  # A mark is used to denote whether the image is from\n  #   query (mark == 0), or\n  #   gallery (mark == 1), or\n  #   multi query (mark == 2) set\n\n  val_marks = [0, ] * len(partitions['val_query_im_names']) \\\n              + [1, ] * len(partitions['val_gallery_im_names'])\n  val_im_names = list(partitions['val_query_im_names']) \\\n                 + list(partitions['val_gallery_im_names'])\n\n  test_im_names = list(train_test_split['q_im_names']) \\\n                  + list(train_test_split['gallery_im_names'])\n  test_marks = [0, ] * 
len(train_test_split['q_im_names']) \\\n               + [1, ] * len(train_test_split['gallery_im_names'])\n\n  partitions = {'trainval_im_names': train_test_split['trainval_im_names'],\n                'trainval_ids2labels': trainval_ids2labels,\n                'train_im_names': train_im_names,\n                'train_ids2labels': train_ids2labels,\n                'val_im_names': val_im_names,\n                'val_marks': val_marks,\n                'test_im_names': test_im_names,\n                'test_marks': test_marks}\n  partition_file = osp.join(save_dir, 'partitions.pkl')\n  save_pickle(partitions, partition_file)\n  print('Partition file saved to {}'.format(partition_file))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(\n    description=\"Transform DukeMTMC-reID Dataset\")\n  parser.add_argument('--zip_file', type=str,\n                      default='~/Dataset/duke/DukeMTMC-reID.zip')\n  parser.add_argument('--save_dir', type=str,\n                      default='~/Dataset/duke')\n  args = parser.parse_args()\n  zip_file = osp.abspath(osp.expanduser(args.zip_file))\n  save_dir = osp.abspath(osp.expanduser(args.save_dir))\n  transform(zip_file, save_dir)\n"
  },
  {
    "path": "script/dataset/transform_market1501.py",
    "content": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified dataset interface.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nfrom zipfile import ZipFile\nimport os.path as osp\nimport numpy as np\n\nfrom bpm.utils.utils import may_make_dir\nfrom bpm.utils.utils import save_pickle\nfrom bpm.utils.utils import load_pickle\n\nfrom bpm.utils.dataset_utils import get_im_names\nfrom bpm.utils.dataset_utils import partition_train_val_set\nfrom bpm.utils.dataset_utils import new_im_name_tmpl\nfrom bpm.utils.dataset_utils import parse_im_name as parse_new_im_name\nfrom bpm.utils.dataset_utils import move_ims\n\n\ndef parse_original_im_name(im_name, parse_type='id'):\n  \"\"\"Get the person id or cam from an image name.\"\"\"\n  assert parse_type in ('id', 'cam')\n  if parse_type == 'id':\n    parsed = -1 if im_name.startswith('-1') else int(im_name[:4])\n  else:\n    parsed = int(im_name[4]) if im_name.startswith('-1') \\\n      else int(im_name[6])\n  return parsed\n\n\ndef save_images(zip_file, save_dir=None, train_test_split_file=None):\n  \"\"\"Rename and move all used images to a directory.\"\"\"\n\n  print(\"Extracting zip file\")\n  root = osp.dirname(osp.abspath(zip_file))\n  if save_dir is None:\n    save_dir = root\n  may_make_dir(osp.abspath(save_dir))\n  with ZipFile(zip_file) as z:\n    z.extractall(path=save_dir)\n  print(\"Extracting zip file done\")\n\n  new_im_dir = osp.join(save_dir, 'images')\n  may_make_dir(osp.abspath(new_im_dir))\n  raw_dir = osp.join(save_dir, osp.basename(zip_file)[:-4])\n\n  im_paths = []\n  nums = []\n\n  im_paths_ = get_im_names(osp.join(raw_dir, 'bounding_box_train'),\n                           return_path=True, return_np=False)\n  im_paths_.sort()\n  im_paths += list(im_paths_)\n  nums.append(len(im_paths_))\n\n  im_paths_ = get_im_names(osp.join(raw_dir, 'bounding_box_test'),\n                           
return_path=True, return_np=False)\n  im_paths_.sort()\n  im_paths_ = [p for p in im_paths_ if not osp.basename(p).startswith('-1')]\n  im_paths += list(im_paths_)\n  nums.append(len(im_paths_))\n\n  im_paths_ = get_im_names(osp.join(raw_dir, 'query'),\n                           return_path=True, return_np=False)\n  im_paths_.sort()\n  im_paths += list(im_paths_)\n  nums.append(len(im_paths_))\n  q_ids_cams = set([(parse_original_im_name(osp.basename(p), 'id'),\n                     parse_original_im_name(osp.basename(p), 'cam'))\n                    for p in im_paths_])\n\n  im_paths_ = get_im_names(osp.join(raw_dir, 'gt_bbox'),\n                           return_path=True, return_np=False)\n  im_paths_.sort()\n  # Only gather images for those ids and cams used in testing.\n  im_paths_ = [p for p in im_paths_\n               if (parse_original_im_name(osp.basename(p), 'id'),\n                   parse_original_im_name(osp.basename(p), 'cam'))\n               in q_ids_cams]\n  im_paths += list(im_paths_)\n  nums.append(len(im_paths_))\n\n  im_names = move_ims(\n    im_paths, new_im_dir, parse_original_im_name, new_im_name_tmpl)\n\n  split = dict()\n  keys = ['trainval_im_names', 'gallery_im_names', 'q_im_names', 'mq_im_names']\n  inds = [0] + nums\n  inds = np.cumsum(np.array(inds))\n  for i, k in enumerate(keys):\n    split[k] = im_names[inds[i]:inds[i + 1]]\n\n  save_pickle(split, train_test_split_file)\n  print('Saving images done.')\n  return split\n\n\ndef transform(zip_file, save_dir=None):\n  \"\"\"Refactor file directories, rename images and partition the train/val/test \n  set.\n  \"\"\"\n\n  train_test_split_file = osp.join(save_dir, 'train_test_split.pkl')\n  train_test_split = save_images(zip_file, save_dir, train_test_split_file)\n  # train_test_split = load_pickle(train_test_split_file)\n\n  # partition train/val/test set\n\n  trainval_ids = list(set([parse_new_im_name(n, 'id')\n                           for n in 
train_test_split['trainval_im_names']]))\n  # Sort ids, so that id-to-label mapping remains the same when running\n  # the code on different machines.\n  trainval_ids.sort()\n  trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))\n  partitions = partition_train_val_set(\n    train_test_split['trainval_im_names'], parse_new_im_name, num_val_ids=100)\n  train_im_names = partitions['train_im_names']\n  train_ids = list(set([parse_new_im_name(n, 'id')\n                        for n in partitions['train_im_names']]))\n  # Sort ids, so that id-to-label mapping remains the same when running\n  # the code on different machines.\n  train_ids.sort()\n  train_ids2labels = dict(zip(train_ids, range(len(train_ids))))\n\n  # A mark is used to denote whether the image is from\n  #   query (mark == 0), or\n  #   gallery (mark == 1), or\n  #   multi query (mark == 2) set\n\n  val_marks = [0, ] * len(partitions['val_query_im_names']) \\\n              + [1, ] * len(partitions['val_gallery_im_names'])\n  val_im_names = list(partitions['val_query_im_names']) \\\n                 + list(partitions['val_gallery_im_names'])\n\n  test_im_names = list(train_test_split['q_im_names']) \\\n                  + list(train_test_split['mq_im_names']) \\\n                  + list(train_test_split['gallery_im_names'])\n  test_marks = [0, ] * len(train_test_split['q_im_names']) \\\n               + [2, ] * len(train_test_split['mq_im_names']) \\\n               + [1, ] * len(train_test_split['gallery_im_names'])\n\n  partitions = {'trainval_im_names': train_test_split['trainval_im_names'],\n                'trainval_ids2labels': trainval_ids2labels,\n                'train_im_names': train_im_names,\n                'train_ids2labels': train_ids2labels,\n                'val_im_names': val_im_names,\n                'val_marks': val_marks,\n                'test_im_names': test_im_names,\n                'test_marks': test_marks}\n  partition_file = osp.join(save_dir, 
'partitions.pkl')\n  save_pickle(partitions, partition_file)\n  print('Partition file saved to {}'.format(partition_file))\n\n\nif __name__ == '__main__':\n  import argparse\n\n  parser = argparse.ArgumentParser(description=\"Transform Market1501 Dataset\")\n  parser.add_argument('--zip_file', type=str,\n                      default='~/Dataset/market1501/Market-1501-v15.09.15.zip')\n  parser.add_argument('--save_dir', type=str,\n                      default='~/Dataset/market1501')\n  args = parser.parse_args()\n  zip_file = osp.abspath(osp.expanduser(args.zip_file))\n  save_dir = osp.abspath(osp.expanduser(args.save_dir))\n  transform(zip_file, save_dir)\n"
  },
  {
    "path": "script/experiment/train_pcb.py",
    "content": "from __future__ import print_function\n\nimport sys\n\nsys.path.insert(0, '.')\n\nimport torch\nfrom torch.autograd import Variable\nimport torch.optim as optim\nfrom torch.nn.parallel import DataParallel\n\nimport time\nimport os.path as osp\nfrom tensorboardX import SummaryWriter\nimport numpy as np\nimport argparse\n\nfrom bpm.dataset import create_dataset\nfrom bpm.model.PCBModel import PCBModel as Model\n\nfrom bpm.utils.utils import time_str\nfrom bpm.utils.utils import str2bool\nfrom bpm.utils.utils import may_set_mode\nfrom bpm.utils.utils import load_state_dict\nfrom bpm.utils.utils import load_ckpt\nfrom bpm.utils.utils import save_ckpt\nfrom bpm.utils.utils import set_devices\nfrom bpm.utils.utils import AverageMeter\nfrom bpm.utils.utils import to_scalar\nfrom bpm.utils.utils import ReDirectSTD\nfrom bpm.utils.utils import set_seed\nfrom bpm.utils.utils import adjust_lr_staircase\n\n\nclass Config(object):\n  def __init__(self):\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('-d', '--sys_device_ids', type=eval, default=(0,))\n    parser.add_argument('-r', '--run', type=int, default=1)\n    parser.add_argument('--set_seed', type=str2bool, default=False)\n    parser.add_argument('--dataset', type=str, default='market1501',\n                        choices=['market1501', 'cuhk03', 'duke', 'combined'])\n    parser.add_argument('--trainset_part', type=str, default='trainval',\n                        choices=['trainval', 'train'])\n\n    parser.add_argument('--resize_h_w', type=eval, default=(384, 128))\n    # These several only for training set\n    parser.add_argument('--crop_prob', type=float, default=0)\n    parser.add_argument('--crop_ratio', type=float, default=1)\n    parser.add_argument('--mirror', type=str2bool, default=True)\n    parser.add_argument('--batch_size', type=int, default=64)\n\n    parser.add_argument('--log_to_file', type=str2bool, default=True)\n    parser.add_argument('--steps_per_log', type=int, 
default=20)\n    parser.add_argument('--epochs_per_val', type=int, default=1)\n\n    parser.add_argument('--last_conv_stride', type=int, default=1, choices=[1, 2])\n    # When the stride is changed to 1, we can compensate for the receptive field\n    # using dilated convolution. However, experiments show dilated convolution is useless.\n    parser.add_argument('--last_conv_dilation', type=int, default=1, choices=[1, 2])\n    parser.add_argument('--num_stripes', type=int, default=6)\n    parser.add_argument('--local_conv_out_channels', type=int, default=256)\n\n    parser.add_argument('--only_test', type=str2bool, default=False)\n    parser.add_argument('--resume', type=str2bool, default=False)\n    parser.add_argument('--exp_dir', type=str, default='')\n    parser.add_argument('--model_weight_file', type=str, default='')\n\n    parser.add_argument('--new_params_lr', type=float, default=0.1)\n    parser.add_argument('--finetuned_params_lr', type=float, default=0.01)\n    parser.add_argument('--staircase_decay_at_epochs',\n                        type=eval, default=(41,))\n    parser.add_argument('--staircase_decay_multiply_factor',\n                        type=float, default=0.1)\n    parser.add_argument('--total_epochs', type=int, default=60)\n\n    args = parser.parse_args()\n\n    # gpu ids\n    self.sys_device_ids = args.sys_device_ids\n\n    # If you want to make your results exactly reproducible, you have\n    # to fix a random seed.\n    if args.set_seed:\n      self.seed = 1\n    else:\n      self.seed = None\n\n    # The experiments can be run for several times and performances be averaged.\n    # `run` starts from `1`, not `0`.\n    self.run = args.run\n\n    ###########\n    # Dataset #\n    ###########\n\n    # If you want to make your results exactly reproducible, you have\n    # to also set num of threads to 1 during training.\n    if self.seed is not None:\n      self.prefetch_threads = 1\n    else:\n      self.prefetch_threads = 2\n\n    
self.dataset = args.dataset\n    self.trainset_part = args.trainset_part\n\n    # Image Processing\n\n    # Just for training set\n    self.crop_prob = args.crop_prob\n    self.crop_ratio = args.crop_ratio\n    self.resize_h_w = args.resize_h_w\n\n    # Whether to scale by 1/255\n    self.scale_im = True\n    self.im_mean = [0.486, 0.459, 0.408]\n    self.im_std = [0.229, 0.224, 0.225]\n\n    self.train_mirror_type = 'random' if args.mirror else None\n    self.train_batch_size = args.batch_size\n    self.train_final_batch = False\n    self.train_shuffle = True\n\n    self.test_mirror_type = None\n    self.test_batch_size = 32\n    self.test_final_batch = True\n    self.test_shuffle = False\n\n    dataset_kwargs = dict(\n      name=self.dataset,\n      resize_h_w=self.resize_h_w,\n      scale=self.scale_im,\n      im_mean=self.im_mean,\n      im_std=self.im_std,\n      batch_dims='NCHW',\n      num_prefetch_threads=self.prefetch_threads)\n\n    prng = np.random\n    if self.seed is not None:\n      prng = np.random.RandomState(self.seed)\n    self.train_set_kwargs = dict(\n      part=self.trainset_part,\n      batch_size=self.train_batch_size,\n      final_batch=self.train_final_batch,\n      shuffle=self.train_shuffle,\n      crop_prob=self.crop_prob,\n      crop_ratio=self.crop_ratio,\n      mirror_type=self.train_mirror_type,\n      prng=prng)\n    self.train_set_kwargs.update(dataset_kwargs)\n\n    prng = np.random\n    if self.seed is not None:\n      prng = np.random.RandomState(self.seed)\n    self.val_set_kwargs = dict(\n      part='val',\n      batch_size=self.test_batch_size,\n      final_batch=self.test_final_batch,\n      shuffle=self.test_shuffle,\n      mirror_type=self.test_mirror_type,\n      prng=prng)\n    self.val_set_kwargs.update(dataset_kwargs)\n\n    prng = np.random\n    if self.seed is not None:\n      prng = np.random.RandomState(self.seed)\n    self.test_set_kwargs = dict(\n      part='test',\n      batch_size=self.test_batch_size,\n      
final_batch=self.test_final_batch,\n      shuffle=self.test_shuffle,\n      mirror_type=self.test_mirror_type,\n      prng=prng)\n    self.test_set_kwargs.update(dataset_kwargs)\n\n    ###############\n    # ReID Model  #\n    ###############\n\n    # The last block of ResNet has stride 2. We can set the stride to 1 so that\n    # the spatial resolution before global pooling is doubled.\n    self.last_conv_stride = args.last_conv_stride\n    # When the stride is changed to 1, we can compensate for the receptive field\n    # using dilated convolution. However, experiments show dilated convolution is useless.\n    self.last_conv_dilation = args.last_conv_dilation\n    # Number of stripes (parts)\n    self.num_stripes = args.num_stripes\n    # Output channel of 1x1 conv\n    self.local_conv_out_channels = args.local_conv_out_channels\n\n    #############\n    # Training  #\n    #############\n\n    self.momentum = 0.9\n    self.weight_decay = 0.0005\n\n    # Initial learning rate\n    self.new_params_lr = args.new_params_lr\n    self.finetuned_params_lr = args.finetuned_params_lr\n    self.staircase_decay_at_epochs = args.staircase_decay_at_epochs\n    self.staircase_decay_multiply_factor = args.staircase_decay_multiply_factor\n    # Number of epochs to train\n    self.total_epochs = args.total_epochs\n\n    # How often (in epochs) to test on val set.\n    self.epochs_per_val = args.epochs_per_val\n\n    # How often (in batches) to log. If you only need to log the average\n    # information for each epoch, set this to a large value, e.g. 
1e10.\n    self.steps_per_log = args.steps_per_log\n\n    # Only test, without training.\n    self.only_test = args.only_test\n\n    self.resume = args.resume\n\n    #######\n    # Log #\n    #######\n\n    # If True,\n    # 1) stdout and stderr will be redirected to file,\n    # 2) training loss etc will be written to tensorboard,\n    # 3) checkpoint will be saved\n    self.log_to_file = args.log_to_file\n\n    # The root dir of logs.\n    if args.exp_dir == '':\n      self.exp_dir = osp.join(\n        'exp/train',\n        '{}'.format(self.dataset),\n        'run{}'.format(self.run),\n      )\n    else:\n      self.exp_dir = args.exp_dir\n\n    self.stdout_file = osp.join(\n      self.exp_dir, 'stdout_{}.txt'.format(time_str()))\n    self.stderr_file = osp.join(\n      self.exp_dir, 'stderr_{}.txt'.format(time_str()))\n\n    # Saving model weights and optimizer states, for resuming.\n    self.ckpt_file = osp.join(self.exp_dir, 'ckpt.pth')\n    # Just for loading a pretrained model; no optimizer states are needed.\n    self.model_weight_file = args.model_weight_file\n\n\nclass ExtractFeature(object):\n  \"\"\"A function to be called on the val/test set, to extract features.\n  Args:\n    model: the network used to extract features.\n    TVT: A callable to transfer images to a specific device.\n  \"\"\"\n\n  def __init__(self, model, TVT):\n    self.model = model\n    self.TVT = TVT\n\n  def __call__(self, ims):\n    old_train_eval_model = self.model.training\n    # Set eval mode.\n    # Force all BN layers to use global mean and variance, also disable\n    # dropout.\n    self.model.eval()\n\n    ims = Variable(self.TVT(torch.from_numpy(ims).float()))\n    try:\n      local_feat_list, logits_list = self.model(ims)\n    except ValueError:\n      # A model built without a classifier head returns only the feature list.\n      local_feat_list = self.model(ims)\n    feat = [lf.data.cpu().numpy() for lf in local_feat_list]\n    feat = np.concatenate(feat, axis=1)\n\n    # Restore the model to its old train/eval mode.\n    self.model.train(old_train_eval_model)\n    return feat\n\n\ndef main():\n  cfg = 
Config()\n\n  # Redirect logs to both console and file.\n  if cfg.log_to_file:\n    ReDirectSTD(cfg.stdout_file, 'stdout', False)\n    ReDirectSTD(cfg.stderr_file, 'stderr', False)\n\n  # Lazily create SummaryWriter\n  writer = None\n\n  TVT, TMO = set_devices(cfg.sys_device_ids)\n\n  if cfg.seed is not None:\n    set_seed(cfg.seed)\n\n  # Dump the configurations to log.\n  import pprint\n  print('-' * 60)\n  print('cfg.__dict__')\n  pprint.pprint(cfg.__dict__)\n  print('-' * 60)\n\n  ###########\n  # Dataset #\n  ###########\n\n  train_set = create_dataset(**cfg.train_set_kwargs)\n  num_classes = len(train_set.ids2labels)\n  # The combined dataset does not provide val set currently.\n  val_set = None if cfg.dataset == 'combined' else create_dataset(**cfg.val_set_kwargs)\n\n  test_sets = []\n  test_set_names = []\n  if cfg.dataset == 'combined':\n    for name in ['market1501', 'cuhk03', 'duke']:\n      cfg.test_set_kwargs['name'] = name\n      test_sets.append(create_dataset(**cfg.test_set_kwargs))\n      test_set_names.append(name)\n  else:\n    test_sets.append(create_dataset(**cfg.test_set_kwargs))\n    test_set_names.append(cfg.dataset)\n\n  ###########\n  # Models  #\n  ###########\n\n  model = Model(\n    last_conv_stride=cfg.last_conv_stride,\n    num_stripes=cfg.num_stripes,\n    local_conv_out_channels=cfg.local_conv_out_channels,\n    num_classes=num_classes\n  )\n  # Model wrapper\n  model_w = DataParallel(model)\n\n  #############################\n  # Criteria and Optimizers   #\n  #############################\n\n  criterion = torch.nn.CrossEntropyLoss()\n\n  # To finetune from ImageNet weights\n  finetuned_params = list(model.base.parameters())\n  # To train from scratch\n  new_params = [p for n, p in model.named_parameters()\n                if not n.startswith('base.')]\n  param_groups = [{'params': finetuned_params, 'lr': cfg.finetuned_params_lr},\n                  {'params': new_params, 'lr': cfg.new_params_lr}]\n  optimizer = optim.SGD(\n    
param_groups,\n    momentum=cfg.momentum,\n    weight_decay=cfg.weight_decay)\n\n  # Bind them together just to save some code in the following usage.\n  modules_optims = [model, optimizer]\n\n  ################################\n  # May Resume Models and Optims #\n  ################################\n\n  if cfg.resume:\n    resume_ep, scores = load_ckpt(modules_optims, cfg.ckpt_file)\n\n  # May Transfer Models and Optims to Specified Device. Transferring the optimizer\n  # copes with the case where the checkpoint is loaded onto a new device.\n  TMO(modules_optims)\n\n  ########\n  # Test #\n  ########\n\n  def test(load_model_weight=False):\n    if load_model_weight:\n      if cfg.model_weight_file != '':\n        map_location = (lambda storage, loc: storage)\n        sd = torch.load(cfg.model_weight_file, map_location=map_location)\n        load_state_dict(model, sd)\n        print('Loaded model weights from {}'.format(cfg.model_weight_file))\n      else:\n        load_ckpt(modules_optims, cfg.ckpt_file)\n\n    for test_set, name in zip(test_sets, test_set_names):\n      test_set.set_feat_func(ExtractFeature(model_w, TVT))\n      print('\\n=========> Test on dataset: {} <=========\\n'.format(name))\n      test_set.eval(\n        normalize_feat=True,\n        verbose=True)\n\n  def validate():\n    if val_set.extract_feat_func is None:\n      val_set.set_feat_func(ExtractFeature(model_w, TVT))\n    print('\\n===== Test on validation set =====\\n')\n    mAP, cmc_scores, _, _ = val_set.eval(\n      normalize_feat=True,\n      to_re_rank=False,\n      verbose=True)\n    print()\n    return mAP, cmc_scores[0]\n\n  if cfg.only_test:\n    test(load_model_weight=True)\n    return\n\n  ############\n  # Training #\n  ############\n\n  start_ep = resume_ep if cfg.resume else 0\n  for ep in range(start_ep, cfg.total_epochs):\n\n    # Adjust Learning Rate\n    adjust_lr_staircase(\n      optimizer.param_groups,\n      [cfg.finetuned_params_lr, cfg.new_params_lr],\n      ep + 
1,\n      cfg.staircase_decay_at_epochs,\n      cfg.staircase_decay_multiply_factor)\n\n    may_set_mode(modules_optims, 'train')\n\n    # For recording loss\n    loss_meter = AverageMeter()\n\n    ep_st = time.time()\n    step = 0\n    epoch_done = False\n    while not epoch_done:\n\n      step += 1\n      step_st = time.time()\n\n      ims, im_names, labels, mirrored, epoch_done = train_set.next_batch()\n\n      ims_var = Variable(TVT(torch.from_numpy(ims).float()))\n      labels_var = Variable(TVT(torch.from_numpy(labels).long()))\n\n      _, logits_list = model_w(ims_var)\n      loss = torch.sum(\n        torch.cat([criterion(logits, labels_var) for logits in logits_list]))\n\n      optimizer.zero_grad()\n      loss.backward()\n      optimizer.step()\n\n      ############\n      # Step Log #\n      ############\n\n      loss_meter.update(to_scalar(loss))\n\n      if step % cfg.steps_per_log == 0:\n        log = '\\tStep {}/Ep {}, {:.2f}s, loss {:.4f}'.format(\n          step, ep + 1, time.time() - step_st, loss_meter.val)\n        print(log)\n\n    #############\n    # Epoch Log #\n    #############\n\n    log = 'Ep {}, {:.2f}s, loss {:.4f}'.format(\n      ep + 1, time.time() - ep_st, loss_meter.avg)\n    print(log)\n\n    ##########################\n    # Test on Validation Set #\n    ##########################\n\n    mAP, Rank1 = 0, 0\n    if ((ep + 1) % cfg.epochs_per_val == 0) and (val_set is not None):\n      mAP, Rank1 = validate()\n\n    # Log to TensorBoard\n\n    if cfg.log_to_file:\n      if writer is None:\n        writer = SummaryWriter(log_dir=osp.join(cfg.exp_dir, 'tensorboard'))\n      writer.add_scalars(\n        'val scores',\n        dict(mAP=mAP,\n             Rank1=Rank1),\n        ep)\n      writer.add_scalars(\n        'loss',\n        dict(loss=loss_meter.avg, ),\n        ep)\n\n    # save ckpt\n    if cfg.log_to_file:\n      save_ckpt(modules_optims, ep + 1, 0, cfg.ckpt_file)\n\n  ########\n  # Test #\n  ########\n\n  
test(load_model_weight=False)\n\n\nif __name__ == '__main__':\n  main()\n"
  },
  {
    "path": "script/experiment/visualize_rank_list.py",
    "content": "from __future__ import print_function\n\nimport sys\n\nsys.path.insert(0, '.')\n\nimport torch\nfrom torch.autograd import Variable\nfrom torch.nn.parallel import DataParallel\n\nimport os.path as osp\nfrom os.path import join as ospj\n\nimport numpy as np\nimport argparse\n\nfrom bpm.dataset import create_dataset\nfrom bpm.model.PCBModel import PCBModel as Model\n\nfrom bpm.utils.utils import time_str\nfrom bpm.utils.utils import str2bool\nfrom bpm.utils.utils import load_state_dict\nfrom bpm.utils.utils import set_devices\nfrom bpm.utils.utils import ReDirectSTD\nfrom bpm.utils.utils import measure_time\nfrom bpm.utils.distance import compute_dist\nfrom bpm.utils.visualization import get_rank_list\nfrom bpm.utils.visualization import save_rank_list_to_im\n\n\nclass Config(object):\n  def __init__(self):\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('-d', '--sys_device_ids', type=eval, default=(0,))\n    parser.add_argument('--dataset', type=str, default='market1501',\n                        choices=['market1501', 'cuhk03', 'duke'])\n\n    parser.add_argument('--num_queries', type=int, default=16)\n    parser.add_argument('--rank_list_size', type=int, default=10)\n\n    parser.add_argument('--resize_h_w', type=eval, default=(384, 128))\n    parser.add_argument('--last_conv_stride', type=int, default=1,\n                        choices=[1, 2])\n    parser.add_argument('--num_stripes', type=int, default=6)\n    parser.add_argument('--local_conv_out_channels', type=int, default=256)\n\n    parser.add_argument('--log_to_file', type=str2bool, default=True)\n    parser.add_argument('--exp_dir', type=str, default='')\n    parser.add_argument('--ckpt_file', type=str, default='')\n    parser.add_argument('--model_weight_file', type=str, default='')\n\n    args = parser.parse_args()\n\n    # gpu ids\n    self.sys_device_ids = args.sys_device_ids\n\n    self.num_queries = args.num_queries\n    self.rank_list_size = 
args.rank_list_size\n\n    ###########\n    # Dataset #\n    ###########\n\n    self.dataset = args.dataset\n    self.prefetch_threads = 2\n\n    # Image Processing\n\n    self.resize_h_w = args.resize_h_w\n\n    # Whether to scale by 1/255\n    self.scale_im = True\n    self.im_mean = [0.486, 0.459, 0.408]\n    self.im_std = [0.229, 0.224, 0.225]\n\n    self.test_mirror_type = None\n    self.test_batch_size = 32\n    self.test_final_batch = True\n    self.test_shuffle = False\n\n    dataset_kwargs = dict(\n      name=self.dataset,\n      resize_h_w=self.resize_h_w,\n      scale=self.scale_im,\n      im_mean=self.im_mean,\n      im_std=self.im_std,\n      batch_dims='NCHW',\n      num_prefetch_threads=self.prefetch_threads)\n\n    prng = np.random\n    self.test_set_kwargs = dict(\n      part='test',\n      batch_size=self.test_batch_size,\n      final_batch=self.test_final_batch,\n      shuffle=self.test_shuffle,\n      mirror_type=self.test_mirror_type,\n      prng=prng)\n    self.test_set_kwargs.update(dataset_kwargs)\n\n    ###############\n    # ReID Model  #\n    ###############\n\n    # The last block of ResNet has stride 2. 
We can set the stride to 1 so that\n    # the spatial resolution before global pooling is doubled.\n    self.last_conv_stride = args.last_conv_stride\n    # Number of stripes (parts)\n    self.num_stripes = args.num_stripes\n    # Output channel of 1x1 conv\n    self.local_conv_out_channels = args.local_conv_out_channels\n\n    #######\n    # Log #\n    #######\n\n    # If True, stdout and stderr will be redirected to file\n    self.log_to_file = args.log_to_file\n\n    # The root dir of logs.\n    if args.exp_dir == '':\n      self.exp_dir = osp.join(\n        'exp/visualize_rank_list',\n        '{}'.format(self.dataset),\n      )\n    else:\n      self.exp_dir = args.exp_dir\n\n    self.stdout_file = osp.join(\n      self.exp_dir, 'stdout_{}.txt'.format(time_str()))\n    self.stderr_file = osp.join(\n      self.exp_dir, 'stderr_{}.txt'.format(time_str()))\n\n    # Model weights and optimizer states, for resuming.\n    self.ckpt_file = args.ckpt_file\n    # Just for loading a pretrained model; no optimizer states are needed.\n    self.model_weight_file = args.model_weight_file\n\n\nclass ExtractFeature(object):\n  \"\"\"A function to be called on the val/test set, to extract features.\n  Args:\n    model: the network used to extract features.\n    TVT: A callable to transfer images to a specific device.\n  \"\"\"\n\n  def __init__(self, model, TVT):\n    self.model = model\n    self.TVT = TVT\n\n  def __call__(self, ims):\n    old_train_eval_model = self.model.training\n    # Set eval mode.\n    # Force all BN layers to use global mean and variance, also disable\n    # dropout.\n    self.model.eval()\n\n    ims = Variable(self.TVT(torch.from_numpy(ims).float()))\n    try:\n      local_feat_list, logits_list = self.model(ims)\n    except ValueError:\n      # A model built without a classifier head returns only the feature list.\n      local_feat_list = self.model(ims)\n    feat = [lf.data.cpu().numpy() for lf in local_feat_list]\n    feat = np.concatenate(feat, axis=1)\n\n    # Restore the model to its old train/eval mode.\n    self.model.train(old_train_eval_model)\n    return feat\n\n\ndef main():\n  cfg 
= Config()\n\n  # Redirect logs to both console and file.\n  if cfg.log_to_file:\n    ReDirectSTD(cfg.stdout_file, 'stdout', False)\n    ReDirectSTD(cfg.stderr_file, 'stderr', False)\n\n  TVT, TMO = set_devices(cfg.sys_device_ids)\n\n  # Dump the configurations to log.\n  import pprint\n  print('-' * 60)\n  print('cfg.__dict__')\n  pprint.pprint(cfg.__dict__)\n  print('-' * 60)\n\n  ###########\n  # Dataset #\n  ###########\n\n  test_set = create_dataset(**cfg.test_set_kwargs)\n\n  #########\n  # Model #\n  #########\n\n  model = Model(\n    last_conv_stride=cfg.last_conv_stride,\n    num_stripes=cfg.num_stripes,\n    local_conv_out_channels=cfg.local_conv_out_channels,\n    num_classes=0\n  )\n  # Model wrapper\n  model_w = DataParallel(model)\n\n  # May Transfer Model to Specified Device.\n  TMO([model])\n\n  #####################\n  # Load Model Weight #\n  #####################\n\n  # To first load weights to CPU\n  map_location = (lambda storage, loc: storage)\n  used_file = cfg.model_weight_file or cfg.ckpt_file\n  loaded = torch.load(used_file, map_location=map_location)\n  if cfg.model_weight_file == '':\n    loaded = loaded['state_dicts'][0]\n  load_state_dict(model, loaded)\n  print('Loaded model weights from {}'.format(used_file))\n\n  ###################\n  # Extract Feature #\n  ###################\n\n  test_set.set_feat_func(ExtractFeature(model_w, TVT))\n\n  with measure_time('Extracting feature...', verbose=True):\n    feat, ids, cams, im_names, marks = test_set.extract_feat(True, verbose=True)\n\n  #######################\n  # Select Query Images #\n  #######################\n\n  # Fix some query images, so that the visualization for different models can\n  # be compared.\n\n  # Sort in the order of image names\n  inds = np.argsort(im_names)\n  feat, ids, cams, im_names, marks = \\\n    feat[inds], ids[inds], cams[inds], im_names[inds], marks[inds]\n\n  # query, gallery index mask\n  is_q = marks == 0\n  is_g = marks == 1\n\n  prng = 
np.random.RandomState(1)\n  # selected query indices\n  sel_q_inds = prng.permutation(range(np.sum(is_q)))[:cfg.num_queries]\n\n  q_ids = ids[is_q][sel_q_inds]\n  q_cams = cams[is_q][sel_q_inds]\n  q_feat = feat[is_q][sel_q_inds]\n  q_im_names = im_names[is_q][sel_q_inds]\n\n  ####################\n  # Compute Distance #\n  ####################\n\n  # query-gallery distance\n  q_g_dist = compute_dist(q_feat, feat[is_g], type='euclidean')\n\n  ###########################\n  # Save Rank List as Image #\n  ###########################\n\n  q_im_paths = [ospj(test_set.im_dir, n) for n in q_im_names]\n  save_paths = [ospj(cfg.exp_dir, 'rank_lists', n) for n in q_im_names]\n  g_im_paths = [ospj(test_set.im_dir, n) for n in im_names[is_g]]\n\n  for dist_vec, q_id, q_cam, q_im_path, save_path in zip(\n      q_g_dist, q_ids, q_cams, q_im_paths, save_paths):\n\n    rank_list, same_id = get_rank_list(\n      dist_vec, q_id, q_cam, ids[is_g], cams[is_g], cfg.rank_list_size)\n\n    save_rank_list_to_im(rank_list, same_id, q_im_path, g_im_paths, save_path)\n\n\nif __name__ == '__main__':\n  main()\n"
  }
]