master 1686e889eb01 cached
28 files
136.5 KB
37.4k tokens
162 symbols
1 requests
Download .txt
Repository: huanghoujing/beyond-part-models
Branch: master
Commit: 1686e889eb01
Files: 28
Total size: 136.5 KB

Directory structure:
gitextract_1xyz5y1s/

├── .gitignore
├── README.md
├── bpm/
│   ├── __init__.py
│   ├── dataset/
│   │   ├── Dataset.py
│   │   ├── PreProcessImage.py
│   │   ├── Prefetcher.py
│   │   ├── TestSet.py
│   │   ├── TrainSet.py
│   │   └── __init__.py
│   ├── model/
│   │   ├── PCBModel.py
│   │   ├── __init__.py
│   │   └── resnet.py
│   └── utils/
│       ├── __init__.py
│       ├── dataset_utils.py
│       ├── distance.py
│       ├── metric.py
│       ├── re_ranking.py
│       ├── utils.py
│       └── visualization.py
├── requirements.txt
└── script/
    ├── dataset/
    │   ├── combine_trainval_sets.py
    │   ├── mapping_im_names_duke.py
    │   ├── mapping_im_names_market1501.py
    │   ├── transform_cuhk03.py
    │   ├── transform_duke.py
    │   └── transform_market1501.py
    └── experiment/
        ├── train_pcb.py
        └── visualize_rank_list.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.idea/
exp/
tmp/
*.pyc

================================================
FILE: README.md
================================================
# Beyond Part Models: Person Retrieval with Refined Part Pooling

**Related Projects:** [Strong Triplet Loss Baseline](https://github.com/huanghoujing/person-reid-triplet-loss-baseline)

This project implements PCB (Part-based Convolutional Baseline) of paper [Beyond Part Models: Person Retrieval with Refined Part Pooling](https://arxiv.org/abs/1711.09349) using [pytorch](https://github.com/pytorch/pytorch).


# Current Results

The reproduced PCB is as follows. 
- `(Shared 1x1 Conv)` and `(Independent 1x1 Conv)` means the last 1x1 conv layers for stripes are shared or independent, respectively;
- `(Paper)` means the scores reported in the paper; 
- `R.R.` means using re-ranking.


|                                   | Rank-1 (%) | mAP (%) | R.R. Rank-1 (%) | R.R. mAP (%) |
| ---                               | :---: | :---: | :---: | :---: |
| Market1501 (Shared 1x1 Conv)      | 90.86 | 73.25 | 92.58 | 88.02 |
| Market1501 (Independent 1x1 Conv) | 92.87 | 78.54 | 93.94 | 90.17 |
| Market1501 (Paper)                | 92.40 | 77.30 | -     | -     |
| | | | | |
| Duke (Shared 1x1 Conv)            | 82.00 | 64.88 | 86.40 | 81.77 |
| Duke (Independent 1x1 Conv)       | 84.47 | 69.94 | 88.78 | 84.73 |
| Duke (Paper)                      | 81.90 | 65.30 | -     | -     |
| | | | | |
| CUHK03 (Shared 1x1 Conv)          | 47.29 | 42.05 | 56.50 | 57.91 |
| CUHK03 (Independent 1x1 Conv)     | 59.14 | 53.93 | 69.07 | 70.17 |
| CUHK03 (Paper)                    | 61.30 | 54.20 | -     | -     |

We can see that independent 1x1 conv layers for different stripes are critical for the performance. The performance on CUHK03 is still worse than the paper, while those on Market1501 and Duke are better.


# Resources

This repository contains following resources

- A beginner-level dataset interface independent of Pytorch, Tensorflow, etc, supporting multi-thread prefetching (README file is under way)
- Three most used ReID datasets, Market1501, CUHK03 (new protocol) and DukeMTMC-reID
- Python version ReID evaluation code (Originally from [open-reid](https://github.com/Cysu/open-reid))
- Python version Re-ranking (Originally from [re_ranking](https://github.com/zhunzhong07/person-re-ranking/blob/master/python-version/re_ranking))
- PCB (Part-based Convolutional Baseline, performance stays tuned)


# Installation

It's recommended that you create and enter a python virtual environment, if versions of the packages required here conflict with yours.

I use Python 2.7 and Pytorch 0.3. For installing Pytorch, follow the [official guide](http://pytorch.org/). Other packages are specified in `requirements.txt`.

```bash
pip install -r requirements.txt
```

Then clone the repository:

```bash
git clone https://github.com/huanghoujing/beyond-part-models.git
cd beyond-part-models
```


# Dataset Preparation

Inspired by Tong Xiao's [open-reid](https://github.com/Cysu/open-reid) project, dataset directories are refactored to support a unified dataset interface.

Transformed dataset has following features
- All used images, including training and testing images, are inside the same folder named `images`
- Images are renamed, with the name mapping from original images to new ones provided in a file named `ori_to_new_im_name.pkl`. The mapping may be needed in some cases.
- The train/val/test partitions are recorded in a file named `partitions.pkl` which is a dict with the following keys
  - `'trainval_im_names'`
  - `'trainval_ids2labels'`
  - `'train_im_names'`
  - `'train_ids2labels'`
  - `'val_im_names'`
  - `'val_marks'`
  - `'test_im_names'`
  - `'test_marks'`
- Validation set consists of 100 persons (configurable during transforming dataset) unseen in training set, and validation follows the same ranking protocol of testing.
- Each val or test image is accompanied by a mark denoting whether it is from
  - query (`mark == 0`), or
  - gallery (`mark == 1`), or
  - multi query (`mark == 2`) set

## Market1501

You can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1CaWH7_csm9aDyTVgjs7_3dlZIWqoBlv4) or [BaiduYun](https://pan.baidu.com/s/1nvOhpot). Otherwise, you can download the original dataset and transform it using my script, described below.

Download the Market1501 dataset from [here](http://www.liangzheng.org/Project/project_reid.html). Run the following script to transform the dataset, replacing the paths with yours.

```bash
python script/dataset/transform_market1501.py \
--zip_file ~/Dataset/market1501/Market-1501-v15.09.15.zip \
--save_dir ~/Dataset/market1501
```

## CUHK03

We follow the new training/testing protocol proposed in paper
```
@article{zhong2017re,
  title={Re-ranking Person Re-identification with k-reciprocal Encoding},
  author={Zhong, Zhun and Zheng, Liang and Cao, Donglin and Li, Shaozi},
  booktitle={CVPR},
  year={2017}
}
```
Details of the new protocol can be found [here](https://github.com/zhunzhong07/person-re-ranking).

You can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1Ssp9r4g8UbGveX-9JvHmjpcesvw90xIF) or [BaiduYun](https://pan.baidu.com/s/1hsB0pIc). Otherwise, you can download the original dataset and transform it using my script, described below.

Download the CUHK03 dataset from [here](http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html). Then download the training/testing partition file from [Google Drive](https://drive.google.com/open?id=14lEiUlQDdsoroo8XJvQ3nLZDIDeEizlP) or [BaiduYun](https://pan.baidu.com/s/1miuxl3q). This partition file specifies which images are in training, query or gallery set. Finally run the following script to transform the dataset, replacing the paths with yours.

```bash
python script/dataset/transform_cuhk03.py \
--zip_file ~/Dataset/cuhk03/cuhk03_release.zip \
--train_test_partition_file ~/Dataset/cuhk03/re_ranking_train_test_split.pkl \
--save_dir ~/Dataset/cuhk03
```


## DukeMTMC-reID

You can download what I have transformed for the project from [Google Drive](https://drive.google.com/open?id=1P9Jr0en0HBu_cZ7txrb2ZA_dI36wzXbS) or [BaiduYun](https://pan.baidu.com/s/1miIdEek). Otherwise, you can download the original dataset and transform it using my script, described below.

Download the DukeMTMC-reID dataset from [here](https://github.com/layumi/DukeMTMC-reID_evaluation). Run the following script to transform the dataset, replacing the paths with yours.

```bash
python script/dataset/transform_duke.py \
--zip_file ~/Dataset/duke/DukeMTMC-reID.zip \
--save_dir ~/Dataset/duke
```


## Combining Trainval Set of Market1501, CUHK03, DukeMTMC-reID

Larger training set tends to benefit deep learning models, so I combine trainval set of three datasets Market1501, CUHK03 and DukeMTMC-reID. After training on the combined trainval set, the model can be tested on three test sets as usual.

Transform three separate datasets as introduced above if you have not done it.

For the trainval set, you can download what I have transformed from [Google Drive](https://drive.google.com/open?id=1hmZIRkaLvLb_lA1CcC4uGxmA4ppxPinj) or [BaiduYun](https://pan.baidu.com/s/1jIvNYPg). Otherwise, you can run the following script to combine the trainval sets, replacing the paths with yours.

```bash
python script/dataset/combine_trainval_sets.py \
--market1501_im_dir ~/Dataset/market1501/images \
--market1501_partition_file ~/Dataset/market1501/partitions.pkl \
--cuhk03_im_dir ~/Dataset/cuhk03/detected/images \
--cuhk03_partition_file ~/Dataset/cuhk03/detected/partitions.pkl \
--duke_im_dir ~/Dataset/duke/images \
--duke_partition_file ~/Dataset/duke/partitions.pkl \
--save_dir ~/Dataset/market1501_cuhk03_duke
```

## Configure Dataset Path

The project requires you to configure the dataset paths. In `bpm/dataset/__init__.py`, modify the following snippet according to your saving paths used in preparing datasets.

```python
# In file bpm/dataset/__init__.py

########################################
# Specify Directory and Partition File #
########################################

if name == 'market1501':
  im_dir = ospeu('~/Dataset/market1501/images')
  partition_file = ospeu('~/Dataset/market1501/partitions.pkl')

elif name == 'cuhk03':
  im_type = ['detected', 'labeled'][0]
  im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))
  partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))

elif name == 'duke':
  im_dir = ospeu('~/Dataset/duke/images')
  partition_file = ospeu('~/Dataset/duke/partitions.pkl')

elif name == 'combined':
  assert part in ['trainval'], \
    "Only trainval part of the combined dataset is available now."
  im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
  partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')
```

## Evaluation Protocol

Datasets used in this project all follow the standard evaluation protocol of Market1501, using CMC and mAP metric. According to [open-reid](https://github.com/Cysu/open-reid), the setting of CMC is as follows

```python
# In file bpm/dataset/__init__.py

cmc_kwargs = dict(separate_camera_set=False,
                  single_gallery_shot=False,
                  first_match_break=True)
```

To play with [different CMC options](https://cysu.github.io/open-reid/notes/evaluation_metrics.html), you can [modify it accordingly](https://github.com/Cysu/open-reid/blob/3293ca79a07ebee7f995ce647aafa7df755207b8/reid/evaluators.py#L85-L95).

```python
# In open-reid's reid/evaluators.py

# Compute all kinds of CMC scores
cmc_configs = {
  'allshots': dict(separate_camera_set=False,
                   single_gallery_shot=False,
                   first_match_break=False),
  'cuhk03': dict(separate_camera_set=True,
                 single_gallery_shot=True,
                 first_match_break=False),
  'market1501': dict(separate_camera_set=False,
                     single_gallery_shot=False,
                     first_match_break=True)}
```


# Examples


## Test PCB

My training log and saved model weights (trained with independent 1x1 conv) for three datasets can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1G3mLsI1g8ZZkHyol6d3yHpygZeFsENqO?usp=sharing) or [BaiduYun](https://pan.baidu.com/s/1zfjeiePvr1TlBtu7yGovlQ).

Specify
- a dataset name (one of `market1501`, `cuhk03`, `duke`)
- an experiment directory for saving testing log
- the path of the downloaded `model_weight.pth`

in the following command and run it.

```bash
python script/experiment/train_pcb.py \
-d '(0,)' \
--only_test true \
--dataset DATASET_NAME \
--exp_dir EXPERIMENT_DIRECTORY \
--model_weight_file THE_DOWNLOADED_MODEL_WEIGHT_FILE
```

## Train PCB

You can also train it by yourself. The following command performs training, validation and finally testing automatically.

Specify
- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)
- training on `trainval` set or `train` set (for tuning parameters)
- an experiment directory for saving training log

in the following command and run it.

```bash
python script/experiment/train_pcb.py \
-d '(0,)' \
--only_test false \
--dataset DATASET_NAME \
--trainset_part TRAINVAL_OR_TRAIN \
--exp_dir EXPERIMENT_DIRECTORY \
--steps_per_log 20 \
--epochs_per_val 1
```

### Log

During training, you can run the [TensorBoard](https://github.com/lanpa/tensorboard-pytorch) and access port `6006` to watch the loss curves etc. E.g.

```bash
# Modify the path for `--logdir` accordingly.
tensorboard --logdir YOUR_EXPERIMENT_DIRECTORY/tensorboard
```

For more usage of TensorBoard, see the website and the help:

```bash
tensorboard --help
```


## Visualize Ranking List

Specify
- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)
- either `model_weight_file` (the downloaded `model_weight.pth`) OR `ckpt_file` (saved `ckpt.pth` during training)
- an experiment directory for saving images and log

in the following command and run it.

```bash
python script/experiment/visualize_rank_list.py \
-d '(0,)' \
--num_queries 16 \
--rank_list_size 10 \
--dataset DATASET_NAME \
--exp_dir EXPERIMENT_DIRECTORY \
--model_weight_file '' \
--ckpt_file ''
```

Each query image and its ranking list would be saved to an image in directory `EXPERIMENT_DIRECTORY/rank_lists`. As shown in following examples, green boundary is added to true positive, and red to false positve.

![](example_rank_lists_on_Market1501/00000156_0003_00000009.jpg)

![](example_rank_lists_on_Market1501/00000305_0001_00000001.jpg)

![](example_rank_lists_on_Market1501/00000492_0005_00000001.jpg)

![](example_rank_lists_on_Market1501/00000881_0002_00000006.jpg)


# Time and Space Consumption


Test with CentOS 7, Intel(R) Xeon(R) CPU E5-2618L v3 @ 2.30GHz, GeForce GTX TITAN X.

**Note that the following time consumption is not gauranteed across machines, especially when the system is busy.**

### GPU Consumption in Training

For following settings

- ResNet-50 `stride=1` in last block
- `batch_size = 64`
- image size `h x w = 384 x 128`

it occupies ~11000MB GPU memory.

If not having a 12 GB GPU, you can decrease `batch_size` or use multiple GPUs.


### Training Time

Taking Market1501 as an example, it contains `31969` training images; each epoch takes ~205s; training for 60 epochs takes ~3.5 hours.

### Testing Time

Taking Market1501 as an example
- With `images_per_batch = 32`, extracting feature of whole test set (12936 images) takes ~160s.
- Computing query-gallery global distance, the result is a `3368 x 15913` matrix, ~2s
- Computing CMC and mAP scores, ~15s
- Re-ranking requires computing query-query distance (a `3368 x 3368` matrix) and gallery-gallery distance (a `15913 x 15913` matrix, most time-consuming), ~90s


# References & Credits

- [Beyond Part Models: Person Retrieval with Refined Part Pooling](https://arxiv.org/abs/1711.09349)
- [open-reid](https://github.com/Cysu/open-reid)
- [Re-ranking Person Re-identification with k-reciprocal Encoding](https://github.com/zhunzhong07/person-re-ranking)
- [Market1501](http://www.liangzheng.org/Project/project_reid.html)
- [CUHK03](http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html)
- [DukeMTMC-reID](https://github.com/layumi/DukeMTMC-reID_evaluation)


================================================
FILE: bpm/__init__.py
================================================


================================================
FILE: bpm/dataset/Dataset.py
================================================
from .PreProcessImage import PreProcessIm
from .Prefetcher import Prefetcher
import numpy as np


class Dataset(object):
  """The core elements of a dataset.    
  Args:
    final_batch: bool. The last batch may not be complete, if to abandon this 
      batch, set 'final_batch' to False.
  """

  def __init__(
      self,
      dataset_size=None,
      batch_size=None,
      final_batch=True,
      shuffle=True,
      num_prefetch_threads=1,
      prng=np.random,
      **pre_process_im_kwargs):

    self.pre_process_im = PreProcessIm(
      prng=prng,
      **pre_process_im_kwargs)

    self.prefetcher = Prefetcher(
      self.get_sample,
      dataset_size,
      batch_size,
      final_batch=final_batch,
      num_threads=num_prefetch_threads)

    self.shuffle = shuffle
    self.epoch_done = True
    self.prng = prng

  def set_mirror_type(self, mirror_type):
    self.pre_process_im.set_mirror_type(mirror_type)

  def get_sample(self, ptr):
    """Get one sample to put to queue."""
    raise NotImplementedError

  def next_batch(self):
    """Get a batch from the queue."""
    raise NotImplementedError

  def set_batch_size(self, batch_size):
    """You can change batch size, had better at the beginning of a new epoch.
    """
    self.prefetcher.set_batch_size(batch_size)
    self.epoch_done = True

  def stop_prefetching_threads(self):
    """This can be called to stop threads, e.g. after finishing using the 
    dataset, or when existing the python main program."""
    self.prefetcher.stop()


================================================
FILE: bpm/dataset/PreProcessImage.py
================================================
import numpy as np
import cv2


class PreProcessIm(object):
  def __init__(
      self,
      crop_prob=0,
      crop_ratio=1.0,
      resize_h_w=None,
      scale=True,
      im_mean=None,
      im_std=None,
      mirror_type=None,
      batch_dims='NCHW',
      prng=np.random):
    """
    Args:
      crop_prob: the probability of each image to go through cropping
      crop_ratio: a float. If == 1.0, no cropping.
      resize_h_w: (height, width) after resizing. If `None`, no resizing.
      scale: whether to scale the pixel value by 1/255
      im_mean: (Optionally) subtracting image mean; `None` or a tuple or list or
        numpy array with shape [3]
      im_std: (Optionally) divided by image std; `None` or a tuple or list or
        numpy array with shape [3]. Dividing is applied only when subtracting
        mean is applied.
      mirror_type: How image should be mirrored; one of
        [None, 'random', 'always']
      batch_dims: either 'NCHW' or 'NHWC'. 'N': batch size, 'C': num channels,
        'H': im height, 'W': im width. PyTorch uses 'NCHW', while TensorFlow
        uses 'NHWC'.
      prng: can be set to a numpy.random.RandomState object, in order to have
        random seed independent from the global one
    """
    self.crop_prob = crop_prob
    self.crop_ratio = crop_ratio
    self.resize_h_w = resize_h_w
    self.scale = scale
    self.im_mean = im_mean
    self.im_std = im_std
    self.check_mirror_type(mirror_type)
    self.mirror_type = mirror_type
    self.check_batch_dims(batch_dims)
    self.batch_dims = batch_dims
    self.prng = prng

  def __call__(self, im):
    return self.pre_process_im(im)

  @staticmethod
  def check_mirror_type(mirror_type):
    assert mirror_type in [None, 'random', 'always']

  @staticmethod
  def check_batch_dims(batch_dims):
    # 'N': batch size, 'C': num channels, 'H': im height, 'W': im width
    # PyTorch uses 'NCHW', while TensorFlow uses 'NHWC'.
    assert batch_dims in ['NCHW', 'NHWC']

  def set_mirror_type(self, mirror_type):
    self.check_mirror_type(mirror_type)
    self.mirror_type = mirror_type

  @staticmethod
  def rand_crop_im(im, new_size, prng=np.random):
    """Crop `im` to `new_size`: [new_w, new_h]."""
    if (new_size[0] == im.shape[1]) and (new_size[1] == im.shape[0]):
      return im
    h_start = prng.randint(0, im.shape[0] - new_size[1])
    w_start = prng.randint(0, im.shape[1] - new_size[0])
    im = np.copy(
      im[h_start: h_start + new_size[1], w_start: w_start + new_size[0], :])
    return im

  def pre_process_im(self, im):
    """Pre-process image.
    `im` is a numpy array with shape [H, W, 3], e.g. the result of
    matplotlib.pyplot.imread(some_im_path), or
    numpy.asarray(PIL.Image.open(some_im_path))."""

    # Randomly crop a sub-image.
    if ((self.crop_ratio < 1)
        and (self.crop_prob > 0)
        and (self.prng.uniform() < self.crop_prob)):
      h_ratio = self.prng.uniform(self.crop_ratio, 1)
      w_ratio = self.prng.uniform(self.crop_ratio, 1)
      crop_h = int(im.shape[0] * h_ratio)
      crop_w = int(im.shape[1] * w_ratio)
      im = self.rand_crop_im(im, (crop_w, crop_h), prng=self.prng)

    # Resize.
    if (self.resize_h_w is not None) \
        and (self.resize_h_w != (im.shape[0], im.shape[1])):
      im = cv2.resize(im, self.resize_h_w[::-1], interpolation=cv2.INTER_LINEAR)

    # scaled by 1/255.
    if self.scale:
      im = im / 255.

    # Subtract mean and scaled by std
    # im -= np.array(self.im_mean) # This causes an error:
    # Cannot cast ufunc subtract output from dtype('float64') to
    # dtype('uint8') with casting rule 'same_kind'
    if self.im_mean is not None:
      im = im - np.array(self.im_mean)
    if self.im_mean is not None and self.im_std is not None:
      im = im / np.array(self.im_std).astype(float)

    # May mirror image.
    mirrored = False
    if self.mirror_type == 'always' \
        or (self.mirror_type == 'random' and self.prng.uniform() > 0.5):
      im = im[:, ::-1, :]
      mirrored = True

    # The original image has dims 'HWC', transform it to 'CHW'.
    if self.batch_dims == 'NCHW':
      im = im.transpose(2, 0, 1)

    return im, mirrored

================================================
FILE: bpm/dataset/Prefetcher.py
================================================
import threading
import Queue
import time


class Counter(object):
  """A thread safe counter."""

  def __init__(self, val=0, max_val=0):
    self._value = val
    self.max_value = max_val
    self._lock = threading.Lock()

  def reset(self):
    with self._lock:
      self._value = 0

  def set_max_value(self, max_val):
    self.max_value = max_val

  def increment(self):
    with self._lock:
      if self._value < self.max_value:
        self._value += 1
        incremented = True
      else:
        incremented = False
      return incremented, self._value

  def get_value(self):
    with self._lock:
      return self._value


class Enqueuer(object):
  def __init__(self, get_element, num_elements, num_threads=1, queue_size=20):
    """
    Args:
      get_element: a function that takes a pointer and returns an element
      num_elements: total number of elements to put into the queue
      num_threads: num of parallel threads, >= 1
      queue_size: the maximum size of the queue. Set to some positive integer
        to save memory, otherwise, set to 0.
    """
    self.get_element = get_element
    assert num_threads > 0
    self.num_threads = num_threads
    self.queue_size = queue_size
    self.queue = Queue.Queue(maxsize=queue_size)
    # The pointer shared by threads.
    self.ptr = Counter(max_val=num_elements)
    # The event to wake up threads, it's set at the beginning of an epoch.
    # It's cleared after an epoch is enqueued or when the states are reset.
    self.event = threading.Event()
    # To reset states.
    self.reset_event = threading.Event()
    # The event to terminate the threads.
    self.stop_event = threading.Event()
    self.threads = []
    for _ in range(num_threads):
      thread = threading.Thread(target=self.enqueue)
      # Set the thread in daemon mode, so that the main program ends normally.
      thread.daemon = True
      thread.start()
      self.threads.append(thread)

  def start_ep(self):
    """Start enqueuing an epoch."""
    self.event.set()

  def end_ep(self):
    """When all elements are enqueued, let threads sleep to save resources."""
    self.event.clear()
    self.ptr.reset()

  def reset(self):
    """Reset the threads, pointer and the queue to initial states. In common
    case, this will not be called."""
    self.reset_event.set()
    self.event.clear()
    # wait for threads to pause. This is not an absolutely safe way. The safer
    # way is to check some flag inside a thread, not implemented yet.
    time.sleep(5)
    self.reset_event.clear()
    self.ptr.reset()
    self.queue = Queue.Queue(maxsize=self.queue_size)

  def set_num_elements(self, num_elements):
    """Reset the max number of elements."""
    self.reset()
    self.ptr.set_max_value(num_elements)

  def stop(self):
    """Wait for threads to terminate."""
    self.stop_event.set()
    for thread in self.threads:
      thread.join()

  def enqueue(self):
    while not self.stop_event.isSet():
      # If the enqueuing event is not set, the thread just waits.
      if not self.event.wait(0.5): continue
      # Increment the counter to claim that this element has been enqueued by
      # this thread.
      incremented, ptr = self.ptr.increment()
      if incremented:
        element = self.get_element(ptr - 1)
        # When enqueuing, keep an eye on the stop and reset signal.
        while not self.stop_event.isSet() and not self.reset_event.isSet():
          try:
            # This operation will wait at most `timeout` for a free slot in
            # the queue to be available.
            self.queue.put(element, timeout=0.5)
            break
          except:
            pass
      else:
        self.end_ep()
    print('Exiting thread {}!!!!!!!!'.format(threading.current_thread().name))


class Prefetcher(object):
  """This helper class enables sample enqueuing and batch dequeuing, to speed
  up batch fetching. It abstracts away the enqueuing and dequeuing logic."""

  def __init__(self, get_sample, dataset_size, batch_size, final_batch=True,
               num_threads=1, prefetch_size=200):
    """
    Args:
      get_sample: a function that takes a pointer (index) and returns a sample
      dataset_size: total number of samples in the dataset
      final_batch: True or False, whether to keep or drop the final incomplete
        batch
      num_threads: num of parallel threads, >= 1
      prefetch_size: the maximum size of the queue. Set to some positive integer
        to save memory, otherwise, set to 0.
    """
    self.full_dataset_size = dataset_size
    self.final_batch = final_batch
    final_sz = self.full_dataset_size % batch_size
    if not final_batch:
      dataset_size = self.full_dataset_size - final_sz
    self.dataset_size = dataset_size
    self.batch_size = batch_size
    self.enqueuer = Enqueuer(get_element=get_sample, num_elements=dataset_size,
                             num_threads=num_threads, queue_size=prefetch_size)
    # The pointer indicating whether an epoch has been fetched from the queue
    self.ptr = 0
    self.ep_done = True

  def set_batch_size(self, batch_size):
    """You had better change batch size at the beginning of a new epoch."""
    final_sz = self.full_dataset_size % batch_size
    if not self.final_batch:
      self.dataset_size = self.full_dataset_size - final_sz
    self.enqueuer.set_num_elements(self.dataset_size)
    self.batch_size = batch_size
    self.ep_done = True

  def next_batch(self):
    """Return a batch of samples, meanwhile indicate whether the epoch is
    done. The purpose of this func is mainly to abstract away the loop and the
    boundary-checking logic.
    Returns:
      samples: a list of samples
      done: bool, whether the epoch is done
    """
    # Start enqueuing and other preparation at the beginning of an epoch.
    if self.ep_done:
      self.start_ep_prefetching()
    # Whether an epoch is done.
    self.ep_done = False
    samples = []
    for _ in range(self.batch_size):
      # Indeed, `>` will not occur.
      if self.ptr >= self.dataset_size:
        self.ep_done = True
        break
      else:
        self.ptr += 1
        sample = self.enqueuer.queue.get()
        # print('queue size {}'.format(self.enqueuer.queue.qsize()))
        samples.append(sample)
    # print 'queue size: {}'.format(self.enqueuer.queue.qsize())
    # Indeed, `>` will not occur.
    if self.ptr >= self.dataset_size:
      self.ep_done = True
    return samples, self.ep_done

  def start_ep_prefetching(self):
    """
    NOTE: Has to be called at the start of every epoch.
    """
    self.enqueuer.start_ep()
    self.ptr = 0

  def stop(self):
    """This can be called to stop threads, e.g. after finishing using the
    dataset, or when existing the python main program."""
    self.enqueuer.stop()

================================================
FILE: bpm/dataset/TestSet.py
================================================
from __future__ import print_function
import sys
import time
import os.path as osp
from PIL import Image
import numpy as np
from collections import defaultdict

from .Dataset import Dataset

from ..utils.utils import measure_time
from ..utils.re_ranking import re_ranking
from ..utils.metric import cmc, mean_ap
from ..utils.dataset_utils import parse_im_name
from ..utils.distance import normalize
from ..utils.distance import compute_dist


class TestSet(Dataset):
  """
  Args:
    extract_feat_func: a function to extract features. It takes a batch of
      images and returns a batch of features.
    marks: a list, each element e denoting whether the image is from 
      query (e == 0), or
      gallery (e == 1), or 
      multi query (e == 2) set
  """

  def __init__(
      self,
      im_dir=None,
      im_names=None,
      marks=None,
      extract_feat_func=None,
      separate_camera_set=None,
      single_gallery_shot=None,
      first_match_break=None,
      **kwargs):

    super(TestSet, self).__init__(dataset_size=len(im_names), **kwargs)

    # The im dir of all images
    self.im_dir = im_dir
    self.im_names = im_names
    self.marks = marks
    self.extract_feat_func = extract_feat_func
    self.separate_camera_set = separate_camera_set
    self.single_gallery_shot = single_gallery_shot
    self.first_match_break = first_match_break

  def set_feat_func(self, extract_feat_func):
    self.extract_feat_func = extract_feat_func

  def get_sample(self, ptr):
    im_name = self.im_names[ptr]
    im_path = osp.join(self.im_dir, im_name)
    im = np.asarray(Image.open(im_path))
    im, _ = self.pre_process_im(im)
    id = parse_im_name(self.im_names[ptr], 'id')
    cam = parse_im_name(self.im_names[ptr], 'cam')
    # denoting whether the im is from query, gallery, or multi query set
    mark = self.marks[ptr]
    return im, id, cam, im_name, mark

  def next_batch(self):
    if self.epoch_done and self.shuffle:
      self.prng.shuffle(self.im_names)
    samples, self.epoch_done = self.prefetcher.next_batch()
    im_list, ids, cams, im_names, marks = zip(*samples)
    # Transform the list into a numpy array with shape [N, ...]
    ims = np.stack(im_list, axis=0)
    ids = np.array(ids)
    cams = np.array(cams)
    im_names = np.array(im_names)
    marks = np.array(marks)
    return ims, ids, cams, im_names, marks, self.epoch_done

  def extract_feat(self, normalize_feat, verbose=True):
    """Extract the features of the whole image set.
    Args:
      normalize_feat: True or False, whether to normalize feature to unit length
      verbose: whether to print the progress of extracting feature
    Returns:
      feat: numpy array with shape [N, C]
      ids: numpy array with shape [N]
      cams: numpy array with shape [N]
      im_names: numpy array with shape [N]
      marks: numpy array with shape [N]
    """
    feat, ids, cams, im_names, marks = [], [], [], [], []
    done = False
    step = 0
    printed = False
    st = time.time()
    last_time = time.time()
    while not done:
      ims_, ids_, cams_, im_names_, marks_, done = self.next_batch()
      feat_ = self.extract_feat_func(ims_)
      feat.append(feat_)
      ids.append(ids_)
      cams.append(cams_)
      im_names.append(im_names_)
      marks.append(marks_)

      if verbose:
        # Print the progress of extracting feature
        total_batches = (self.prefetcher.dataset_size
                         // self.prefetcher.batch_size + 1)
        step += 1
        if step % 20 == 0:
          if not printed:
            printed = True
          else:
            # Clean the current line
            sys.stdout.write("\033[F\033[K")
          print('{}/{} batches done, +{:.2f}s, total {:.2f}s'
                .format(step, total_batches,
                        time.time() - last_time, time.time() - st))
          last_time = time.time()

    feat = np.vstack(feat)
    ids = np.hstack(ids)
    cams = np.hstack(cams)
    im_names = np.hstack(im_names)
    marks = np.hstack(marks)
    if normalize_feat:
      feat = normalize(feat, axis=1)
    return feat, ids, cams, im_names, marks

  def eval(
      self,
      normalize_feat=True,
      to_re_rank=True,
      pool_type='average',
      verbose=True):

    """Evaluate using metric CMC and mAP.
    Args:
      normalize_feat: whether to normalize features before computing distance
      to_re_rank: whether to also report re-ranking scores
      pool_type: 'average' or 'max', only for multi-query case
      verbose: whether to print the intermediate information
    """

    with measure_time('Extracting feature...', verbose=verbose):
      feat, ids, cams, im_names, marks = self.extract_feat(
        normalize_feat, verbose)

    # query, gallery, multi-query indices
    q_inds = marks == 0
    g_inds = marks == 1
    mq_inds = marks == 2

    # A helper function just for avoiding code duplication.
    def compute_score(
        dist_mat,
        query_ids=ids[q_inds],
        gallery_ids=ids[g_inds],
        query_cams=cams[q_inds],
        gallery_cams=cams[g_inds]):
      # Compute mean AP
      mAP = mean_ap(
        distmat=dist_mat,
        query_ids=query_ids, gallery_ids=gallery_ids,
        query_cams=query_cams, gallery_cams=gallery_cams)
      # Compute CMC scores
      cmc_scores = cmc(
        distmat=dist_mat,
        query_ids=query_ids, gallery_ids=gallery_ids,
        query_cams=query_cams, gallery_cams=gallery_cams,
        separate_camera_set=self.separate_camera_set,
        single_gallery_shot=self.single_gallery_shot,
        first_match_break=self.first_match_break,
        topk=10)
      return mAP, cmc_scores

    def print_scores(mAP, cmc_scores):
      print('[mAP: {:5.2%}], [cmc1: {:5.2%}], [cmc5: {:5.2%}], [cmc10: {:5.2%}]'
            .format(mAP, *cmc_scores[[0, 4, 9]]))

    ################
    # Single Query #
    ################

    with measure_time('Computing distance...', verbose=verbose):
      # query-gallery distance
      q_g_dist = compute_dist(feat[q_inds], feat[g_inds], type='euclidean')

    with measure_time('Computing scores...', verbose=verbose):
      mAP, cmc_scores = compute_score(q_g_dist)

    print('{:<30}'.format('Single Query:'), end='')
    print_scores(mAP, cmc_scores)

    ###############
    # Multi Query #
    ###############

    mq_mAP, mq_cmc_scores = None, None
    if any(mq_inds):
      mq_ids = ids[mq_inds]
      mq_cams = cams[mq_inds]
      mq_feat = feat[mq_inds]
      unique_mq_ids_cams = defaultdict(list)
      for ind, (id, cam) in enumerate(zip(mq_ids, mq_cams)):
        unique_mq_ids_cams[(id, cam)].append(ind)
      keys = unique_mq_ids_cams.keys()
      assert pool_type in ['average', 'max']
      pool = np.mean if pool_type == 'average' else np.max
      mq_feat = np.stack([pool(mq_feat[unique_mq_ids_cams[k]], axis=0)
                          for k in keys])

      with measure_time('Multi Query, Computing distance...', verbose=verbose):
        # multi_query-gallery distance
        mq_g_dist = compute_dist(mq_feat, feat[g_inds], type='euclidean')

      with measure_time('Multi Query, Computing scores...', verbose=verbose):
        mq_mAP, mq_cmc_scores = compute_score(
          mq_g_dist,
          query_ids=np.array(zip(*keys)[0]),
          gallery_ids=ids[g_inds],
          query_cams=np.array(zip(*keys)[1]),
          gallery_cams=cams[g_inds]
        )

      print('{:<30}'.format('Multi Query:'), end='')
      print_scores(mq_mAP, mq_cmc_scores)

    if to_re_rank:

      ##########################
      # Re-ranked Single Query #
      ##########################

      with measure_time('Re-ranking distance...', verbose=verbose):
        # query-query distance
        q_q_dist = compute_dist(feat[q_inds], feat[q_inds], type='euclidean')
        # gallery-gallery distance
        g_g_dist = compute_dist(feat[g_inds], feat[g_inds], type='euclidean')
        # re-ranked query-gallery distance
        re_r_q_g_dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)

      with measure_time('Computing scores for re-ranked distance...',
                        verbose=verbose):
        mAP, cmc_scores = compute_score(re_r_q_g_dist)

      print('{:<30}'.format('Re-ranked Single Query:'), end='')
      print_scores(mAP, cmc_scores)

      #########################
      # Re-ranked Multi Query #
      #########################

      if any(mq_inds):
        with measure_time('Multi Query, Re-ranking distance...',
                          verbose=verbose):
          # multi_query-multi_query distance
          mq_mq_dist = compute_dist(mq_feat, mq_feat, type='euclidean')
          # re-ranked multi_query-gallery distance
          re_r_mq_g_dist = re_ranking(mq_g_dist, mq_mq_dist, g_g_dist)

        with measure_time(
            'Multi Query, Computing scores for re-ranked distance...',
            verbose=verbose):
          mq_mAP, mq_cmc_scores = compute_score(
            re_r_mq_g_dist,
            query_ids=np.array(zip(*keys)[0]),
            gallery_ids=ids[g_inds],
            query_cams=np.array(zip(*keys)[1]),
            gallery_cams=cams[g_inds]
          )

        print('{:<30}'.format('Re-ranked Multi Query:'), end='')
        print_scores(mq_mAP, mq_cmc_scores)

    return mAP, cmc_scores, mq_mAP, mq_cmc_scores


================================================
FILE: bpm/dataset/TrainSet.py
================================================
from .Dataset import Dataset
from ..utils.dataset_utils import parse_im_name

import os.path as osp
from PIL import Image
import numpy as np


class TrainSet(Dataset):
  """Training set for identification loss.
  Args:
    ids2labels: a dict mapping ids to labels
  """
  def __init__(self,
               im_dir=None,
               im_names=None,
               ids2labels=None,
               **kwargs):
    super(TrainSet, self).__init__(dataset_size=len(im_names), **kwargs)
    # The im dir of all images
    self.im_dir = im_dir
    self.im_names = im_names
    self.ids2labels = ids2labels

  def get_sample(self, ptr):
    """Get one sample to put to queue."""
    im_name = self.im_names[ptr]
    im_path = osp.join(self.im_dir, im_name)
    im = np.asarray(Image.open(im_path))
    im, mirrored = self.pre_process_im(im)
    id = parse_im_name(im_name, 'id')
    label = self.ids2labels[id]
    return im, im_name, label, mirrored

  def next_batch(self):
    """Next batch of images and labels.
    Returns:
      ims: numpy array with shape [N, H, W, C] or [N, C, H, W], N >= 1
      im_names: a numpy array of image names, len(im_names) >= 1
      labels: a numpy array of image labels, len(labels) >= 1
      mirrored: a numpy array of booleans, whether the images are mirrored
      self.epoch_done: whether the epoch is over
    """
    if self.epoch_done and self.shuffle:
      self.prng.shuffle(self.im_names)
    samples, self.epoch_done = self.prefetcher.next_batch()
    im_list, im_names, labels, mirrored = zip(*samples)
    # Transform the list into a numpy array with shape [N, ...]
    ims = np.stack(im_list, axis=0)
    im_names = np.array(im_names)
    labels = np.array(labels)
    mirrored = np.array(mirrored)
    return ims, im_names, labels, mirrored, self.epoch_done


================================================
FILE: bpm/dataset/__init__.py
================================================
import numpy as np
import os.path as osp
ospj = osp.join
ospeu = osp.expanduser

from ..utils.utils import load_pickle
from ..utils.dataset_utils import parse_im_name
from .TrainSet import TrainSet
from .TestSet import TestSet


def create_dataset(
    name='market1501',
    part='trainval',
    **kwargs):
  assert name in ['market1501', 'cuhk03', 'duke', 'combined'], \
    "Unsupported Dataset {}".format(name)

  assert part in ['trainval', 'train', 'val', 'test'], \
    "Unsupported Dataset Part {}".format(part)

  ########################################
  # Specify Directory and Partition File #
  ########################################

  if name == 'market1501':
    im_dir = ospeu('~/Dataset/market1501/images')
    partition_file = ospeu('~/Dataset/market1501/partitions.pkl')

  elif name == 'cuhk03':
    im_type = ['detected', 'labeled'][0]
    im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))
    partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))

  elif name == 'duke':
    im_dir = ospeu('~/Dataset/duke/images')
    partition_file = ospeu('~/Dataset/duke/partitions.pkl')

  elif name == 'combined':
    assert part in ['trainval'], \
      "Only trainval part of the combined dataset is available now."
    im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
    partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')

  ##################
  # Create Dataset #
  ##################

  # Use standard Market1501 CMC settings for all datasets here.
  cmc_kwargs = dict(separate_camera_set=False,
                    single_gallery_shot=False,
                    first_match_break=True)

  partitions = load_pickle(partition_file)
  im_names = partitions['{}_im_names'.format(part)]

  if part == 'trainval':
    ids2labels = partitions['trainval_ids2labels']

    ret_set = TrainSet(
      im_dir=im_dir,
      im_names=im_names,
      ids2labels=ids2labels,
      **kwargs)

  elif part == 'train':
    ids2labels = partitions['train_ids2labels']

    ret_set = TrainSet(
      im_dir=im_dir,
      im_names=im_names,
      ids2labels=ids2labels,
      **kwargs)

  elif part == 'val':
    marks = partitions['val_marks']
    kwargs.update(cmc_kwargs)

    ret_set = TestSet(
      im_dir=im_dir,
      im_names=im_names,
      marks=marks,
      **kwargs)

  elif part == 'test':
    marks = partitions['test_marks']
    kwargs.update(cmc_kwargs)

    ret_set = TestSet(
      im_dir=im_dir,
      im_names=im_names,
      marks=marks,
      **kwargs)

  if part in ['trainval', 'train']:
    num_ids = len(ids2labels)
  elif part in ['val', 'test']:
    ids = [parse_im_name(n, 'id') for n in im_names]
    num_ids = len(list(set(ids)))
    num_query = np.sum(np.array(marks) == 0)
    num_gallery = np.sum(np.array(marks) == 1)
    num_multi_query = np.sum(np.array(marks) == 2)

  # Print dataset information
  print('-' * 40)
  print('{} {} set'.format(name, part))
  print('-' * 40)
  print('NO. Images: {}'.format(len(im_names)))
  print('NO. IDs: {}'.format(num_ids))

  try:
    print('NO. Query Images: {}'.format(num_query))
    print('NO. Gallery Images: {}'.format(num_gallery))
    print('NO. Multi-query Images: {}'.format(num_multi_query))
  except:
    pass

  print('-' * 40)

  return ret_set


================================================
FILE: bpm/model/PCBModel.py
================================================
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F

from .resnet import resnet50


class PCBModel(nn.Module):
  def __init__(
      self,
      last_conv_stride=1,
      last_conv_dilation=1,
      num_stripes=6,
      local_conv_out_channels=256,
      num_classes=0
  ):
    super(PCBModel, self).__init__()

    self.base = resnet50(
      pretrained=True,
      last_conv_stride=last_conv_stride,
      last_conv_dilation=last_conv_dilation)
    self.num_stripes = num_stripes

    self.local_conv_list = nn.ModuleList()
    for _ in range(num_stripes):
      self.local_conv_list.append(nn.Sequential(
        nn.Conv2d(2048, local_conv_out_channels, 1),
        nn.BatchNorm2d(local_conv_out_channels),
        nn.ReLU(inplace=True)
      ))

    if num_classes > 0:
      self.fc_list = nn.ModuleList()
      for _ in range(num_stripes):
        fc = nn.Linear(local_conv_out_channels, num_classes)
        init.normal(fc.weight, std=0.001)
        init.constant(fc.bias, 0)
        self.fc_list.append(fc)

  def forward(self, x):
    """
    Returns:
      local_feat_list: each member with shape [N, c]
      logits_list: each member with shape [N, num_classes]
    """
    # shape [N, C, H, W]
    feat = self.base(x)
    assert feat.size(2) % self.num_stripes == 0
    stripe_h = int(feat.size(2) / self.num_stripes)
    local_feat_list = []
    logits_list = []
    for i in range(self.num_stripes):
      # shape [N, C, 1, 1]
      local_feat = F.avg_pool2d(
        feat[:, :, i * stripe_h: (i + 1) * stripe_h, :],
        (stripe_h, feat.size(-1)))
      # shape [N, c, 1, 1]
      local_feat = self.local_conv_list[i](local_feat)
      # shape [N, c]
      local_feat = local_feat.view(local_feat.size(0), -1)
      local_feat_list.append(local_feat)
      if hasattr(self, 'fc_list'):
        logits_list.append(self.fc_list[i](local_feat))

    if hasattr(self, 'fc_list'):
      return local_feat_list, logits_list

    return local_feat_list


================================================
FILE: bpm/model/__init__.py
================================================


================================================
FILE: bpm/model/resnet.py
================================================
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo

__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
           'resnet152']

model_urls = {
  'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
  'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
  'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
  'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
  'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


def conv3x3(in_planes, out_planes, stride=1, dilation=1):
  """3x3 convolution with padding"""
  # original padding is 1; original dilation is 1
  return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                   padding=dilation, bias=False, dilation=dilation)


class BasicBlock(nn.Module):
  expansion = 1

  def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
    super(BasicBlock, self).__init__()
    self.conv1 = conv3x3(inplanes, planes, stride, dilation)
    self.bn1 = nn.BatchNorm2d(planes)
    self.relu = nn.ReLU(inplace=True)
    self.conv2 = conv3x3(planes, planes)
    self.bn2 = nn.BatchNorm2d(planes)
    self.downsample = downsample
    self.stride = stride

  def forward(self, x):
    residual = x

    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)

    out = self.conv2(out)
    out = self.bn2(out)

    if self.downsample is not None:
      residual = self.downsample(x)

    out += residual
    out = self.relu(out)

    return out


class Bottleneck(nn.Module):
  expansion = 4

  def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
    super(Bottleneck, self).__init__()
    self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
    self.bn1 = nn.BatchNorm2d(planes)
    # original padding is 1; original dilation is 1
    self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=dilation, bias=False, dilation=dilation)
    self.bn2 = nn.BatchNorm2d(planes)
    self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
    self.bn3 = nn.BatchNorm2d(planes * 4)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = downsample
    self.stride = stride

  def forward(self, x):
    residual = x

    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)

    out = self.conv2(out)
    out = self.bn2(out)
    out = self.relu(out)

    out = self.conv3(out)
    out = self.bn3(out)

    if self.downsample is not None:
      residual = self.downsample(x)

    out += residual
    out = self.relu(out)

    return out


class ResNet(nn.Module):

  def __init__(self, block, layers, last_conv_stride=2, last_conv_dilation=1):

    self.inplanes = 64
    super(ResNet, self).__init__()
    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                           bias=False)
    self.bn1 = nn.BatchNorm2d(64)
    self.relu = nn.ReLU(inplace=True)
    self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    self.layer1 = self._make_layer(block, 64, layers[0])
    self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
    self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
    self.layer4 = self._make_layer(block, 512, layers[3], stride=last_conv_stride, dilation=last_conv_dilation)

    for m in self.modules():
      if isinstance(m, nn.Conv2d):
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))
      elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1)
        m.bias.data.zero_()

  def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
      downsample = nn.Sequential(
        nn.Conv2d(self.inplanes, planes * block.expansion,
                  kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(planes * block.expansion),
      )

    layers = []
    layers.append(block(self.inplanes, planes, stride, downsample, dilation))
    self.inplanes = planes * block.expansion
    for i in range(1, blocks):
      layers.append(block(self.inplanes, planes))

    return nn.Sequential(*layers)

  def forward(self, x):
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)

    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    return x


def remove_fc(state_dict):
  """Remove the fc layer parameters from state_dict."""
  for key in list(state_dict):
    if key.startswith('fc.'):
      del state_dict[key]
  return state_dict


def resnet18(pretrained=False, **kwargs):
  """Constructs a ResNet-18 model.

  Args:
      pretrained (bool): If True, returns a model pre-trained on ImageNet
  """
  model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
  if pretrained:
    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet18'])))
  return model


def resnet34(pretrained=False, **kwargs):
  """Constructs a ResNet-34 model.

  Args:
      pretrained (bool): If True, returns a model pre-trained on ImageNet
  """
  model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
  if pretrained:
    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet34'])))
  return model


def resnet50(pretrained=False, **kwargs):
  """Constructs a ResNet-50 model.

  Args:
      pretrained (bool): If True, returns a model pre-trained on ImageNet
  """
  model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
  if pretrained:
    model.load_state_dict(remove_fc(model_zoo.load_url(model_urls['resnet50'])))
  return model


def resnet101(pretrained=False, **kwargs):
  """Constructs a ResNet-101 model.

  Args:
      pretrained (bool): If True, returns a model pre-trained on ImageNet
  """
  model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
  if pretrained:
    model.load_state_dict(
      remove_fc(model_zoo.load_url(model_urls['resnet101'])))
  return model


def resnet152(pretrained=False, **kwargs):
  """Constructs a ResNet-152 model.

  Args:
      pretrained (bool): If True, returns a model pre-trained on ImageNet
  """
  model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
  if pretrained:
    model.load_state_dict(
      remove_fc(model_zoo.load_url(model_urls['resnet152'])))
  return model


================================================
FILE: bpm/utils/__init__.py
================================================


================================================
FILE: bpm/utils/dataset_utils.py
================================================
from __future__ import print_function
import os.path as osp
import numpy as np
import glob
from collections import defaultdict
import shutil

new_im_name_tmpl = '{:08d}_{:04d}_{:08d}.jpg'

def parse_im_name(im_name, parse_type='id'):
  """Get the person id or cam from an image name."""
  assert parse_type in ('id', 'cam')
  if parse_type == 'id':
    parsed = int(im_name[:8])
  else:
    parsed = int(im_name[9:13])
  return parsed


def get_im_names(im_dir, pattern='*.jpg', return_np=True, return_path=False):
  """Get the image names in a dir. Optional to return numpy array, paths."""
  im_paths = glob.glob(osp.join(im_dir, pattern))
  im_names = [osp.basename(path) for path in im_paths]
  ret = im_paths if return_path else im_names
  if return_np:
    ret = np.array(ret)
  return ret


def move_ims(ori_im_paths, new_im_dir, parse_im_name, new_im_name_tmpl):
  """Rename and move images to new directory."""
  cnt = defaultdict(int)
  new_im_names = []
  for im_path in ori_im_paths:
    im_name = osp.basename(im_path)
    id = parse_im_name(im_name, 'id')
    cam = parse_im_name(im_name, 'cam')
    cnt[(id, cam)] += 1
    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)
    shutil.copy(im_path, osp.join(new_im_dir, new_im_name))
    new_im_names.append(new_im_name)
  return new_im_names


def partition_train_val_set(im_names, parse_im_name,
                            num_val_ids=None, val_prop=None, seed=1):
  """Partition the trainval set into train and val set. 
  Args:
    im_names: trainval image names
    parse_im_name: a function to parse id and camera from image name
    num_val_ids: number of ids for val set. If not set, val_prob is used.
    val_prop: the proportion of validation ids
    seed: the random seed to reproduce the partition results. If not to use, 
      then set to `None`.
  Returns:
    a dict with keys (`train_im_names`, 
                      `val_query_im_names`, 
                      `val_gallery_im_names`)
  """
  np.random.seed(seed)
  # Transform to numpy array for slicing.
  if not isinstance(im_names, np.ndarray):
    im_names = np.array(im_names)
  np.random.shuffle(im_names)
  ids = np.array([parse_im_name(n, 'id') for n in im_names])
  cams = np.array([parse_im_name(n, 'cam') for n in im_names])
  unique_ids = np.unique(ids)
  np.random.shuffle(unique_ids)

  # Query indices and gallery indices
  query_inds = []
  gallery_inds = []

  if num_val_ids is None:
    assert 0 < val_prop < 1
    num_val_ids = int(len(unique_ids) * val_prop)
  num_selected_ids = 0
  for unique_id in unique_ids:
    query_inds_ = []
    # The indices of this id in trainval set.
    inds = np.argwhere(unique_id == ids).flatten()
    # The cams that this id has.
    unique_cams = np.unique(cams[inds])
    # For each cam, select one image for query set.
    for unique_cam in unique_cams:
      query_inds_.append(
        inds[np.argwhere(cams[inds] == unique_cam).flatten()[0]])
    gallery_inds_ = list(set(inds) - set(query_inds_))
    # For each query image, if there is no same-id different-cam images in
    # gallery, put it in gallery.
    for query_ind in query_inds_:
      if len(gallery_inds_) == 0 \
          or len(np.argwhere(cams[gallery_inds_] != cams[query_ind])
                     .flatten()) == 0:
        query_inds_.remove(query_ind)
        gallery_inds_.append(query_ind)
    # If no query image is left, leave this id in train set.
    if len(query_inds_) == 0:
      continue
    query_inds.append(query_inds_)
    gallery_inds.append(gallery_inds_)
    num_selected_ids += 1
    if num_selected_ids >= num_val_ids:
      break

  query_inds = np.hstack(query_inds)
  gallery_inds = np.hstack(gallery_inds)
  val_inds = np.hstack([query_inds, gallery_inds])
  trainval_inds = np.arange(len(im_names))
  train_inds = np.setdiff1d(trainval_inds, val_inds)

  train_inds = np.sort(train_inds)
  query_inds = np.sort(query_inds)
  gallery_inds = np.sort(gallery_inds)

  partitions = dict(train_im_names=im_names[train_inds],
                    val_query_im_names=im_names[query_inds],
                    val_gallery_im_names=im_names[gallery_inds])

  return partitions


================================================
FILE: bpm/utils/distance.py
================================================
"""Numpy version of euclidean distance, etc.
Notice the input/output shape of methods, so that you can better understand
the meaning of these methods."""
import numpy as np


def normalize(nparray, order=2, axis=0):
  """Normalize a N-D numpy array along the specified axis."""
  norm = np.linalg.norm(nparray, ord=order, axis=axis, keepdims=True)
  return nparray / (norm + np.finfo(np.float32).eps)


def compute_dist(array1, array2, type='euclidean'):
  """Compute the euclidean or cosine distance of all pairs.
  Args:
    array1: numpy array with shape [m1, n]
    array2: numpy array with shape [m2, n]
    type: one of ['cosine', 'euclidean']
  Returns:
    numpy array with shape [m1, m2]
  """
  assert type in ['cosine', 'euclidean']
  if type == 'cosine':
    array1 = normalize(array1, axis=1)
    array2 = normalize(array2, axis=1)
    dist = np.matmul(array1, array2.T)
    return dist
  else:
    # shape [m1, 1]
    square1 = np.sum(np.square(array1), axis=1)[..., np.newaxis]
    # shape [1, m2]
    square2 = np.sum(np.square(array2), axis=1)[np.newaxis, ...]
    squared_dist = - 2 * np.matmul(array1, array2.T) + square1 + square2
    squared_dist[squared_dist < 0] = 0
    dist = np.sqrt(squared_dist)
    return dist


================================================
FILE: bpm/utils/metric.py
================================================
"""Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid) 
reid/evaluation_metrics/ranking.py. Modifications: 
1) Only accepts numpy data input, no torch is involved.
1) Here results of each query can be returned.
2) In the single-gallery-shot evaluation case, the time of repeats is changed 
   from 10 to 100.
"""
from __future__ import absolute_import
from collections import defaultdict

import numpy as np
from sklearn.metrics import average_precision_score


def _unique_sample(ids_dict, num):
  mask = np.zeros(num, dtype=np.bool)
  for _, indices in ids_dict.items():
    i = np.random.choice(indices)
    mask[i] = True
  return mask


def cmc(
    distmat,
    query_ids=None,
    gallery_ids=None,
    query_cams=None,
    gallery_cams=None,
    topk=100,
    separate_camera_set=False,
    single_gallery_shot=False,
    first_match_break=False,
    average=True):
  """
  Args:
    distmat: numpy array with shape [num_query, num_gallery], the 
      pairwise distance between query and gallery samples
    query_ids: numpy array with shape [num_query]
    gallery_ids: numpy array with shape [num_gallery]
    query_cams: numpy array with shape [num_query]
    gallery_cams: numpy array with shape [num_gallery]
    average: whether to average the results across queries
  Returns:
    If `average` is `False`:
      ret: numpy array with shape [num_query, topk]
      is_valid_query: numpy array with shape [num_query], containing 0's and 
        1's, whether each query is valid or not
    If `average` is `True`:
      numpy array with shape [topk]
  """
  # Ensure numpy array
  assert isinstance(distmat, np.ndarray)
  assert isinstance(query_ids, np.ndarray)
  assert isinstance(gallery_ids, np.ndarray)
  assert isinstance(query_cams, np.ndarray)
  assert isinstance(gallery_cams, np.ndarray)

  m, n = distmat.shape
  # Sort and find correct matches
  indices = np.argsort(distmat, axis=1)
  matches = (gallery_ids[indices] == query_ids[:, np.newaxis])
  # Compute CMC for each query
  ret = np.zeros([m, topk])
  is_valid_query = np.zeros(m)
  num_valid_queries = 0
  for i in range(m):
    # Filter out the same id and same camera
    valid = ((gallery_ids[indices[i]] != query_ids[i]) |
             (gallery_cams[indices[i]] != query_cams[i]))
    if separate_camera_set:
      # Filter out samples from same camera
      valid &= (gallery_cams[indices[i]] != query_cams[i])
    if not np.any(matches[i, valid]): continue
    is_valid_query[i] = 1
    if single_gallery_shot:
      repeat = 100
      gids = gallery_ids[indices[i][valid]]
      inds = np.where(valid)[0]
      ids_dict = defaultdict(list)
      for j, x in zip(inds, gids):
        ids_dict[x].append(j)
    else:
      repeat = 1
    for _ in range(repeat):
      if single_gallery_shot:
        # Randomly choose one instance for each id
        sampled = (valid & _unique_sample(ids_dict, len(valid)))
        index = np.nonzero(matches[i, sampled])[0]
      else:
        index = np.nonzero(matches[i, valid])[0]
      delta = 1. / (len(index) * repeat)
      for j, k in enumerate(index):
        if k - j >= topk: break
        if first_match_break:
          ret[i, k - j] += 1
          break
        ret[i, k - j] += delta
    num_valid_queries += 1
  if num_valid_queries == 0:
    raise RuntimeError("No valid query")
  ret = ret.cumsum(axis=1)
  if average:
    return np.sum(ret, axis=0) / num_valid_queries
  return ret, is_valid_query


def mean_ap(
    distmat,
    query_ids=None,
    gallery_ids=None,
    query_cams=None,
    gallery_cams=None,
    average=True):
  """
  Args:
    distmat: numpy array with shape [num_query, num_gallery], the 
      pairwise distance between query and gallery samples
    query_ids: numpy array with shape [num_query]
    gallery_ids: numpy array with shape [num_gallery]
    query_cams: numpy array with shape [num_query]
    gallery_cams: numpy array with shape [num_gallery]
    average: whether to average the results across queries
  Returns:
    If `average` is `False`:
      ret: numpy array with shape [num_query]
      is_valid_query: numpy array with shape [num_query], containing 0's and 
        1's, whether each query is valid or not
    If `average` is `True`:
      a scalar
  """

  # -------------------------------------------------------------------------
  # The behavior of method `sklearn.average_precision` has changed since version
  # 0.19.
  # Version 0.18.1 has same results as Matlab evaluation code by Zhun Zhong
  # (https://github.com/zhunzhong07/person-re-ranking/
  # blob/master/evaluation/utils/evaluation.m) and by Liang Zheng
  # (http://www.liangzheng.org/Project/project_reid.html).
  # My current awkward solution is sticking to this older version.
  import sklearn
  cur_version = sklearn.__version__
  required_version = '0.18.1'
  if cur_version != required_version:
    print('User Warning: Version {} is required for package scikit-learn, '
          'your current version is {}. '
          'As a result, the mAP score may not be totally correct. '
          'You can try `pip uninstall scikit-learn` '
          'and then `pip install scikit-learn=={}`'.format(
      required_version, cur_version, required_version))
  # -------------------------------------------------------------------------

  # Ensure numpy array
  assert isinstance(distmat, np.ndarray)
  assert isinstance(query_ids, np.ndarray)
  assert isinstance(gallery_ids, np.ndarray)
  assert isinstance(query_cams, np.ndarray)
  assert isinstance(gallery_cams, np.ndarray)

  m, n = distmat.shape

  # Sort and find correct matches
  indices = np.argsort(distmat, axis=1)
  matches = (gallery_ids[indices] == query_ids[:, np.newaxis])
  # Compute AP for each query
  aps = np.zeros(m)
  is_valid_query = np.zeros(m)
  for i in range(m):
    # Filter out the same id and same camera
    valid = ((gallery_ids[indices[i]] != query_ids[i]) |
             (gallery_cams[indices[i]] != query_cams[i]))
    y_true = matches[i, valid]
    y_score = -distmat[i][indices[i]][valid]
    if not np.any(y_true): continue
    is_valid_query[i] = 1
    aps[i] = average_precision_score(y_true, y_score)
  if len(aps) == 0:
    raise RuntimeError("No valid query")
  if average:
    return float(np.sum(aps)) / np.sum(is_valid_query)
  return aps, is_valid_query


================================================
FILE: bpm/utils/re_ranking.py
================================================
"""
Created on Mon Jun 26 14:46:56 2017

@author: luohao

Modified by Houjing Huang, 2017-12-22.
- This version accepts distance matrix instead of raw features.
- The difference of `/` division between python 2 and 3 is handled.
- numpy.float16 is replaced by numpy.float32 for numerical precision.
"""

"""
CVPR2017 paper:Zhong Z, Zheng L, Cao D, et al. Re-ranking Person Re-identification with k-reciprocal Encoding[J]. 2017.
url:http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhong_Re-Ranking_Person_Re-Identification_CVPR_2017_paper.pdf
Matlab version: https://github.com/zhunzhong07/person-re-ranking
"""

"""
API

q_g_dist: query-gallery distance matrix, numpy array, shape [num_query, num_gallery]
q_q_dist: query-query distance matrix, numpy array, shape [num_query, num_query]
g_g_dist: gallery-gallery distance matrix, numpy array, shape [num_gallery, num_gallery]

k1, k2, lambda_value: parameters, the original paper is (k1=20, k2=6, lambda_value=0.3)

Returns:
  final_dist: re-ranked distance, numpy array, shape [num_query, num_gallery]
"""


import numpy as np


def re_ranking(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0.3):

    # The following naming, e.g. gallery_num, is different from outer scope.
    # Don't care about it.

    original_dist = np.concatenate(
      [np.concatenate([q_q_dist, q_g_dist], axis=1),
       np.concatenate([q_g_dist.T, g_g_dist], axis=1)],
      axis=0)
    original_dist = np.power(original_dist, 2).astype(np.float32)
    original_dist = np.transpose(1. * original_dist/np.max(original_dist,axis = 0))
    V = np.zeros_like(original_dist).astype(np.float32)
    initial_rank = np.argsort(original_dist).astype(np.int32)

    query_num = q_g_dist.shape[0]
    gallery_num = q_g_dist.shape[0] + q_g_dist.shape[1]
    all_num = gallery_num

    for i in range(all_num):
        # k-reciprocal neighbors
        forward_k_neigh_index = initial_rank[i,:k1+1]
        backward_k_neigh_index = initial_rank[forward_k_neigh_index,:k1+1]
        fi = np.where(backward_k_neigh_index==i)[0]
        k_reciprocal_index = forward_k_neigh_index[fi]
        k_reciprocal_expansion_index = k_reciprocal_index
        for j in range(len(k_reciprocal_index)):
            candidate = k_reciprocal_index[j]
            candidate_forward_k_neigh_index = initial_rank[candidate,:int(np.around(k1/2.))+1]
            candidate_backward_k_neigh_index = initial_rank[candidate_forward_k_neigh_index,:int(np.around(k1/2.))+1]
            fi_candidate = np.where(candidate_backward_k_neigh_index == candidate)[0]
            candidate_k_reciprocal_index = candidate_forward_k_neigh_index[fi_candidate]
            if len(np.intersect1d(candidate_k_reciprocal_index,k_reciprocal_index))> 2./3*len(candidate_k_reciprocal_index):
                k_reciprocal_expansion_index = np.append(k_reciprocal_expansion_index,candidate_k_reciprocal_index)

        k_reciprocal_expansion_index = np.unique(k_reciprocal_expansion_index)
        weight = np.exp(-original_dist[i,k_reciprocal_expansion_index])
        V[i,k_reciprocal_expansion_index] = 1.*weight/np.sum(weight)
    original_dist = original_dist[:query_num,]
    if k2 != 1:
        V_qe = np.zeros_like(V,dtype=np.float32)
        for i in range(all_num):
            V_qe[i,:] = np.mean(V[initial_rank[i,:k2],:],axis=0)
        V = V_qe
        del V_qe
    del initial_rank
    invIndex = []
    for i in range(gallery_num):
        invIndex.append(np.where(V[:,i] != 0)[0])

    jaccard_dist = np.zeros_like(original_dist,dtype = np.float32)


    for i in range(query_num):
        temp_min = np.zeros(shape=[1,gallery_num],dtype=np.float32)
        indNonZero = np.where(V[i,:] != 0)[0]
        indImages = []
        indImages = [invIndex[ind] for ind in indNonZero]
        for j in range(len(indNonZero)):
            temp_min[0,indImages[j]] = temp_min[0,indImages[j]]+ np.minimum(V[i,indNonZero[j]],V[indImages[j],indNonZero[j]])
        jaccard_dist[i] = 1-temp_min/(2.-temp_min)

    final_dist = jaccard_dist*(1-lambda_value) + original_dist*lambda_value
    del original_dist
    del V
    del jaccard_dist
    final_dist = final_dist[:query_num,query_num:]
    return final_dist

================================================
FILE: bpm/utils/utils.py
================================================
from __future__ import print_function
import os
import os.path as osp
import cPickle as pickle
from scipy import io
import datetime
import time
from contextlib import contextmanager

import torch
from torch.autograd import Variable


def time_str(fmt=None):
  if fmt is None:
    fmt = '%Y-%m-%d_%H:%M:%S'
  return datetime.datetime.today().strftime(fmt)


def load_pickle(path):
  """Check and load pickle object.
  According to this post: https://stackoverflow.com/a/41733927, cPickle and 
  disabling garbage collector helps with loading speed."""
  assert osp.exists(path)
  # gc.disable()
  with open(path, 'rb') as f:
    ret = pickle.load(f)
  # gc.enable()
  return ret


def save_pickle(obj, path):
  """Create dir and save file."""
  may_make_dir(osp.dirname(osp.abspath(path)))
  with open(path, 'wb') as f:
    pickle.dump(obj, f, protocol=2)


def save_mat(ndarray, path):
  """Save a numpy ndarray as .mat file."""
  io.savemat(path, dict(ndarray=ndarray))


def to_scalar(vt):
  """Transform a length-1 pytorch Variable or Tensor to scalar. 
  Suppose tx is a torch Tensor with shape tx.size() = torch.Size([1]), 
  then npx = tx.cpu().numpy() has shape (1,), not 1."""
  if isinstance(vt, Variable):
    return vt.data.cpu().numpy().flatten()[0]
  if torch.is_tensor(vt):
    return vt.cpu().numpy().flatten()[0]
  raise TypeError('Input should be a variable or tensor')


def transfer_optim_state(state, device_id=-1):
  """Transfer an optimizer.state to cpu or specified gpu, which means 
  transferring tensors of the optimizer.state to specified device. 
  The modification is in place for the state.
  Args:
    state: An torch.optim.Optimizer.state
    device_id: gpu id, or -1 which means transferring to cpu
  """
  for key, val in state.items():
    if isinstance(val, dict):
      transfer_optim_state(val, device_id=device_id)
    elif isinstance(val, Variable):
      raise RuntimeError("Oops, state[{}] is a Variable!".format(key))
    elif isinstance(val, torch.nn.Parameter):
      raise RuntimeError("Oops, state[{}] is a Parameter!".format(key))
    else:
      try:
        if device_id == -1:
          state[key] = val.cpu()
        else:
          state[key] = val.cuda(device=device_id)
      except:
        pass


def may_transfer_optims(optims, device_id=-1):
  """Transfer optimizers to cpu or specified gpu, which means transferring 
  tensors of the optimizer to specified device. The modification is in place 
  for the optimizers.
  Args:
    optims: A list, which members are either torch.nn.optimizer or None.
    device_id: gpu id, or -1 which means transferring to cpu
  """
  for optim in optims:
    if isinstance(optim, torch.optim.Optimizer):
      transfer_optim_state(optim.state, device_id=device_id)


def may_transfer_modules_optims(modules_and_or_optims, device_id=-1):
  """Transfer optimizers/modules to cpu or specified gpu.
  Args:
    modules_and_or_optims: A list, which members are either torch.nn.optimizer 
      or torch.nn.Module or None.
    device_id: gpu id, or -1 which means transferring to cpu
  """
  for item in modules_and_or_optims:
    if isinstance(item, torch.optim.Optimizer):
      transfer_optim_state(item.state, device_id=device_id)
    elif isinstance(item, torch.nn.Module):
      if device_id == -1:
        item.cpu()
      else:
        item.cuda(device=device_id)
    elif item is not None:
      print('[Warning] Invalid type {}'.format(item.__class__.__name__))


class TransferVarTensor(object):
  """Return a copy of the input Variable or Tensor on specified device."""

  def __init__(self, device_id=-1):
    self.device_id = device_id

  def __call__(self, var_or_tensor):
    return var_or_tensor.cpu() if self.device_id == -1 \
      else var_or_tensor.cuda(self.device_id)


class TransferModulesOptims(object):
  """Transfer optimizers/modules to cpu or specified gpu."""

  def __init__(self, device_id=-1):
    self.device_id = device_id

  def __call__(self, modules_and_or_optims):
    may_transfer_modules_optims(modules_and_or_optims, self.device_id)


def set_devices(sys_device_ids):
  """
  It sets some GPUs to be visible and returns some wrappers to transferring 
  Variables/Tensors and Modules/Optimizers.
  Args:
    sys_device_ids: a tuple; which GPUs to use
      e.g.  sys_device_ids = (), only use cpu
            sys_device_ids = (3,), use the 4th gpu
            sys_device_ids = (0, 1, 2, 3,), use first 4 gpus
            sys_device_ids = (0, 2, 4,), use the 1st, 3rd and 5th gpus
  Returns:
    TVT: a `TransferVarTensor` callable
    TMO: a `TransferModulesOptims` callable
  """
  # Set the CUDA_VISIBLE_DEVICES environment variable
  import os
  visible_devices = ''
  for i in sys_device_ids:
    visible_devices += '{}, '.format(i)
  os.environ['CUDA_VISIBLE_DEVICES'] = visible_devices
  # Return wrappers.
  # Models and user defined Variables/Tensors would be transferred to the
  # first device.
  device_id = 0 if len(sys_device_ids) > 0 else -1
  TVT = TransferVarTensor(device_id)
  TMO = TransferModulesOptims(device_id)
  return TVT, TMO


def set_devices_for_ml(sys_device_ids):
  """This version is for mutual learning.
  
  It sets some GPUs to be visible and returns some wrappers to transferring 
  Variables/Tensors and Modules/Optimizers.
  
  Args:
    sys_device_ids: a tuple of tuples; which devices to use for each model, 
      len(sys_device_ids) should be equal to number of models. Examples:
        
      sys_device_ids = ((-1,), (-1,))
        the two models both on CPU
      sys_device_ids = ((-1,), (2,))
        the 1st model on CPU, the 2nd model on GPU 2
      sys_device_ids = ((3,),)
        the only one model on the 4th gpu 
      sys_device_ids = ((0, 1), (2, 3))
        the 1st model on GPU 0 and 1, the 2nd model on GPU 2 and 3
      sys_device_ids = ((0,), (0,))
        the two models both on GPU 0
      sys_device_ids = ((0,), (0,), (1,), (1,))
        the 1st and 2nd model on GPU 0, the 3rd and 4th model on GPU 1
  
  Returns:
    TVTs: a list of `TransferVarTensor` callables, one for one model.
    TMOs: a list of `TransferModulesOptims` callables, one for one model.
    relative_device_ids: a list of lists; `sys_device_ids` transformed to 
      relative ids; to be used in `DataParallel`
  """
  import os

  all_ids = []
  for ids in sys_device_ids:
    all_ids += ids
  unique_sys_device_ids = list(set(all_ids))
  unique_sys_device_ids.sort()
  if -1 in unique_sys_device_ids:
    unique_sys_device_ids.remove(-1)

  # Set the CUDA_VISIBLE_DEVICES environment variable

  visible_devices = ''
  for i in unique_sys_device_ids:
    visible_devices += '{}, '.format(i)
  os.environ['CUDA_VISIBLE_DEVICES'] = visible_devices

  # Return wrappers

  relative_device_ids = []
  TVTs, TMOs = [], []
  for ids in sys_device_ids:
    relative_ids = []
    for id in ids:
      if id != -1:
        id = find_index(unique_sys_device_ids, id)
      relative_ids.append(id)
    relative_device_ids.append(relative_ids)

    # Models and user defined Variables/Tensors would be transferred to the
    # first device.
    TVTs.append(TransferVarTensor(relative_ids[0]))
    TMOs.append(TransferModulesOptims(relative_ids[0]))
  return TVTs, TMOs, relative_device_ids


def load_ckpt(modules_optims, ckpt_file, load_to_cpu=True, verbose=True):
  """Load state_dict's of modules/optimizers from file.
  Args:
    modules_optims: A list, which members are either torch.nn.optimizer 
      or torch.nn.Module.
    ckpt_file: The file path.
    load_to_cpu: Boolean. Whether to transform tensors in modules/optimizers 
      to cpu type.
  """
  map_location = (lambda storage, loc: storage) if load_to_cpu else None
  ckpt = torch.load(ckpt_file, map_location=map_location)
  for m, sd in zip(modules_optims, ckpt['state_dicts']):
    m.load_state_dict(sd)
  if verbose:
    print('Resume from ckpt {}, \nepoch {}, \nscores {}'.format(
      ckpt_file, ckpt['ep'], ckpt['scores']))
  return ckpt['ep'], ckpt['scores']


def save_ckpt(modules_optims, ep, scores, ckpt_file):
  """Save state_dict's of modules/optimizers to file. 
  Args:
    modules_optims: A list, which members are either torch.nn.optimizer 
      or torch.nn.Module.
    ep: the current epoch number
    scores: the performance of current model
    ckpt_file: The file path.
  Note:
    torch.save() reserves device type and id of tensors to save, so when 
    loading ckpt, you have to inform torch.load() to load these tensors to 
    cpu or your desired gpu, if you change devices.
  """
  state_dicts = [m.state_dict() for m in modules_optims]
  ckpt = dict(state_dicts=state_dicts,
              ep=ep,
              scores=scores)
  may_make_dir(osp.dirname(osp.abspath(ckpt_file)))
  torch.save(ckpt, ckpt_file)


def load_state_dict(model, src_state_dict):
  """Copy parameters and buffers from `src_state_dict` into `model` and its 
  descendants. The `src_state_dict.keys()` NEED NOT exactly match 
  `model.state_dict().keys()`. For dict key mismatch, just
  skip it; for copying error, just output warnings and proceed.

  Arguments:
    model: A torch.nn.Module object. 
    src_state_dict (dict): A dict containing parameters and persistent buffers.
  Note:
    This is modified from torch.nn.modules.module.load_state_dict(), to make
    the warnings and errors more detailed.
  """
  from torch.nn import Parameter

  dest_state_dict = model.state_dict()
  for name, param in src_state_dict.items():
    if name not in dest_state_dict:
      continue
    if isinstance(param, Parameter):
      # backwards compatibility for serialized parameters
      param = param.data
    try:
      dest_state_dict[name].copy_(param)
    except Exception, msg:
      print("Warning: Error occurs when copying '{}': {}"
            .format(name, str(msg)))

  src_missing = set(dest_state_dict.keys()) - set(src_state_dict.keys())
  if len(src_missing) > 0:
    print("Keys not found in source state_dict: ")
    for n in src_missing:
      print('\t', n)

  dest_missing = set(src_state_dict.keys()) - set(dest_state_dict.keys())
  if len(dest_missing) > 0:
    print("Keys not found in destination state_dict: ")
    for n in dest_missing:
      print('\t', n)


def is_iterable(obj):
  return hasattr(obj, '__len__')


def may_set_mode(maybe_modules, mode):
  """maybe_modules: an object or a list of objects."""
  assert mode in ['train', 'eval']
  if not is_iterable(maybe_modules):
    maybe_modules = [maybe_modules]
  for m in maybe_modules:
    if isinstance(m, torch.nn.Module):
      if mode == 'train':
        m.train()
      else:
        m.eval()


def may_make_dir(path):
  """
  Args:
    path: a dir, or result of `osp.dirname(osp.abspath(file_path))`
  Note:
    `osp.exists('')` returns `False`, while `osp.exists('.')` returns `True`!
  """
  # This clause has mistakes:
  # if path is None or '':

  if path in [None, '']:
    return
  if not osp.exists(path):
    os.makedirs(path)


class AverageMeter(object):
  """Modified from Tong Xiao's open-reid. 
  Computes and stores the average and current value"""

  def __init__(self):
    self.val = 0
    self.avg = 0
    self.sum = 0
    self.count = 0

  def reset(self):
    self.val = 0
    self.avg = 0
    self.sum = 0
    self.count = 0

  def update(self, val, n=1):
    self.val = val
    self.sum += val * n
    self.count += n
    self.avg = float(self.sum) / (self.count + 1e-20)


class RunningAverageMeter(object):
  """Computes and stores the running average and current value"""

  def __init__(self, hist=0.99):
    self.val = None
    self.avg = None
    self.hist = hist

  def reset(self):
    self.val = None
    self.avg = None

  def update(self, val):
    if self.avg is None:
      self.avg = val
    else:
      self.avg = self.avg * self.hist + val * (1 - self.hist)
    self.val = val


class RecentAverageMeter(object):
  """Stores and computes the average of recent values."""

  def __init__(self, hist_size=100):
    self.hist_size = hist_size
    self.fifo = []
    self.val = 0

  def reset(self):
    self.fifo = []
    self.val = 0

  def update(self, val):
    self.val = val
    self.fifo.append(val)
    if len(self.fifo) > self.hist_size:
      del self.fifo[0]

  @property
  def avg(self):
    assert len(self.fifo) > 0
    return float(sum(self.fifo)) / len(self.fifo)


def get_model_wrapper(model, multi_gpu):
  from torch.nn.parallel import DataParallel
  if multi_gpu:
    return DataParallel(model)
  else:
    return model


class ReDirectSTD(object):
  """Modified from Tong Xiao's `Logger` in open-reid.
  This class overwrites sys.stdout or sys.stderr, so that console logs can
  also be written to file.
  Args:
    fpath: file path
    console: one of ['stdout', 'stderr']
    immediately_visible: If `False`, the file is opened only once and closed
      after exiting. In this case, the message written to file may not be
      immediately visible (Because the file handle is occupied by the
      program?). If `True`, each writing operation of the console will
      open, write to, and close the file. If your program has tons of writing
      operations, the cost of opening and closing file may be obvious. (?)
  Usage example:
    `ReDirectSTD('stdout.txt', 'stdout', False)`
    `ReDirectSTD('stderr.txt', 'stderr', False)`
  NOTE: File will be deleted if already existing. Log dir and file is created
    lazily -- if no message is written, the dir and file will not be created.
  """

  def __init__(self, fpath=None, console='stdout', immediately_visible=False):
    import sys
    import os
    import os.path as osp

    assert console in ['stdout', 'stderr']
    self.console = sys.stdout if console == 'stdout' else sys.stderr
    self.file = fpath
    self.f = None
    self.immediately_visible = immediately_visible
    if fpath is not None:
      # Remove existing log file.
      if osp.exists(fpath):
        os.remove(fpath)

    # Overwrite
    if console == 'stdout':
      sys.stdout = self
    else:
      sys.stderr = self

  def __del__(self):
    self.close()

  def __enter__(self):
    pass

  def __exit__(self, *args):
    self.close()

  def write(self, msg):
    self.console.write(msg)
    if self.file is not None:
      may_make_dir(os.path.dirname(osp.abspath(self.file)))
      if self.immediately_visible:
        with open(self.file, 'a') as f:
          f.write(msg)
      else:
        if self.f is None:
          self.f = open(self.file, 'w')
        self.f.write(msg)

  def flush(self):
    self.console.flush()
    if self.f is not None:
      self.f.flush()
      import os
      os.fsync(self.f.fileno())

  def close(self):
    self.console.close()
    if self.f is not None:
      self.f.close()


def set_seed(seed):
  import random
  random.seed(seed)
  print('setting random-seed to {}'.format(seed))

  import numpy as np
  np.random.seed(seed)
  print('setting np-random-seed to {}'.format(seed))

  import torch
  torch.backends.cudnn.enabled = False
  print('cudnn.enabled set to {}'.format(torch.backends.cudnn.enabled))
  # set seed for CPU
  torch.manual_seed(seed)
  print('setting torch-seed to {}'.format(seed))


def print_array(array, fmt='{:.2f}', end=' '):
  """Print a 1-D tuple, list, or numpy array containing digits."""
  s = ''
  for x in array:
    s += fmt.format(float(x)) + end
  s += '\n'
  print(s)
  return s


# Great idea from https://github.com/amdegroot/ssd.pytorch
def str2bool(v):
  return v.lower() in ("yes", "true", "t", "1")


def tight_float_str(x, fmt='{:.4f}'):
  return fmt.format(x).rstrip('0').rstrip('.')


def find_index(seq, item):
  for i, x in enumerate(seq):
    if item == x:
      return i
  return -1


def adjust_lr_staircase(param_groups, base_lrs, ep, decay_at_epochs, factor):
  """Multiplied by a factor at the BEGINNING of specified epochs. Different
  param groups specify their own base learning rates.
  
  Args:
    param_groups: a list of params
    base_lrs: starting learning rates, len(base_lrs) = len(param_groups)
    ep: current epoch, ep >= 1
    decay_at_epochs: a list or tuple; learning rates are multiplied by a factor
      at the BEGINNING of these epochs
    factor: a number in range (0, 1)
  
  Example:
    base_lrs = [0.1, 0.01]
    decay_at_epochs = [51, 101]
    factor = 0.1
    It means the learning rate starts at 0.1 for 1st param group
    (0.01 for 2nd param group) and is multiplied by 0.1 at the
    BEGINNING of the 51'st epoch, and then further multiplied by 0.1 at the
    BEGINNING of the 101'st epoch, then stays unchanged till the end of 
    training.
  
  NOTE: 
    It is meant to be called at the BEGINNING of an epoch.
  """
  assert len(base_lrs) == len(param_groups), \
    "You should specify base lr for each param group."
  assert ep >= 1, "Current epoch number should be >= 1"

  if ep not in decay_at_epochs:
    return

  ind = find_index(decay_at_epochs, ep)
  for i, (g, base_lr) in enumerate(zip(param_groups, base_lrs)):
    g['lr'] = base_lr * factor ** (ind + 1)
    print('=====> Param group {}: lr adjusted to {:.10f}'
          .format(i, g['lr']).rstrip('0'))


@contextmanager
def measure_time(enter_msg, verbose=True):
  if verbose:
    st = time.time()
    print(enter_msg)
  yield
  if verbose:
    print('Done, {:.2f}s'.format(time.time() - st))

================================================
FILE: bpm/utils/visualization.py
================================================
import numpy as np
from PIL import Image
import cv2
from os.path import dirname as ospdn

from bpm.utils.utils import may_make_dir


def add_border(im, border_width, value):
  """Add color border around an image. The resulting image size is not changed.
  Args:
    im: numpy array with shape [3, im_h, im_w]
    border_width: scalar, measured in pixel
    value: scalar, or numpy array with shape [3]; the color of the border
  Returns:
    im: numpy array with shape [3, im_h, im_w]
  """
  assert (im.ndim == 3) and (im.shape[0] == 3)
  im = np.copy(im)

  if isinstance(value, np.ndarray):
    # reshape to [3, 1, 1]
    value = value.flatten()[:, np.newaxis, np.newaxis]
  im[:, :border_width, :] = value
  im[:, -border_width:, :] = value
  im[:, :, :border_width] = value
  im[:, :, -border_width:] = value

  return im

def make_im_grid(ims, n_rows, n_cols, space, pad_val):
  """Make a grid of images with space in between.
  Args:
    ims: a list of [3, im_h, im_w] images
    n_rows: num of rows
    n_cols: num of columns
    space: the num of pixels between two images
    pad_val: scalar, or numpy array with shape [3]; the color of the space
  Returns:
    ret_im: a numpy array with shape [3, H, W]
  """
  assert (ims[0].ndim == 3) and (ims[0].shape[0] == 3)
  assert len(ims) <= n_rows * n_cols
  h, w = ims[0].shape[1:]
  H = h * n_rows + space * (n_rows - 1)
  W = w * n_cols + space * (n_cols - 1)
  if isinstance(pad_val, np.ndarray):
    # reshape to [3, 1, 1]
    pad_val = pad_val.flatten()[:, np.newaxis, np.newaxis]
  ret_im = (np.ones([3, H, W]) * pad_val).astype(ims[0].dtype)
  for n, im in enumerate(ims):
    r = n // n_cols
    c = n % n_cols
    h1 = r * (h + space)
    h2 = r * (h + space) + h
    w1 = c * (w + space)
    w2 = c * (w + space) + w
    ret_im[:, h1:h2, w1:w2] = im
  return ret_im


def get_rank_list(dist_vec, q_id, q_cam, g_ids, g_cams, rank_list_size):
  """Get the ranking list of a query image
  Args:
    dist_vec: a numpy array with shape [num_gallery_images], the distance
      between the query image and all gallery images
    q_id: a scalar, query id
    q_cam: a scalar, query camera
    g_ids: a numpy array with shape [num_gallery_images], gallery ids
    g_cams: a numpy array with shape [num_gallery_images], gallery cameras
    rank_list_size: a scalar, the number of images to show in a rank list
  Returns:
    rank_list: a list, the indices of gallery images to show
    same_id: a list, len(same_id) = rank_list, whether each ranked image is
      with same id as query
  """
  sort_inds = np.argsort(dist_vec)
  rank_list = []
  same_id = []
  i = 0
  for ind, g_id, g_cam in zip(sort_inds, g_ids[sort_inds], g_cams[sort_inds]):
    # Skip gallery images with same id and same camera as query
    if (q_id == g_id) and (q_cam == g_cam):
      continue
    same_id.append(q_id == g_id)
    rank_list.append(ind)
    i += 1
    if i >= rank_list_size:
      break
  return rank_list, same_id


def read_im(im_path):
  # shape [H, W, 3]
  im = np.asarray(Image.open(im_path))
  # Resize to (im_h, im_w) = (128, 64)
  resize_h_w = (128, 64)
  if (im.shape[0], im.shape[1]) != resize_h_w:
    im = cv2.resize(im, resize_h_w[::-1], interpolation=cv2.INTER_LINEAR)
  # shape [3, H, W]
  im = im.transpose(2, 0, 1)
  return im


def save_im(im, save_path):
  """im: shape [3, H, W]"""
  may_make_dir(ospdn(save_path))
  im = im.transpose(1, 2, 0)
  Image.fromarray(im).save(save_path)


def save_rank_list_to_im(rank_list, same_id, q_im_path, g_im_paths, save_path):
  """Save a query and its rank list as an image.
  Args:
    rank_list: a list, the indices of gallery images to show
    same_id: a list, len(same_id) = rank_list, whether each ranked image is
      with same id as query
    q_im_path: query image path
    g_im_paths: ALL gallery image paths
    save_path: path to save the query and its rank list as an image
  """
  ims = [read_im(q_im_path)]
  for ind, sid in zip(rank_list, same_id):
    im = read_im(g_im_paths[ind])
    # Add green boundary to true positive, red to false positive
    color = np.array([0, 255, 0]) if sid else np.array([255, 0, 0])
    im = add_border(im, 3, color)
    ims.append(im)
  im = make_im_grid(ims, 1, len(rank_list) + 1, 8, 255)
  save_im(im, save_path)


================================================
FILE: requirements.txt
================================================
opencv_python==3.2.0.7
numpy==1.11.3
scipy==0.18.1
h5py==2.6.0
tensorboardX==0.8
# for tensorboard web server
tensorflow==1.2.0

================================================
FILE: script/dataset/combine_trainval_sets.py
================================================
from __future__ import print_function

import sys
sys.path.insert(0, '.')

import os.path as osp

ospeu = osp.expanduser
ospj = osp.join
ospap = osp.abspath

from collections import defaultdict
import shutil

from bpm.utils.utils import may_make_dir
from bpm.utils.utils import save_pickle
from bpm.utils.utils import load_pickle

from bpm.utils.dataset_utils import new_im_name_tmpl
from bpm.utils.dataset_utils import parse_im_name


def move_ims(
    ori_im_paths,
    new_im_dir,
    parse_im_name,
    new_im_name_tmpl,
    new_start_id):
  """Rename and move images to new directory."""
  ids = [parse_im_name(osp.basename(p), 'id') for p in ori_im_paths]
  cams = [parse_im_name(osp.basename(p), 'cam') for p in ori_im_paths]

  unique_ids = list(set(ids))
  unique_ids.sort()
  id_mapping = dict(
    zip(unique_ids, range(new_start_id, new_start_id + len(unique_ids))))

  new_im_names = []
  cnt = defaultdict(int)
  for im_path, id, cam in zip(ori_im_paths, ids, cams):
    new_id = id_mapping[id]
    cnt[(new_id, cam)] += 1
    new_im_name = new_im_name_tmpl.format(new_id, cam, cnt[(new_id, cam)] - 1)
    shutil.copy(im_path, ospj(new_im_dir, new_im_name))
    new_im_names.append(new_im_name)
  return new_im_names, id_mapping


def combine_trainval_sets(
    im_dirs,
    partition_files,
    save_dir):
  new_im_dir = ospj(save_dir, 'trainval_images')
  may_make_dir(new_im_dir)
  new_im_names = []
  new_start_id = 0
  for im_dir, partition_file in zip(im_dirs, partition_files):
    partitions = load_pickle(partition_file)
    im_paths = [ospj(im_dir, n) for n in partitions['trainval_im_names']]
    im_paths.sort()
    new_im_names_, id_mapping = move_ims(
      im_paths, new_im_dir, parse_im_name, new_im_name_tmpl, new_start_id)
    new_start_id += len(id_mapping)
    new_im_names += new_im_names_

  new_ids = range(new_start_id)
  partitions = {'trainval_im_names': new_im_names,
                'trainval_ids2labels': dict(zip(new_ids, new_ids)),
                }
  partition_file = ospj(save_dir, 'partitions.pkl')
  save_pickle(partitions, partition_file)
  print('Partition file saved to {}'.format(partition_file))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(
    description="Combine Trainval Set of Market1501, CUHK03, DukeMTMC-reID")

  # Image directory and partition file of transformed datasets

  parser.add_argument(
    '--market1501_im_dir',
    type=str,
    default=ospeu('~/Dataset/market1501/images')
  )
  parser.add_argument(
    '--market1501_partition_file',
    type=str,
    default=ospeu('~/Dataset/market1501/partitions.pkl')
  )

  cuhk03_im_type = ['detected', 'labeled'][0]
  parser.add_argument(
    '--cuhk03_im_dir',
    type=str,
    # Remember to select the detected or labeled set.
    default=ospeu('~/Dataset/cuhk03/{}/images'.format(cuhk03_im_type))
  )
  parser.add_argument(
    '--cuhk03_partition_file',
    type=str,
    # Remember to select the detected or labeled set.
    default=ospeu('~/Dataset/cuhk03/{}/partitions.pkl'.format(cuhk03_im_type))
  )

  parser.add_argument(
    '--duke_im_dir',
    type=str,
    default=ospeu('~/Dataset/duke/images'))
  parser.add_argument(
    '--duke_partition_file',
    type=str,
    default=ospeu('~/Dataset/duke/partitions.pkl')
  )

  parser.add_argument(
    '--save_dir',
    type=str,
    default=ospeu('~/Dataset/market1501_cuhk03_duke')
  )

  args = parser.parse_args()

  im_dirs = [
    ospap(ospeu(args.market1501_im_dir)),
    ospap(ospeu(args.cuhk03_im_dir)),
    ospap(ospeu(args.duke_im_dir))
  ]
  partition_files = [
    ospap(ospeu(args.market1501_partition_file)),
    ospap(ospeu(args.cuhk03_partition_file)),
    ospap(ospeu(args.duke_partition_file))
  ]

  save_dir = ospap(ospeu(args.save_dir))
  may_make_dir(save_dir)

  combine_trainval_sets(im_dirs, partition_files, save_dir)


================================================
FILE: script/dataset/mapping_im_names_duke.py
================================================
"""Mapping original image name (relative image path) -> my new image name.
The mapping is corresponding to transform_duke.py.
"""

from __future__ import print_function

import sys
sys.path.insert(0, '.')

import os.path as osp
from collections import defaultdict

from bpm.utils.utils import save_pickle
from bpm.utils.dataset_utils import get_im_names
from bpm.utils.dataset_utils import new_im_name_tmpl


def parse_original_im_name(img_name, parse_type='id'):
  """Get the person id or cam from an image name."""
  assert parse_type in ('id', 'cam')
  if parse_type == 'id':
    parsed = int(img_name[:4])
  else:
    parsed = int(img_name[6])
  return parsed


def map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):
  """Map original im names to new im names."""
  cnt = defaultdict(int)
  new_im_names = []
  for im_name in ori_im_names:
    im_name = osp.basename(im_name)
    id = parse_im_name(im_name, 'id')
    cam = parse_im_name(im_name, 'cam')
    cnt[(id, cam)] += 1
    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)
    new_im_names.append(new_im_name)
  return new_im_names


def save_im_name_mapping(raw_dir, ori_to_new_im_name_file):
  im_names = []
  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query']:
    im_names_ = get_im_names(osp.join(raw_dir, dir_name), return_path=False, return_np=False)
    im_names_.sort()
    # Images in different original directories may have same names,
    # so here we use relative paths as original image names.
    im_names_ = [osp.join(dir_name, n) for n in im_names_]
    im_names += im_names_
  new_im_names = map_im_names(im_names, parse_original_im_name, new_im_name_tmpl)
  ori_to_new_im_name = dict(zip(im_names, new_im_names))
  save_pickle(ori_to_new_im_name, ori_to_new_im_name_file)
  print('File saved to {}'.format(ori_to_new_im_name_file))

  ##################
  # Just Some Info #
  ##################

  print('len(im_names)', len(im_names))
  print('len(set(im_names))', len(set(im_names)))
  print('len(set(new_im_names))', len(set(new_im_names)))
  print('len(ori_to_new_im_name)', len(ori_to_new_im_name))

  bounding_box_train_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_train'), return_path=False, return_np=False)
  bounding_box_test_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_test'), return_path=False, return_np=False)
  query_im_names = get_im_names(osp.join(raw_dir, 'query'), return_path=False, return_np=False)

  print('set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names))',
        set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names)))
  print('set(bounding_box_train_im_names).isdisjoint(set(query_im_names))',
        set(bounding_box_train_im_names).isdisjoint(set(query_im_names)))

  print('set(bounding_box_test_im_names).isdisjoint(set(query_im_names))',
        set(bounding_box_test_im_names).isdisjoint(set(query_im_names)))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(description="Mapping DukeMTMC-reID Image Names")
  parser.add_argument('--raw_dir', type=str, default=osp.expanduser('~/Dataset/duke/DukeMTMC-reID'))
  parser.add_argument('--ori_to_new_im_name_file', type=str, default=osp.expanduser('~/Dataset/duke/ori_to_new_im_name.pkl'))
  args = parser.parse_args()
  save_im_name_mapping(args.raw_dir, args.ori_to_new_im_name_file)

================================================
FILE: script/dataset/mapping_im_names_market1501.py
================================================
"""Mapping original image name (relative image path) -> my new image name.
The mapping is corresponding to transform_market1501.py.
"""

from __future__ import print_function

import sys
sys.path.insert(0, '.')

import os.path as osp
from collections import defaultdict

from bpm.utils.utils import save_pickle
from bpm.utils.dataset_utils import get_im_names
from bpm.utils.dataset_utils import new_im_name_tmpl


def parse_original_im_name(im_name, parse_type='id'):
  """Get the person id or cam from an image name."""
  assert parse_type in ('id', 'cam')
  if parse_type == 'id':
    parsed = -1 if im_name.startswith('-1') else int(im_name[:4])
  else:
    parsed = int(im_name[4]) if im_name.startswith('-1') \
      else int(im_name[6])
  return parsed


def map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):
  """Map original im names to new im names."""
  cnt = defaultdict(int)
  new_im_names = []
  for im_name in ori_im_names:
    im_name = osp.basename(im_name)
    id = parse_im_name(im_name, 'id')
    cam = parse_im_name(im_name, 'cam')
    cnt[(id, cam)] += 1
    new_im_name = new_im_name_tmpl.format(id, cam, cnt[(id, cam)] - 1)
    new_im_names.append(new_im_name)
  return new_im_names


def save_im_name_mapping(raw_dir, ori_to_new_im_name_file):
  im_names = []
  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query', 'gt_bbox']:
    im_names_ = get_im_names(osp.join(raw_dir, dir_name), return_path=False, return_np=False)
    im_names_.sort()
    # Filter out id -1
    if dir_name == 'bounding_box_test':
      im_names_ = [n for n in im_names_ if not n.startswith('-1')]
    # Get (id, cam) in query set
    if dir_name == 'query':
      q_ids_cams = set([(parse_original_im_name(n, 'id'), parse_original_im_name(n, 'cam')) for n in im_names_])
    # Filter out images that are not corresponding to query (id, cam)
    if dir_name == 'gt_bbox':
      im_names_ = [n for n in im_names_ if (parse_original_im_name(n, 'id'), parse_original_im_name(n, 'cam')) in q_ids_cams]
    # Images in different original directories may have same names,
    # so here we use relative paths as original image names.
    im_names_ = [osp.join(dir_name, n) for n in im_names_]
    im_names += im_names_
  new_im_names = map_im_names(im_names, parse_original_im_name, new_im_name_tmpl)
  ori_to_new_im_name = dict(zip(im_names, new_im_names))
  save_pickle(ori_to_new_im_name, ori_to_new_im_name_file)
  print('File saved to {}'.format(ori_to_new_im_name_file))

  ##################
  # Just Some Info #
  ##################

  print('len(im_names)', len(im_names))
  print('len(set(im_names))', len(set(im_names)))
  print('len(set(new_im_names))', len(set(new_im_names)))
  print('len(ori_to_new_im_name)', len(ori_to_new_im_name))

  bounding_box_train_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_train'), return_path=False, return_np=False)
  bounding_box_test_im_names = get_im_names(osp.join(raw_dir, 'bounding_box_test'), return_path=False, return_np=False)
  query_im_names = get_im_names(osp.join(raw_dir, 'query'), return_path=False, return_np=False)
  gt_bbox_im_names = get_im_names(osp.join(raw_dir, 'gt_bbox'), return_path=False, return_np=False)

  print('set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names))',
        set(bounding_box_train_im_names).isdisjoint(set(bounding_box_test_im_names)))
  print('set(bounding_box_train_im_names).isdisjoint(set(query_im_names))',
        set(bounding_box_train_im_names).isdisjoint(set(query_im_names)))
  print('set(bounding_box_train_im_names).isdisjoint(set(gt_bbox_im_names))',
        set(bounding_box_train_im_names).isdisjoint(set(gt_bbox_im_names)))

  print('set(bounding_box_test_im_names).isdisjoint(set(query_im_names))',
        set(bounding_box_test_im_names).isdisjoint(set(query_im_names)))
  print('set(bounding_box_test_im_names).isdisjoint(set(gt_bbox_im_names))',
        set(bounding_box_test_im_names).isdisjoint(set(gt_bbox_im_names)))

  print('set(query_im_names).isdisjoint(set(gt_bbox_im_names))',
        set(query_im_names).isdisjoint(set(gt_bbox_im_names)))

  print('len(query_im_names)', len(query_im_names))
  print('len(gt_bbox_im_names)', len(gt_bbox_im_names))
  print('len(set(query_im_names) & set(gt_bbox_im_names))', len(set(query_im_names) & set(gt_bbox_im_names)))
  print('len(set(query_im_names) | set(gt_bbox_im_names))', len(set(query_im_names) | set(gt_bbox_im_names)))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(description="Mapping Market-1501 Image Names")
  parser.add_argument('--raw_dir', type=str, default=osp.expanduser('~/Dataset/market1501/Market-1501-v15.09.15'))
  parser.add_argument('--ori_to_new_im_name_file', type=str, default=osp.expanduser('~/Dataset/market1501/ori_to_new_im_name.pkl'))
  args = parser.parse_args()
  save_im_name_mapping(args.raw_dir, args.ori_to_new_im_name_file)

================================================
FILE: script/dataset/transform_cuhk03.py
================================================
"""Refactor file directories, save/rename images and partition the 
train/val/test set, in order to support the unified dataset interface.
"""

from __future__ import print_function

import sys
sys.path.insert(0, '.')

from zipfile import ZipFile
import os.path as osp
import sys
import h5py
from scipy.misc import imsave
from itertools import chain

from bpm.utils.utils import may_make_dir
from bpm.utils.utils import load_pickle
from bpm.utils.utils import save_pickle

from bpm.utils.dataset_utils import partition_train_val_set
from bpm.utils.dataset_utils import new_im_name_tmpl
from bpm.utils.dataset_utils import parse_im_name


def save_images(mat_file, save_dir, new_im_name_tmpl):
  def deref(mat, ref):
    return mat[ref][:].T

  def dump(mat, refs, pid, cam, im_dir):
    """Save the images of a person under one camera."""
    for i, ref in enumerate(refs):
      im = deref(mat, ref)
      if im.size == 0 or im.ndim < 2: break
      fname = new_im_name_tmpl.format(pid, cam, i)
      imsave(osp.join(im_dir, fname), im)

  mat = h5py.File(mat_file, 'r')
  labeled_im_dir = osp.join(save_dir, 'labeled/images')
  detected_im_dir = osp.join(save_dir, 'detected/images')
  all_im_dir = osp.join(save_dir, 'all/images')

  may_make_dir(labeled_im_dir)
  may_make_dir(detected_im_dir)
  may_make_dir(all_im_dir)

  # loop through camera pairs
  pid = 0
  for labeled, detected in zip(mat['labeled'][0], mat['detected'][0]):
    labeled, detected = deref(mat, labeled), deref(mat, detected)
    assert labeled.shape == detected.shape
    # loop through ids in a camera pair
    for i in range(labeled.shape[0]):
      # We don't care about whether different persons are under same cameras,
      # we only care about the same person being under different cameras or not.
      dump(mat, labeled[i, :5], pid, 0, labeled_im_dir)
      dump(mat, labeled[i, 5:], pid, 1, labeled_im_dir)
      dump(mat, detected[i, :5], pid, 0, detected_im_dir)
      dump(mat, detected[i, 5:], pid, 1, detected_im_dir)
      dump(mat, chain(detected[i, :5], labeled[i, :5]), pid, 0, all_im_dir)
      dump(mat, chain(detected[i, 5:], labeled[i, 5:]), pid, 1, all_im_dir)
      pid += 1
      if pid % 100 == 0:
        sys.stdout.write('\033[F\033[K')
        print('Saving images {}/{}'.format(pid, 1467))


def transform(zip_file, train_test_partition_file, save_dir=None):
  """Save images and partition the train/val/test set.
  """
  print("Extracting zip file")
  root = osp.dirname(osp.abspath(zip_file))
  if save_dir is None:
    save_dir = root
  may_make_dir(save_dir)
  with ZipFile(zip_file) as z:
    z.extractall(path=save_dir)
  print("Extracting zip file done")
  mat_file = osp.join(save_dir, osp.basename(zip_file)[:-4], 'cuhk-03.mat')

  save_images(mat_file, save_dir, new_im_name_tmpl)

  if osp.exists(train_test_partition_file):
    train_test_partition = load_pickle(train_test_partition_file)
  else:
    raise RuntimeError('Train/test partition file should be provided.')

  for im_type in ['detected', 'labeled']:
    trainval_im_names = train_test_partition[im_type]['train_im_names']
    trainval_ids = list(set([parse_im_name(n, 'id')
                             for n in trainval_im_names]))
    # Sort ids, so that id-to-label mapping remains the same when running
    # the code on different machines.
    trainval_ids.sort()
    trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))
    train_val_partition = \
      partition_train_val_set(trainval_im_names, parse_im_name, num_val_ids=100)
    train_im_names = train_val_partition['train_im_names']
    train_ids = list(set([parse_im_name(n, 'id')
                          for n in train_val_partition['train_im_names']]))
    # Sort ids, so that id-to-label mapping remains the same when running
    # the code on different machines.
    train_ids.sort()
    train_ids2labels = dict(zip(train_ids, range(len(train_ids))))

    # A mark is used to denote whether the image is from
    #   query (mark == 0), or
    #   gallery (mark == 1), or
    #   multi query (mark == 2) set

    val_marks = [0, ] * len(train_val_partition['val_query_im_names']) \
                + [1, ] * len(train_val_partition['val_gallery_im_names'])
    val_im_names = list(train_val_partition['val_query_im_names']) \
                   + list(train_val_partition['val_gallery_im_names'])
    test_im_names = list(train_test_partition[im_type]['query_im_names']) \
                    + list(train_test_partition[im_type]['gallery_im_names'])
    test_marks = [0, ] * len(train_test_partition[im_type]['query_im_names']) \
                 + [1, ] * len(
      train_test_partition[im_type]['gallery_im_names'])
    partitions = {'trainval_im_names': trainval_im_names,
                  'trainval_ids2labels': trainval_ids2labels,
                  'train_im_names': train_im_names,
                  'train_ids2labels': train_ids2labels,
                  'val_im_names': val_im_names,
                  'val_marks': val_marks,
                  'test_im_names': test_im_names,
                  'test_marks': test_marks}
    partition_file = osp.join(save_dir, im_type, 'partitions.pkl')
    save_pickle(partitions, partition_file)
    print('Partition file for "{}" saved to {}'.format(im_type, partition_file))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(description="Transform CUHK03 Dataset")
  parser.add_argument(
    '--zip_file',
    type=str,
    default='~/Dataset/cuhk03/cuhk03_release.zip')
  parser.add_argument(
    '--save_dir',
    type=str,
    default='~/Dataset/cuhk03')
  parser.add_argument(
    '--train_test_partition_file',
    type=str,
    default='~/Dataset/cuhk03/re_ranking_train_test_split.pkl')
  args = parser.parse_args()
  zip_file = osp.abspath(osp.expanduser(args.zip_file))
  train_test_partition_file = osp.abspath(osp.expanduser(
    args.train_test_partition_file))
  save_dir = osp.abspath(osp.expanduser(args.save_dir))
  transform(zip_file, train_test_partition_file, save_dir)


================================================
FILE: script/dataset/transform_duke.py
================================================
"""Refactor file directories, save/rename images and partition the 
train/val/test set, in order to support the unified dataset interface.
"""

from __future__ import print_function

import sys
sys.path.insert(0, '.')

from zipfile import ZipFile
import os.path as osp
import numpy as np

from bpm.utils.utils import may_make_dir
from bpm.utils.utils import save_pickle

from bpm.utils.dataset_utils import get_im_names
from bpm.utils.dataset_utils import partition_train_val_set
from bpm.utils.dataset_utils import new_im_name_tmpl
from bpm.utils.dataset_utils import parse_im_name as parse_new_im_name
from bpm.utils.dataset_utils import move_ims


def parse_original_im_name(img_name, parse_type='id'):
  """Get the person id or cam from an image name."""
  assert parse_type in ('id', 'cam')
  if parse_type == 'id':
    parsed = int(img_name[:4])
  else:
    parsed = int(img_name[6])
  return parsed


def save_images(zip_file, save_dir=None, train_test_split_file=None):
  """Rename and move all used images to a directory."""

  print("Extracting zip file")
  root = osp.dirname(osp.abspath(zip_file))
  if save_dir is None:
    save_dir = root
  may_make_dir(save_dir)
  with ZipFile(zip_file) as z:
    z.extractall(path=save_dir)
  print("Extracting zip file done")

  new_im_dir = osp.join(save_dir, 'images')
  may_make_dir(new_im_dir)
  raw_dir = osp.join(save_dir, osp.basename(zip_file)[:-4])

  im_paths = []
  nums = []

  for dir_name in ['bounding_box_train', 'bounding_box_test', 'query']:
    im_paths_ = get_im_names(osp.join(raw_dir, dir_name),
                             return_path=True, return_np=False)
    im_paths_.sort()
    im_paths += list(im_paths_)
    nums.append(len(im_paths_))

  im_names = move_ims(
    im_paths, new_im_dir, parse_original_im_name, new_im_name_tmpl)

  split = dict()
  keys = ['trainval_im_names', 'gallery_im_names', 'q_im_names']
  inds = [0] + nums
  inds = np.cumsum(inds)
  for i, k in enumerate(keys):
    split[k] = im_names[inds[i]:inds[i + 1]]

  save_pickle(split, train_test_split_file)
  print('Saving images done.')
  return split


def transform(zip_file, save_dir=None):
  """Refactor file directories, rename images and partition the train/val/test 
  set.
  """

  train_test_split_file = osp.join(save_dir, 'train_test_split.pkl')
  train_test_split = save_images(zip_file, save_dir, train_test_split_file)
  # train_test_split = load_pickle(train_test_split_file)

  # partition train/val/test set

  trainval_ids = list(set([parse_new_im_name(n, 'id')
                           for n in train_test_split['trainval_im_names']]))
  # Sort ids, so that id-to-label mapping remains the same when running
  # the code on different machines.
  trainval_ids.sort()
  trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))
  partitions = partition_train_val_set(
    train_test_split['trainval_im_names'], parse_new_im_name, num_val_ids=100)
  train_im_names = partitions['train_im_names']
  train_ids = list(set([parse_new_im_name(n, 'id')
                        for n in partitions['train_im_names']]))
  # Sort ids, so that id-to-label mapping remains the same when running
  # the code on different machines.
  train_ids.sort()
  train_ids2labels = dict(zip(train_ids, range(len(train_ids))))

  # A mark is used to denote whether the image is from
  #   query (mark == 0), or
  #   gallery (mark == 1), or
  #   multi query (mark == 2) set

  val_marks = [0, ] * len(partitions['val_query_im_names']) \
              + [1, ] * len(partitions['val_gallery_im_names'])
  val_im_names = list(partitions['val_query_im_names']) \
                 + list(partitions['val_gallery_im_names'])

  test_im_names = list(train_test_split['q_im_names']) \
                  + list(train_test_split['gallery_im_names'])
  test_marks = [0, ] * len(train_test_split['q_im_names']) \
               + [1, ] * len(train_test_split['gallery_im_names'])

  partitions = {'trainval_im_names': train_test_split['trainval_im_names'],
                'trainval_ids2labels': trainval_ids2labels,
                'train_im_names': train_im_names,
                'train_ids2labels': train_ids2labels,
                'val_im_names': val_im_names,
                'val_marks': val_marks,
                'test_im_names': test_im_names,
                'test_marks': test_marks}
  partition_file = osp.join(save_dir, 'partitions.pkl')
  save_pickle(partitions, partition_file)
  print('Partition file saved to {}'.format(partition_file))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(
    description="Transform DukeMTMC-reID Dataset")
  parser.add_argument('--zip_file', type=str,
                      default='~/Dataset/duke/DukeMTMC-reID.zip')
  parser.add_argument('--save_dir', type=str,
                      default='~/Dataset/duke')
  args = parser.parse_args()
  zip_file = osp.abspath(osp.expanduser(args.zip_file))
  save_dir = osp.abspath(osp.expanduser(args.save_dir))
  transform(zip_file, save_dir)


================================================
FILE: script/dataset/transform_market1501.py
================================================
"""Refactor file directories, save/rename images and partition the 
train/val/test set, in order to support the unified dataset interface.
"""

from __future__ import print_function

import sys
sys.path.insert(0, '.')

from zipfile import ZipFile
import os.path as osp
import numpy as np

from bpm.utils.utils import may_make_dir
from bpm.utils.utils import save_pickle
from bpm.utils.utils import load_pickle

from bpm.utils.dataset_utils import get_im_names
from bpm.utils.dataset_utils import partition_train_val_set
from bpm.utils.dataset_utils import new_im_name_tmpl
from bpm.utils.dataset_utils import parse_im_name as parse_new_im_name
from bpm.utils.dataset_utils import move_ims


def parse_original_im_name(im_name, parse_type='id'):
  """Get the person id or cam from an image name."""
  assert parse_type in ('id', 'cam')
  if parse_type == 'id':
    parsed = -1 if im_name.startswith('-1') else int(im_name[:4])
  else:
    parsed = int(im_name[4]) if im_name.startswith('-1') \
      else int(im_name[6])
  return parsed


def save_images(zip_file, save_dir=None, train_test_split_file=None):
  """Rename and move all used images to a directory."""

  print("Extracting zip file")
  root = osp.dirname(osp.abspath(zip_file))
  if save_dir is None:
    save_dir = root
  may_make_dir(osp.abspath(save_dir))
  with ZipFile(zip_file) as z:
    z.extractall(path=save_dir)
  print("Extracting zip file done")

  new_im_dir = osp.join(save_dir, 'images')
  may_make_dir(osp.abspath(new_im_dir))
  raw_dir = osp.join(save_dir, osp.basename(zip_file)[:-4])

  im_paths = []
  nums = []

  im_paths_ = get_im_names(osp.join(raw_dir, 'bounding_box_train'),
                           return_path=True, return_np=False)
  im_paths_.sort()
  im_paths += list(im_paths_)
  nums.append(len(im_paths_))

  im_paths_ = get_im_names(osp.join(raw_dir, 'bounding_box_test'),
                           return_path=True, return_np=False)
  im_paths_.sort()
  im_paths_ = [p for p in im_paths_ if not osp.basename(p).startswith('-1')]
  im_paths += list(im_paths_)
  nums.append(len(im_paths_))

  im_paths_ = get_im_names(osp.join(raw_dir, 'query'),
                           return_path=True, return_np=False)
  im_paths_.sort()
  im_paths += list(im_paths_)
  nums.append(len(im_paths_))
  q_ids_cams = set([(parse_original_im_name(osp.basename(p), 'id'),
                     parse_original_im_name(osp.basename(p), 'cam'))
                    for p in im_paths_])

  im_paths_ = get_im_names(osp.join(raw_dir, 'gt_bbox'),
                           return_path=True, return_np=False)
  im_paths_.sort()
  # Only gather images for those ids and cams used in testing.
  im_paths_ = [p for p in im_paths_
               if (parse_original_im_name(osp.basename(p), 'id'),
                   parse_original_im_name(osp.basename(p), 'cam'))
               in q_ids_cams]
  im_paths += list(im_paths_)
  nums.append(len(im_paths_))

  im_names = move_ims(
    im_paths, new_im_dir, parse_original_im_name, new_im_name_tmpl)

  split = dict()
  keys = ['trainval_im_names', 'gallery_im_names', 'q_im_names', 'mq_im_names']
  inds = [0] + nums
  inds = np.cumsum(np.array(inds))
  for i, k in enumerate(keys):
    split[k] = im_names[inds[i]:inds[i + 1]]

  save_pickle(split, train_test_split_file)
  print('Saving images done.')
  return split


def transform(zip_file, save_dir=None):
  """Refactor file directories, rename images and partition the train/val/test 
  set.
  """

  train_test_split_file = osp.join(save_dir, 'train_test_split.pkl')
  train_test_split = save_images(zip_file, save_dir, train_test_split_file)
  # train_test_split = load_pickle(train_test_split_file)

  # partition train/val/test set

  trainval_ids = list(set([parse_new_im_name(n, 'id')
                           for n in train_test_split['trainval_im_names']]))
  # Sort ids, so that id-to-label mapping remains the same when running
  # the code on different machines.
  trainval_ids.sort()
  trainval_ids2labels = dict(zip(trainval_ids, range(len(trainval_ids))))
  partitions = partition_train_val_set(
    train_test_split['trainval_im_names'], parse_new_im_name, num_val_ids=100)
  train_im_names = partitions['train_im_names']
  train_ids = list(set([parse_new_im_name(n, 'id')
                        for n in partitions['train_im_names']]))
  # Sort ids, so that id-to-label mapping remains the same when running
  # the code on different machines.
  train_ids.sort()
  train_ids2labels = dict(zip(train_ids, range(len(train_ids))))

  # A mark is used to denote whether the image is from
  #   query (mark == 0), or
  #   gallery (mark == 1), or
  #   multi query (mark == 2) set

  val_marks = [0, ] * len(partitions['val_query_im_names']) \
              + [1, ] * len(partitions['val_gallery_im_names'])
  val_im_names = list(partitions['val_query_im_names']) \
                 + list(partitions['val_gallery_im_names'])

  test_im_names = list(train_test_split['q_im_names']) \
                  + list(train_test_split['mq_im_names']) \
                  + list(train_test_split['gallery_im_names'])
  test_marks = [0, ] * len(train_test_split['q_im_names']) \
               + [2, ] * len(train_test_split['mq_im_names']) \
               + [1, ] * len(train_test_split['gallery_im_names'])

  partitions = {'trainval_im_names': train_test_split['trainval_im_names'],
                'trainval_ids2labels': trainval_ids2labels,
                'train_im_names': train_im_names,
                'train_ids2labels': train_ids2labels,
                'val_im_names': val_im_names,
                'val_marks': val_marks,
                'test_im_names': test_im_names,
                'test_marks': test_marks}
  partition_file = osp.join(save_dir, 'partitions.pkl')
  save_pickle(partitions, partition_file)
  print('Partition file saved to {}'.format(partition_file))


if __name__ == '__main__':
  import argparse

  parser = argparse.ArgumentParser(description="Transform Market1501 Dataset")
  parser.add_argument('--zip_file', type=str,
                      default='~/Dataset/market1501/Market-1501-v15.09.15.zip')
  parser.add_argument('--save_dir', type=str,
                      default='~/Dataset/market1501')
  args = parser.parse_args()
  zip_file = osp.abspath(osp.expanduser(args.zip_file))
  save_dir = osp.abspath(osp.expanduser(args.save_dir))
  transform(zip_file, save_dir)


================================================
FILE: script/experiment/train_pcb.py
================================================
from __future__ import print_function

import sys

sys.path.insert(0, '.')

import torch
from torch.autograd import Variable
import torch.optim as optim
from torch.nn.parallel import DataParallel

import time
import os.path as osp
from tensorboardX import SummaryWriter
import numpy as np
import argparse

from bpm.dataset import create_dataset
from bpm.model.PCBModel import PCBModel as Model

from bpm.utils.utils import time_str
from bpm.utils.utils import str2bool
from bpm.utils.utils import may_set_mode
from bpm.utils.utils import load_state_dict
from bpm.utils.utils import load_ckpt
from bpm.utils.utils import save_ckpt
from bpm.utils.utils import set_devices
from bpm.utils.utils import AverageMeter
from bpm.utils.utils import to_scalar
from bpm.utils.utils import ReDirectSTD
from bpm.utils.utils import set_seed
from bpm.utils.utils import adjust_lr_staircase


class Config(object):
  def __init__(self):

    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--sys_device_ids', type=eval, default=(0,))
    parser.add_argument('-r', '--run', type=int, default=1)
    parser.add_argument('--set_seed', type=str2bool, default=False)
    parser.add_argument('--dataset', type=str, default='market1501',
                        choices=['market1501', 'cuhk03', 'duke', 'combined'])
    parser.add_argument('--trainset_part', type=str, default='trainval',
                        choices=['trainval', 'train'])

    parser.add_argument('--resize_h_w', type=eval, default=(384, 128))
    # These several only for training set
    parser.add_argument('--crop_prob', type=float, default=0)
    parser.add_argument('--crop_ratio', type=float, default=1)
    parser.add_argument('--mirror', type=str2bool, default=True)
    parser.add_argument('--batch_size', type=int, default=64)

    parser.add_argument('--log_to_file', type=str2bool, default=True)
    parser.add_argument('--steps_per_log', type=int, default=20)
    parser.add_argument('--epochs_per_val', type=int, default=1)

    parser.add_argument('--last_conv_stride', type=int, default=1, choices=[1, 2])
    # When the stride is changed to 1, we can compensate for the receptive field
    # using dilated convolution. However, experiments show dilated convolution is useless.
    parser.add_argument('--last_conv_dilation', type=int, default=1, choices=[1, 2])
    parser.add_argument('--num_stripes', type=int, default=6)
    parser.add_argument('--local_conv_out_channels', type=int, default=256)

    parser.add_argument('--only_test', type=str2bool, default=False)
    parser.add_argument('--resume', type=str2bool, default=False)
    parser.add_argument('--exp_dir', type=str, default='')
    parser.add_argument('--model_weight_file', type=str, default='')

    parser.add_argument('--new_params_lr', type=float, default=0.1)
    parser.add_argument('--finetuned_params_lr', type=float, default=0.01)
    parser.add_argument('--staircase_decay_at_epochs',
                        type=eval, default=(41,))
    parser.add_argument('--staircase_decay_multiply_factor',
                        type=float, default=0.1)
    parser.add_argument('--total_epochs', type=int, default=60)

    args = parser.parse_args()

    # gpu ids
    self.sys_device_ids = args.sys_device_ids

    # If you want to make your results exactly reproducible, you have
    # to fix a random seed.
    if args.set_seed:
      self.seed = 1
    else:
      self.seed = None

    # The experiments can be run for several times and performances be averaged.
    # `run` starts from `1`, not `0`.
    self.run = args.run

    ###########
    # Dataset #
    ###########

    # If you want to make your results exactly reproducible, you have
    # to also set num of threads to 1 during training.
    if self.seed is not None:
      self.prefetch_threads = 1
    else:
      self.prefetch_threads = 2

    self.dataset = args.dataset
    self.trainset_part = args.trainset_part

    # Image Processing

    # Just for training set
    self.crop_prob = args.crop_prob
    self.crop_ratio = args.crop_ratio
    self.resize_h_w = args.resize_h_w

    # Whether to scale by 1/255
    self.scale_im = True
    self.im_mean = [0.486, 0.459, 0.408]
    self.im_std = [0.229, 0.224, 0.225]

    self.train_mirror_type = 'random' if args.mirror else None
    self.train_batch_size = args.batch_size
    self.train_final_batch = False
    self.train_shuffle = True

    self.test_mirror_type = None
    self.test_batch_size = 32
    self.test_final_batch = True
    self.test_shuffle = False

    dataset_kwargs = dict(
      name=self.dataset,
      resize_h_w=self.resize_h_w,
      scale=self.scale_im,
      im_mean=self.im_mean,
      im_std=self.im_std,
      batch_dims='NCHW',
      num_prefetch_threads=self.prefetch_threads)

    prng = np.random
    if self.seed is not None:
      prng = np.random.RandomState(self.seed)
    self.train_set_kwargs = dict(
      part=self.trainset_part,
      batch_size=self.train_batch_size,
      final_batch=self.train_final_batch,
      shuffle=self.train_shuffle,
      crop_prob=self.crop_prob,
      crop_ratio=self.crop_ratio,
      mirror_type=self.train_mirror_type,
      prng=prng)
    self.train_set_kwargs.update(dataset_kwargs)

    prng = np.random
    if self.seed is not None:
      prng = np.random.RandomState(self.seed)
    self.val_set_kwargs = dict(
      part='val',
      batch_size=self.test_batch_size,
      final_batch=self.test_final_batch,
      shuffle=self.test_shuffle,
      mirror_type=self.test_mirror_type,
      prng=prng)
    self.val_set_kwargs.update(dataset_kwargs)

    prng = np.random
    if self.seed is not None:
      prng = np.random.RandomState(self.seed)
    self.test_set_kwargs = dict(
      part='test',
      batch_size=self.test_batch_size,
      final_batch=self.test_final_batch,
      shuffle=self.test_shuffle,
      mirror_type=self.test_mirror_type,
      prng=prng)
    self.test_set_kwargs.update(dataset_kwargs)

    ###############
    # ReID Model  #
    ###############

    # The last block of ResNet has stride 2. We can set the stride to 1 so that
    # the spatial resolution before global pooling is doubled.
    self.last_conv_stride = args.last_conv_stride
    # When the stride is changed to 1, we can compensate for the receptive field
    # using dilated convolution. However, experiments show dilated convolution is useless.
    self.last_conv_dilation = args.last_conv_dilation
    # Number of stripes (parts)
    self.num_stripes = args.num_stripes
    # Output channel of 1x1 conv
    self.local_conv_out_channels = args.local_conv_out_channels

    #############
    # Training  #
    #############

    self.momentum = 0.9
    self.weight_decay = 0.0005

    # Initial learning rate
    self.new_params_lr = args.new_params_lr
    self.finetuned_params_lr = args.finetuned_params_lr
    self.staircase_decay_at_epochs = args.staircase_decay_at_epochs
    self.staircase_decay_multiply_factor = args.staircase_decay_multiply_factor
    # Number of epochs to train
    self.total_epochs = args.total_epochs

    # How often (in epochs) to test on val set.
    self.epochs_per_val = args.epochs_per_val

    # How often (in batches) to log. If only need to log the average
    # information for each epoch, set this to a large value, e.g. 1e10.
    self.steps_per_log = args.steps_per_log

    # Only test and without training.
    self.only_test = args.only_test

    self.resume = args.resume

    #######
    # Log #
    #######

    # If True,
    # 1) stdout and stderr will be redirected to file,
    # 2) training loss etc will be written to tensorboard,
    # 3) checkpoint will be saved
    self.log_to_file = args.log_to_file

    # The root dir of logs.
    if args.exp_dir == '':
      self.exp_dir = osp.join(
        'exp/train',
        '{}'.format(self.dataset),
        'run{}'.format(self.run),
      )
    else:
      self.exp_dir = args.exp_dir

    self.stdout_file = osp.join(
      self.exp_dir, 'stdout_{}.txt'.format(time_str()))
    self.stderr_file = osp.join(
      self.exp_dir, 'stderr_{}.txt'.format(time_str()))

    # Saving model weights and optimizer states, for resuming.
    self.ckpt_file = osp.join(self.exp_dir, 'ckpt.pth')
    # Just for loading a pretrained model; no optimizer states is needed.
    self.model_weight_file = args.model_weight_file


class ExtractFeature(object):
  """A function to be called in the val/test set, to extract features.
  Args:
    TVT: A callable to transfer images to specific device.
  """

  def __init__(self, model, TVT):
    self.model = model
    self.TVT = TVT

  def __call__(self, ims):
    old_train_eval_model = self.model.training
    # Set eval mode.
    # Force all BN layers to use global mean and variance, also disable
    # dropout.
    self.model.eval()

    ims = Variable(self.TVT(torch.from_numpy(ims).float()))
    try:
      local_feat_list, logits_list = self.model(ims)
    except:
      local_feat_list = self.model(ims)
    feat = [lf.data.cpu().numpy() for lf in local_feat_list]
    feat = np.concatenate(feat, axis=1)

    # Restore the model to its old train/eval mode.
    self.model.train(old_train_eval_model)
    return feat


def main():
  cfg = Config()

  # Redirect logs to both console and file.
  if cfg.log_to_file:
    ReDirectSTD(cfg.stdout_file, 'stdout', False)
    ReDirectSTD(cfg.stderr_file, 'stderr', False)

  # Lazily create SummaryWriter
  writer = None

  TVT, TMO = set_devices(cfg.sys_device_ids)

  if cfg.seed is not None:
    set_seed(cfg.seed)

  # Dump the configurations to log.
  import pprint
  print('-' * 60)
  print('cfg.__dict__')
  pprint.pprint(cfg.__dict__)
  print('-' * 60)

  ###########
  # Dataset #
  ###########

  train_set = create_dataset(**cfg.train_set_kwargs)
  num_classes = len(train_set.ids2labels)
  # The combined dataset does not provide val set currently.
  val_set = None if cfg.dataset == 'combined' else create_dataset(**cfg.val_set_kwargs)

  test_sets = []
  test_set_names = []
  if cfg.dataset == 'combined':
    for name in ['market1501', 'cuhk03', 'duke']:
      cfg.test_set_kwargs['name'] = name
      test_sets.append(create_dataset(**cfg.test_set_kwargs))
      test_set_names.append(name)
  else:
    test_sets.append(create_dataset(**cfg.test_set_kwargs))
    test_set_names.append(cfg.dataset)

  ###########
  # Models  #
  ###########

  model = Model(
    last_conv_stride=cfg.last_conv_stride,
    num_stripes=cfg.num_stripes,
    local_conv_out_channels=cfg.local_conv_out_channels,
    num_classes=num_classes
  )
  # Model wrapper
  model_w = DataParallel(model)

  #############################
  # Criteria and Optimizers   #
  #############################

  criterion = torch.nn.CrossEntropyLoss()

  # To finetune from ImageNet weights
  finetuned_params = list(model.base.parameters())
  # To train from scratch
  new_params = [p for n, p in model.named_parameters()
                if not n.startswith('base.')]
  param_groups = [{'params': finetuned_params, 'lr': cfg.finetuned_params_lr},
                  {'params': new_params, 'lr': cfg.new_params_lr}]
  optimizer = optim.SGD(
    param_groups,
    momentum=cfg.momentum,
    weight_decay=cfg.weight_decay)

  # Bind them together just to save some codes in the following usage.
  modules_optims = [model, optimizer]

  ################################
  # May Resume Models and Optims #
  ################################

  if cfg.resume:
    resume_ep, scores = load_ckpt(modules_optims, cfg.ckpt_file)

  # May Transfer Models and Optims to Specified Device. Transferring optimizer
  # is to cope with the case when you load the checkpoint to a new device.
  TMO(modules_optims)

  ########
  # Test #
  ########

  def test(load_model_weight=False):
    if load_model_weight:
      if cfg.model_weight_file != '':
        map_location = (lambda storage, loc: storage)
        sd = torch.load(cfg.model_weight_file, map_location=map_location)
        load_state_dict(model, sd)
        print('Loaded model weights from {}'.format(cfg.model_weight_file))
      else:
        load_ckpt(modules_optims, cfg.ckpt_file)

    for test_set, name in zip(test_sets, test_set_names):
      test_set.set_feat_func(ExtractFeature(model_w, TVT))
      print('\n=========> Test on dataset: {} <=========\n'.format(name))
      test_set.eval(
        normalize_feat=True,
        verbose=True)

  def validate():
    if val_set.extract_feat_func is None:
      val_set.set_feat_func(ExtractFeature(model_w, TVT))
    print('\n===== Test on validation set =====\n')
    mAP, cmc_scores, _, _ = val_set.eval(
      normalize_feat=True,
      to_re_rank=False,
      verbose=True)
    print()
    return mAP, cmc_scores[0]

  if cfg.only_test:
    test(load_model_weight=True)
    return

  ############
  # Training #
  ############

  start_ep = resume_ep if cfg.resume else 0
  for ep in range(start_ep, cfg.total_epochs):

    # Adjust Learning Rate
    adjust_lr_staircase(
      optimizer.param_groups,
      [cfg.finetuned_params_lr, cfg.new_params_lr],
      ep + 1,
      cfg.staircase_decay_at_epochs,
      cfg.staircase_decay_multiply_factor)

    may_set_mode(modules_optims, 'train')

    # For recording loss
    loss_meter = AverageMeter()

    ep_st = time.time()
    step = 0
    epoch_done = False
    while not epoch_done:

      step += 1
      step_st = time.time()

      ims, im_names, labels, mirrored, epoch_done = train_set.next_batch()

      ims_var = Variable(TVT(torch.from_numpy(ims).float()))
      labels_var = Variable(TVT(torch.from_numpy(labels).long()))

      _, logits_list = model_w(ims_var)
      loss = torch.sum(
        torch.cat([criterion(logits, labels_var) for logits in logits_list]))

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()

      ############
      # Step Log #
      ############

      loss_meter.update(to_scalar(loss))

      if step % cfg.steps_per_log == 0:
        log = '\tStep {}/Ep {}, {:.2f}s, loss {:.4f}'.format(
          step, ep + 1, time.time() - step_st, loss_meter.val)
        print(log)

    #############
    # Epoch Log #
    #############

    log = 'Ep {}, {:.2f}s, loss {:.4f}'.format(
      ep + 1, time.time() - ep_st, loss_meter.avg)
    print(log)

    ##########################
    # Test on Validation Set #
    ##########################

    mAP, Rank1 = 0, 0
    if ((ep + 1) % cfg.epochs_per_val == 0) and (val_set is not None):
      mAP, Rank1 = validate()

    # Log to TensorBoard

    if cfg.log_to_file:
      if writer is None:
        writer = SummaryWriter(log_dir=osp.join(cfg.exp_dir, 'tensorboard'))
      writer.add_scalars(
        'val scores',
        dict(mAP=mAP,
             Rank1=Rank1),
        ep)
      writer.add_scalars(
        'loss',
        dict(loss=loss_meter.avg, ),
        ep)

    # save ckpt
    if cfg.log_to_file:
      save_ckpt(modules_optims, ep + 1, 0, cfg.ckpt_file)

  ########
  # Test #
  ########

  test(load_model_weight=False)


if __name__ == '__main__':
  main()


================================================
FILE: script/experiment/visualize_rank_list.py
================================================
from __future__ import print_function

import sys

sys.path.insert(0, '.')

import torch
from torch.autograd import Variable
from torch.nn.parallel import DataParallel

import os.path as osp
from os.path import join as ospj

import numpy as np
import argparse

from bpm.dataset import create_dataset
from bpm.model.PCBModel import PCBModel as Model

from bpm.utils.utils import time_str
from bpm.utils.utils import str2bool
from bpm.utils.utils import load_state_dict
from bpm.utils.utils import set_devices
from bpm.utils.utils import ReDirectSTD
from bpm.utils.utils import measure_time
from bpm.utils.distance import compute_dist
from bpm.utils.visualization import get_rank_list
from bpm.utils.visualization import save_rank_list_to_im


class Config(object):
  def __init__(self):

    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--sys_device_ids', type=eval, default=(0,))
    parser.add_argument('--dataset', type=str, default='market1501',
                        choices=['market1501', 'cuhk03', 'duke'])

    parser.add_argument('--num_queries', type=int, default=16)
    parser.add_argument('--rank_list_size', type=int, default=10)

    parser.add_argument('--resize_h_w', type=eval, default=(384, 128))
    parser.add_argument('--last_conv_stride', type=int, default=1,
                        choices=[1, 2])
    parser.add_argument('--num_stripes', type=int, default=6)
    parser.add_argument('--local_conv_out_channels', type=int, default=256)

    parser.add_argument('--log_to_file', type=str2bool, default=True)
    parser.add_argument('--exp_dir', type=str, default='')
    parser.add_argument('--ckpt_file', type=str, default='')
    parser.add_argument('--model_weight_file', type=str, default='')

    args = parser.parse_args()

    # gpu ids
    self.sys_device_ids = args.sys_device_ids

    self.num_queries = args.num_queries
    self.rank_list_size = args.rank_list_size

    ###########
    # Dataset #
    ###########

    self.dataset = args.dataset
    self.prefetch_threads = 2

    # Image Processing

    self.resize_h_w = args.resize_h_w

    # Whether to scale by 1/255
    self.scale_im = True
    self.im_mean = [0.486, 0.459, 0.408]
    self.im_std = [0.229, 0.224, 0.225]

    self.test_mirror_type = None
    self.test_batch_size = 32
    self.test_final_batch = True
    self.test_shuffle = False

    dataset_kwargs = dict(
      name=self.dataset,
      resize_h_w=self.resize_h_w,
      scale=self.scale_im,
      im_mean=self.im_mean,
      im_std=self.im_std,
      batch_dims='NCHW',
      num_prefetch_threads=self.prefetch_threads)

    prng = np.random
    self.test_set_kwargs = dict(
      part='test',
      batch_size=self.test_batch_size,
      final_batch=self.test_final_batch,
      shuffle=self.test_shuffle,
      mirror_type=self.test_mirror_type,
      prng=prng)
    self.test_set_kwargs.update(dataset_kwargs)

    ###############
    # ReID Model  #
    ###############

    # The last block of ResNet has stride 2. We can set the stride to 1 so that
    # the spatial resolution before global pooling is doubled.
    self.last_conv_stride = args.last_conv_stride
    # Number of stripes (parts)
    self.num_stripes = args.num_stripes
    # Output channel of 1x1 conv
    self.local_conv_out_channels = args.local_conv_out_channels

    #######
    # Log #
    #######

    # If True, stdout and stderr will be redirected to file
    self.log_to_file = args.log_to_file

    # The root dir of logs.
    if args.exp_dir == '':
      self.exp_dir = osp.join(
        'exp/visualize_rank_list',
        '{}'.format(self.dataset),
      )
    else:
      self.exp_dir = args.exp_dir

    self.stdout_file = osp.join(
      self.exp_dir, 'stdout_{}.txt'.format(time_str()))
    self.stderr_file = osp.join(
      self.exp_dir, 'stderr_{}.txt'.format(time_str()))

    # Model weights and optimizer states, for resuming.
    self.ckpt_file = args.ckpt_file
    # Just for loading a pretrained model; no optimizer states is needed.
    self.model_weight_file = args.model_weight_file


class ExtractFeature(object):
  """A function to be called in the val/test set, to extract features.
  Args:
    TVT: A callable to transfer images to specific device.
  """

  def __init__(self, model, TVT):
    self.model = model
    self.TVT = TVT

  def __call__(self, ims):
    old_train_eval_model = self.model.training
    # Set eval mode.
    # Force all BN layers to use global mean and variance, also disable
    # dropout.
    self.model.eval()

    ims = Variable(self.TVT(torch.from_numpy(ims).float()))
    try:
      local_feat_list, logits_list = self.model(ims)
    except:
      local_feat_list = self.model(ims)
    feat = [lf.data.cpu().numpy() for lf in local_feat_list]
    feat = np.concatenate(feat, axis=1)

    # Restore the model to its old train/eval mode.
    self.model.train(old_train_eval_model)
    return feat


def main():
  cfg = Config()

  # Redirect logs to both console and file.
  if cfg.log_to_file:
    ReDirectSTD(cfg.stdout_file, 'stdout', False)
    ReDirectSTD(cfg.stderr_file, 'stderr', False)

  TVT, TMO = set_devices(cfg.sys_device_ids)

  # Dump the configurations to log.
  import pprint
  print('-' * 60)
  print('cfg.__dict__')
  pprint.pprint(cfg.__dict__)
  print('-' * 60)

  ###########
  # Dataset #
  ###########

  test_set = create_dataset(**cfg.test_set_kwargs)

  #########
  # Model #
  #########

  model = Model(
    last_conv_stride=cfg.last_conv_stride,
    num_stripes=cfg.num_stripes,
    local_conv_out_channels=cfg.local_conv_out_channels,
    num_classes=0
  )
  # Model wrapper
  model_w = DataParallel(model)

  # May Transfer Model to Specified Device.
  TMO([model])

  #####################
  # Load Model Weight #
  #####################

  # To first load weights to CPU
  map_location = (lambda storage, loc: storage)
  used_file = cfg.model_weight_file or cfg.ckpt_file
  loaded = torch.load(used_file, map_location=map_location)
  if cfg.model_weight_file == '':
    loaded = loaded['state_dicts'][0]
  load_state_dict(model, loaded)
  print('Loaded model weights from {}'.format(used_file))

  ###################
  # Extract Feature #
  ###################

  test_set.set_feat_func(ExtractFeature(model_w, TVT))

  with measure_time('Extracting feature...', verbose=True):
    feat, ids, cams, im_names, marks = test_set.extract_feat(True, verbose=True)

  #######################
  # Select Query Images #
  #######################

  # Fix some query images, so that the visualization for different models can
  # be compared.

  # Sort in the order of image names
  inds = np.argsort(im_names)
  feat, ids, cams, im_names, marks = \
    feat[inds], ids[inds], cams[inds], im_names[inds], marks[inds]

  # query, gallery index mask
  is_q = marks == 0
  is_g = marks == 1

  prng = np.random.RandomState(1)
  # selected query indices
  sel_q_inds = prng.permutation(range(np.sum(is_q)))[:cfg.num_queries]

  q_ids = ids[is_q][sel_q_inds]
  q_cams = cams[is_q][sel_q_inds]
  q_feat = feat[is_q][sel_q_inds]
  q_im_names = im_names[is_q][sel_q_inds]

  ####################
  # Compute Distance #
  ####################

  # query-gallery distance
  q_g_dist = compute_dist(q_feat, feat[is_g], type='euclidean')

  ###########################
  # Save Rank List as Image #
  ###########################

  q_im_paths = [ospj(test_set.im_dir, n) for n in q_im_names]
  save_paths = [ospj(cfg.exp_dir, 'rank_lists', n) for n in q_im_names]
  g_im_paths = [ospj(test_set.im_dir, n) for n in im_names[is_g]]

  for dist_vec, q_id, q_cam, q_im_path, save_path in zip(
      q_g_dist, q_ids, q_cams, q_im_paths, save_paths):

    rank_list, same_id = get_rank_list(
      dist_vec, q_id, q_cam, ids[is_g], cams[is_g], cfg.rank_list_size)

    save_rank_list_to_im(rank_list, same_id, q_im_path, g_im_paths, save_path)


if __name__ == '__main__':
  main()
Download .txt
gitextract_1xyz5y1s/

├── .gitignore
├── README.md
├── bpm/
│   ├── __init__.py
│   ├── dataset/
│   │   ├── Dataset.py
│   │   ├── PreProcessImage.py
│   │   ├── Prefetcher.py
│   │   ├── TestSet.py
│   │   ├── TrainSet.py
│   │   └── __init__.py
│   ├── model/
│   │   ├── PCBModel.py
│   │   ├── __init__.py
│   │   └── resnet.py
│   └── utils/
│       ├── __init__.py
│       ├── dataset_utils.py
│       ├── distance.py
│       ├── metric.py
│       ├── re_ranking.py
│       ├── utils.py
│       └── visualization.py
├── requirements.txt
└── script/
    ├── dataset/
    │   ├── combine_trainval_sets.py
    │   ├── mapping_im_names_duke.py
    │   ├── mapping_im_names_market1501.py
    │   ├── transform_cuhk03.py
    │   ├── transform_duke.py
    │   └── transform_market1501.py
    └── experiment/
        ├── train_pcb.py
        └── visualize_rank_list.py
Download .txt
SYMBOL INDEX (162 symbols across 22 files)

FILE: bpm/dataset/Dataset.py
  class Dataset (line 6) | class Dataset(object):
    method __init__ (line 13) | def __init__(
    method set_mirror_type (line 38) | def set_mirror_type(self, mirror_type):
    method get_sample (line 41) | def get_sample(self, ptr):
    method next_batch (line 45) | def next_batch(self):
    method set_batch_size (line 49) | def set_batch_size(self, batch_size):
    method stop_prefetching_threads (line 55) | def stop_prefetching_threads(self):

FILE: bpm/dataset/PreProcessImage.py
  class PreProcessIm (line 5) | class PreProcessIm(object):
    method __init__ (line 6) | def __init__(
    method __call__ (line 48) | def __call__(self, im):
    method check_mirror_type (line 52) | def check_mirror_type(mirror_type):
    method check_batch_dims (line 56) | def check_batch_dims(batch_dims):
    method set_mirror_type (line 61) | def set_mirror_type(self, mirror_type):
    method rand_crop_im (line 66) | def rand_crop_im(im, new_size, prng=np.random):
    method pre_process_im (line 76) | def pre_process_im(self, im):

FILE: bpm/dataset/Prefetcher.py
  class Counter (line 6) | class Counter(object):
    method __init__ (line 9) | def __init__(self, val=0, max_val=0):
    method reset (line 14) | def reset(self):
    method set_max_value (line 18) | def set_max_value(self, max_val):
    method increment (line 21) | def increment(self):
    method get_value (line 30) | def get_value(self):
  class Enqueuer (line 35) | class Enqueuer(object):
    method __init__ (line 36) | def __init__(self, get_element, num_elements, num_threads=1, queue_siz...
    method start_ep (line 67) | def start_ep(self):
    method end_ep (line 71) | def end_ep(self):
    method reset (line 76) | def reset(self):
    method set_num_elements (line 88) | def set_num_elements(self, num_elements):
    method stop (line 93) | def stop(self):
    method enqueue (line 99) | def enqueue(self):
  class Prefetcher (line 122) | class Prefetcher(object):
    method __init__ (line 126) | def __init__(self, get_sample, dataset_size, batch_size, final_batch=T...
    method set_batch_size (line 151) | def set_batch_size(self, batch_size):
    method next_batch (line 160) | def next_batch(self):
    method start_ep_prefetching (line 190) | def start_ep_prefetching(self):
    method stop (line 197) | def stop(self):

FILE: bpm/dataset/TestSet.py
  class TestSet (line 19) | class TestSet(Dataset):
    method __init__ (line 30) | def __init__(
    method set_feat_func (line 52) | def set_feat_func(self, extract_feat_func):
    method get_sample (line 55) | def get_sample(self, ptr):
    method next_batch (line 66) | def next_batch(self):
    method extract_feat (line 79) | def extract_feat(self, normalize_feat, verbose=True):
    method eval (line 131) | def eval(

FILE: bpm/dataset/TrainSet.py
  class TrainSet (line 9) | class TrainSet(Dataset):
    method __init__ (line 14) | def __init__(self,
    method get_sample (line 25) | def get_sample(self, ptr):
    method next_batch (line 35) | def next_batch(self):

FILE: bpm/dataset/__init__.py
  function create_dataset (line 12) | def create_dataset(

FILE: bpm/model/PCBModel.py
  class PCBModel (line 9) | class PCBModel(nn.Module):
    method __init__ (line 10) | def __init__(
    method forward (line 42) | def forward(self, x):

FILE: bpm/model/resnet.py
  function conv3x3 (line 17) | def conv3x3(in_planes, out_planes, stride=1, dilation=1):
  class BasicBlock (line 24) | class BasicBlock(nn.Module):
    method __init__ (line 27) | def __init__(self, inplanes, planes, stride=1, downsample=None, dilati...
    method forward (line 37) | def forward(self, x):
  class Bottleneck (line 56) | class Bottleneck(nn.Module):
    method __init__ (line 59) | def __init__(self, inplanes, planes, stride=1, downsample=None, dilati...
    method forward (line 72) | def forward(self, x):
  class ResNet (line 95) | class ResNet(nn.Module):
    method __init__ (line 97) | def __init__(self, block, layers, last_conv_stride=2, last_conv_dilati...
    method _make_layer (line 119) | def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
    method forward (line 136) | def forward(self, x):
  function remove_fc (line 150) | def remove_fc(state_dict):
  function resnet18 (line 158) | def resnet18(pretrained=False, **kwargs):
  function resnet34 (line 170) | def resnet34(pretrained=False, **kwargs):
  function resnet50 (line 182) | def resnet50(pretrained=False, **kwargs):
  function resnet101 (line 194) | def resnet101(pretrained=False, **kwargs):
  function resnet152 (line 207) | def resnet152(pretrained=False, **kwargs):

FILE: bpm/utils/dataset_utils.py
  function parse_im_name (line 10) | def parse_im_name(im_name, parse_type='id'):
  function get_im_names (line 20) | def get_im_names(im_dir, pattern='*.jpg', return_np=True, return_path=Fa...
  function move_ims (line 30) | def move_ims(ori_im_paths, new_im_dir, parse_im_name, new_im_name_tmpl):
  function partition_train_val_set (line 45) | def partition_train_val_set(im_names, parse_im_name,

FILE: bpm/utils/distance.py
  function normalize (line 7) | def normalize(nparray, order=2, axis=0):
  function compute_dist (line 13) | def compute_dist(array1, array2, type='euclidean'):

FILE: bpm/utils/metric.py
  function _unique_sample (line 15) | def _unique_sample(ids_dict, num):
  function cmc (line 23) | def cmc(
  function mean_ap (line 107) | def mean_ap(

FILE: bpm/utils/re_ranking.py
  function re_ranking (line 35) | def re_ranking(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0...

FILE: bpm/utils/utils.py
  function time_str (line 14) | def time_str(fmt=None):
  function load_pickle (line 20) | def load_pickle(path):
  function save_pickle (line 32) | def save_pickle(obj, path):
  function save_mat (line 39) | def save_mat(ndarray, path):
  function to_scalar (line 44) | def to_scalar(vt):
  function transfer_optim_state (line 55) | def transfer_optim_state(state, device_id=-1):
  function may_transfer_optims (line 80) | def may_transfer_optims(optims, device_id=-1):
  function may_transfer_modules_optims (line 93) | def may_transfer_modules_optims(modules_and_or_optims, device_id=-1):
  class TransferVarTensor (line 112) | class TransferVarTensor(object):
    method __init__ (line 115) | def __init__(self, device_id=-1):
    method __call__ (line 118) | def __call__(self, var_or_tensor):
  class TransferModulesOptims (line 123) | class TransferModulesOptims(object):
    method __init__ (line 126) | def __init__(self, device_id=-1):
    method __call__ (line 129) | def __call__(self, modules_and_or_optims):
  function set_devices (line 133) | def set_devices(sys_device_ids):
  function set_devices_for_ml (line 162) | def set_devices_for_ml(sys_device_ids):
  function load_ckpt (line 227) | def load_ckpt(modules_optims, ckpt_file, load_to_cpu=True, verbose=True):
  function save_ckpt (line 246) | def save_ckpt(modules_optims, ep, scores, ckpt_file):
  function load_state_dict (line 267) | def load_state_dict(model, src_state_dict):
  function is_iterable (line 308) | def is_iterable(obj):
  function may_set_mode (line 312) | def may_set_mode(maybe_modules, mode):
  function may_make_dir (line 325) | def may_make_dir(path):
  class AverageMeter (line 341) | class AverageMeter(object):
    method __init__ (line 345) | def __init__(self):
    method reset (line 351) | def reset(self):
    method update (line 357) | def update(self, val, n=1):
  class RunningAverageMeter (line 364) | class RunningAverageMeter(object):
    method __init__ (line 367) | def __init__(self, hist=0.99):
    method reset (line 372) | def reset(self):
    method update (line 376) | def update(self, val):
  class RecentAverageMeter (line 384) | class RecentAverageMeter(object):
    method __init__ (line 387) | def __init__(self, hist_size=100):
    method reset (line 392) | def reset(self):
    method update (line 396) | def update(self, val):
    method avg (line 403) | def avg(self):
  function get_model_wrapper (line 408) | def get_model_wrapper(model, multi_gpu):
  class ReDirectSTD (line 416) | class ReDirectSTD(object):
    method __init__ (line 436) | def __init__(self, fpath=None, console='stdout', immediately_visible=F...
    method __del__ (line 457) | def __del__(self):
    method __enter__ (line 460) | def __enter__(self):
    method __exit__ (line 463) | def __exit__(self, *args):
    method write (line 466) | def write(self, msg):
    method flush (line 478) | def flush(self):
    method close (line 485) | def close(self):
  function set_seed (line 491) | def set_seed(seed):
  function print_array (line 508) | def print_array(array, fmt='{:.2f}', end=' '):
  function str2bool (line 519) | def str2bool(v):
  function tight_float_str (line 523) | def tight_float_str(x, fmt='{:.4f}'):
  function find_index (line 527) | def find_index(seq, item):
  function adjust_lr_staircase (line 534) | def adjust_lr_staircase(param_groups, base_lrs, ep, decay_at_epochs, fac...
  function measure_time (line 574) | def measure_time(enter_msg, verbose=True):

FILE: bpm/utils/visualization.py
  function add_border (line 9) | def add_border(im, border_width, value):
  function make_im_grid (line 31) | def make_im_grid(ims, n_rows, n_cols, space, pad_val):
  function get_rank_list (line 62) | def get_rank_list(dist_vec, q_id, q_cam, g_ids, g_cams, rank_list_size):
  function read_im (line 93) | def read_im(im_path):
  function save_im (line 105) | def save_im(im, save_path):
  function save_rank_list_to_im (line 112) | def save_rank_list_to_im(rank_list, same_id, q_im_path, g_im_paths, save...

FILE: script/dataset/combine_trainval_sets.py
  function move_ims (line 23) | def move_ims(
  function combine_trainval_sets (line 49) | def combine_trainval_sets(

FILE: script/dataset/mapping_im_names_duke.py
  function parse_original_im_name (line 18) | def parse_original_im_name(img_name, parse_type='id'):
  function map_im_names (line 28) | def map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):
  function save_im_name_mapping (line 42) | def save_im_name_mapping(raw_dir, ori_to_new_im_name_file):

FILE: script/dataset/mapping_im_names_market1501.py
  function parse_original_im_name (line 18) | def parse_original_im_name(im_name, parse_type='id'):
  function map_im_names (line 29) | def map_im_names(ori_im_names, parse_im_name, new_im_name_tmpl):
  function save_im_name_mapping (line 43) | def save_im_name_mapping(raw_dir, ori_to_new_im_name_file):

FILE: script/dataset/transform_cuhk03.py
  function save_images (line 26) | def save_images(mat_file, save_dir, new_im_name_tmpl):
  function transform (line 68) | def transform(zip_file, train_test_partition_file, save_dir=None):

FILE: script/dataset/transform_duke.py
  function parse_original_im_name (line 24) | def parse_original_im_name(img_name, parse_type='id'):
  function save_images (line 34) | def save_images(zip_file, save_dir=None, train_test_split_file=None):
  function transform (line 75) | def transform(zip_file, save_dir=None):

FILE: script/dataset/transform_market1501.py
  function parse_original_im_name (line 25) | def parse_original_im_name(im_name, parse_type='id'):
  function save_images (line 36) | def save_images(zip_file, save_dir=None, train_test_split_file=None):
  function transform (line 103) | def transform(zip_file, save_dir=None):

FILE: script/experiment/train_pcb.py
  class Config (line 35) | class Config(object):
    method __init__ (line 36) | def __init__(self):
  class ExtractFeature (line 250) | class ExtractFeature(object):
    method __init__ (line 256) | def __init__(self, model, TVT):
    method __call__ (line 260) | def __call__(self, ims):
  function main (line 280) | def main():

FILE: script/experiment/visualize_rank_list.py
  class Config (line 31) | class Config(object):
    method __init__ (line 32) | def __init__(self):
  class ExtractFeature (line 140) | class ExtractFeature(object):
    method __init__ (line 146) | def __init__(self, model, TVT):
    method __call__ (line 150) | def __call__(self, ims):
  function main (line 170) | def main():
Condensed preview — 28 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (146K chars).
[
  {
    "path": ".gitignore",
    "chars": 22,
    "preview": ".idea/\nexp/\ntmp/\n*.pyc"
  },
  {
    "path": "README.md",
    "chars": 14212,
    "preview": "# Beyond Part Models: Person Retrieval with Refined Part Pooling\n\n**Related Projects:** [Strong Triplet Loss Baseline](h"
  },
  {
    "path": "bpm/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "bpm/dataset/Dataset.py",
    "chars": 1524,
    "preview": "from .PreProcessImage import PreProcessIm\nfrom .Prefetcher import Prefetcher\nimport numpy as np\n\n\nclass Dataset(object):"
  },
  {
    "path": "bpm/dataset/PreProcessImage.py",
    "chars": 4184,
    "preview": "import numpy as np\nimport cv2\n\n\nclass PreProcessIm(object):\n  def __init__(\n      self,\n      crop_prob=0,\n      crop_ra"
  },
  {
    "path": "bpm/dataset/Prefetcher.py",
    "chars": 6812,
    "preview": "import threading\nimport Queue\nimport time\n\n\nclass Counter(object):\n  \"\"\"A thread safe counter.\"\"\"\n\n  def __init__(self, "
  },
  {
    "path": "bpm/dataset/TestSet.py",
    "chars": 9302,
    "preview": "from __future__ import print_function\nimport sys\nimport time\nimport os.path as osp\nfrom PIL import Image\nimport numpy as"
  },
  {
    "path": "bpm/dataset/TrainSet.py",
    "chars": 1804,
    "preview": "from .Dataset import Dataset\nfrom ..utils.dataset_utils import parse_im_name\n\nimport os.path as osp\nfrom PIL import Imag"
  },
  {
    "path": "bpm/dataset/__init__.py",
    "chars": 3308,
    "preview": "import numpy as np\nimport os.path as osp\nospj = osp.join\nospeu = osp.expanduser\n\nfrom ..utils.utils import load_pickle\nf"
  },
  {
    "path": "bpm/model/PCBModel.py",
    "chars": 2008,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.init as init\nimport torch.nn.functional as F\n\nfrom .resnet import res"
  },
  {
    "path": "bpm/model/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "bpm/model/resnet.py",
    "chars": 6407,
    "preview": "import torch.nn as nn\nimport math\nimport torch.utils.model_zoo as model_zoo\n\n__all__ = ['ResNet', 'resnet18', 'resnet34'"
  },
  {
    "path": "bpm/utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "bpm/utils/dataset_utils.py",
    "chars": 4174,
    "preview": "from __future__ import print_function\nimport os.path as osp\nimport numpy as np\nimport glob\nfrom collections import defau"
  },
  {
    "path": "bpm/utils/distance.py",
    "chars": 1239,
    "preview": "\"\"\"Numpy version of euclidean distance, etc.\nNotice the input/output shape of methods, so that you can better understand"
  },
  {
    "path": "bpm/utils/metric.py",
    "chars": 6328,
    "preview": "\"\"\"Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid) \nreid/evaluation_metrics/ranking.py. Modifica"
  },
  {
    "path": "bpm/utils/re_ranking.py",
    "chars": 4197,
    "preview": "\"\"\"\nCreated on Mon Jun 26 14:46:56 2017\n\n@author: luohao\n\nModified by Houjing Huang, 2017-12-22.\n- This version accepts "
  },
  {
    "path": "bpm/utils/utils.py",
    "chars": 17297,
    "preview": "from __future__ import print_function\nimport os\nimport os.path as osp\nimport cPickle as pickle\nfrom scipy import io\nimpo"
  },
  {
    "path": "bpm/utils/visualization.py",
    "chars": 4275,
    "preview": "import numpy as np\nfrom PIL import Image\nimport cv2\nfrom os.path import dirname as ospdn\n\nfrom bpm.utils.utils import ma"
  },
  {
    "path": "requirements.txt",
    "chars": 127,
    "preview": "opencv_python==3.2.0.7\nnumpy==1.11.3\nscipy==0.18.1\nh5py==2.6.0\ntensorboardX==0.8\n# for tensorboard web server\ntensorflow"
  },
  {
    "path": "script/dataset/combine_trainval_sets.py",
    "chars": 3862,
    "preview": "from __future__ import print_function\n\nimport sys\nsys.path.insert(0, '.')\n\nimport os.path as osp\n\nospeu = osp.expanduser"
  },
  {
    "path": "script/dataset/mapping_im_names_duke.py",
    "chars": 3399,
    "preview": "\"\"\"Mapping original image name (relative image path) -> my new image name.\nThe mapping is corresponding to transform_duk"
  },
  {
    "path": "script/dataset/mapping_im_names_market1501.py",
    "chars": 4918,
    "preview": "\"\"\"Mapping original image name (relative image path) -> my new image name.\nThe mapping is corresponding to transform_mar"
  },
  {
    "path": "script/dataset/transform_cuhk03.py",
    "chars": 6041,
    "preview": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified "
  },
  {
    "path": "script/dataset/transform_duke.py",
    "chars": 5021,
    "preview": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified "
  },
  {
    "path": "script/dataset/transform_market1501.py",
    "chars": 6390,
    "preview": "\"\"\"Refactor file directories, save/rename images and partition the \ntrain/val/test set, in order to support the unified "
  },
  {
    "path": "script/experiment/train_pcb.py",
    "chars": 15056,
    "preview": "from __future__ import print_function\n\nimport sys\n\nsys.path.insert(0, '.')\n\nimport torch\nfrom torch.autograd import Vari"
  },
  {
    "path": "script/experiment/visualize_rank_list.py",
    "chars": 7909,
    "preview": "from __future__ import print_function\n\nimport sys\n\nsys.path.insert(0, '.')\n\nimport torch\nfrom torch.autograd import Vari"
  }
]

About this extraction

This page contains the full source code of the huanghoujing/beyond-part-models GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 28 files (136.5 KB), approximately 37.4k tokens, and a symbol index with 162 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!