[
  {
    "path": ".gitignore",
    "content": ".idea\n**__pycache__\n.DS_Store\ndata/\npytorch-ckpt/\n.vscode/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Dai Zuozhuo\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Batch DropBlock Network for Person Re-identification and Beyond\nOfficial source code of paper https://arxiv.org/abs/1811.07130\n\n## Update on 2019.3.15\nUpdate CUHK03 results. \n\n## Update on 2019.1.29\nTraning scripts are released. The best Markt1501 result is 95.3%! Please look at the training section of README.md.\n\n## Update on 2019.1.23\nIn-Shop Clothes Retrieval dataset and pretrained model are released!. The rank-1 result is 89.5 which is a litter bit higher than paper reported.\n\n## This paper is accepted by ICCV 2019. Please cite if you use this code in your research. \n\n```\n@article{dai2018batch,\n  title={Batch DropBlock Network for Person Re-identification and Beyond},\n  author={Dai, Zuozhuo and Chen, Mingqiang and Gu, Xiaodong and Zhu, Siyu and Tan, Ping},\n  journal={arXiv preprint arXiv:1811.07130},\n  year={2018}\n}\n```\n\n## Setup running environment\nThis project requires python3, cython, torch, torchvision, scikit-learn, tensorboardX, fire.\nThe baseline source code is borrowed from https://github.com/L1aoXingyu/reid_baseline.\n\n## Prepare dataset\n    \n    Create a directory to store reid datasets under this repo via\n    ```bash\n    cd reid\n    mkdir data\n    ```\n    \n    For market1501 dataset, \n    1. Download Market1501 dataset to `data/` from http://www.liangzheng.org/Project/project_reid.html\n    2. Extract dataset and rename to `market1501`. The data structure would like:\n    ```\n    market1501/\n        bounding_box_test/\n        bounding_box_train/\n        query/\n    ```\n\n    For CUHK03 dataset,\n    1. Download CUHK03-NP dataset from https://github.com/zhunzhong07/person-re-ranking/tree/master/CUHK03-NP \n    2. Extract dataset and rename folers inside it to cuhk-detect and cuhk-label.\n    For DukeMTMC-reID dataset,\n    Dowload from https://github.com/layumi/DukeMTMC-reID_evaluation\n\n    For In-Shop Clothes dataset,\n    1. 
Download the clothes dataset from http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/clothes.tar\n2. Extract the dataset and put it in the `data/` folder.\n\n## Results\n\nDataset | CUHK03-Label | CUHK03-Detect | DukeMTMC-reID | Market1501 | In-Shop Clothes |\n--------|--------------|---------------|---------------|------------|-----------------|\nRank-1  | 79.4         | 76.4          | 88.9          | 95.3       | 89.5            |\nmAP     | 76.7         | 73.5          | 75.9          | 86.2       | 72.3            |\nmodel   | [aliyun](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/cuhk-label-794.pth.tar) | [aliyun](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/cuhk-detect-764.pth.tar) | [aliyun](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/duke_887.pth.tar) | [aliyun](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/market_953.pth.tar) | [aliyun](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/clothes_895.pth.tar) |\n\nYou can download the pre-trained models from the table above and evaluate them on the person re-ID datasets.\nFor example, to evaluate on the CUHK03-Label dataset, download the model to the './pytorch-ckpt/cuhk_label_bfe' directory and run the following commands.\n\n### Evaluate Market1501\n```bash\npython3 main_reid.py train --save_dir='./pytorch-ckpt/market_bfe' --model_name=bfe --train_batch=32 --test_batch=32 --dataset=market1501 --pretrained_model='./pytorch-ckpt/market_bfe/944.pth.tar' --evaluate\n```\n### Evaluate CUHK03-Label\n```bash\npython3 main_reid.py train --save_dir='./pytorch-ckpt/cuhk_label_bfe' --model_name=bfe --train_batch=32 --test_batch=32 --dataset=cuhk-label --pretrained_model='./pytorch-ckpt/cuhk_label_bfe/750.pth.tar' --evaluate\n```\n### Evaluate In-Shop Clothes\n```bash\npython main_reid.py train --save_dir='./pytorch-ckpt/clothes_bfe' --model_name=bfe 
--pretrained_model='./pytorch-ckpt/clothes_bfe/clothes_895.pth.tar' --test_batch=32 --dataset=clothes --evaluate\n```\n\n## Training\n\n### Training Market1501\n```bash\npython main_reid.py train --save_dir='./pytorch-ckpt/market-bfe' --max_epoch=400 --eval_step=30 --dataset=market1501 --test_batch=128 --train_batch=128 --optim=adam --adjust_lr\n```\nThis training command was tested on 4 GTX 1080 GPUs. Here is the [training log](http://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/bfe_models/market_953.txt). You should get a result of around 95%.\n"
  },
  {
    "path": "config.py",
    "content": "# encoding: utf-8\nimport warnings\nimport numpy as np\n\n\nclass DefaultConfig(object):\n    seed = 0\n\n    # dataset options\n    dataset = 'market1501'\n    datatype = 'person'\n    mode = 'retrieval'\n    # optimization options\n    loss = 'triplet'\n    optim = 'adam'\n    max_epoch = 60\n    train_batch = 32 \n    test_batch = 32\n    adjust_lr = False\n    lr = 0.0001\n    adjust_lr = False\n    gamma = 0.1\n    weight_decay = 5e-4\n    momentum = 0.9\n    random_crop = False\n    margin = None\n    num_instances = 4\n    num_gpu = 1\n    evaluate = False\n    savefig = None \n    re_ranking = False\n\n    # model options\n    model_name = 'bfe'  # triplet, softmax_triplet, bfe, ide\n    last_stride = 1\n    pretrained_model = None\n    \n    # miscs\n    print_freq = 30\n    eval_step = 50\n    save_dir = './pytorch-ckpt/market'\n    workers = 10\n    start_epoch = 0\n    best_rank = -np.inf\n\n    def _parse(self, kwargs):\n        for k, v in kwargs.items():\n            if not hasattr(self, k):\n                warnings.warn(\"Warning: opt has not attribut %s\" % k)\n            setattr(self, k, v)\n            if 'cls' in self.dataset:\n                self.mode='class'\n            if 'market' in self.dataset or 'cuhk' in self.dataset or 'duke' in self.dataset:\n                self.datatype = 'person'\n            elif 'cub' in self.dataset:\n                self.datatype = 'cub'\n            elif 'car' in self.dataset:\n                self.datatype = 'car'\n            elif 'clothes' in self.dataset:\n                self.datatype = 'clothes'\n            elif 'product' in self.dataset:\n                self.datatype = 'product'\n\n    def _state_dict(self):\n        return {k: getattr(self, k) for k, _ in DefaultConfig.__dict__.items()\n                if not k.startswith('_')}\n\nopt = DefaultConfig()\n"
  },
  {
    "path": "datasets/__init__.py",
    "content": ""
  },
  {
    "path": "datasets/data_loader.py",
    "content": "from __future__ import print_function, absolute_import\n\nfrom PIL import Image\nfrom torch.utils.data import Dataset\n\n\ndef read_image(img_path):\n    \"\"\"Keep reading image until succeed.\n    This can avoid IOError incurred by heavy IO process.\"\"\"\n    got_img = False\n    while not got_img:\n        try:\n            img = Image.open(img_path).convert('RGB')\n            got_img = True\n        except IOError:\n            print(\"IOError incurred when reading '{}'. Will redo. Don't worry. Just chill.\".format(img_path))\n            pass\n    return img\n\nclass ImageData(Dataset):\n    def __init__(self, dataset, transform):\n        self.dataset = dataset\n        self.transform = transform\n\n    def __getitem__(self, item):\n        img, pid, camid = self.dataset[item]\n        img = read_image(img)\n        if self.transform is not None:\n            img = self.transform(img)\n        return img, pid, camid\n\n    def __len__(self):\n        return len(self.dataset)\n"
  },
  {
    "path": "datasets/data_manager.py",
    "content": "from __future__ import print_function, absolute_import\n\nimport glob\nimport re\nfrom os import path as osp\nimport os\n\n\"\"\"Dataset classes\"\"\"\n\n\nclass Market1501(object):\n    \"\"\"\n    Market1501\n    Reference:\n    Zheng et al. Scalable Person Re-identification: A Benchmark. ICCV 2015.\n    URL: http://www.liangzheng.org/Project/project_reid.html\n\n    Dataset statistics:\n    # identities: 1501 (+1 for background)\n    # images: 12936 (train) + 3368 (query) + 15913 (gallery)\n    \"\"\"\n    def __init__(self, dataset_dir, mode, root='data'):\n        self.dataset_dir = dataset_dir\n        self.dataset_dir = osp.join(root, self.dataset_dir)\n        self.train_dir = osp.join(self.dataset_dir, 'bounding_box_train')\n        self.query_dir = osp.join(self.dataset_dir, 'query')\n        self.gallery_dir = osp.join(self.dataset_dir, 'bounding_box_test')\n\n        self._check_before_run()\n        train_relabel = (mode == 'retrieval')\n        train, num_train_pids, num_train_imgs = self._process_dir(self.train_dir, relabel=train_relabel)\n        query, num_query_pids, num_query_imgs = self._process_dir(self.query_dir, relabel=False)\n        gallery, num_gallery_pids, num_gallery_imgs = self._process_dir(self.gallery_dir, relabel=False)\n        num_total_pids = num_train_pids + num_query_pids\n        num_total_imgs = num_train_imgs + num_query_imgs + num_gallery_imgs\n\n        print(\"=> Market1501 loaded\")\n        print(\"Dataset statistics:\")\n        print(\"  ------------------------------\")\n        print(\"  subset   | # ids | # images\")\n        print(\"  ------------------------------\")\n        print(\"  train    | {:5d} | {:8d}\".format(num_train_pids, num_train_imgs))\n        print(\"  query    | {:5d} | {:8d}\".format(num_query_pids, num_query_imgs))\n        print(\"  gallery  | {:5d} | {:8d}\".format(num_gallery_pids, num_gallery_imgs))\n        print(\"  ------------------------------\")\n        print(\"  
total    | {:5d} | {:8d}\".format(num_total_pids, num_total_imgs))\n        print(\"  ------------------------------\")\n\n        self.train = train\n        self.query = query\n        self.gallery = gallery\n\n        self.num_train_pids = num_train_pids\n        self.num_query_pids = num_query_pids\n        self.num_gallery_pids = num_gallery_pids\n\n    def _check_before_run(self):\n        \"\"\"Check if all files are available before going deeper\"\"\"\n        if not osp.exists(self.dataset_dir):\n            raise RuntimeError(\"'{}' is not available\".format(self.dataset_dir))\n        if not osp.exists(self.train_dir):\n            raise RuntimeError(\"'{}' is not available\".format(self.train_dir))\n        if not osp.exists(self.query_dir):\n            raise RuntimeError(\"'{}' is not available\".format(self.query_dir))\n        if not osp.exists(self.gallery_dir):\n            raise RuntimeError(\"'{}' is not available\".format(self.gallery_dir))\n\n    def _process_dir(self, dir_path, relabel=False):\n        img_names = os.listdir(dir_path)\n        img_paths = [os.path.join(dir_path, img_name) for img_name in img_names \\\n            if img_name.endswith('jpg') or img_name.endswith('png')]\n        pattern = re.compile(r'([-\\d]+)_c([-\\d]+)')\n\n        pid_container = set()\n        for img_path in img_paths:\n            pid, _ = map(int, pattern.search(img_path).groups())\n            if pid == -1: continue  # junk images are just ignored\n            pid_container.add(pid)\n        pid2label = {pid: label for label, pid in enumerate(pid_container)}\n\n        dataset = []\n        for img_path in img_paths:\n            pid, camid = map(int, pattern.search(img_path).groups())\n            if pid == -1:\n                continue  # junk images are just ignored\n            #assert 0 <= pid <= 1501  # pid == 0 means background\n            #assert 1 <= camid <= 6\n            camid -= 1  # index starts from 0\n            if relabel: pid = 
pid2label[pid]\n            dataset.append((img_path, pid, camid))\n\n        num_pids = len(pid_container)\n        num_imgs = len(dataset)\n        return dataset, num_pids, num_imgs\n\ndef init_dataset(name, mode):\n    return Market1501(name, mode)\n"
  },
  {
    "path": "datasets/samplers.py",
    "content": "from __future__ import absolute_import\n\nfrom collections import defaultdict\n\nimport numpy as np\nimport torch\nimport random\nfrom torch.utils.data.sampler import Sampler\n\n\nclass RandomIdentitySampler(Sampler):\n    def __init__(self, data_source, num_instances=4):\n        self.data_source = data_source\n        self.num_instances = num_instances\n        self.index_dic = defaultdict(list)\n        for index, (_, pid, _) in enumerate(data_source):\n            self.index_dic[pid].append(index)\n        self.pids = list(self.index_dic.keys())\n        self.num_identities = len(self.pids)\n\n    def __iter__(self):\n        indices = torch.randperm(self.num_identities)\n        ret = []\n        for i in indices:\n            pid = self.pids[i]\n            t = self.index_dic[pid]\n            replace = False if len(t) >= self.num_instances else True\n            t = np.random.choice(t, size=self.num_instances, replace=replace)\n            ret.extend(t)\n        return iter(ret)\n\n    def __len__(self):\n        return self.num_identities * self.num_instances\n"
  },
  {
    "path": "main_reid.py",
    "content": "# encoding: utf-8\nimport os\nimport sys\nfrom os import path as osp\nfrom pprint import pprint\n\nimport numpy as np\nimport torch\nfrom tensorboardX import SummaryWriter\nfrom torch import nn\nfrom torch.backends import cudnn\nfrom torch.utils.data import DataLoader\n\nfrom config import opt\nfrom datasets import data_manager\nfrom datasets.data_loader import ImageData\nfrom datasets.samplers import RandomIdentitySampler\nfrom models.networks import ResNetBuilder, IDE, Resnet, BFE\nfrom trainers.evaluator import ResNetEvaluator\nfrom trainers.trainer import cls_tripletTrainer\nfrom utils.loss import CrossEntropyLabelSmooth, TripletLoss, Margin\nfrom utils.LiftedStructure import LiftedStructureLoss\nfrom utils.DistWeightDevianceLoss import DistWeightBinDevianceLoss\nfrom utils.serialization import Logger, save_checkpoint\nfrom utils.transforms import TestTransform, TrainTransform\n\n\ndef train(**kwargs):\n    opt._parse(kwargs)\n\n    # set random seed and cudnn benchmark\n    torch.manual_seed(opt.seed)\n    os.makedirs(opt.save_dir, exist_ok=True)\n    use_gpu = torch.cuda.is_available()\n    sys.stdout = Logger(osp.join(opt.save_dir, 'log_train.txt'))\n\n    print('=========user config==========')\n    pprint(opt._state_dict())\n    print('============end===============')\n\n    if use_gpu:\n        print('currently using GPU')\n        cudnn.benchmark = True\n        torch.cuda.manual_seed_all(opt.seed)\n    else:\n        print('currently using cpu')\n\n    print('initializing dataset {}'.format(opt.dataset))\n    dataset = data_manager.init_dataset(name=opt.dataset, mode=opt.mode)\n\n    pin_memory = True if use_gpu else False\n\n    summary_writer = SummaryWriter(osp.join(opt.save_dir, 'tensorboard_log'))\n\n    trainloader = DataLoader(\n        ImageData(dataset.train, TrainTransform(opt.datatype)),\n        sampler=RandomIdentitySampler(dataset.train, opt.num_instances),\n        batch_size=opt.train_batch, num_workers=opt.workers,\n     
   pin_memory=pin_memory, drop_last=True\n    )\n\n    queryloader = DataLoader(\n        ImageData(dataset.query, TestTransform(opt.datatype)),\n        batch_size=opt.test_batch, num_workers=opt.workers,\n        pin_memory=pin_memory\n    )\n\n    galleryloader = DataLoader(\n        ImageData(dataset.gallery, TestTransform(opt.datatype)),\n        batch_size=opt.test_batch, num_workers=opt.workers,\n        pin_memory=pin_memory\n    )\n    queryFliploader = DataLoader(\n        ImageData(dataset.query, TestTransform(opt.datatype, True)),\n        batch_size=opt.test_batch, num_workers=opt.workers,\n        pin_memory=pin_memory\n    )\n\n    galleryFliploader = DataLoader(\n        ImageData(dataset.gallery, TestTransform(opt.datatype, True)),\n        batch_size=opt.test_batch, num_workers=opt.workers,\n        pin_memory=pin_memory\n    )\n\n    print('initializing model ...')\n    if opt.model_name == 'softmax' or opt.model_name == 'softmax_triplet':\n        model = ResNetBuilder(dataset.num_train_pids, 1, True)\n    elif opt.model_name == 'triplet':\n        model = ResNetBuilder(None, 1, True)\n    elif opt.model_name == 'bfe':\n        if opt.datatype == \"person\":\n            model = BFE(dataset.num_train_pids, 1.0, 0.33)\n        else:\n            model = BFE(dataset.num_train_pids, 0.5, 0.5)\n    elif opt.model_name == 'ide':\n        model = IDE(dataset.num_train_pids)\n    elif opt.model_name == 'resnet':\n        model = Resnet(dataset.num_train_pids)\n \n    optim_policy = model.get_optim_policy()\n\n    if opt.pretrained_model:\n        state_dict = torch.load(opt.pretrained_model)['state_dict']\n        #state_dict = {k: v for k, v in state_dict.items() \\\n        #        if not ('reduction' in k or 'softmax' in k)}\n        model.load_state_dict(state_dict, False)\n        print('load pretrained model ' + opt.pretrained_model)\n    print('model size: {:.5f}M'.format(sum(p.numel() for p in model.parameters()) / 1e6))\n\n    if use_gpu:\n   
     model = nn.DataParallel(model).cuda()\n    reid_evaluator = ResNetEvaluator(model)\n\n    if opt.evaluate:\n        reid_evaluator.evaluate(queryloader, galleryloader, \n            queryFliploader, galleryFliploader, re_ranking=opt.re_ranking, savefig=opt.savefig)\n        return\n\n    #xent_criterion = nn.CrossEntropyLoss()\n    xent_criterion = CrossEntropyLabelSmooth(dataset.num_train_pids)\n\n    if opt.loss == 'triplet':\n        embedding_criterion = TripletLoss(opt.margin)\n    elif opt.loss == 'lifted':\n        embedding_criterion = LiftedStructureLoss(hard_mining=True)\n    elif opt.loss == 'weight':\n        embedding_criterion = Margin()\n\n    def criterion(triplet_y, softmax_y, labels):\n        losses = [embedding_criterion(output, labels)[0] for output in triplet_y] + \\\n                     [xent_criterion(output, labels) for output in softmax_y]\n        loss = sum(losses)\n        return loss\n\n    # get optimizer\n    if opt.optim == \"sgd\":\n        optimizer = torch.optim.SGD(optim_policy, lr=opt.lr, momentum=0.9, weight_decay=opt.weight_decay)\n    else:\n        optimizer = torch.optim.Adam(optim_policy, lr=opt.lr, weight_decay=opt.weight_decay)\n\n\n    start_epoch = opt.start_epoch\n    # get trainer and evaluator\n    reid_trainer = cls_tripletTrainer(opt, model, optimizer, criterion, summary_writer)\n\n    def adjust_lr(optimizer, ep):\n        if ep < 50:\n            lr = 1e-4*(ep//5+1)\n        elif ep < 200:\n            lr = 1e-3\n        elif ep < 300:\n            lr = 1e-4\n        else:\n            lr = 1e-5\n        for p in optimizer.param_groups:\n            p['lr'] = lr\n\n    # start training\n    best_rank1 = opt.best_rank\n    best_epoch = 0\n    for epoch in range(start_epoch, opt.max_epoch):\n        if opt.adjust_lr:\n            adjust_lr(optimizer, epoch + 1)\n        reid_trainer.train(epoch, trainloader)\n\n        # skip if not save model\n        if opt.eval_step > 0 and (epoch + 1) % opt.eval_step == 
0 or (epoch + 1) == opt.max_epoch:\n            if opt.mode == 'class':\n                rank1 = test(model, queryloader)\n            else:\n                rank1 = reid_evaluator.evaluate(queryloader, galleryloader, queryFliploader, galleryFliploader)\n            is_best = rank1 > best_rank1\n            if is_best:\n                best_rank1 = rank1\n                best_epoch = epoch + 1\n\n            if use_gpu:\n                state_dict = model.module.state_dict()\n            else:\n                state_dict = model.state_dict()\n            save_checkpoint({'state_dict': state_dict, 'epoch': epoch + 1},\n                is_best=is_best, save_dir=opt.save_dir,\n                filename='checkpoint_ep' + str(epoch + 1) + '.pth.tar')\n\n    print('Best rank-1 {:.1%}, achieved at epoch {}'.format(best_rank1, best_epoch))\n\ndef test(model, queryloader):\n    model.eval()\n    correct = 0\n    with torch.no_grad():\n        for data, target, _ in queryloader:\n            output = model(data).cpu()\n            # get the index of the max log-probability\n            pred = output.max(1, keepdim=True)[1]\n            correct += pred.eq(target.view_as(pred)).sum().item()\n\n    rank1 = 100. * correct / len(queryloader.dataset)\n    print('\\nTest set: Accuracy: {}/{} ({:.2f}%)\\n'.format(correct, len(queryloader.dataset), rank1))\n    return rank1\n\nif __name__ == '__main__':\n    import fire\n    fire.Fire()\n"
  },
  {
    "path": "models/__init__.py",
    "content": ""
  },
  {
    "path": "models/networks.py",
    "content": "# encoding: utf-8\nimport copy\nimport itertools\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nimport torch.utils.model_zoo as model_zoo\nimport random\nfrom scipy.spatial.distance import cdist\nfrom sklearn.preprocessing import normalize\nfrom torch import nn, optim\nfrom torch.utils.data import dataloader\nfrom torchvision import transforms\nfrom torchvision.models.resnet import Bottleneck, resnet50\nfrom torchvision.transforms import functional\n\nfrom .resnet import ResNet\n\ndef weights_init_kaiming(m):\n    classname = m.__class__.__name__\n    if classname.find('Linear') != -1:\n        nn.init.kaiming_normal_(m.weight, a=0, mode='fan_out')\n        nn.init.constant_(m.bias, 0.0)\n    elif classname.find('Conv') != -1:\n        nn.init.kaiming_normal_(m.weight, a=0, mode='fan_in')\n        if m.bias is not None:\n            nn.init.constant_(m.bias, 0.0)\n    elif classname.find('BatchNorm') != -1:\n        if m.affine:\n            nn.init.normal_(m.weight, 1.0, 0.02)\n            nn.init.constant_(m.bias, 0.0)\n\n\ndef weights_init_classifier(m):\n    classname = m.__class__.__name__\n    if classname.find('Linear') != -1:\n        nn.init.normal_(m.weight, std=0.001)\n        if m.bias:\n            nn.init.constant_(m.bias, 0.0)\n\nclass SELayer(nn.Module):\n    def __init__(self, channel, reduction=16):\n        super(SELayer, self).__init__()\n        self.avg_pool = nn.AdaptiveAvgPool2d(1)\n        self.fc = nn.Sequential(\n                nn.Linear(channel, channel // reduction),\n                nn.ReLU(inplace=True),\n                nn.Linear(channel // reduction, channel),\n                nn.Sigmoid()\n        )\n\n    def forward(self, x):\n        b, c, _, _ = x.size()\n        y = self.avg_pool(x).view(b, c)\n        y = self.fc(y).view(b, c, 1, 1)\n        return x * y\n\nclass BatchDrop(nn.Module):\n    def __init__(self, h_ratio, w_ratio):\n        super(BatchDrop, self).__init__()\n        
self.h_ratio = h_ratio\n        self.w_ratio = w_ratio\n    \n    def forward(self, x):\n        if self.training:\n            h, w = x.size()[-2:]\n            rh = round(self.h_ratio * h)\n            rw = round(self.w_ratio * w)\n            sx = random.randint(0, h-rh)\n            sy = random.randint(0, w-rw)\n            mask = x.new_ones(x.size())\n            mask[:, :, sx:sx+rh, sy:sy+rw] = 0\n            x = x * mask\n        return x\n\nclass BatchCrop(nn.Module):\n    def __init__(self, ratio):\n        super(BatchCrop, self).__init__()\n        self.ratio = ratio\n\n    def forward(self, x):\n        if self.training:\n            h, w = x.size()[-2:]\n            rw = int(self.ratio * w)\n            start = random.randint(0, h-1)\n            if start + rw > h:\n                select = list(range(0, start+rw-h)) + list(range(start, h))\n            else:\n                select = list(range(start, start+rw))\n            mask = x.new_zeros(x.size())\n            mask[:, :, select, :] = 1\n            x = x * mask\n        return x\n\nclass ResNetBuilder(nn.Module):\n    in_planes = 2048\n\n    def __init__(self, num_classes=None, last_stride=1, pretrained=False):\n        super().__init__()\n        self.base = ResNet(last_stride)\n        if pretrained:\n            model_url = 'https://download.pytorch.org/models/resnet50-19c8e357.pth'\n            self.base.load_param(model_zoo.load_url(model_url))\n\n        self.num_classes = num_classes\n        if num_classes is not None:\n            self.bottleneck = nn.Sequential(\n                nn.Linear(self.in_planes, 512),\n                nn.BatchNorm1d(512),\n                nn.LeakyReLU(0.1),\n                nn.Dropout(p=0.5)\n            )\n            self.bottleneck.apply(weights_init_kaiming)\n            self.classifier = nn.Linear(512, self.num_classes)\n            self.classifier.apply(weights_init_classifier)\n\n    def forward(self, x):\n        global_feat = self.base(x)\n        
global_feat = F.avg_pool2d(global_feat, global_feat.shape[2:])  # (b, 2048, 1, 1)\n        global_feat = global_feat.view(global_feat.shape[0], -1)\n        if self.training and self.num_classes is not None:\n            feat = self.bottleneck(global_feat)\n            cls_score = self.classifier(feat)\n            return [global_feat], [cls_score]\n        else:\n            return global_feat\n\n    def get_optim_policy(self):\n        base_param_group = self.base.parameters()\n        if self.num_classes is not None:\n            add_param_group = itertools.chain(self.bottleneck.parameters(), self.classifier.parameters())\n            return [\n                {'params': base_param_group},\n                {'params': add_param_group}\n            ]\n        else:\n            return [\n                {'params': base_param_group}\n            ]\n\nclass BFE(nn.Module):\n    def __init__(self, num_classes, width_ratio=0.5, height_ratio=0.5):\n        super(BFE, self).__init__()\n        resnet = resnet50(pretrained=True)\n        self.backbone = nn.Sequential(\n            resnet.conv1,\n            resnet.bn1,\n            resnet.relu,\n            resnet.maxpool,\n            resnet.layer1,  # res_conv2\n            resnet.layer2,  # res_conv3\n            resnet.layer3,  # res_conv4\n        )\n        self.res_part = nn.Sequential(\n            Bottleneck(1024, 512, stride=1, downsample=nn.Sequential(\n                nn.Conv2d(1024, 2048, kernel_size=1, stride=1, bias=False),\n                nn.BatchNorm2d(2048),\n            )),\n            Bottleneck(2048, 512),\n            Bottleneck(2048, 512),\n        )\n        self.res_part.load_state_dict(resnet.layer4.state_dict())\n        reduction = nn.Sequential(\n            nn.Conv2d(2048, 512, 1), \n            nn.BatchNorm2d(512), \n            nn.ReLU()\n        )\n         # global branch\n        self.global_avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        self.global_softmax = nn.Linear(512, 
num_classes) \n        self.global_softmax.apply(weights_init_kaiming)\n        self.global_reduction = copy.deepcopy(reduction)\n        self.global_reduction.apply(weights_init_kaiming)\n\n        # part branch\n        self.res_part2 = Bottleneck(2048, 512)\n     \n        self.part_maxpool = nn.AdaptiveMaxPool2d((1,1))\n        self.batch_crop = BatchDrop(height_ratio, width_ratio)\n        self.reduction = nn.Sequential(\n            nn.Linear(2048, 1024, 1),\n            nn.BatchNorm1d(1024),\n            nn.ReLU()\n        )\n        self.reduction.apply(weights_init_kaiming)\n        self.softmax = nn.Linear(1024, num_classes)\n        self.softmax.apply(weights_init_kaiming)\n\n    def forward(self, x):\n        \"\"\"\n        :param x: input image tensor of (N, C, H, W)\n        :return: (prediction, triplet_losses, softmax_losses)\n        \"\"\"\n        x = self.backbone(x)\n        x = self.res_part(x)\n\n        predict = []\n        triplet_features = []\n        softmax_features = []\n\n        #global branch\n        glob = self.global_avgpool(x)\n        global_triplet_feature = self.global_reduction(glob).squeeze()\n        global_softmax_class = self.global_softmax(global_triplet_feature)\n        softmax_features.append(global_softmax_class)\n        triplet_features.append(global_triplet_feature)\n        predict.append(global_triplet_feature)\n       \n        #part branch\n        x = self.res_part2(x)\n\n        x = self.batch_crop(x)\n        triplet_feature = self.part_maxpool(x).squeeze()\n        feature = self.reduction(triplet_feature)\n        softmax_feature = self.softmax(feature)\n        triplet_features.append(feature)\n        softmax_features.append(softmax_feature)\n        predict.append(feature)\n\n        if self.training:\n            return triplet_features, softmax_features\n        else:\n            return torch.cat(predict, 1)\n\n    def get_optim_policy(self):\n        params = [\n            {'params': 
self.backbone.parameters()},\n            {'params': self.res_part.parameters()},\n            {'params': self.global_reduction.parameters()},\n            {'params': self.global_softmax.parameters()},\n            {'params': self.res_part2.parameters()},\n            {'params': self.reduction.parameters()},\n            {'params': self.softmax.parameters()},\n        ]\n        return params\n\nclass Resnet(nn.Module):\n    def __init__(self, num_classes, resnet=None):\n        super(Resnet, self).__init__()\n        if not resnet:\n            resnet = resnet50(pretrained=True)\n        self.backbone = nn.Sequential(\n            resnet.conv1,\n            resnet.bn1,\n            resnet.relu,\n            resnet.maxpool,\n            resnet.layer1,  # res_conv2\n            resnet.layer2,  # res_conv3\n            resnet.layer3,  # res_conv4\n            resnet.layer4\n        )\n        self.global_avgpool = nn.AdaptiveAvgPool2d((1, 1))\n        self.softmax = nn.Linear(2048, num_classes)\n\n    def forward(self, x):\n        \"\"\"\n        :param x: input image tensor of (N, C, H, W)\n        :return: (prediction, triplet_losses, softmax_losses)\n        \"\"\"\n        x = self.backbone(x)\n\n        x = self.global_avgpool(x).squeeze()\n        feature = self.softmax(x)\n        if self.training:\n            return [], [feature]\n        else:\n            return feature\n\n    def get_optim_policy(self):\n        return self.parameters()\n\nclass IDE(nn.Module):\n    def __init__(self, num_classes, resnet=None):\n        super(IDE, self).__init__()\n        if not resnet:\n            resnet = resnet50(pretrained=True)\n        self.backbone = nn.Sequential(\n            resnet.conv1,\n            resnet.bn1,\n            resnet.relu,\n            resnet.maxpool,\n            resnet.layer1,  # res_conv2\n            resnet.layer2,  # res_conv3\n            resnet.layer3,  # res_conv4\n            resnet.layer4\n        )\n        self.global_avgpool = 
nn.AvgPool2d(kernel_size=(12, 4))\n\n    def forward(self, x):\n        \"\"\"\n        :param x: input image tensor of (N, C, H, W)\n        :return: (prediction, triplet_losses, softmax_losses)\n        \"\"\"\n        x = self.backbone(x)\n\n        feature = self.global_avgpool(x).squeeze()\n        if self.training:\n            return [feature], []\n        else:\n            return feature\n\n    def get_optim_policy(self):\n        return self.parameters()"
  },
  {
    "path": "models/resnet.py",
    "content": "# encoding: utf-8\nimport math\n\nimport torch as th\nimport torch\nfrom torch import nn\n\n\nclass Bottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(Bottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * 4)\n        self.relu = nn.ReLU(inplace=True)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\nclass CBAM_Module(nn.Module):\n\n    def __init__(self, channels, reduction):\n        super(CBAM_Module, self).__init__()\n        self.avg_pool = nn.AdaptiveAvgPool2d(1)\n        self.max_pool = nn.AdaptiveMaxPool2d(1)\n        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1,\n                             padding=0)\n        self.relu = nn.ReLU(inplace=True)\n        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1,\n                             padding=0)\n        self.sigmoid_channel = nn.Sigmoid()\n        self.conv_after_concat = nn.Conv2d(2, 1, kernel_size = 3, stride=1, padding = 1)\n        self.sigmoid_spatial = nn.Sigmoid()\n\n    def forward(self, x):\n        #channel 
attention\n        module_input = x\n        avg = self.avg_pool(x)\n        mx = self.max_pool(x)\n        avg = self.fc1(avg)\n        mx = self.fc1(mx)\n        avg = self.relu(avg)\n        mx = self.relu(mx)\n        avg = self.fc2(avg)\n        mx = self.fc2(mx)\n        x = avg + mx\n        x = self.sigmoid_channel(x)\n        x = module_input * x\n        #spatial attention\n        module_input = x \n        avg = torch.mean(x, 1, True)\n        mx, _ = torch.max(x, 1, True)\n        x = torch.cat((avg, mx), 1)\n        x = self.conv_after_concat(x)\n        x = self.sigmoid_spatial(x)\n        x = module_input * x\n        return x\n\nclass CBAMBottleneck(nn.Module):\n    expansion = 4\n\n    def __init__(self, inplanes, planes, stride=1, downsample=None):\n        super(CBAMBottleneck, self).__init__()\n        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n        self.bn1 = nn.BatchNorm2d(planes)\n        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n                               padding=1, bias=False)\n        self.bn2 = nn.BatchNorm2d(planes)\n        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n        self.bn3 = nn.BatchNorm2d(planes * 4)\n        self.relu = nn.ReLU(inplace=True)\n        self.cbam = CBAM_Module(planes * 4, reduction=16)\n        self.downsample = downsample\n        self.stride = stride\n\n    def forward(self, x):\n        residual = x\n\n        out = self.conv1(x)\n        out = self.bn1(out)\n        out = self.relu(out)\n\n        out = self.conv2(out)\n        out = self.bn2(out)\n        out = self.relu(out)\n\n        out = self.conv3(out)\n        out = self.bn3(out)\n        out = self.cbam(out)\n        if self.downsample is not None:\n            residual = self.downsample(x)\n\n        out += residual\n        out = self.relu(out)\n\n        return out\n\ndef cbam_resnet50():\n    return ResNet(last_stride=1, block=CBAMBottleneck)\n\n\nclass 
ResNet(nn.Module):\n    def __init__(self, last_stride=2, block=Bottleneck, layers=[3, 4, 6, 3]):\n        self.inplanes = 64\n        super().__init__()\n        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n                               bias=False)\n        self.bn1 = nn.BatchNorm2d(64)\n        self.relu = nn.ReLU(inplace=True)\n        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n        self.layer1 = self._make_layer(block, 64, layers[0])\n        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n        self.layer4 = self._make_layer(\n            block, 512, layers[3], stride=last_stride)\n\n    def _make_layer(self, block, planes, blocks, stride=1):\n        downsample = None\n        if stride != 1 or self.inplanes != planes * block.expansion:\n            downsample = nn.Sequential(\n                nn.Conv2d(self.inplanes, planes * block.expansion,\n                          kernel_size=1, stride=stride, bias=False),\n                nn.BatchNorm2d(planes * block.expansion),\n            )\n\n        layers = []\n        layers.append(block(self.inplanes, planes, stride, downsample))\n        self.inplanes = planes * block.expansion\n        for i in range(1, blocks):\n            layers.append(block(self.inplanes, planes))\n\n        return nn.Sequential(*layers)\n\n    def forward(self, x):\n        x = self.conv1(x)\n        x = self.bn1(x)\n        x = self.relu(x)\n        x = self.maxpool(x)\n\n        x = self.layer1(x)\n        x = self.layer2(x)\n        x = self.layer3(x)\n        x = self.layer4(x)\n\n        return x\n\n    def load_param(self, param_dict):\n        for i in param_dict:\n            if 'fc' in i:\n                continue\n            self.state_dict()[i].copy_(param_dict[i])\n\n    def random_init(self):\n        for m in self.modules():\n            if isinstance(m, nn.Conv2d):\n                n 
= m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n                m.weight.data.normal_(0, math.sqrt(2. / n))\n            elif isinstance(m, nn.BatchNorm2d):\n                m.weight.data.fill_(1)\n                m.bias.data.zero_()\n\n\nif __name__ == \"__main__\":\n    net = ResNet(last_stride=2)\n    import torch\n\n    x = net(torch.zeros(1, 3, 256, 128))\n    print(x.shape)\n"
  },
  {
    "path": "requirements.txt",
    "content": "torch\ntorchvision\nscikit-learn\ncython\ntensorboardX\nfire\n"
  },
  {
    "path": "trainers/__init__.py",
    "content": ""
  },
  {
    "path": "trainers/evaluator.py",
    "content": "# encoding: utf-8\nimport numpy as np\nimport os\nimport torch\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nfrom trainers.re_ranking import re_ranking as re_ranking_func\n\nclass ResNetEvaluator:\n    def __init__(self, model):\n        self.model = model\n\n    def save_incorrect_pairs(self, distmat, queryloader, galleryloader, \n        g_pids, q_pids, g_camids, q_camids, savefig):\n        os.makedirs(savefig, exist_ok=True)\n        self.model.eval()\n        m = distmat.shape[0]\n        indices = np.argsort(distmat, axis=1)\n        for i in range(m):\n            for j in range(10):\n                index = indices[i][j]\n                if g_camids[index] == q_camids[i] and g_pids[index] == q_pids[i]:\n                    continue\n                else:\n                    break\n            if g_pids[index] == q_pids[i]:\n                continue\n            fig, axes =plt.subplots(1, 11, figsize=(12, 8))\n            img = queryloader.dataset.dataset[i][0]\n            img = Image.open(img).convert('RGB')\n            axes[0].set_title(q_pids[i])\n            axes[0].imshow(img)\n            axes[0].set_axis_off()\n            for j in range(10):\n                gallery_index = indices[i][j]\n                img = galleryloader.dataset.dataset[gallery_index][0]\n                img = Image.open(img).convert('RGB')\n                axes[j+1].set_title(g_pids[gallery_index])\n                axes[j+1].set_axis_off()\n                axes[j+1].imshow(img)\n            fig.savefig(os.path.join(savefig, '%d.png' %q_pids[i]))\n            plt.close(fig)\n\n    def evaluate(self, queryloader, galleryloader, queryFliploader, galleryFliploader, \n        ranks=[1, 2, 4, 5,8, 10, 16, 20], eval_flip=False, re_ranking=False, savefig=False):\n        self.model.eval()\n        qf, q_pids, q_camids = [], [], []\n        for inputs0, inputs1 in zip(queryloader, queryFliploader):\n            inputs, pids, camids = 
self._parse_data(inputs0)\n            feature0 = self._forward(inputs)\n            if eval_flip:\n                inputs, pids, camids = self._parse_data(inputs1)\n                feature1 = self._forward(inputs)\n                qf.append((feature0 + feature1) / 2.0)\n            else:\n                qf.append(feature0)\n\n            q_pids.extend(pids)\n            q_camids.extend(camids)\n        qf = torch.cat(qf, 0)\n        q_pids = torch.Tensor(q_pids)\n        q_camids = torch.Tensor(q_camids)\n\n        print(\"Extracted features for query set: {} x {}\".format(qf.size(0), qf.size(1)))\n\n        gf, g_pids, g_camids = [], [], []\n        for inputs0, inputs1 in zip(galleryloader, galleryFliploader):\n            inputs, pids, camids = self._parse_data(inputs0)\n            feature0 = self._forward(inputs)\n            if eval_flip:\n                inputs, pids, camids = self._parse_data(inputs1)\n                feature1 = self._forward(inputs)\n                gf.append((feature0 + feature1) / 2.0)\n            else:\n                gf.append(feature0)\n                \n            g_pids.extend(pids)\n            g_camids.extend(camids)\n        gf = torch.cat(gf, 0)\n        g_pids = torch.Tensor(g_pids)\n        g_camids = torch.Tensor(g_camids)\n\n        print(\"Extracted features for gallery set: {} x {}\".format(gf.size(0), gf.size(1)))\n\n        print(\"Computing distance matrix\")\n\n        m, n = qf.size(0), gf.size(0)\n        q_g_dist = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) + \\\n            torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()\n        q_g_dist.addmm_(1, -2, qf, gf.t())\n\n        if re_ranking:\n            q_q_dist = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, m) + \\\n                torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, m).t()\n            q_q_dist.addmm_(1, -2, qf, qf.t())\n\n            g_g_dist = torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, n) + \\\n         
       torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, n).t()\n            g_g_dist.addmm_(1, -2, gf, gf.t())\n\n            q_g_dist = q_g_dist.numpy()\n            q_g_dist[q_g_dist < 0] = 0\n            q_g_dist = np.sqrt(q_g_dist)\n\n            q_q_dist = q_q_dist.numpy()\n            q_q_dist[q_q_dist < 0] = 0\n            q_q_dist = np.sqrt(q_q_dist)\n\n            g_g_dist = g_g_dist.numpy()\n            g_g_dist[g_g_dist < 0] = 0\n            g_g_dist = np.sqrt(g_g_dist)\n\n            distmat = torch.Tensor(re_ranking_func(q_g_dist, q_q_dist, g_g_dist))\n        else:\n            distmat = q_g_dist\n\n        if savefig:\n            print(\"Saving figure\")\n            self.save_incorrect_pairs(distmat.numpy(), queryloader, galleryloader, \n                g_pids.numpy(), q_pids.numpy(), g_camids.numpy(), q_camids.numpy(), savefig)\n\n        print(\"Computing CMC and mAP\")\n        cmc, mAP = self.eval_func_gpu(distmat, q_pids, g_pids, q_camids, g_camids)\n\n        print(\"Results ----------\")\n        print(\"mAP: {:.1%}\".format(mAP))\n        print(\"CMC curve\")\n        for r in ranks:\n            print(\"Rank-{:<3}: {:.1%}\".format(r, cmc[r - 1]))\n        print(\"------------------\")\n\n        return cmc[0]\n\n    def _parse_data(self, inputs):\n        imgs, pids, camids = inputs\n        return imgs.cuda(), pids, camids\n\n    def _forward(self, inputs):\n        with torch.no_grad():\n            feature = self.model(inputs)\n        return feature.cpu()\n\n    def eval_func_gpu(self, distmat, q_pids, g_pids, q_camids, g_camids, max_rank=50):\n        num_q, num_g = distmat.size()\n        if num_g < max_rank:\n            max_rank = num_g\n            print(\"Note: number of gallery samples is quite small, got {}\".format(num_g))\n        _, indices = torch.sort(distmat, dim=1)\n        matches = g_pids[indices] == q_pids.view([num_q, -1]) \n        keep = ~((g_pids[indices] == q_pids.view([num_q, -1])) & (g_camids[indices]  == 
q_camids.view([num_q, -1])))\n        #keep = g_camids[indices]  != q_camids.view([num_q, -1])\n\n        results = []\n        num_rel = []\n        for i in range(num_q):\n            m = matches[i][keep[i]]\n            if m.any():\n                num_rel.append(m.sum())\n                results.append(m[:max_rank].unsqueeze(0))\n        matches = torch.cat(results, dim=0).float()\n        num_rel = torch.Tensor(num_rel)\n\n        cmc = matches.cumsum(dim=1)\n        cmc[cmc > 1] = 1\n        all_cmc = cmc.sum(dim=0) / cmc.size(0)\n\n        pos = torch.Tensor(range(1, max_rank+1))\n        temp_cmc = matches.cumsum(dim=1) / pos * matches\n        AP = temp_cmc.sum(dim=1) / num_rel\n        mAP = AP.sum() / AP.size(0)\n        return all_cmc.numpy(), mAP.item()\n\n    def eval_func(self, distmat, q_pids, g_pids, q_camids, g_camids, max_rank=50):\n        \"\"\"Evaluation with market1501 metric\n            Key: for each query identity, its gallery images from the same camera view are discarded.\n            \"\"\"\n        num_q, num_g = distmat.shape\n        if num_g < max_rank:\n            max_rank = num_g\n            print(\"Note: number of gallery samples is quite small, got {}\".format(num_g))\n        indices = np.argsort(distmat, axis=1)\n        matches = (g_pids[indices] == q_pids[:, np.newaxis]).astype(np.int32)\n\n        # compute cmc curve for each query\n        all_cmc = []\n        all_AP = []\n        num_valid_q = 0.  
# number of valid query\n        for q_idx in range(num_q):\n            # get query pid and camid\n            q_pid = q_pids[q_idx]\n            q_camid = q_camids[q_idx]\n\n            # remove gallery samples that have the same pid and camid with query\n            order = indices[q_idx]\n            remove = (g_pids[order] == q_pid) & (g_camids[order] == q_camid)\n            keep = np.invert(remove)\n\n            # compute cmc curve\n            # binary vector, positions with value 1 are correct matches\n            orig_cmc = matches[q_idx][keep]\n            if not np.any(orig_cmc):\n                # this condition is true when query identity does not appear in gallery\n                continue\n\n            cmc = orig_cmc.cumsum()\n            cmc[cmc > 1] = 1\n\n            all_cmc.append(cmc[:max_rank])\n            num_valid_q += 1.\n\n            # compute average precision\n            # reference: https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Average_precision\n            num_rel = orig_cmc.sum()\n            tmp_cmc = orig_cmc.cumsum()\n            tmp_cmc = [x / (i + 1.) for i, x in enumerate(tmp_cmc)]\n            tmp_cmc = np.asarray(tmp_cmc) * orig_cmc\n            AP = tmp_cmc.sum() / num_rel\n            all_AP.append(AP)\n\n        assert num_valid_q > 0, \"Error: all query identities do not appear in gallery\"\n\n        all_cmc = np.asarray(all_cmc).astype(np.float32)\n        all_cmc = all_cmc.sum(0) / num_valid_q\n        mAP = np.mean(all_AP)\n\n        return all_cmc, mAP\n"
  },
  {
    "path": "trainers/re_ranking.py",
    "content": "#!/usr/bin/env python2/python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Jun 26 14:46:56 2017\n@author: luohao\nModified by Houjing Huang, 2017-12-22. \n- This version accepts distance matrix instead of raw features. \n- The difference of `/` division between python 2 and 3 is handled.\n- numpy.float16 is replaced by numpy.float32 for numerical precision.\n\nModified by Zhedong Zheng, 2018-1-12.\n- replace sort with topK, which save about 30s.\n\"\"\"\n\n\"\"\"\nCVPR2017 paper:Zhong Z, Zheng L, Cao D, et al. Re-ranking Person Re-identification with k-reciprocal Encoding[J]. 2017.\nurl:http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhong_Re-Ranking_Person_Re-Identification_CVPR_2017_paper.pdf\nMatlab version: https://github.com/zhunzhong07/person-re-ranking\n\"\"\"\n\n\"\"\"\nAPI\nq_g_dist: query-gallery distance matrix, numpy array, shape [num_query, num_gallery]\nq_q_dist: query-query distance matrix, numpy array, shape [num_query, num_query]\ng_g_dist: gallery-gallery distance matrix, numpy array, shape [num_gallery, num_gallery]\nk1, k2, lambda_value: parameters, the original paper is (k1=20, k2=6, lambda_value=0.3)\nReturns:\n  final_dist: re-ranked distance, numpy array, shape [num_query, num_gallery]\n\"\"\"\n\n\nimport numpy as np\ndef re_ranking(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0.3):\n\n    # The following naming, e.g. gallery_num, is different from outer scope.\n    # Don't care about it.\n\n    original_dist = np.concatenate(\n      [np.concatenate([q_q_dist, q_g_dist], axis=1),\n       np.concatenate([q_g_dist.T, g_g_dist], axis=1)],\n      axis=0)\n    original_dist = np.power(original_dist, 2).astype(np.float32)\n    original_dist = np.transpose(1. 
* original_dist/np.max(original_dist,axis = 0))\n    V = np.zeros_like(original_dist).astype(np.float32)\n    initial_rank = np.argsort(original_dist).astype(np.int32)\n\n    query_num = q_g_dist.shape[0]\n    gallery_num = q_g_dist.shape[0] + q_g_dist.shape[1]\n    all_num = gallery_num\n\n    for i in range(all_num):\n        # k-reciprocal neighbors\n        forward_k_neigh_index = initial_rank[i,:k1+1]\n        backward_k_neigh_index = initial_rank[forward_k_neigh_index,:k1+1]\n        fi = np.where(backward_k_neigh_index==i)[0]\n        k_reciprocal_index = forward_k_neigh_index[fi]\n        k_reciprocal_expansion_index = k_reciprocal_index\n        for j in range(len(k_reciprocal_index)):\n            candidate = k_reciprocal_index[j]\n            candidate_forward_k_neigh_index = initial_rank[candidate,:int(np.around(k1/2.))+1]\n            candidate_backward_k_neigh_index = initial_rank[candidate_forward_k_neigh_index,:int(np.around(k1/2.))+1]\n            fi_candidate = np.where(candidate_backward_k_neigh_index == candidate)[0]\n            candidate_k_reciprocal_index = candidate_forward_k_neigh_index[fi_candidate]\n            if len(np.intersect1d(candidate_k_reciprocal_index,k_reciprocal_index))> 2./3*len(candidate_k_reciprocal_index):\n                k_reciprocal_expansion_index = np.append(k_reciprocal_expansion_index,candidate_k_reciprocal_index)\n\n        k_reciprocal_expansion_index = np.unique(k_reciprocal_expansion_index)\n        weight = np.exp(-original_dist[i,k_reciprocal_expansion_index])\n        V[i,k_reciprocal_expansion_index] = 1.*weight/np.sum(weight)\n    original_dist = original_dist[:query_num,]\n    if k2 != 1:\n        V_qe = np.zeros_like(V,dtype=np.float32)\n        for i in range(all_num):\n            V_qe[i,:] = np.mean(V[initial_rank[i,:k2],:],axis=0)\n        V = V_qe\n        del V_qe\n    del initial_rank\n    invIndex = []\n    for i in range(gallery_num):\n        invIndex.append(np.where(V[:,i] != 0)[0])\n\n    
jaccard_dist = np.zeros_like(original_dist,dtype = np.float32)\n\n\n    for i in range(query_num):\n        temp_min = np.zeros(shape=[1,gallery_num],dtype=np.float32)\n        indNonZero = np.where(V[i,:] != 0)[0]\n        indImages = []\n        indImages = [invIndex[ind] for ind in indNonZero]\n        for j in range(len(indNonZero)):\n            temp_min[0,indImages[j]] = temp_min[0,indImages[j]]+ np.minimum(V[i,indNonZero[j]],V[indImages[j],indNonZero[j]])\n        jaccard_dist[i] = 1-temp_min/(2.-temp_min)\n\n    final_dist = jaccard_dist*(1-lambda_value) + original_dist*lambda_value\n    del original_dist\n    del V\n    del jaccard_dist\n    final_dist = final_dist[:query_num,query_num:]\n    return final_dist\n\ndef k_reciprocal_neigh( initial_rank, i, k1):\n    forward_k_neigh_index = initial_rank[i,:k1+1]\n    backward_k_neigh_index = initial_rank[forward_k_neigh_index,:k1+1]\n    fi = np.where(backward_k_neigh_index==i)[0]\n    return forward_k_neigh_index[fi]\n\ndef re_ranking_new(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0.3):\n    # The following naming, e.g. gallery_num, is different from outer scope.\n    # Don't care about it.\n    original_dist = np.concatenate(\n      [np.concatenate([q_q_dist, q_g_dist], axis=1),\n       np.concatenate([q_g_dist.T, g_g_dist], axis=1)],\n      axis=0)\n    original_dist = 2. - 2 * original_dist   #np.power(original_dist, 2).astype(np.float32)\n    original_dist = np.transpose(1. 
* original_dist/np.max(original_dist,axis = 0))\n    V = np.zeros_like(original_dist).astype(np.float32)\n    #initial_rank = np.argsort(original_dist).astype(np.int32)\n    # top K1+1\n    initial_rank = np.argpartition( original_dist, range(1,k1+1) )\n\n    query_num = q_g_dist.shape[0]\n    all_num = original_dist.shape[0]\n\n    for i in range(all_num):\n        # k-reciprocal neighbors\n        k_reciprocal_index = k_reciprocal_neigh( initial_rank, i, k1)\n        k_reciprocal_expansion_index = k_reciprocal_index\n        for j in range(len(k_reciprocal_index)):\n            candidate = k_reciprocal_index[j]\n            candidate_k_reciprocal_index = k_reciprocal_neigh( initial_rank, candidate, int(np.around(k1/2)))\n            if len(np.intersect1d(candidate_k_reciprocal_index,k_reciprocal_index))> 2./3*len(candidate_k_reciprocal_index):\n                k_reciprocal_expansion_index = np.append(k_reciprocal_expansion_index,candidate_k_reciprocal_index)\n\n        k_reciprocal_expansion_index = np.unique(k_reciprocal_expansion_index)\n        weight = np.exp(-original_dist[i,k_reciprocal_expansion_index])\n        V[i,k_reciprocal_expansion_index] = 1.*weight/np.sum(weight)\n\n    original_dist = original_dist[:query_num,]\n    if k2 != 1:\n        V_qe = np.zeros_like(V,dtype=np.float32)\n        for i in range(all_num):\n            V_qe[i,:] = np.mean(V[initial_rank[i,:k2],:],axis=0)\n        V = V_qe\n        del V_qe\n    del initial_rank\n    invIndex = []\n    for i in range(all_num):\n        invIndex.append(np.where(V[:,i] != 0)[0])\n\n    jaccard_dist = np.zeros_like(original_dist,dtype = np.float32)\n\n    for i in range(query_num):\n        temp_min = np.zeros(shape=[1,all_num],dtype=np.float32)\n        indNonZero = np.where(V[i,:] != 0)[0]\n        indImages = []\n        indImages = [invIndex[ind] for ind in indNonZero]\n        for j in range(len(indNonZero)):\n            temp_min[0,indImages[j]] = temp_min[0,indImages[j]]+ 
np.minimum(V[i,indNonZero[j]],V[indImages[j],indNonZero[j]])\n        jaccard_dist[i] = 1-temp_min/(2.-temp_min)\n\n    final_dist = jaccard_dist*(1-lambda_value) + original_dist*lambda_value\n    del original_dist\n    del V\n    del jaccard_dist\n    final_dist = final_dist[:query_num,query_num:]\n    return final_dist\n"
  },
  {
    "path": "trainers/trainer.py",
    "content": "# encoding: utf-8\nimport math\nimport time\nimport numpy as np\nimport random\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom utils.loss import euclidean_dist, hard_example_mining\nfrom utils.meters import AverageMeter\n\n\nclass cls_tripletTrainer:\n    def __init__(self, opt, model, optimizer, criterion, summary_writer):\n        self.opt = opt\n        self.model = model\n        self.optimizer = optimizer\n        self.criterion = criterion\n        self.summary_writer = summary_writer\n\n    def train(self, epoch, data_loader):\n        self.model.train()\n\n        batch_time = AverageMeter()\n        data_time = AverageMeter()\n        losses = AverageMeter()\n\n        start = time.time()\n        for i, inputs in enumerate(data_loader):\n            data_time.update(time.time() - start)\n\n            # model optimizer\n            self._parse_data(inputs)\n            self._forward()\n            self.optimizer.zero_grad()\n            self._backward()\n            self.optimizer.step()\n\n            batch_time.update(time.time() - start)\n            losses.update(self.loss.item())\n\n            # tensorboard\n            global_step = epoch * len(data_loader) + i\n            self.summary_writer.add_scalar('loss', self.loss.item(), global_step)\n            self.summary_writer.add_scalar('lr', self.optimizer.param_groups[0]['lr'], global_step)\n\n            start = time.time()\n\n            if (i + 1) % self.opt.print_freq == 0:\n                print('Epoch: [{}][{}/{}]\\t'\n                      'Batch Time {:.3f} ({:.3f})\\t'\n                      'Data Time {:.3f} ({:.3f})\\t'\n                      'Loss {:.3f} ({:.3f})\\t'\n                      .format(epoch, i + 1, len(data_loader),\n                              batch_time.val, batch_time.mean,\n                              data_time.val, data_time.mean,\n                              losses.val, losses.mean))\n        param_group = 
self.optimizer.param_groups\n        print('Epoch: [{}]\\tEpoch Time {:.3f} s\\tLoss {:.3f}\\t'\n              'Lr {:.2e}'\n              .format(epoch, batch_time.sum, losses.mean, param_group[0]['lr']))\n        print()\n\n    def _parse_data(self, inputs):\n        imgs, pids, _ = inputs\n        if self.opt.random_crop and random.random() > 0.3:\n            h, w = imgs.size()[-2:]\n            start = int((h-2*w)*random.random())\n            mask = imgs.new_zeros(imgs.size())\n            mask[:, :, start:start+2*w, :] = 1\n            imgs = imgs * mask\n        '''\n        if random.random() > 0.5:\n            h, w = imgs.size()[-2:]\n            for attempt in range(100):\n                area = h * w\n                target_area = random.uniform(0.02, 0.4) * area\n                aspect_ratio = random.uniform(0.3, 3.33)\n                ch = int(round(math.sqrt(target_area * aspect_ratio)))\n                cw = int(round(math.sqrt(target_area / aspect_ratio)))\n                if cw <  w and ch < h:\n                    x1 = random.randint(0, h - ch)\n                    y1 = random.randint(0, w - cw)\n                    imgs[:, :, x1:x1+h, y1:y1+w] = 0\n                    break\n        '''\n        self.data = imgs.cuda()\n        self.target = pids.cuda()\n\n    def _forward(self):\n        score, feat = self.model(self.data)\n        self.loss = self.criterion(score, feat, self.target)\n\n    def _backward(self):\n        self.loss.backward()\n"
  },
  {
    "path": "utils/DistWeightDevianceLoss.py",
    "content": "from __future__ import absolute_import\n\nimport torch\nfrom torch import nn\nfrom torch.autograd import Variable\nimport numpy as np\n\n\ndef similarity(inputs_):\n    # Compute similarity mat of deep feature\n    # n = inputs_.size(0)\n    sim = torch.matmul(inputs_, inputs_.t())\n    return sim\n\n\ndef GaussDistribution(data):\n    \"\"\"\n    :param data:\n    :return:\n    \"\"\"\n    mean_value = torch.mean(data)\n    diff = data - mean_value\n    std = torch.sqrt(torch.mean(torch.pow(diff, 2)))\n    return mean_value, std\n\n\nclass DistWeightBinDevianceLoss(nn.Module):\n    def __init__(self, margin=0.5):\n        super(DistWeightBinDevianceLoss, self).__init__()\n        self.margin = margin\n\n    def forward(self, inputs, targets):\n        n = inputs.size(0)\n        # Compute similarity matrix\n        sim_mat = similarity(inputs)\n        # print(sim_mat)\n        targets = targets.cuda()\n        # split the positive and negative pairs\n        eyes_ = Variable(torch.eye(n, n)).cuda()\n        # eyes_ = Variable(torch.eye(n, n))\n        pos_mask = targets.expand(n, n).eq(targets.expand(n, n).t())\n        neg_mask = eyes_.eq(eyes_) - pos_mask\n        pos_mask = pos_mask - eyes_.eq(1)\n\n        pos_sim = torch.masked_select(sim_mat, pos_mask)\n        neg_sim = torch.masked_select(sim_mat, neg_mask)\n\n        num_instances = len(pos_sim)//n + 1\n        num_neg_instances = n - num_instances\n\n        pos_sim = pos_sim.resize(len(pos_sim)//(num_instances-1), num_instances-1)\n        neg_sim = neg_sim.resize(\n            len(neg_sim) // num_neg_instances, num_neg_instances)\n\n        #  clear way to compute the loss first\n        loss = list()\n        c = 0\n\n        for i, pos_pair in enumerate(pos_sim):\n            # print(i)\n            pos_pair = torch.sort(pos_pair)[0]\n            neg_pair = torch.sort(neg_sim[i])[0]\n\n            neg_mean, neg_std = GaussDistribution(neg_pair)\n            prob = 
torch.exp(torch.pow(neg_pair - neg_mean, 2) / (2*torch.pow(neg_std, 2)))\n            neg_index = torch.multinomial(prob, num_instances - 1, replacement=False)\n\n            neg_pair = neg_pair[neg_index]\n\n            if len(neg_pair) < 1:\n                c += 1\n                continue\n            if pos_pair[-1].item() > neg_pair[-1].item() + 0.05:\n                c += 1\n\n            neg_pair = torch.sort(neg_pair)[0]\n\n            if i == 1 and np.random.randint(256) == 1:\n                print('neg_pair is ---------', neg_pair)\n                print('pos_pair is ---------', pos_pair.data)\n\n            pos_loss = torch.mean(torch.log(1 + torch.exp(-2*(pos_pair - self.margin))))\n            neg_loss = 0.04*torch.mean(torch.log(1 + torch.exp(50*(neg_pair - self.margin))))\n            loss.append(pos_loss + neg_loss)\n        loss = [torch.unsqueeze(l,0) for l in loss]\n        loss = torch.sum(torch.cat(loss))/n\n\n        prec = float(c)/n\n        neg_d = torch.mean(neg_sim).item()\n        pos_d = torch.mean(pos_sim).item()\n\n        return loss, prec, pos_d, neg_d\n\n\ndef main():\n    data_size = 32\n    input_dim = 3\n    output_dim = 2\n    num_class = 4\n    # margin = 0.5\n    x = Variable(torch.rand(data_size, input_dim), requires_grad=False)\n    # print(x)\n    w = Variable(torch.rand(input_dim, output_dim), requires_grad=True)\n    inputs = x.mm(w)\n    y_ = 8*list(range(num_class))\n    targets = Variable(torch.IntTensor(y_))\n\n    print(DistWeightBinDevianceLoss()(inputs, targets))\n\n\nif __name__ == '__main__':\n    main()\n    print('Congratulations to you!')\n\n\n"
  },
  {
    "path": "utils/LiftedStructure.py",
    "content": "from __future__ import absolute_import\n\nimport torch\nfrom torch import nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as F\nimport numpy as np\n\n\ndef similarity(inputs_):\n    # Compute similarity mat of deep feature\n    # n = inputs_.size(0)\n    sim = torch.matmul(inputs_, inputs_.t())\n    return sim\n\ndef pdist(A, squared = False, eps = 1e-4):\n    prod = torch.mm(A, A.t())\n    norm = prod.diag().unsqueeze(1).expand_as(prod)\n    res = (norm + norm.t() - 2 * prod).clamp(min = 0)\n    return res if squared else res.clamp(min = eps).sqrt()\n\nclass LiftedStructureLoss(nn.Module):\n    def __init__(self, alpha=10, beta=2, margin=0.5, hard_mining=None, **kwargs):\n        super(LiftedStructureLoss, self).__init__()\n        self.margin = margin\n        self.alpha = alpha\n        self.beta = beta\n        self.hard_mining = hard_mining\n\n    def forward(self, embeddings, labels):\n        '''\n        score = embeddings\n        target = labels\n        loss = 0\n        counter = 0\n        bsz = score.size(0)\n        mag = (score ** 2).sum(1).expand(bsz, bsz)\n        sim = score.mm(score.transpose(0, 1))\n        dist = (mag + mag.transpose(0, 1) - 2 * sim)\n        dist = torch.nn.functional.relu(dist).sqrt()\n        \n        for i in range(bsz):\n            t_i = target[i].item()\n            for j in range(i + 1, bsz):\n                t_j = target[j].item()\n                if t_i == t_j:\n                    # Negative component\n                    # !! 
Could do other things (like softmax that weights closer negatives)\n                    l_ni = (self.margin - dist[i][target != t_i]).exp().sum()\n                    l_nj = (self.margin - dist[j][target != t_j]).exp().sum()\n                    l_n  = (l_ni + l_nj).log()\n                    # Positive component\n                    l_p  = dist[i,j]\n                    loss += torch.nn.functional.relu(l_n + l_p) ** 2\n                    counter += 1\n        return loss / (2 * counter), 0\n        '''\n        margin = 1.0\n        eps = 1e-4\n        d = pdist(embeddings, squared = False, eps = eps)\n        pos = torch.eq(*[labels.unsqueeze(dim).expand_as(d) for dim in [0, 1]]).type_as(d)\n        neg_i = torch.mul((margin - d).exp(), 1 - pos).sum(1).expand_as(d)\n        return torch.sum(F.relu(pos.triu(1) * ((neg_i + neg_i.t()).log() + d)).pow(2)) / (pos.sum() - len(d)), 0\n\ndef main():\n    data_size = 32\n    input_dim = 3\n    output_dim = 2\n    num_class = 4\n    # margin = 0.5\n    x = Variable(torch.rand(data_size, input_dim), requires_grad=False)\n    # print(x)\n    w = Variable(torch.rand(input_dim, output_dim), requires_grad=True)\n    inputs = x.mm(w)\n    y_ = 8*list(range(num_class))\n    targets = Variable(torch.IntTensor(y_))\n\n    print(LiftedStructureLoss()(inputs, targets))\n\n\nif __name__ == '__main__':\n    main()\n    print('Congratulations to you!')\n\n\n"
  },
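The vectorized loss above leans on `pdist`, which expands the squared distance from the Gram matrix: ||a_i - a_j||^2 = ||a_i||^2 + ||a_j||^2 - 2 a_i . a_j. A minimal NumPy sketch of that identity (the helper name `pdist_np` is ours, not part of the repo):

```python
import numpy as np

def pdist_np(A, squared=False, eps=1e-4):
    # Same identity as utils' pdist: ||a_i - a_j||^2 = ||a_i||^2 + ||a_j||^2 - 2 a_i . a_j
    prod = A @ A.T                      # Gram matrix
    norm = np.diag(prod)[:, None]       # squared norms as a column vector
    res = np.clip(norm + norm.T - 2 * prod, 0, None)
    return res if squared else np.sqrt(np.clip(res, eps, None))

A = np.array([[0.0, 0.0], [3.0, 4.0]])
print(pdist_np(A, squared=True))        # off-diagonal entries are 25.0 (= 3^2 + 4^2)
```

The `clamp`/`clip` against `eps` before the square root matches the torch version and keeps gradients finite at zero distance.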
  {
    "path": "utils/__init__.py",
    "content": ""
  },
  {
    "path": "utils/loss.py",
    "content": "# encoding: utf-8\nimport random\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\n\ndef topk_mask(input, dim, K = 10, **kwargs):\n    index = input.topk(max(1, min(K, input.size(dim))), dim = dim, **kwargs)[1]\n    return torch.autograd.Variable(torch.zeros_like(input.data)).scatter(dim, index, 1.0)\n\ndef pdist(A, squared = False, eps = 1e-4):\n    prod = torch.mm(A, A.t())\n    norm = prod.diag().unsqueeze(1).expand_as(prod)\n    res = (norm + norm.t() - 2 * prod).clamp(min = 0)\n    return res if squared else res.clamp(min = eps).sqrt()\n\n\ndef normalize(x, axis=-1):\n    \"\"\"Normalizing to unit length along the specified dimension.\n    Args:\n      x: pytorch Variable\n    Returns:\n      x: pytorch Variable, same shape as input\n    \"\"\"\n    x = 1. * x / (torch.norm(x, 2, axis, keepdim=True).expand_as(x) + 1e-12)\n    return x\n\n\ndef euclidean_dist(x, y):\n    \"\"\"\n    Args:\n      x: pytorch Variable, with shape [m, d]\n      y: pytorch Variable, with shape [n, d]\n    Returns:\n      dist: pytorch Variable, with shape [m, n]\n    \"\"\"\n    m, n = x.size(0), y.size(0)\n    xx = torch.pow(x, 2).sum(1, keepdim=True).expand(m, n)\n    yy = torch.pow(y, 2).sum(1, keepdim=True).expand(n, m).t()\n    dist = xx + yy\n    dist.addmm_(1, -2, x, y.t())\n    dist = dist.clamp(min=1e-12).sqrt()  # for numerical stability\n    return dist\n\n\ndef hard_example_mining(dist_mat, labels, margin, return_inds=False):\n    \"\"\"For each anchor, find the hardest positive and negative sample.\n    Args:\n      dist_mat: pytorch Variable, pair wise distance between samples, shape [N, N]\n      labels: pytorch LongTensor, with shape [N]\n      return_inds: whether to return the indices. 
Setting it to `False` saves a little computation.\n    Returns:\n      dist_ap: pytorch Variable, distance(anchor, positive); shape [N]\n      dist_an: pytorch Variable, distance(anchor, negative); shape [N]\n      p_inds: pytorch LongTensor, with shape [N];\n        indices of selected hard positive samples; 0 <= p_inds[i] <= N - 1\n      n_inds: pytorch LongTensor, with shape [N];\n        indices of selected hard negative samples; 0 <= n_inds[i] <= N - 1\n    NOTE: Only considers the case in which all labels have the same number of\n      samples, so that all anchors can be handled in parallel.\n    \"\"\"\n\n    assert len(dist_mat.size()) == 2\n    assert dist_mat.size(0) == dist_mat.size(1)\n    N = dist_mat.size(0)\n\n    # shape [N, N]\n    is_pos = labels.expand(N, N).eq(labels.expand(N, N).t())\n    is_neg = labels.expand(N, N).ne(labels.expand(N, N).t())\n    # `dist_ap` means distance(anchor, positive)\n    # both `dist_ap` and `relative_p_inds` with shape [N, 1]\n    dist_ap, relative_p_inds = torch.max(\n        dist_mat[is_pos].contiguous().view(N, -1), 1, keepdim=True)\n    # `dist_an` means distance(anchor, negative)\n    # both `dist_an` and `relative_n_inds` with shape [N, 1]\n    dist_an, relative_n_inds = torch.min(\n        dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True)\n    # shape [N]\n    dist_ap = dist_ap.squeeze(1)\n    dist_an = dist_an.squeeze(1)\n\n    if return_inds:\n        # shape [N, N]\n        ind = (labels.new().resize_as_(labels)\n               .copy_(torch.arange(0, N).long())\n               .unsqueeze(0).expand(N, N))\n        # shape [N, 1]\n        p_inds = torch.gather(\n            ind[is_pos].contiguous().view(N, -1), 1, relative_p_inds.data)\n        n_inds = torch.gather(\n            ind[is_neg].contiguous().view(N, -1), 1, relative_n_inds.data)\n        # shape [N]\n        p_inds = p_inds.squeeze(1)\n        n_inds = n_inds.squeeze(1)\n        return dist_ap, dist_an, p_inds, n_inds\n\n    return dist_ap, dist_an\n\n\nclass TripletLoss(object):\n    \"\"\"Modified from Tong Xiao's open-reid (https://github.com/Cysu/open-reid).\n    Related Triplet Loss theory can be found in the paper 'In Defense of the\n    Triplet Loss for Person Re-Identification'.\"\"\"\n\n    def __init__(self, margin=None):\n        self.margin = margin\n        if margin is not None:\n            self.ranking_loss = nn.MarginRankingLoss(margin=margin)\n        else:\n            self.ranking_loss = nn.SoftMarginLoss()\n\n    def __call__(self, global_feat, labels, normalize_feature=False):\n        if normalize_feature:\n            global_feat = normalize(global_feat, axis=-1)\n        dist_mat = euclidean_dist(global_feat, global_feat)\n        dist_ap, dist_an = hard_example_mining(dist_mat, labels, self.margin)\n        y = dist_an.new().resize_as_(dist_an).fill_(1)\n        if self.margin is not None:\n            loss = self.ranking_loss(dist_an, dist_ap, y)\n        else:\n            loss = self.ranking_loss(dist_an - dist_ap, y)\n        return loss, dist_ap, dist_an\n\n\nclass CrossEntropyLabelSmooth(nn.Module):\n    \"\"\"Cross entropy loss with label smoothing regularizer.\n    Reference:\n    Szegedy et al. Rethinking the Inception Architecture for Computer Vision. CVPR 2016.\n    Equation: y = (1 - epsilon) * y + epsilon / K.\n    Args:\n        num_classes (int): number of classes.\n        epsilon (float): smoothing weight.\n    \"\"\"\n\n    def __init__(self, num_classes, epsilon=0.1, use_gpu=True):\n        super(CrossEntropyLabelSmooth, self).__init__()\n        self.num_classes = num_classes\n        self.epsilon = epsilon\n        self.use_gpu = use_gpu\n        self.logsoftmax = nn.LogSoftmax(dim=1)\n\n    def forward(self, inputs, targets):\n        \"\"\"\n        Args:\n            inputs: prediction matrix (before softmax) with shape (batch_size, num_classes)\n            targets: ground truth labels with shape (batch_size)\n        \"\"\"\n        log_probs = self.logsoftmax(inputs)\n        targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).cpu(), 1)\n        if self.use_gpu: targets = targets.cuda()\n        targets = (1 - self.epsilon) * targets + self.epsilon / self.num_classes\n        loss = (- targets * log_probs).mean(0).sum()\n        return loss\n\nclass Margin:\n    def __call__(self, embeddings, labels):\n        embeddings = F.normalize(embeddings)\n        alpha = 0.2\n        beta = 1.2\n        distance_threshold = 0.5\n        inf = 1e6\n        eps = 1e-6\n        distance_weighted_sampling = True\n        d = pdist(embeddings)\n        pos = torch.eq(*[labels.unsqueeze(dim).expand_as(d) for dim in [0, 1]]).type_as(d) - torch.autograd.Variable(torch.eye(len(d))).type_as(d)\n        num_neg = int(pos.data.sum() / len(pos))\n        if distance_weighted_sampling:\n            # Distance-weighted sampling (Wu et al., \"Sampling Matters in Deep\n            # Embedding Learning\"): sample negatives with probability inverse\n            # to the density of pairwise distances on the unit sphere.\n            dim = embeddings.size(-1)\n            distance = d.data.clamp(min = distance_threshold)\n            distribution = distance.pow(dim - 2) * ((1 - distance.pow(2) / 4).pow(0.5 * (dim - 3)))\n            weights = distribution.reciprocal().masked_fill_(pos.data + torch.eye(len(d)).type_as(d.data) > 0, eps)\n            samples = torch.multinomial(weights, replacement = False, num_samples = num_neg)\n            neg = torch.autograd.Variable(torch.zeros_like(pos.data).scatter_(1, samples, 1))\n        else:\n            neg = topk_mask(d + inf * ((pos > 0) + (d < distance_threshold)).type_as(d), dim = 1, largest = False, K = num_neg)\n        L = F.relu(alpha + (pos * 2 - 1) * (d - beta))\n        M = ((pos + neg > 0) * (L > 0)).float()\n        return (M * L).sum() / M.sum(), 0\n"
  },
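`CrossEntropyLabelSmooth` implements Szegedy et al.'s smoothed target y = (1 - epsilon) * onehot + epsilon / K, then takes the usual cross entropy against log-softmax outputs. A NumPy sketch of the same computation (the function name `smooth_ce` is ours, for illustration only):

```python
import numpy as np

def smooth_ce(logits, labels, num_classes, epsilon=0.1):
    # Mirrors CrossEntropyLabelSmooth: y = (1 - epsilon) * onehot + epsilon / K
    shifted = logits - logits.max(axis=1, keepdims=True)           # numerically stable log-softmax
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    onehot = np.eye(num_classes)[labels]
    targets = (1 - epsilon) * onehot + epsilon / num_classes
    return (-targets * log_probs).mean(axis=0).sum()               # same reduction as the torch version

logits = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
print(smooth_ce(logits, np.array([0, 1]), 3))
```

With epsilon = 0 this reduces to plain cross entropy; with epsilon > 0 the loss on confident correct predictions is strictly larger, which is the regularizing effect.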
  {
    "path": "utils/meters.py",
    "content": "# encoding: utf-8\nimport math\n\nimport numpy as np\n\n\nclass AverageMeter(object):\n    def __init__(self):\n        self.n = 0\n        self.sum = 0.0\n        self.var = 0.0\n        self.val = 0.0\n        self.mean = np.nan\n        self.std = np.nan\n\n    def update(self, value, n=1):\n        self.val = value\n        self.sum += value\n        self.var += value * value\n        self.n += n\n\n        if self.n == 0:\n            self.mean, self.std = np.nan, np.nan\n        elif self.n == 1:\n            self.mean, self.std = self.sum, np.inf\n        else:\n            self.mean = self.sum / self.n\n            self.std = math.sqrt(\n                (self.var - self.n * self.mean * self.mean) / (self.n - 1.0))\n\n    def value(self):\n        return self.mean, self.std\n\n    def reset(self):\n        self.n = 0\n        self.sum = 0.0\n        self.var = 0.0\n        self.val = 0.0\n        self.mean = np.nan\n        self.std = np.nan\n"
  },
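`AverageMeter` keeps only the running sum and sum of squares, and recovers the mean and the sample standard deviation (the `n - 1` denominator) on each update. The recurrence can be checked against NumPy directly; the values below are illustrative:

```python
import numpy as np

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n, s, var = 0, 0.0, 0.0
for v in values:                 # same accumulation as AverageMeter.update with n=1
    s += v
    var += v * v
    n += 1
mean = s / n
std = np.sqrt((var - n * mean * mean) / (n - 1.0))   # sample std, i.e. ddof=1
print(mean, std)
```

Storing sum and sum-of-squares avoids keeping the whole history, at the cost of some numerical error when the mean is large relative to the spread.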
  {
    "path": "utils/random_erasing.py",
    "content": "from __future__ import absolute_import\n\nfrom torchvision.transforms import *\n\nfrom PIL import Image\nimport random\nimport math\nimport numpy as np\nimport torch\n\nclass Cutout(object):\n    def __init__(self, probability = 0.5, size = 64, mean=[0.4914, 0.4822, 0.4465]):\n        self.probability = probability\n        self.mean = mean\n        self.size = size\n       \n    def __call__(self, img):\n\n        if random.uniform(0, 1) > self.probability:\n            return img\n\n        h = self.size\n        w = self.size\n        for attempt in range(100):\n            area = img.size()[1] * img.size()[2]\n            if w < img.size()[2] and h < img.size()[1]:\n                x1 = random.randint(0, img.size()[1] - h)\n                y1 = random.randint(0, img.size()[2] - w)\n                if img.size()[0] == 3:\n                    img[0, x1:x1+h, y1:y1+w] = self.mean[0]\n                    img[1, x1:x1+h, y1:y1+w] = self.mean[1]\n                    img[2, x1:x1+h, y1:y1+w] = self.mean[2]\n                else:\n                    img[0, x1:x1+h, y1:y1+w] = self.mean[0]\n                return img\n        return img\n\nclass RandomErasing(object):\n    \"\"\" Randomly selects a rectangle region in an image and erases its pixels.\n        'Random Erasing Data Augmentation' by Zhong et al.\n        See https://arxiv.org/pdf/1708.04896.pdf\n    Args:\n         probability: The probability that the Random Erasing operation will be performed.\n         sl: Minimum proportion of erased area against input image.\n         sh: Maximum proportion of erased area against input image.\n         r1: Minimum aspect ratio of erased area.\n         mean: Erasing value. 
\n    \"\"\"\n    \n    def __init__(self, probability = 0.5, sl = 0.02, sh = 0.4, r1 = 0.3, mean=[0.4914, 0.4822, 0.4465]):\n        self.probability = probability\n        self.mean = mean\n        self.sl = sl\n        self.sh = sh\n        self.r1 = r1\n       \n    def __call__(self, img):\n\n        if random.uniform(0, 1) > self.probability:\n            return img\n\n        for attempt in range(100):\n            area = img.size()[1] * img.size()[2]\n       \n            target_area = random.uniform(self.sl, self.sh) * area\n            aspect_ratio = random.uniform(self.r1, 1/self.r1)\n\n            h = int(round(math.sqrt(target_area * aspect_ratio)))\n            w = int(round(math.sqrt(target_area / aspect_ratio)))\n\n            if w < img.size()[2] and h < img.size()[1]:\n                x1 = random.randint(0, img.size()[1] - h)\n                y1 = random.randint(0, img.size()[2] - w)\n                if img.size()[0] == 3:\n                    img[0, x1:x1+h, y1:y1+w] = self.mean[0]\n                    img[1, x1:x1+h, y1:y1+w] = self.mean[1]\n                    img[2, x1:x1+h, y1:y1+w] = self.mean[2]\n                else:\n                    img[0, x1:x1+h, y1:y1+w] = self.mean[0]\n                return img\n\n        return img\n"
  },
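`RandomErasing` samples an area fraction in [sl, sh] and an aspect ratio in [r1, 1/r1], then derives the rectangle's height and width from them. A quick pure-Python check, using the 384x128 person-reid input size from this repo, confirms that accepted rectangles stay close to the requested area fraction despite integer rounding:

```python
import math
import random

random.seed(0)
H, W = 384, 128                    # person-reid input size used in this repo
area = H * W
sl, sh, r1 = 0.02, 0.4, 0.3        # RandomErasing defaults
fractions = []
for _ in range(1000):
    target_area = random.uniform(sl, sh) * area
    aspect_ratio = random.uniform(r1, 1 / r1)
    h = int(round(math.sqrt(target_area * aspect_ratio)))
    w = int(round(math.sqrt(target_area / aspect_ratio)))
    if w < W and h < H:            # same acceptance test as RandomErasing.__call__
        fractions.append(h * w / area)
print(min(fractions), max(fractions))
```

The 100-attempt retry loop in the class exists because wide rectangles can exceed the narrow image width (here w can reach 256 > 128) and must be resampled.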
  {
    "path": "utils/serialization.py",
    "content": "# encoding: utf-8\nimport errno\nimport os\nimport shutil\nimport sys\n\nimport os.path as osp\nimport torch\n\n\nclass Logger(object):\n    \"\"\"\n    Write console output to external text file.\n    Code imported from https://github.com/Cysu/open-reid/blob/master/reid/utils/logging.py.\n    \"\"\"\n\n    def __init__(self, fpath=None):\n        self.console = sys.stdout\n        self.file = None\n        if fpath is not None:\n            mkdir_if_missing(os.path.dirname(fpath))\n            self.file = open(fpath, 'w')\n\n    def __del__(self):\n        self.close()\n\n    def __enter__(self):\n        pass\n\n    def __exit__(self, *args):\n        self.close()\n\n    def write(self, msg):\n        self.console.write(msg)\n        if self.file is not None:\n            self.file.write(msg)\n\n    def flush(self):\n        self.console.flush()\n        if self.file is not None:\n            self.file.flush()\n            os.fsync(self.file.fileno())\n\n    def close(self):\n        self.console.close()\n        if self.file is not None:\n            self.file.close()\n\n\ndef mkdir_if_missing(dir_path):\n    try:\n        os.makedirs(dir_path)\n    except OSError as e:\n        if e.errno != errno.EEXIST:\n            raise\n\n\ndef save_checkpoint(state, is_best, save_dir, filename):\n    fpath = osp.join(save_dir, filename)\n    mkdir_if_missing(save_dir)\n    torch.save(state, fpath)\n    if is_best:\n        shutil.copy(fpath, osp.join(save_dir, 'model_best.pth.tar'))\n"
  },
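`Logger` is the classic "tee" pattern: it is installed in place of `sys.stdout` and mirrors every write into a log file while still printing to the console. A minimal self-contained sketch (the class name `TeeLogger` is ours; it writes into an in-memory buffer instead of a file):

```python
import io
import sys

class TeeLogger:
    # Same idea as utils/serialization.Logger: forward writes to the real
    # console and mirror them into a second stream.
    def __init__(self, fileobj):
        self.console = sys.stdout
        self.file = fileobj

    def write(self, msg):
        self.console.write(msg)
        self.file.write(msg)

    def flush(self):
        self.console.flush()
        self.file.flush()

buf = io.StringIO()
sys.stdout = TeeLogger(buf)
print("epoch 1: loss=0.5")           # goes to both the console and buf
sys.stdout = sys.stdout.console      # restore the real stdout
```

Because `print` calls `sys.stdout.write`, no caller needs to change; training scripts keep printing as usual and the log file fills up as a side effect.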
  {
    "path": "utils/transforms.py",
    "content": "# encoding: utf-8\nfrom PIL import Image\nfrom torchvision import transforms as T\nfrom utils.random_erasing import RandomErasing, Cutout\nimport random\n\n\nclass Random2DTranslation(object):\n    \"\"\"\n    With a probability, first increase image size to (1 + 1/8), and then perform random crop.\n\n    Args:\n        height (int): target height.\n        width (int): target width.\n        p (float): probability of performing this transformation. Default: 0.5.\n    \"\"\"\n\n    def __init__(self, height, width, p=0.5, interpolation=Image.BILINEAR):\n        self.height = height\n        self.width = width\n        self.p = p\n        self.interpolation = interpolation\n\n    def __call__(self, img):\n        \"\"\"\n        Args:\n            img (PIL Image): Image to be cropped.\n\n        Returns:\n            PIL Image: Cropped image.\n        \"\"\"\n        if random.random() < self.p:\n            return img.resize((self.width, self.height), self.interpolation)\n        new_width, new_height = int(\n            round(self.width * 1.125)), int(round(self.height * 1.125))\n        resized_img = img.resize((new_width, new_height), self.interpolation)\n        x_maxrange = new_width - self.width\n        y_maxrange = new_height - self.height\n        x1 = int(round(random.uniform(0, x_maxrange)))\n        y1 = int(round(random.uniform(0, y_maxrange)))\n        croped_img = resized_img.crop(\n            (x1, y1, x1 + self.width, y1 + self.height))\n        return croped_img\n\ndef pad_shorter(x):\n    h,w = x.size[-2:]\n    s = max(h, w) \n    new_im = Image.new(\"RGB\", (s, s))\n    new_im.paste(x, ((s-h)//2, (s-w)//2))\n    return new_im\n\nclass TrainTransform(object):\n    def __init__(self, data):\n        self.data = data\n\n    def __call__(self, x):\n        if self.data == 'person':\n            x = T.Resize((384, 128))(x)\n        elif self.data == 'car':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n      
      x = T.RandomCrop((224, 224))(x)\n        elif self.data == 'cub':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n            x = T.RandomCrop((224, 224))(x)\n        elif self.data == 'clothes':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n            x = T.RandomCrop((224, 224))(x)\n        elif self.data == 'product':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n            x = T.RandomCrop((224, 224))(x)\n        elif self.data == 'cifar':\n            x = T.Resize((40, 40))(x)\n            x = T.RandomCrop((32, 32))(x)\n        x = T.RandomHorizontalFlip()(x)\n        x = T.ToTensor()(x)\n        x = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])(x)\n        if self.data == 'person':\n            x = Cutout(probability = 0.5, size=64, mean=[0.0, 0.0, 0.0])(x)\n        else:\n            x = RandomErasing(probability = 0.5, mean=[0.0, 0.0, 0.0])(x)\n        return x\n\n\nclass TestTransform(object):\n    def __init__(self, data, flip=False):\n        self.data = data\n        self.flip = flip\n\n    def __call__(self, x=None):\n        if self.data == 'cub':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n        elif self.data == 'car':\n            #x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n        elif self.data == 'clothes':\n            x = pad_shorter(x)\n            x = T.Resize((256, 256))(x)\n        elif self.data == 'product':\n            x = pad_shorter(x)\n            x = T.Resize((224, 224))(x)\n        elif self.data == 'person':\n            x = T.Resize((384, 128))(x)\n\n        if self.flip:\n            x = T.functional.hflip(x)\n        x = T.ToTensor()(x)\n        x = T.Normalize(mean=[0.485, 0.456, 0.406],\n                        std=[0.229, 0.224, 0.225])(x)\n        return x\n"
  },
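`Random2DTranslation` enlarges the image by a factor of 1.125 (i.e. 1 + 1/8), then crops back to the target size at a uniformly random offset. The arithmetic can be checked without PIL; the 384x128 size below is the person-reid target used in this repo:

```python
import random

random.seed(0)
height, width = 384, 128                 # target size used for person data
new_width = int(round(width * 1.125))    # enlarge by 1 + 1/8 before cropping
new_height = int(round(height * 1.125))
# The crop origin ranges over the slack introduced by the enlargement.
x1 = int(round(random.uniform(0, new_width - width)))
y1 = int(round(random.uniform(0, new_height - height)))
crop_box = (x1, y1, x1 + width, y1 + height)   # PIL crop box: (left, upper, right, lower)
print(new_width, new_height, crop_box)
```

The horizontal slack is only 16 pixels while the vertical slack is 48, so the augmentation jitters the crop position proportionally to each dimension.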
  {
    "path": "utils/validation_metrics.py",
    "content": "# encoding: utf-8\ndef accuracy(score, target, topk=(1,)):\n    maxk = max(topk)\n    batch_size = target.size(0)\n\n    _, pred = score.topk(maxk, 1, True, True)\n    pred = pred.t()\n    correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n    ret = []\n    for k in topk:\n        correct_k = correct[:k].view(-1).float().sum(dim=0, keepdim=True)\n        ret.append(correct_k.mul_(1. / batch_size))\n    return ret\n"
  }
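The `accuracy` helper counts a sample as correct at rank k if the true class appears anywhere among the k highest-scoring predictions. A NumPy mirror of the same logic (the function name `topk_accuracy` is ours, for illustration only):

```python
import numpy as np

def topk_accuracy(scores, target, topk=(1,)):
    # NumPy mirror of utils/validation_metrics.accuracy
    pred = np.argsort(-scores, axis=1)[:, :max(topk)]   # top-k class indices per row
    correct = pred == target[:, None]
    return [correct[:, :k].any(axis=1).mean() for k in topk]

scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.3, 0.3, 0.4]])
target = np.array([1, 0, 0])
print(topk_accuracy(scores, target, topk=(1, 2)))
```

The third sample is wrong at rank 1 (class 2 scores highest) but correct at rank 2, so top-2 accuracy exceeds top-1 on this toy batch.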
]