[
  {
    "path": ".gitignore",
    "content": "train_dir/\ndatasets/mnist\ndatasets/fashion_mnist\ndatasets/svhn\ndatasets/cifar10\n.ropeproject/\n*.py[cod]\n*.sw[op]\n*.hy\n*.txt\n*.gz\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2018 Shao-Hua Sun\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Group Normalization\n\nAs part of the implementation series of [Joseph Lim's group at USC](http://csail.mit.edu/~lim), our motivation is to accelerate (or sometimes delay) research in the AI community by promoting open-source projects. To this end, we implement state-of-the-art research papers, and publicly share them with concise reports. Please visit our [group github site](https://github.com/gitlimlab) for other projects.\n\nThis project is implemented by [Shao-Hua Sun](http://shaohua0116.github.io) and the codes have been reviewed by [Te-Lin Wu](https://github.com/telin0411) before being published.\n\n## Descriptions\nThis project includes a Tensorflow implementation of Group Normalizations proposed in the paper [Group Normalization](https://arxiv.org/abs/1803.08494) by Wu et al. [Batch Normalization](https://arxiv.org/abs/1502.03167) (BN) has been widely employed in the trainings of deep neural networks to alleviate the internal covariate shift [1].Specifically, BN aims to transform the inputs of each layer in such a way that they have a mean output activation of zero and standard deviation of one. While BN demonstrates it effectiveness in a variety of fields including computer vision, natural language processing, speech processing, robotics, etc., BN's performance substantially decrease when the training batch size become smaller, which limits the gain of utilizing BN in a task requiring small batches constrained by memory consumption. \n\nMotivated by this phenomenon, the Group Normalization (GN) technique is proposed. Instead of normalizing along the batch dimension, GN divides the channels into groups and computes within each group the mean and variance. Therefore, GN’s computation is independent of batch sizes, and so does its accuracy. 
The experiment section of the paper demonstrates the effectiveness of GN in a wide range of visual tasks, including image classification (ImageNet), object detection and segmentation (COCO), and video classification (Kinetics). This repository is simply a toy repository for those who want to quickly test GN and compare it against BN.\n\n<img src=\"figure/gn.png\" height=\"250\"/>\n\nThe illustration from the original GN paper. Each cube represents a 4D tensor of feature maps. Note that the spatial dimensions are combined into a single dimension for visualization. N denotes the batch axis, C denotes the channel axis, and H, W denote the spatial axes. The values in blue are normalized by the same mean and variance, computed by aggregating the values of these pixels.\n\nBased on the implementation in this repository, GN is around 20% slower than BN on datasets such as CIFAR-10 and SVHN, probably because of the extra reshape and transpose operations. However, when the network goes deeper and the number of channels increases, GN gets even slower due to larger group sizes. The model using GN is around 4 times slower than the one using BN when trained on ImageNet. 
This is not reported in the original GN paper.\n\n\\*This code is still being developed and subject to change.\n\n## Prerequisites\n\n- Python 2.7\n- [Tensorflow 1.3.0](https://github.com/tensorflow/tensorflow/tree/r1.0)\n- [SciPy](http://www.scipy.org/install.html)\n- [NumPy](http://www.numpy.org/)\n\n## Usage\n\n### Datasets\nTrain models on the MNIST, Fashion MNIST, SVHN, and CIFAR-10 datasets:\n- Download datasets with:\n```bash\n$ python download.py --dataset MNIST Fashion SVHN CIFAR10\n```\nTrain models on [Tiny ImageNet](https://tiny-imagenet.herokuapp.com/)\n- Download the dataset from the [webpage](https://tiny-imagenet.herokuapp.com/).\n- Move the downloaded file (named ) to `datasets/tiny_imagenet` and unzip it.\n\nTrain models on [ImageNet](http://image-net.org/download-images)\n- The ImageNet dataset is available in the Downloads section of the [website](http://image-net.org/download-images). Please specify the path to the downloaded dataset by changing the variable `__IMAGENET_IMG_PATH__` in `datasets/ImageNet.py`. Also, please provide a list of file names for training in the directory `__IMAGENET_LIST_PATH__` with the file name `train_list.txt`. 
By default, the `train_list.txt` includes all the training images in the ImageNet dataset.\n\n### Train models with downloaded datasets:\nSpecify the type of normalization you want to use with `--norm_type batch` or `--norm_type group`, and specify the batch size with `--batch_size BATCH_SIZE`.\n```bash\n$ python trainer.py --dataset MNIST --learning_rate 1e-3\n$ python trainer.py --dataset Fashion --prefix test\n$ python trainer.py --dataset SVHN --batch_size 128\n$ python trainer.py --dataset CIFAR10 \n```\n\n### Train and test your own datasets:\n\n* Create a directory\n```bash\n$ mkdir datasets/YOUR_DATASET\n```\n\n* Store your data as an HDF5 file `datasets/YOUR_DATASET/data.hdf5`, where each data point contains\n    * 'image': has shape [h, w, c], where c is the number of channels (grayscale images: 1, color images: 3)\n    * 'label': represented as a one-hot vector\n* Maintain a list `datasets/YOUR_DATASET/id.txt` listing the ids of all data points\n* Modify `trainer.py` accordingly, including `args`, `data_info`, etc.\n* Finally, train and test models:\n```bash\n$ python trainer.py --dataset YOUR_DATASET\n$ python evaler.py --dataset YOUR_DATASET\n```\n\n## Results\n\n### CIFAR-10\n\n| Color    | Batch Size |\n| :------- | ---------- |\n| Orange   | 1          |\n| Blue     | 2          |\n| Sky blue | 4          |\n| Red      | 8          |\n| Green    | 16         |\n| Pink     | 32         |\n\n- Loss\n\n  <img src=\"figure/cifar_group_loss.png\" height=\"250\"/>\n\n- Accuracy\n\n<img src=\"figure/cifar_group_acc.png\" height=\"250\"/>\n\n### SVHN\n\n| Color    | Batch Size |\n| :------- | ---------- |\n| Pink     | 1          |\n| Blue     | 2          |\n| Sky blue | 4          |\n| Green    | 8          |\n| Red      | 16         |\n| Orange   | 32         |\n\n- Loss\n\n  <img src=\"figure/svhn_group_loss.png\" height=\"250\"/>\n\n- Accuracy\n\n<img src=\"figure/svhn_group_acc.png\" height=\"250\"/>\n\n### ImageNet\n\nTraining is ongoing...\n\n| Color  | Norm Type           |\n| 
:----- | ------------------- |\n| Orange | Group Normalization |\n| Blue   | Batch Normalization |\n\n<img src=\"figure/imagenet_ongoing.png\" height=\"250\"/>\n\n### Conclusion\nGroup Normalization divides the channels into groups and computes the mean and variance within each group; therefore, its performance is independent of the training batch size, which is verified by this implementation. However, the performance of Batch Normalization also does not vary much across batch sizes on smaller image datasets such as CIFAR-10 and SVHN. The ImageNet experiments are ongoing and the results will be updated later.\n\n## Related works\n* [Group Normalization](https://arxiv.org/abs/1803.08494)\n* [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/abs/1502.03167)\n* [Layer Normalization](https://arxiv.org/abs/1607.06450)\n* [Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022)\n\n## Author\n\nShao-Hua Sun / [@shaohua0116](https://shaohua0116.github.io/) @ [Joseph Lim's research lab](https://github.com/gitlimlab) @ USC\n"
  },
  {
    "path": "__init__.py",
    "content": ""
  },
  {
    "path": "datasets/__init__.py",
    "content": ""
  },
  {
    "path": "datasets/cifar10.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nimport h5py\n\nfrom util import log\n\n__PATH__ = './datasets/cifar10'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        filename = 'data.hdf5'\n\n        file = os.path.join(__PATH__, filename)\n        log.info(\"Reading %s ...\", file)\n\n        try:\n            self.data = h5py.File(file, 'r')\n        except:\n            raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m = self.data[id]['image'].value/255.\n        l = self.data[id]['label'].value.astype(np.float32)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True):\n    id_train, id_test = all_ids(50000)\n\n    dataset_train = Dataset(id_train, name='train', is_train=False)\n    dataset_test = Dataset(id_test, name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids(num_trains):\n    id_filename = 'id.txt'\n\n    id_txt = os.path.join(__PATH__, id_filename)\n    try:\n        with open(id_txt, 'r') as fp:\n            _ids = [s.strip() for s in fp.readlines() if s]\n    except:\n        raise IOError('Dataset not found. 
Please make sure the dataset was downloaded.')\n\n    id_train, id_test = _ids[:num_trains], _ids[num_trains:]\n    rs.shuffle(id_train)\n    rs.shuffle(id_test)\n\n    return id_train, id_test\n"
  },
  {
    "path": "datasets/fashion_mnist.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nimport h5py\n\nfrom util import log\n\n__PATH__ = './datasets/fashion_mnist'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        filename = 'data.hdf5'\n\n        file = os.path.join(__PATH__, filename)\n        log.info(\"Reading %s ...\", file)\n\n        try:\n            self.data = h5py.File(file, 'r')\n        except:\n            raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m = self.data[id]['image'].value/255.\n        l = self.data[id]['label'].value.astype(np.float32)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True):\n    id_train, id_test = all_ids(60000)\n\n    dataset_train = Dataset(id_train, name='train', is_train=False)\n    dataset_test = Dataset(id_test, name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids(num_trains):\n    id_filename = 'id.txt'\n\n    id_txt = os.path.join(__PATH__, id_filename)\n    try:\n        with open(id_txt, 'r') as fp:\n            _ids = [s.strip() for s in fp.readlines() if s]\n    except:\n        raise IOError('Dataset not found. 
Please make sure the dataset was downloaded.')\n\n    id_train, id_test = _ids[:num_trains], _ids[num_trains:]\n    rs.shuffle(id_train)\n    rs.shuffle(id_test)\n\n    return id_train, id_test\n"
  },
  {
    "path": "datasets/imagenet/__init__.py",
    "content": ""
  },
  {
    "path": "datasets/imagenet/map.py",
    "content": "class2num = {\n'n02119789': 1,  # kit_fox\n'n02100735': 2,  # English_setter\n'n02110185': 3,  # Siberian_husky\n'n02096294': 4,  # Australian_terrier\n'n02102040': 5,  # English_springer\n'n02066245': 6,  # grey_whale\n'n02509815': 7,  # lesser_panda\n'n02124075': 8,  # Egyptian_cat\n'n02417914': 9,  # ibex\n'n02123394': 10,  # Persian_cat\n'n02125311': 11,  # cougar\n'n02423022': 12,  # gazelle\n'n02346627': 13,  # porcupine\n'n02077923': 14,  # sea_lion\n'n02110063': 15,  # malamute\n'n02447366': 16,  # badger\n'n02109047': 17,  # Great_Dane\n'n02089867': 18,  # Walker_hound\n'n02102177': 19,  # Welsh_springer_spaniel\n'n02091134': 20,  # whippet\n'n02092002': 21,  # Scottish_deerhound\n'n02071294': 22,  # killer_whale\n'n02442845': 23,  # mink\n'n02504458': 24,  # African_elephant\n'n02092339': 25,  # Weimaraner\n'n02098105': 26,  # soft-coated_wheaten_terrier\n'n02096437': 27,  # Dandie_Dinmont\n'n02114712': 28,  # red_wolf\n'n02105641': 29,  # Old_English_sheepdog\n'n02128925': 30,  # jaguar\n'n02091635': 31,  # otterhound\n'n02088466': 32,  # bloodhound\n'n02096051': 33,  # Airedale\n'n02117135': 34,  # hyena\n'n02138441': 35,  # meerkat\n'n02097130': 36,  # giant_schnauzer\n'n02493509': 37,  # titi\n'n02457408': 38,  # three-toed_sloth\n'n02389026': 39,  # sorrel\n'n02443484': 40,  # black-footed_ferret\n'n02110341': 41,  # dalmatian\n'n02089078': 42,  # black-and-tan_coonhound\n'n02086910': 43,  # papillon\n'n02445715': 44,  # skunk\n'n02093256': 45,  # Staffordshire_bullterrier\n'n02113978': 46,  # Mexican_hairless\n'n02106382': 47,  # Bouvier_des_Flandres\n'n02441942': 48,  # weasel\n'n02113712': 49,  # miniature_poodle\n'n02113186': 50,  # Cardigan\n'n02105162': 51,  # malinois\n'n02415577': 52,  # bighorn\n'n02356798': 53,  # fox_squirrel\n'n02488702': 54,  # colobus\n'n02123159': 55,  # tiger_cat\n'n02098413': 56,  # Lhasa\n'n02422699': 57,  # impala\n'n02114855': 58,  # coyote\n'n02094433': 59,  # Yorkshire_terrier\n'n02111277': 60,  
# Newfoundland\n'n02132136': 61,  # brown_bear\n'n02119022': 62,  # red_fox\n'n02091467': 63,  # Norwegian_elkhound\n'n02106550': 64,  # Rottweiler\n'n02422106': 65,  # hartebeest\n'n02091831': 66,  # Saluki\n'n02120505': 67,  # grey_fox\n'n02104365': 68,  # schipperke\n'n02086079': 69,  # Pekinese\n'n02112706': 70,  # Brabancon_griffon\n'n02098286': 71,  # West_Highland_white_terrier\n'n02095889': 72,  # Sealyham_terrier\n'n02484975': 73,  # guenon\n'n02137549': 74,  # mongoose\n'n02500267': 75,  # indri\n'n02129604': 76,  # tiger\n'n02090721': 77,  # Irish_wolfhound\n'n02396427': 78,  # wild_boar\n'n02108000': 79,  # EntleBucher\n'n02391049': 80,  # zebra\n'n02412080': 81,  # ram\n'n02108915': 82,  # French_bulldog\n'n02480495': 83,  # orangutan\n'n02110806': 84,  # basenji\n'n02128385': 85,  # leopard\n'n02107683': 86,  # Bernese_mountain_dog\n'n02085936': 87,  # Maltese_dog\n'n02094114': 88,  # Norfolk_terrier\n'n02087046': 89,  # toy_terrier\n'n02100583': 90,  # vizsla\n'n02096177': 91,  # cairn\n'n02494079': 92,  # squirrel_monkey\n'n02105056': 93,  # groenendael\n'n02101556': 94,  # clumber\n'n02123597': 95,  # Siamese_cat\n'n02481823': 96,  # chimpanzee\n'n02105505': 97,  # komondor\n'n02088094': 98,  # Afghan_hound\n'n02085782': 99,  # Japanese_spaniel\n'n02489166': 100,  # proboscis_monkey\n'n02364673': 101,  # guinea_pig\n'n02114548': 102,  # white_wolf\n'n02134084': 103,  # ice_bear\n'n02480855': 104,  # gorilla\n'n02090622': 105,  # borzoi\n'n02113624': 106,  # toy_poodle\n'n02093859': 107,  # Kerry_blue_terrier\n'n02403003': 108,  # ox\n'n02097298': 109,  # Scotch_terrier\n'n02108551': 110,  # Tibetan_mastiff\n'n02493793': 111,  # spider_monkey\n'n02107142': 112,  # Doberman\n'n02096585': 113,  # Boston_bull\n'n02107574': 114,  # Greater_Swiss_Mountain_dog\n'n02107908': 115,  # Appenzeller\n'n02086240': 116,  # Shih-Tzu\n'n02102973': 117,  # Irish_water_spaniel\n'n02112018': 118,  # Pomeranian\n'n02093647': 119,  # Bedlington_terrier\n'n02397096': 
120,  # warthog\n'n02437312': 121,  # Arabian_camel\n'n02483708': 122,  # siamang\n'n02097047': 123,  # miniature_schnauzer\n'n02106030': 124,  # collie\n'n02099601': 125,  # golden_retriever\n'n02093991': 126,  # Irish_terrier\n'n02110627': 127,  # affenpinscher\n'n02106166': 128,  # Border_collie\n'n02326432': 129,  # hare\n'n02108089': 130,  # boxer\n'n02097658': 131,  # silky_terrier\n'n02088364': 132,  # beagle\n'n02111129': 133,  # Leonberg\n'n02100236': 134,  # German_short-haired_pointer\n'n02486261': 135,  # patas\n'n02115913': 136,  # dhole\n'n02486410': 137,  # baboon\n'n02487347': 138,  # macaque\n'n02099849': 139,  # Chesapeake_Bay_retriever\n'n02108422': 140,  # bull_mastiff\n'n02104029': 141,  # kuvasz\n'n02492035': 142,  # capuchin\n'n02110958': 143,  # pug\n'n02099429': 144,  # curly-coated_retriever\n'n02094258': 145,  # Norwich_terrier\n'n02099267': 146,  # flat-coated_retriever\n'n02395406': 147,  # hog\n'n02112350': 148,  # keeshond\n'n02109961': 149,  # Eskimo_dog\n'n02101388': 150,  # Brittany_spaniel\n'n02113799': 151,  # standard_poodle\n'n02095570': 152,  # Lakeland_terrier\n'n02128757': 153,  # snow_leopard\n'n02101006': 154,  # Gordon_setter\n'n02115641': 155,  # dingo\n'n02097209': 156,  # standard_schnauzer\n'n02342885': 157,  # hamster\n'n02097474': 158,  # Tibetan_terrier\n'n02120079': 159,  # Arctic_fox\n'n02095314': 160,  # wire-haired_fox_terrier\n'n02088238': 161,  # basset\n'n02408429': 162,  # water_buffalo\n'n02133161': 163,  # American_black_bear\n'n02328150': 164,  # Angora\n'n02410509': 165,  # bison\n'n02492660': 166,  # howler_monkey\n'n02398521': 167,  # hippopotamus\n'n02112137': 168,  # chow\n'n02510455': 169,  # giant_panda\n'n02093428': 170,  # American_Staffordshire_terrier\n'n02105855': 171,  # Shetland_sheepdog\n'n02111500': 172,  # Great_Pyrenees\n'n02085620': 173,  # Chihuahua\n'n02123045': 174,  # tabby\n'n02490219': 175,  # marmoset\n'n02099712': 176,  # Labrador_retriever\n'n02109525': 177,  # 
Saint_Bernard\n'n02454379': 178,  # armadillo\n'n02111889': 179,  # Samoyed\n'n02088632': 180,  # bluetick\n'n02090379': 181,  # redbone\n'n02443114': 182,  # polecat\n'n02361337': 183,  # marmot\n'n02105412': 184,  # kelpie\n'n02483362': 185,  # gibbon\n'n02437616': 186,  # llama\n'n02107312': 187,  # miniature_pinscher\n'n02325366': 188,  # wood_rabbit\n'n02091032': 189,  # Italian_greyhound\n'n02129165': 190,  # lion\n'n02102318': 191,  # cocker_spaniel\n'n02100877': 192,  # Irish_setter\n'n02074367': 193,  # dugong\n'n02504013': 194,  # Indian_elephant\n'n02363005': 195,  # beaver\n'n02102480': 196,  # Sussex_spaniel\n'n02113023': 197,  # Pembroke\n'n02086646': 198,  # Blenheim_spaniel\n'n02497673': 199,  # Madagascar_cat\n'n02087394': 200,  # Rhodesian_ridgeback\n'n02127052': 201,  # lynx\n'n02116738': 202,  # African_hunting_dog\n'n02488291': 203,  # langur\n'n02091244': 204,  # Ibizan_hound\n'n02114367': 205,  # timber_wolf\n'n02130308': 206,  # cheetah\n'n02089973': 207,  # English_foxhound\n'n02105251': 208,  # briard\n'n02134418': 209,  # sloth_bear\n'n02093754': 210,  # Border_terrier\n'n02106662': 211,  # German_shepherd\n'n02444819': 212,  # otter\n'n01882714': 213,  # koala\n'n01871265': 214,  # tusker\n'n01872401': 215,  # echidna\n'n01877812': 216,  # wallaby\n'n01873310': 217,  # platypus\n'n01883070': 218,  # wombat\n'n04086273': 219,  # revolver\n'n04507155': 220,  # umbrella\n'n04147183': 221,  # schooner\n'n04254680': 222,  # soccer_ball\n'n02672831': 223,  # accordion\n'n02219486': 224,  # ant\n'n02317335': 225,  # starfish\n'n01968897': 226,  # chambered_nautilus\n'n03452741': 227,  # grand_piano\n'n03642806': 228,  # laptop\n'n07745940': 229,  # strawberry\n'n02690373': 230,  # airliner\n'n04552348': 231,  # warplane\n'n02692877': 232,  # airship\n'n02782093': 233,  # balloon\n'n04266014': 234,  # space_shuttle\n'n03344393': 235,  # fireboat\n'n03447447': 236,  # gondola\n'n04273569': 237,  # speedboat\n'n03662601': 238,  # 
lifeboat\n'n02951358': 239,  # canoe\n'n04612504': 240,  # yawl\n'n02981792': 241,  # catamaran\n'n04483307': 242,  # trimaran\n'n03095699': 243,  # container_ship\n'n03673027': 244,  # liner\n'n03947888': 245,  # pirate\n'n02687172': 246,  # aircraft_carrier\n'n04347754': 247,  # submarine\n'n04606251': 248,  # wreck\n'n03478589': 249,  # half_track\n'n04389033': 250,  # tank\n'n03773504': 251,  # missile\n'n02860847': 252,  # bobsled\n'n03218198': 253,  # dogsled\n'n02835271': 254,  # bicycle-built-for-two\n'n03792782': 255,  # mountain_bike\n'n03393912': 256,  # freight_car\n'n03895866': 257,  # passenger_car\n'n02797295': 258,  # barrow\n'n04204347': 259,  # shopping_cart\n'n03791053': 260,  # motor_scooter\n'n03384352': 261,  # forklift\n'n03272562': 262,  # electric_locomotive\n'n04310018': 263,  # steam_locomotive\n'n02704792': 264,  # amphibian\n'n02701002': 265,  # ambulance\n'n02814533': 266,  # beach_wagon\n'n02930766': 267,  # cab\n'n03100240': 268,  # convertible\n'n03594945': 269,  # jeep\n'n03670208': 270,  # limousine\n'n03770679': 271,  # minivan\n'n03777568': 272,  # Model_T\n'n04037443': 273,  # racer\n'n04285008': 274,  # sports_car\n'n03444034': 275,  # go-kart\n'n03445924': 276,  # golfcart\n'n03785016': 277,  # moped\n'n04252225': 278,  # snowplow\n'n03345487': 279,  # fire_engine\n'n03417042': 280,  # garbage_truck\n'n03930630': 281,  # pickup\n'n04461696': 282,  # tow_truck\n'n04467665': 283,  # trailer_truck\n'n03796401': 284,  # moving_van\n'n03977966': 285,  # police_van\n'n04065272': 286,  # recreational_vehicle\n'n04335435': 287,  # streetcar\n'n04252077': 288,  # snowmobile\n'n04465501': 289,  # tractor\n'n03776460': 290,  # mobile_home\n'n04482393': 291,  # tricycle\n'n04509417': 292,  # unicycle\n'n03538406': 293,  # horse_cart\n'n03599486': 294,  # jinrikisha\n'n03868242': 295,  # oxcart\n'n02804414': 296,  # bassinet\n'n03125729': 297,  # cradle\n'n03131574': 298,  # crib\n'n03388549': 299,  # four-poster\n'n02870880': 300,  # 
bookcase\n'n03018349': 301,  # china_cabinet\n'n03742115': 302,  # medicine_chest\n'n03016953': 303,  # chiffonier\n'n04380533': 304,  # table_lamp\n'n03337140': 305,  # file\n'n03891251': 306,  # park_bench\n'n02791124': 307,  # barber_chair\n'n04429376': 308,  # throne\n'n03376595': 309,  # folding_chair\n'n04099969': 310,  # rocking_chair\n'n04344873': 311,  # studio_couch\n'n04447861': 312,  # toilet_seat\n'n03179701': 313,  # desk\n'n03982430': 314,  # pool_table\n'n03201208': 315,  # dining_table\n'n03290653': 316,  # entertainment_center\n'n04550184': 317,  # wardrobe\n'n07742313': 318,  # Granny_Smith\n'n07747607': 319,  # orange\n'n07749582': 320,  # lemon\n'n07753113': 321,  # fig\n'n07753275': 322,  # pineapple\n'n07753592': 323,  # banana\n'n07754684': 324,  # jackfruit\n'n07760859': 325,  # custard_apple\n'n07768694': 326,  # pomegranate\n'n12267677': 327,  # acorn\n'n12620546': 328,  # hip\n'n13133613': 329,  # ear\n'n11879895': 330,  # rapeseed\n'n12144580': 331,  # corn\n'n12768682': 332,  # buckeye\n'n03854065': 333,  # organ\n'n04515003': 334,  # upright\n'n03017168': 335,  # chime\n'n03249569': 336,  # drum\n'n03447721': 337,  # gong\n'n03720891': 338,  # maraca\n'n03721384': 339,  # marimba\n'n04311174': 340,  # steel_drum\n'n02787622': 341,  # banjo\n'n02992211': 342,  # cello\n'n04536866': 343,  # violin\n'n03495258': 344,  # harp\n'n02676566': 345,  # acoustic_guitar\n'n03272010': 346,  # electric_guitar\n'n03110669': 347,  # cornet\n'n03394916': 348,  # French_horn\n'n04487394': 349,  # trombone\n'n03494278': 350,  # harmonica\n'n03840681': 351,  # ocarina\n'n03884397': 352,  # panpipe\n'n02804610': 353,  # bassoon\n'n03838899': 354,  # oboe\n'n04141076': 355,  # sax\n'n03372029': 356,  # flute\n'n11939491': 357,  # daisy\n'n12057211': 358,  # yellow_lady's_slipper\n'n09246464': 359,  # cliff\n'n09468604': 360,  # valley\n'n09193705': 361,  # alp\n'n09472597': 362,  # volcano\n'n09399592': 363,  # promontory\n'n09421951': 364,  # 
sandbar\n'n09256479': 365,  # coral_reef\n'n09332890': 366,  # lakeside\n'n09428293': 367,  # seashore\n'n09288635': 368,  # geyser\n'n03498962': 369,  # hatchet\n'n03041632': 370,  # cleaver\n'n03658185': 371,  # letter_opener\n'n03954731': 372,  # plane\n'n03995372': 373,  # power_drill\n'n03649909': 374,  # lawn_mower\n'n03481172': 375,  # hammer\n'n03109150': 376,  # corkscrew\n'n02951585': 377,  # can_opener\n'n03970156': 378,  # plunger\n'n04154565': 379,  # screwdriver\n'n04208210': 380,  # shovel\n'n03967562': 381,  # plow\n'n03000684': 382,  # chain_saw\n'n01514668': 383,  # cock\n'n01514859': 384,  # hen\n'n01518878': 385,  # ostrich\n'n01530575': 386,  # brambling\n'n01531178': 387,  # goldfinch\n'n01532829': 388,  # house_finch\n'n01534433': 389,  # junco\n'n01537544': 390,  # indigo_bunting\n'n01558993': 391,  # robin\n'n01560419': 392,  # bulbul\n'n01580077': 393,  # jay\n'n01582220': 394,  # magpie\n'n01592084': 395,  # chickadee\n'n01601694': 396,  # water_ouzel\n'n01608432': 397,  # kite\n'n01614925': 398,  # bald_eagle\n'n01616318': 399,  # vulture\n'n01622779': 400,  # great_grey_owl\n'n01795545': 401,  # black_grouse\n'n01796340': 402,  # ptarmigan\n'n01797886': 403,  # ruffed_grouse\n'n01798484': 404,  # prairie_chicken\n'n01806143': 405,  # peacock\n'n01806567': 406,  # quail\n'n01807496': 407,  # partridge\n'n01817953': 408,  # African_grey\n'n01818515': 409,  # macaw\n'n01819313': 410,  # sulphur-crested_cockatoo\n'n01820546': 411,  # lorikeet\n'n01824575': 412,  # coucal\n'n01828970': 413,  # bee_eater\n'n01829413': 414,  # hornbill\n'n01833805': 415,  # hummingbird\n'n01843065': 416,  # jacamar\n'n01843383': 417,  # toucan\n'n01847000': 418,  # drake\n'n01855032': 419,  # red-breasted_merganser\n'n01855672': 420,  # goose\n'n01860187': 421,  # black_swan\n'n02002556': 422,  # white_stork\n'n02002724': 423,  # black_stork\n'n02006656': 424,  # spoonbill\n'n02007558': 425,  # flamingo\n'n02009912': 426,  # American_egret\n'n02009229': 427,  
# little_blue_heron\n'n02011460': 428,  # bittern\n'n02012849': 429,  # crane\n'n02013706': 430,  # limpkin\n'n02018207': 431,  # American_coot\n'n02018795': 432,  # bustard\n'n02025239': 433,  # ruddy_turnstone\n'n02027492': 434,  # red-backed_sandpiper\n'n02028035': 435,  # redshank\n'n02033041': 436,  # dowitcher\n'n02037110': 437,  # oystercatcher\n'n02017213': 438,  # European_gallinule\n'n02051845': 439,  # pelican\n'n02056570': 440,  # king_penguin\n'n02058221': 441,  # albatross\n'n01484850': 442,  # great_white_shark\n'n01491361': 443,  # tiger_shark\n'n01494475': 444,  # hammerhead\n'n01496331': 445,  # electric_ray\n'n01498041': 446,  # stingray\n'n02514041': 447,  # barracouta\n'n02536864': 448,  # coho\n'n01440764': 449,  # tench\n'n01443537': 450,  # goldfish\n'n02526121': 451,  # eel\n'n02606052': 452,  # rock_beauty\n'n02607072': 453,  # anemone_fish\n'n02643566': 454,  # lionfish\n'n02655020': 455,  # puffer\n'n02640242': 456,  # sturgeon\n'n02641379': 457,  # gar\n'n01664065': 458,  # loggerhead\n'n01665541': 459,  # leatherback_turtle\n'n01667114': 460,  # mud_turtle\n'n01667778': 461,  # terrapin\n'n01669191': 462,  # box_turtle\n'n01675722': 463,  # banded_gecko\n'n01677366': 464,  # common_iguana\n'n01682714': 465,  # American_chameleon\n'n01685808': 466,  # whiptail\n'n01687978': 467,  # agama\n'n01688243': 468,  # frilled_lizard\n'n01689811': 469,  # alligator_lizard\n'n01692333': 470,  # Gila_monster\n'n01693334': 471,  # green_lizard\n'n01694178': 472,  # African_chameleon\n'n01695060': 473,  # Komodo_dragon\n'n01704323': 474,  # triceratops\n'n01697457': 475,  # African_crocodile\n'n01698640': 476,  # American_alligator\n'n01728572': 477,  # thunder_snake\n'n01728920': 478,  # ringneck_snake\n'n01729322': 479,  # hognose_snake\n'n01729977': 480,  # green_snake\n'n01734418': 481,  # king_snake\n'n01735189': 482,  # garter_snake\n'n01737021': 483,  # water_snake\n'n01739381': 484,  # vine_snake\n'n01740131': 485,  # 
night_snake\n'n01742172': 486,  # boa_constrictor\n'n01744401': 487,  # rock_python\n'n01748264': 488,  # Indian_cobra\n'n01749939': 489,  # green_mamba\n'n01751748': 490,  # sea_snake\n'n01753488': 491,  # horned_viper\n'n01755581': 492,  # diamondback\n'n01756291': 493,  # sidewinder\n'n01629819': 494,  # European_fire_salamander\n'n01630670': 495,  # common_newt\n'n01631663': 496,  # eft\n'n01632458': 497,  # spotted_salamander\n'n01632777': 498,  # axolotl\n'n01641577': 499,  # bullfrog\n'n01644373': 500,  # tree_frog\n'n01644900': 501,  # tailed_frog\n'n04579432': 502,  # whistle\n'n04592741': 503,  # wing\n'n03876231': 504,  # paintbrush\n'n03483316': 505,  # hand_blower\n'n03868863': 506,  # oxygen_mask\n'n04251144': 507,  # snorkel\n'n03691459': 508,  # loudspeaker\n'n03759954': 509,  # microphone\n'n04152593': 510,  # screen\n'n03793489': 511,  # mouse\n'n03271574': 512,  # electric_fan\n'n03843555': 513,  # oil_filter\n'n04332243': 514,  # strainer\n'n04265275': 515,  # space_heater\n'n04330267': 516,  # stove\n'n03467068': 517,  # guillotine\n'n02794156': 518,  # barometer\n'n04118776': 519,  # rule\n'n03841143': 520,  # odometer\n'n04141975': 521,  # scale\n'n02708093': 522,  # analog_clock\n'n03196217': 523,  # digital_clock\n'n04548280': 524,  # wall_clock\n'n03544143': 525,  # hourglass\n'n04355338': 526,  # sundial\n'n03891332': 527,  # parking_meter\n'n04328186': 528,  # stopwatch\n'n03197337': 529,  # digital_watch\n'n04317175': 530,  # stethoscope\n'n04376876': 531,  # syringe\n'n03706229': 532,  # magnetic_compass\n'n02841315': 533,  # binoculars\n'n04009552': 534,  # projector\n'n04356056': 535,  # sunglasses\n'n03692522': 536,  # loupe\n'n04044716': 537,  # radio_telescope\n'n02879718': 538,  # bow\n'n02950826': 539,  # cannon\n'n02749479': 540,  # assault_rifle\n'n04090263': 541,  # rifle\n'n04008634': 542,  # projectile\n'n03085013': 543,  # computer_keyboard\n'n04505470': 544,  # typewriter_keyboard\n'n03126707': 545,  # crane\n'n03666591': 
546,  # lighter\n'n02666196': 547,  # abacus\n'n02977058': 548,  # cash_machine\n'n04238763': 549,  # slide_rule\n'n03180011': 550,  # desktop_computer\n'n03485407': 551,  # hand-held_computer\n'n03832673': 552,  # notebook\n'n06359193': 553,  # web_site\n'n03496892': 554,  # harvester\n'n04428191': 555,  # thresher\n'n04004767': 556,  # printer\n'n04243546': 557,  # slot\n'n04525305': 558,  # vending_machine\n'n04179913': 559,  # sewing_machine\n'n03602883': 560,  # joystick\n'n04372370': 561,  # switch\n'n03532672': 562,  # hook\n'n02974003': 563,  # car_wheel\n'n03874293': 564,  # paddlewheel\n'n03944341': 565,  # pinwheel\n'n03992509': 566,  # potter's_wheel\n'n03425413': 567,  # gas_pump\n'n02966193': 568,  # carousel\n'n04371774': 569,  # swing\n'n04067472': 570,  # reel\n'n04040759': 571,  # radiator\n'n04019541': 572,  # puck\n'n03492542': 573,  # hard_disc\n'n04355933': 574,  # sunglass\n'n03929660': 575,  # pick\n'n02965783': 576,  # car_mirror\n'n04258138': 577,  # solar_dish\n'n04074963': 578,  # remote_control\n'n03208938': 579,  # disk_brake\n'n02910353': 580,  # buckle\n'n03476684': 581,  # hair_slide\n'n03627232': 582,  # knot\n'n03075370': 583,  # combination_lock\n'n03874599': 584,  # padlock\n'n03804744': 585,  # nail\n'n04127249': 586,  # safety_pin\n'n04153751': 587,  # screw\n'n03803284': 588,  # muzzle\n'n04162706': 589,  # seat_belt\n'n04228054': 590,  # ski\n'n02948072': 591,  # candle\n'n03590841': 592,  # jack-o'-lantern\n'n04286575': 593,  # spotlight\n'n04456115': 594,  # torch\n'n03814639': 595,  # neck_brace\n'n03933933': 596,  # pier\n'n04485082': 597,  # tripod\n'n03733131': 598,  # maypole\n'n03794056': 599,  # mousetrap\n'n04275548': 600,  # spider_web\n'n01768244': 601,  # trilobite\n'n01770081': 602,  # harvestman\n'n01770393': 603,  # scorpion\n'n01773157': 604,  # black_and_gold_garden_spider\n'n01773549': 605,  # barn_spider\n'n01773797': 606,  # garden_spider\n'n01774384': 607,  # black_widow\n'n01774750': 608,  # 
tarantula\n'n01775062': 609,  # wolf_spider\n'n01776313': 610,  # tick\n'n01784675': 611,  # centipede\n'n01990800': 612,  # isopod\n'n01978287': 613,  # Dungeness_crab\n'n01978455': 614,  # rock_crab\n'n01980166': 615,  # fiddler_crab\n'n01981276': 616,  # king_crab\n'n01983481': 617,  # American_lobster\n'n01984695': 618,  # spiny_lobster\n'n01985128': 619,  # crayfish\n'n01986214': 620,  # hermit_crab\n'n02165105': 621,  # tiger_beetle\n'n02165456': 622,  # ladybug\n'n02167151': 623,  # ground_beetle\n'n02168699': 624,  # long-horned_beetle\n'n02169497': 625,  # leaf_beetle\n'n02172182': 626,  # dung_beetle\n'n02174001': 627,  # rhinoceros_beetle\n'n02177972': 628,  # weevil\n'n02190166': 629,  # fly\n'n02206856': 630,  # bee\n'n02226429': 631,  # grasshopper\n'n02229544': 632,  # cricket\n'n02231487': 633,  # walking_stick\n'n02233338': 634,  # cockroach\n'n02236044': 635,  # mantis\n'n02256656': 636,  # cicada\n'n02259212': 637,  # leafhopper\n'n02264363': 638,  # lacewing\n'n02268443': 639,  # dragonfly\n'n02268853': 640,  # damselfly\n'n02276258': 641,  # admiral\n'n02277742': 642,  # ringlet\n'n02279972': 643,  # monarch\n'n02280649': 644,  # cabbage_butterfly\n'n02281406': 645,  # sulphur_butterfly\n'n02281787': 646,  # lycaenid\n'n01910747': 647,  # jellyfish\n'n01914609': 648,  # sea_anemone\n'n01917289': 649,  # brain_coral\n'n01924916': 650,  # flatworm\n'n01930112': 651,  # nematode\n'n01943899': 652,  # conch\n'n01944390': 653,  # snail\n'n01945685': 654,  # slug\n'n01950731': 655,  # sea_slug\n'n01955084': 656,  # chiton\n'n02319095': 657,  # sea_urchin\n'n02321529': 658,  # sea_cucumber\n'n03584829': 659,  # iron\n'n03297495': 660,  # espresso_maker\n'n03761084': 661,  # microwave\n'n03259280': 662,  # Dutch_oven\n'n04111531': 663,  # rotisserie\n'n04442312': 664,  # toaster\n'n04542943': 665,  # waffle_iron\n'n04517823': 666,  # vacuum\n'n03207941': 667,  # dishwasher\n'n04070727': 668,  # refrigerator\n'n04554684': 669,  # washer\n'n03133878': 
670,  # Crock_Pot\n'n03400231': 671,  # frying_pan\n'n04596742': 672,  # wok\n'n02939185': 673,  # caldron\n'n03063689': 674,  # coffeepot\n'n04398044': 675,  # teapot\n'n04270147': 676,  # spatula\n'n02699494': 677,  # altar\n'n04486054': 678,  # triumphal_arch\n'n03899768': 679,  # patio\n'n04311004': 680,  # steel_arch_bridge\n'n04366367': 681,  # suspension_bridge\n'n04532670': 682,  # viaduct\n'n02793495': 683,  # barn\n'n03457902': 684,  # greenhouse\n'n03877845': 685,  # palace\n'n03781244': 686,  # monastery\n'n03661043': 687,  # library\n'n02727426': 688,  # apiary\n'n02859443': 689,  # boathouse\n'n03028079': 690,  # church\n'n03788195': 691,  # mosque\n'n04346328': 692,  # stupa\n'n03956157': 693,  # planetarium\n'n04081281': 694,  # restaurant\n'n03032252': 695,  # cinema\n'n03529860': 696,  # home_theater\n'n03697007': 697,  # lumbermill\n'n03065424': 698,  # coil\n'n03837869': 699,  # obelisk\n'n04458633': 700,  # totem_pole\n'n02980441': 701,  # castle\n'n04005630': 702,  # prison\n'n03461385': 703,  # grocery_store\n'n02776631': 704,  # bakery\n'n02791270': 705,  # barbershop\n'n02871525': 706,  # bookshop\n'n02927161': 707,  # butcher_shop\n'n03089624': 708,  # confectionery\n'n04200800': 709,  # shoe_shop\n'n04443257': 710,  # tobacco_shop\n'n04462240': 711,  # toyshop\n'n03388043': 712,  # fountain\n'n03042490': 713,  # cliff_dwelling\n'n04613696': 714,  # yurt\n'n03216828': 715,  # dock\n'n02892201': 716,  # brass\n'n03743016': 717,  # megalith\n'n02788148': 718,  # bannister\n'n02894605': 719,  # breakwater\n'n03160309': 720,  # dam\n'n03000134': 721,  # chainlink_fence\n'n03930313': 722,  # picket_fence\n'n04604644': 723,  # worm_fence\n'n04326547': 724,  # stone_wall\n'n03459775': 725,  # grille\n'n04239074': 726,  # sliding_door\n'n04501370': 727,  # turnstile\n'n03792972': 728,  # mountain_tent\n'n04149813': 729,  # scoreboard\n'n03530642': 730,  # honeycomb\n'n03961711': 731,  # plate_rack\n'n03903868': 732,  # pedestal\n'n02814860': 733,  
# beacon\n'n07711569': 734,  # mashed_potato\n'n07720875': 735,  # bell_pepper\n'n07714571': 736,  # head_cabbage\n'n07714990': 737,  # broccoli\n'n07715103': 738,  # cauliflower\n'n07716358': 739,  # zucchini\n'n07716906': 740,  # spaghetti_squash\n'n07717410': 741,  # acorn_squash\n'n07717556': 742,  # butternut_squash\n'n07718472': 743,  # cucumber\n'n07718747': 744,  # artichoke\n'n07730033': 745,  # cardoon\n'n07734744': 746,  # mushroom\n'n04209239': 747,  # shower_curtain\n'n03594734': 748,  # jean\n'n02971356': 749,  # carton\n'n03485794': 750,  # handkerchief\n'n04133789': 751,  # sandal\n'n02747177': 752,  # ashcan\n'n04125021': 753,  # safe\n'n07579787': 754,  # plate\n'n03814906': 755,  # necklace\n'n03134739': 756,  # croquet_ball\n'n03404251': 757,  # fur_coat\n'n04423845': 758,  # thimble\n'n03877472': 759,  # pajama\n'n04120489': 760,  # running_shoe\n'n03062245': 761,  # cocktail_shaker\n'n03014705': 762,  # chest\n'n03717622': 763,  # manhole_cover\n'n03777754': 764,  # modem\n'n04493381': 765,  # tub\n'n04476259': 766,  # tray\n'n02777292': 767,  # balance_beam\n'n07693725': 768,  # bagel\n'n03998194': 769,  # prayer_rug\n'n03617480': 770,  # kimono\n'n07590611': 771,  # hot_pot\n'n04579145': 772,  # whiskey_jug\n'n03623198': 773,  # knee_pad\n'n07248320': 774,  # book_jacket\n'n04277352': 775,  # spindle\n'n04229816': 776,  # ski_mask\n'n02823428': 777,  # beer_bottle\n'n03127747': 778,  # crash_helmet\n'n02877765': 779,  # bottlecap\n'n04435653': 780,  # tile_roof\n'n03724870': 781,  # mask\n'n03710637': 782,  # maillot\n'n03920288': 783,  # Petri_dish\n'n03379051': 784,  # football_helmet\n'n02807133': 785,  # bathing_cap\n'n04399382': 786,  # teddy\n'n03527444': 787,  # holster\n'n03983396': 788,  # pop_bottle\n'n03924679': 789,  # photocopier\n'n04532106': 790,  # vestment\n'n06785654': 791,  # crossword_puzzle\n'n03445777': 792,  # golf_ball\n'n07613480': 793,  # trifle\n'n04350905': 794,  # suit\n'n04562935': 795,  # 
water_tower\n'n03325584': 796,  # feather_boa\n'n03045698': 797,  # cloak\n'n07892512': 798,  # red_wine\n'n03250847': 799,  # drumstick\n'n04192698': 800,  # shield\n'n03026506': 801,  # Christmas_stocking\n'n03534580': 802,  # hoopskirt\n'n07565083': 803,  # menu\n'n04296562': 804,  # stage\n'n02869837': 805,  # bonnet\n'n07871810': 806,  # meat_loaf\n'n02799071': 807,  # baseball\n'n03314780': 808,  # face_powder\n'n04141327': 809,  # scabbard\n'n04357314': 810,  # sunscreen\n'n02823750': 811,  # beer_glass\n'n13052670': 812,  # hen-of-the-woods\n'n07583066': 813,  # guacamole\n'n03637318': 814,  # lampshade\n'n04599235': 815,  # wool\n'n07802026': 816,  # hay\n'n02883205': 817,  # bow_tie\n'n03709823': 818,  # mailbag\n'n04560804': 819,  # water_jug\n'n02909870': 820,  # bucket\n'n03207743': 821,  # dishrag\n'n04263257': 822,  # soup_bowl\n'n07932039': 823,  # eggnog\n'n03786901': 824,  # mortar\n'n04479046': 825,  # trench_coat\n'n03873416': 826,  # paddle\n'n02999410': 827,  # chain\n'n04367480': 828,  # swab\n'n03775546': 829,  # mixing_bowl\n'n07875152': 830,  # potpie\n'n04591713': 831,  # wine_bottle\n'n04201297': 832,  # shoji\n'n02916936': 833,  # bulletproof_vest\n'n03240683': 834,  # drilling_platform\n'n02840245': 835,  # binder\n'n02963159': 836,  # cardigan\n'n04370456': 837,  # sweatshirt\n'n03991062': 838,  # pot\n'n02843684': 839,  # birdhouse\n'n03482405': 840,  # hamper\n'n03942813': 841,  # ping-pong_ball\n'n03908618': 842,  # pencil_box\n'n03902125': 843,  # pay-phone\n'n07584110': 844,  # consomme\n'n02730930': 845,  # apron\n'n04023962': 846,  # punching_bag\n'n02769748': 847,  # backpack\n'n10148035': 848,  # groom\n'n02817516': 849,  # bearskin\n'n03908714': 850,  # pencil_sharpener\n'n02906734': 851,  # broom\n'n03788365': 852,  # mosquito_net\n'n02667093': 853,  # abaya\n'n03787032': 854,  # mortarboard\n'n03980874': 855,  # poncho\n'n03141823': 856,  # crutch\n'n03976467': 857,  # Polaroid_camera\n'n04264628': 858,  # 
space_bar\n'n07930864': 859,  # cup\n'n04039381': 860,  # racket\n'n06874185': 861,  # traffic_light\n'n04033901': 862,  # quill\n'n04041544': 863,  # radio\n'n07860988': 864,  # dough\n'n03146219': 865,  # cuirass\n'n03763968': 866,  # military_uniform\n'n03676483': 867,  # lipstick\n'n04209133': 868,  # shower_cap\n'n03782006': 869,  # monitor\n'n03857828': 870,  # oscilloscope\n'n03775071': 871,  # mitten\n'n02892767': 872,  # brassiere\n'n07684084': 873,  # French_loaf\n'n04522168': 874,  # vase\n'n03764736': 875,  # milk_can\n'n04118538': 876,  # rugby_ball\n'n03887697': 877,  # paper_towel\n'n13044778': 878,  # earthstar\n'n03291819': 879,  # envelope\n'n03770439': 880,  # miniskirt\n'n03124170': 881,  # cowboy_hat\n'n04487081': 882,  # trolleybus\n'n03916031': 883,  # perfume\n'n02808440': 884,  # bathtub\n'n07697537': 885,  # hotdog\n'n12985857': 886,  # coral_fungus\n'n02917067': 887,  # bullet_train\n'n03938244': 888,  # pillow\n'n15075141': 889,  # toilet_tissue\n'n02978881': 890,  # cassette\n'n02966687': 891,  # carpenter's_kit\n'n03633091': 892,  # ladle\n'n13040303': 893,  # stinkhorn\n'n03690938': 894,  # lotion\n'n03476991': 895,  # hair_spray\n'n02669723': 896,  # academic_gown\n'n03220513': 897,  # dome\n'n03127925': 898,  # crate\n'n04584207': 899,  # wig\n'n07880968': 900,  # burrito\n'n03937543': 901,  # pill_bottle\n'n03000247': 902,  # chain_mail\n'n04418357': 903,  # theater_curtain\n'n04590129': 904,  # window_shade\n'n02795169': 905,  # barrel\n'n04553703': 906,  # washbasin\n'n02783161': 907,  # ballpoint\n'n02802426': 908,  # basketball\n'n02808304': 909,  # bath_towel\n'n03124043': 910,  # cowboy_boot\n'n03450230': 911,  # gown\n'n04589890': 912,  # window_screen\n'n12998815': 913,  # agaric\n'n02992529': 914,  # cellular_telephone\n'n03825788': 915,  # nipple\n'n02790996': 916,  # barbell\n'n03710193': 917,  # mailbox\n'n03630383': 918,  # lab_coat\n'n03347037': 919,  # fire_screen\n'n03769881': 920,  # minibus\n'n03871628': 921,  # 
packet\n'n03733281': 922,  # maze\n'n03976657': 923,  # pole\n'n03535780': 924,  # horizontal_bar\n'n04259630': 925,  # sombrero\n'n03929855': 926,  # pickelhaube\n'n04049303': 927,  # rain_barrel\n'n04548362': 928,  # wallet\n'n02979186': 929,  # cassette_player\n'n06596364': 930,  # comic_book\n'n03935335': 931,  # piggy_bank\n'n06794110': 932,  # street_sign\n'n02825657': 933,  # bell_cote\n'n03388183': 934,  # fountain_pen\n'n04591157': 935,  # Windsor_tie\n'n04540053': 936,  # volleyball\n'n03866082': 937,  # overskirt\n'n04136333': 938,  # sarong\n'n04026417': 939,  # purse\n'n02865351': 940,  # bolo_tie\n'n02834397': 941,  # bib\n'n03888257': 942,  # parachute\n'n04235860': 943,  # sleeping_bag\n'n04404412': 944,  # television\n'n04371430': 945,  # swimming_trunks\n'n03733805': 946,  # measuring_cup\n'n07920052': 947,  # espresso\n'n07873807': 948,  # pizza\n'n02895154': 949,  # breastplate\n'n04204238': 950,  # shopping_basket\n'n04597913': 951,  # wooden_spoon\n'n04131690': 952,  # saltshaker\n'n07836838': 953,  # chocolate_sauce\n'n09835506': 954,  # ballplayer\n'n03443371': 955,  # goblet\n'n13037406': 956,  # gyromitra\n'n04336792': 957,  # stretcher\n'n04557648': 958,  # water_bottle\n'n03187595': 959,  # dial_telephone\n'n04254120': 960,  # soap_dispenser\n'n03595614': 961,  # jersey\n'n04146614': 962,  # school_bus\n'n03598930': 963,  # jigsaw_puzzle\n'n03958227': 964,  # plastic_bag\n'n04069434': 965,  # reflex_camera\n'n03188531': 966,  # diaper\n'n02786058': 967,  # Band_Aid\n'n07615774': 968,  # ice_lolly\n'n04525038': 969,  # velvet\n'n04409515': 970,  # tennis_ball\n'n03424325': 971,  # gasmask\n'n03223299': 972,  # doormat\n'n03680355': 973,  # Loafer\n'n07614500': 974,  # ice_cream\n'n07695742': 975,  # pretzel\n'n04033995': 976,  # quilt\n'n03710721': 977,  # maillot\n'n04392985': 978,  # tape_player\n'n03047690': 979,  # clog\n'n03584254': 980,  # iPod\n'n13054560': 981,  # bolete\n'n10565667': 982,  # scuba_diver\n'n03950228': 983,  # 
pitcher\n'n03729826': 984,  # matchstick\n'n02837789': 985,  # bikini\n'n04254777': 986,  # sock\n'n02988304': 987,  # CD_player\n'n03657121': 988,  # lens_cap\n'n04417672': 989,  # thatch\n'n04523525': 990,  # vault\n'n02815834': 991,  # beaker\n'n09229709': 992,  # bubble\n'n07697313': 993,  # cheeseburger\n'n03888605': 994,  # parallel_bars\n'n03355925': 995,  # flagpole\n'n03063599': 996,  # coffee_mug\n'n04116512': 997,  # rubber_eraser\n'n04325704': 998,  # stole\n'n07831146': 999,  # carbonara\n'n03255030': 1000,  # dumbbell\n}\n"
  },
  {
    "path": "datasets/imagenet.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nfrom skimage.io import imread\nfrom scipy.misc import imresize\nfrom datasets.imagenet.map import class2num\n\nfrom util import log\n\n__IMAGENET_IMG_PATH__ = '/YOUR_IMAGENET_PATH/ILSVRC/Data/CLS-LOC'\n__IMAGENET_LIST_PATH__ = './datasets/tiny_imagenet'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        file = os.path.join(__IMAGENET_IMG_PATH__, self._ids[0])\n\n        try:\n            imread(file)\n        except:\n            raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def load_image(self, id):\n        img = imread(\n            os.path.join(__IMAGENET_IMG_PATH__, id)) / 255.\n        img = imresize(img, [256, 256])\n\n        y = np.random.randint(img.shape[0]-224)\n        x = np.random.randint(img.shape[1]-224)\n        img = img[y:y+224, x:x+224, :3]\n\n        l = np.zeros(1000)\n        l[class2num[id.split('/')[-2]]] = 1\n        return img, l\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m, l = self.load_image(id)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __size__(self):\n        return 114, 114\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True, ratio=0.8):\n    ids = all_ids()\n\n    num_trains = int(len(ids) * ratio)\n\n    
dataset_train = Dataset(ids[:num_trains], name='train', is_train=False)\n    dataset_test = Dataset(ids[num_trains:], name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids():\n    id_filename = 'train_list.txt'\n\n    id_txt = os.path.join(__IMAGENET_LIST_PATH__, id_filename)\n    try:\n        with open(id_txt, 'r') as fp:\n            _ids = [s.strip() for s in fp.readlines() if s]\n    except:\n        raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n    rs.shuffle(_ids)\n    return _ids\n"
  },
  {
    "path": "datasets/mnist.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nimport h5py\n\nfrom util import log\n\n__PATH__ = './datasets/mnist'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        filename = 'data.hdf5'\n\n        file = os.path.join(__PATH__, filename)\n        log.info(\"Reading %s ...\", file)\n\n        try:\n            self.data = h5py.File(file, 'r')\n        except:\n            raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m = self.data[id]['image'].value/255.\n        l = self.data[id]['label'].value.astype(np.float32)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True):\n    id_train, id_test = all_ids(60000)\n\n    dataset_train = Dataset(id_train, name='train', is_train=False)\n    dataset_test = Dataset(id_test, name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids(num_trains):\n    id_filename = 'id.txt'\n\n    id_txt = os.path.join(__PATH__, id_filename)\n    try:\n        with open(id_txt, 'r') as fp:\n            _ids = [s.strip() for s in fp.readlines() if s]\n    except:\n        raise IOError('Dataset not found. 
Please make sure the dataset was downloaded.')\n\n    id_train, id_test = _ids[:num_trains], _ids[num_trains:]\n    rs.shuffle(id_train)\n    rs.shuffle(id_test)\n\n    return id_train, id_test\n"
  },
  {
    "path": "datasets/svhn.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nimport h5py\n\nfrom util import log\n\n__PATH__ = './datasets/svhn'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        filename = 'data.hdf5'\n\n        file = os.path.join(__PATH__, filename)\n        log.info(\"Reading %s ...\", file)\n\n        try:\n            self.data = h5py.File(file, 'r')\n        except:\n            raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m = self.data[id]['image'].value/255.\n        l = self.data[id]['label'].value.astype(np.float32)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True):\n    id_train, id_test = all_ids(60000)\n\n    dataset_train = Dataset(id_train, name='train', is_train=False)\n    dataset_test = Dataset(id_test, name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids(num_trains):\n    id_filename = 'id.txt'\n\n    id_txt = os.path.join(__PATH__, id_filename)\n    try:\n        with open(id_txt, 'r') as fp:\n            _ids = [s.strip() for s in fp.readlines() if s]\n    except:\n        raise IOError('Dataset not found. 
Please make sure the dataset was downloaded.')\n\n    id_train, id_test = _ids[:num_trains], _ids[num_trains:]\n    rs.shuffle(id_train)\n    rs.shuffle(id_test)\n\n    return id_train, id_test\n"
  },
  {
    "path": "datasets/tiny_imagenet.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os.path\nimport numpy as np\nfrom skimage.io import imread\nfrom scipy.misc import imresize\n\nfrom util import log\n\n__IMAGENET_IMG_PATH__ = './datasets/tiny_imagenet/tiny-imagenet-200/'\n__IMAGENET_LIST_PATH__ = './datasets/tiny_imagenet'\n\nrs = np.random.RandomState(123)\n\n\nclass Dataset(object):\n\n    def __init__(self, ids, name='default',\n                 max_examples=None, is_train=True):\n        self._ids = list(ids)\n        self.name = name\n        self.is_train = is_train\n\n        if max_examples is not None:\n            self._ids = self._ids[:max_examples]\n\n        file = os.path.join(__IMAGENET_IMG_PATH__, self._ids[0])\n\n        with open(os.path.join(__IMAGENET_IMG_PATH__, 'wnids.txt')) as f:\n            self.label_list = f.readlines()\n        self.label_list = [label.strip() for label in self.label_list]\n\n        with open(os.path.join(__IMAGENET_IMG_PATH__, 'val/val_annotations.txt')) as f:\n            self.val_label_list = f.readlines()\n        self.val_label_list = [label.split('\\t')[1] for label in self.val_label_list]\n        try:\n            imread(file)\n        except:\n            raise IOError('Dataset not found. 
Please make sure the dataset was downloaded.')\n        log.info(\"Reading Done: %s\", file)\n\n    def load_image(self, id):\n        # resize first (imresize returns uint8), then normalize to [0, 1];\n        # normalizing before imresize would be undone by its internal bytescaling\n        img = imread(os.path.join(__IMAGENET_IMG_PATH__, id))\n        img = imresize(img, [72, 72]) / 255.\n        if len(img.shape) == 2:  # grayscale image: replicate to 3 channels\n            img = np.stack([img, img, img], axis=-1)\n\n        # random 64x64 crop\n        y = np.random.randint(img.shape[0]-64)\n        x = np.random.randint(img.shape[1]-64)\n        img = img[y:y+64, x:x+64, :3]\n\n        l = np.zeros(200)\n        if id.split('/')[1] == 'train':\n            l[self.label_list.index(id.split('/')[-3])] = 1\n        elif id.split('/')[1] == 'val':\n            img_idx = int(id.split('/')[-1].split('_')[-1].split('.')[0])\n            l[self.label_list.index(self.val_label_list[img_idx])] = 1\n        return img, l\n\n    def get_data(self, id):\n        # preprocessing and data augmentation\n        m, l = self.load_image(id)\n        return m, l\n\n    @property\n    def ids(self):\n        return self._ids\n\n    def __len__(self):\n        return len(self.ids)\n\n    def __size__(self):\n        return 64, 64\n\n    def __repr__(self):\n        return 'Dataset (%s, %d examples)' % (\n            self.name,\n            len(self)\n        )\n\n\ndef create_default_splits(is_train=True, ratio=0.8):\n    id_train, id_test = all_ids()\n\n    dataset_train = Dataset(id_train, name='train', is_train=is_train)\n    dataset_test = Dataset(id_test, name='test', is_train=False)\n    return dataset_train, dataset_test\n\n\ndef all_ids():\n    id_train_path = os.path.join(__IMAGENET_LIST_PATH__, 'train_list.txt')\n    id_val_path = os.path.join(__IMAGENET_LIST_PATH__, 'val_list.txt')\n    try:\n        with open(id_train_path, 'r') as fp:\n            id_train = [s.strip() for s in fp.readlines() if s]\n        with open(id_val_path, 'r') as fp:\n            id_val = [s.strip() for s in fp.readlines() if s]\n    except IOError:\n        raise IOError('Dataset not found. Please make sure the dataset was downloaded.')\n    rs.shuffle(id_train)\n    rs.shuffle(id_val)\n    return id_train, id_val\n"
  },
  {
    "path": "download.py",
    "content": "from __future__ import print_function\nimport os\nimport tarfile\nimport subprocess\nimport argparse\nimport h5py\nimport numpy as np\n\n\nparser = argparse.ArgumentParser(description='Download datasets.')\nparser.add_argument('--datasets', metavar='N', type=str, nargs='+',\n                    choices=['MNIST', 'Fashion', 'SVHN', 'CIFAR10'])\n\n\ndef prepare_h5py(train_image, train_label, test_image,\n                 test_label, data_dir, num_class=10, shape=None):\n\n    image = np.concatenate((train_image, test_image), axis=0).astype(np.uint8)\n    label = np.concatenate((train_label, test_label), axis=0).astype(np.uint8)\n\n    print('Preprocessing data...')\n\n    import progressbar\n    bar = progressbar.ProgressBar(\n        maxval=100, widgets=[progressbar.Bar('=', '[', ']'),\n                             ' ', progressbar.Percentage()]\n    )\n    bar.start()\n\n    f = h5py.File(os.path.join(data_dir, 'data.hdf5'), 'w')\n    with open(os.path.join(data_dir, 'id.txt'), 'w') as data_id:\n        for i in range(image.shape[0]):\n\n            if i % (image.shape[0] / 100) == 0:\n                bar.update(i / (image.shape[0] / 100))\n\n            grp = f.create_group(str(i))\n            data_id.write('{}\\n'.format(i))\n            if shape:\n                grp['image'] = np.reshape(image[i], shape, order='F')\n            else:\n                grp['image'] = image[i]\n            label_vec = np.zeros(num_class)\n            label_vec[label[i] % num_class] = 1\n            grp['label'] = label_vec.astype(np.bool)\n        bar.finish()\n    f.close()\n    return\n\n\ndef check_file(data_dir):\n    if os.path.exists(data_dir):\n        if os.path.isfile(os.path.join(data_dir, 'data.hdf5')) and\\\n               os.path.isfile(os.path.join(data_dir, 'id.txt')):\n            return True\n    else:\n        os.mkdir(data_dir)\n    return False\n\n\ndef download_mnist(download_path, fashion_mnist=False):\n    if not fashion_mnist:\n        
data_url = 'http://yann.lecun.com/exdb/mnist/'\n        data_dir = os.path.join(download_path, 'mnist')\n    else:\n        data_url = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'\n        data_dir = os.path.join(download_path, 'fashion_mnist')\n\n    if check_file(data_dir):\n        if not fashion_mnist:\n            print('MNIST was downloaded.')\n        else:\n            print('Fashion MNIST was downloaded.')\n        return\n\n    keys = ['train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',\n            't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz']\n\n    for k in keys:\n        url = (data_url+k).format(**locals())\n        target_path = os.path.join(data_dir, k)\n        cmd = ['curl', url, '-o', target_path]\n        print('Downloading ', k)\n        subprocess.call(cmd)\n        cmd = ['gzip', '-d', target_path]\n        print('Unzip ', k)\n        subprocess.call(cmd)\n\n    num_mnist_train = 60000\n    num_mnist_test = 10000\n\n    fd = open(os.path.join(data_dir, 'train-images-idx3-ubyte'))\n    loaded = np.fromfile(file=fd, dtype=np.uint8)\n    train_image = loaded[16:].reshape((num_mnist_train, 28, 28, 1)).astype(np.float)\n\n    fd = open(os.path.join(data_dir, 'train-labels-idx1-ubyte'))\n    loaded = np.fromfile(file=fd, dtype=np.uint8)\n    train_label = np.asarray(loaded[8:].reshape((num_mnist_train)).astype(np.float))\n\n    fd = open(os.path.join(data_dir, 't10k-images-idx3-ubyte'))\n    loaded = np.fromfile(file=fd, dtype=np.uint8)\n    test_image = loaded[16:].reshape((num_mnist_test, 28, 28, 1)).astype(np.float)\n\n    fd = open(os.path.join(data_dir, 't10k-labels-idx1-ubyte'))\n    loaded = np.fromfile(file=fd, dtype=np.uint8)\n    test_label = np.asarray(loaded[8:].reshape((num_mnist_test)).astype(np.float))\n\n    prepare_h5py(train_image, train_label, test_image, test_label, data_dir)\n\n    for k in keys:\n        cmd = ['rm', '-f', os.path.join(data_dir, k[:-3])]\n        
subprocess.call(cmd)\n\n\ndef download_svhn(download_path):\n    data_dir = os.path.join(download_path, 'svhn')\n\n    import scipy.io as sio\n    # svhn file loader\n\n    def svhn_loader(url, path):\n        cmd = ['curl', url, '-o', path]\n        subprocess.call(cmd)\n        m = sio.loadmat(path)\n        return m['X'], m['y']\n\n    if check_file(data_dir):\n        print('SVHN was downloaded.')\n        return\n\n    data_url = 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat'\n    train_image, train_label = svhn_loader(data_url, os.path.join(data_dir, 'train_32x32.mat'))\n\n    data_url = 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat'\n    test_image, test_label = svhn_loader(data_url, os.path.join(data_dir, 'test_32x32.mat'))\n\n    prepare_h5py(np.transpose(train_image, (3, 0, 1, 2)), train_label,\n                 np.transpose(test_image, (3, 0, 1, 2)), test_label, data_dir)\n\n    # shell globs are not expanded by subprocess without shell=True,\n    # so remove the downloaded .mat files explicitly\n    for mat in ['train_32x32.mat', 'test_32x32.mat']:\n        subprocess.call(['rm', '-f', os.path.join(data_dir, mat)])\n\n\ndef download_cifar10(download_path):\n    data_dir = os.path.join(download_path, 'cifar10')\n\n    # cifar file loader\n    def unpickle(file):\n        import cPickle\n        with open(file, 'rb') as fo:\n            dict = cPickle.load(fo)\n        return dict\n\n    if check_file(data_dir):\n        print('CIFAR10 was downloaded.')\n        return\n\n    data_url = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'\n    k = 'cifar-10-python.tar.gz'\n    target_path = os.path.join(data_dir, k)\n    print(target_path)\n    cmd = ['curl', data_url, '-o', target_path]\n    print('Downloading CIFAR10')\n    subprocess.call(cmd)\n    tarfile.open(target_path, 'r:gz').extractall(data_dir)\n\n    num_cifar_train = 50000\n    num_cifar_test = 10000\n\n    target_path = os.path.join(data_dir, 'cifar-10-batches-py')\n    train_image = []\n    train_label = []\n    for i in range(5):\n        fd = os.path.join(target_path, 'data_batch_{}'.format(i+1))\n        dict = unpickle(fd)\n        train_image.append(dict['data'])\n        train_label.append(dict['labels'])\n\n    train_image = np.reshape(np.stack(train_image, axis=0), [num_cifar_train, 32*32*3])\n    train_label = np.reshape(np.array(np.stack(train_label, axis=0)), [num_cifar_train])\n\n    fd = os.path.join(target_path, 'test_batch')\n    dict = unpickle(fd)\n    test_image = np.reshape(dict['data'], [num_cifar_test, 32*32*3])\n    test_label = np.reshape(dict['labels'], [num_cifar_test])\n\n    prepare_h5py(train_image, train_label, test_image, test_label, data_dir, shape=[32, 32, 3])\n\n    cmd = ['rm', '-f', os.path.join(data_dir, 'cifar-10-python.tar.gz')]\n    subprocess.call(cmd)\n    cmd = ['rm', '-rf', os.path.join(data_dir, 'cifar-10-batches-py')]\n    subprocess.call(cmd)\n\nif __name__ == '__main__':\n    args = parser.parse_args()\n    path = './datasets'\n    if not os.path.exists(path):\n        os.mkdir(path)\n\n    if args.datasets is None:\n        raise ValueError('Please at least specify one dataset to be downloaded.')\n\n    if 'MNIST' in args.datasets:\n        download_mnist('./datasets')\n    if 'Fashion' in args.datasets:\n        download_mnist('./datasets', fashion_mnist=True)\n    if 'SVHN' in args.datasets:\n        download_svhn('./datasets')\n    if 'CIFAR10' in args.datasets:\n        download_cifar10('./datasets')\n"
  },
  {
    "path": "input_ops.py",
    "content": "import numpy as np\nimport tensorflow as tf\n\nfrom util import log\n\ndef check_data_id(dataset, data_id):\n    if not data_id:\n        return\n\n    wrong = []\n    for id in data_id:\n        if id in dataset.data:\n            pass\n        else:\n            wrong.append(id)\n\n    if len(wrong) > 0:\n        raise RuntimeError(\"There are %d invalid ids, including %s\" % (\n            len(wrong), wrong[:5]\n        ))\n\n\ndef create_input_ops(dataset,\n                     batch_size,\n                     num_threads=1,           # for creating batches\n                     is_training=False,\n                     data_id=None,\n                     scope='inputs',\n                     shuffle=True,\n                     ):\n    '''\n    Return a batched tensor for the inputs from the dataset.\n    '''\n    input_ops = {}\n\n    if data_id is None:\n        data_id = dataset.ids\n        log.info(\"input_ops [%s]: Using %d IDs from dataset\", scope, len(data_id))\n    else:\n        log.info(\"input_ops [%s]: Using specified %d IDs\", scope, len(data_id))\n\n    # single operations\n    with tf.device(\"/cpu:0\"), tf.name_scope(scope):\n        input_ops['id'] = tf.train.string_input_producer(\n           tf.convert_to_tensor(data_id),\n            capacity=128\n        ).dequeue(name='input_ids_dequeue')\n\n        m, label = dataset.get_data(data_id[0])\n\n        def load_fn(id):\n            # image [n, n], label: [m]\n            image, label = dataset.get_data(id)\n            return (id,\n                    image.astype(np.float32),\n                    label.astype(np.float32))\n\n        input_ops['id'], input_ops['image'], input_ops['label'] = tf.py_func(\n            load_fn, inp=[input_ops['id']],\n            Tout=[tf.string, tf.float32, tf.float32],\n            name='func_hp'\n        )\n\n        input_ops['id'].set_shape([])\n        input_ops['image'].set_shape(list(m.shape))\n        
input_ops['label'].set_shape(list(label.shape))\n\n    # batchify\n    capacity = 2 * batch_size * num_threads\n    min_capacity = min(int(capacity * 0.75), 1024)\n\n    if shuffle:\n        batch_ops = tf.train.shuffle_batch(\n            input_ops,\n            batch_size=batch_size,\n            num_threads=num_threads,\n            capacity=capacity,\n            min_after_dequeue=min_capacity,\n        )\n    else:\n        batch_ops = tf.train.batch(\n            input_ops,\n            batch_size=batch_size,\n            num_threads=num_threads,\n            capacity=capacity,\n        )\n\n    return input_ops, batch_ops\n"
  },
  {
    "path": "model.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nimport tensorflow.contrib.slim as slim\n\nfrom ops import conv2d, fc, residual_block\nfrom util import log, train_test_summary\n\n\nclass Model(object):\n\n    def __init__(self, config, debug_information=False, is_train=True):\n        self.debug = debug_information\n\n        self.config = config\n        self.batch_size = self.config.batch_size\n        self.input_height = self.config.data_info[0]\n        self.input_width = self.config.data_info[1]\n        self.c_dim = self.config.data_info[2]\n        self.num_class = self.config.data_info[3]\n        self.norm_type = self.config.norm_type\n\n        # create placeholders for the input\n        self.image = tf.placeholder(\n            name='image', dtype=tf.float32,\n            shape=[self.batch_size, self.input_height, self.input_width, self.c_dim],\n        )\n        self.label = tf.placeholder(\n            name='label', dtype=tf.float32, shape=[self.batch_size, self.num_class],\n        )\n\n        self.is_training = tf.placeholder_with_default(bool(is_train), [], name='is_training')\n\n        self.build(is_train=is_train)\n\n    def get_feed_dict(self, batch_chunk, step=None, is_training=None):\n        fd = {\n            self.image: batch_chunk['image'],  # [B, h, w, c]\n            self.label: batch_chunk['label'],  # [B, n]\n        }\n        if is_training is not None:\n            fd[self.is_training] = is_training\n\n        return fd\n\n    def build(self, is_train=True):\n\n        n = self.num_class\n\n        # build loss and accuracy {{{\n        def build_loss(logits, labels):\n            # Cross-entropy loss\n            loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)\n\n            # Classification accuracy\n            correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(self.label, 1))\n     
       accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n            return tf.reduce_mean(loss), accuracy\n        # }}}\n\n        # Classifier: takes images as input and tries to output class label [B, n]\n        def C(img, scope='Classifier', reuse=False):\n            with tf.variable_scope(scope) as scope:\n                log.warn(scope.name)\n                _ = img\n\n                # MNIST, Fashion MNIST, SVHN, CIFAR\n                if self.config.dataset != 'ImageNet':\n                    # conv layers\n                    num_channels = [64, 128, 256, 512]\n                    for i in range(len(num_channels)):\n                        _ = conv2d(_, num_channels[i], is_train, norm_type=self.norm_type,\n                                   info=not reuse, name='conv_{}'.format(i))\n                        # use the is_training placeholder so dropout is disabled at test time\n                        _ = slim.dropout(_, keep_prob=0.5, is_training=self.is_training)\n\n                    # fc layers\n                    _ = tf.reshape(_, [self.batch_size, -1])\n                    num_fc_channels = [512, 128, 32, n]\n                    for i in range(len(num_fc_channels)):\n                        _ = fc(_, num_fc_channels[i], is_train, norm_type='none',\n                               info=not reuse, name='fc_{}'.format(i))\n                # ImageNet\n                else:\n                    # conv layers\n                    num_channels = [64, 128, 256, 512, 1024]\n                    num_residual_block = [0, 2, 3, 5, 2]\n                    for i in range(len(num_channels)):\n                        _ = conv2d(_, num_channels[i], is_train, norm_type=self.norm_type,\n                                   info=not reuse, name='conv_{}'.format(i))\n                        for j in range(num_residual_block[i]):\n                            _ = residual_block(_, num_channels[i], is_train,\n                                               norm_type=self.norm_type, info=not reuse,\n                                               name='residual_{}_{}'.format(i, j))\n                    _ = tf.layers.average_pooling2d(_, [7, 7], [7, 7])\n                    log.info('{} {}'.format(_.name, _.get_shape().as_list()))\n                    # fc layers\n                    _ = tf.reshape(_, [self.batch_size, -1])\n                    num_fc_channels = [n]\n                    for i in range(len(num_fc_channels)):\n                        _ = fc(_, num_fc_channels[i], is_train, norm_type='none',\n                               info=not reuse, name='fc_{}'.format(i))\n                return _\n\n        logits = C(self.image)\n        self.entropy, self.accuracy = build_loss(logits, self.label)\n        self.loss = self.entropy\n\n        train_test_summary(\"loss/accuracy\", self.accuracy)\n        train_test_summary(\"loss/loss\", self.loss)\n        train_test_summary(\"img/image\", self.image, summary_type='image')\n        log.warn('Successfully loaded the model.')\n"
  },
  {
    "path": "ops.py",
    "content": "import tensorflow as tf\nimport tensorflow.contrib.slim as slim\nfrom util import log\n\n\ndef norm(x, norm_type, is_train, G=32, esp=1e-5):\n    with tf.variable_scope('{}_norm'.format(norm_type)):\n        if norm_type == 'none':\n            output = x\n        elif norm_type == 'batch':\n            output = tf.contrib.layers.batch_norm(\n                x, center=True, scale=True, decay=0.999,\n                is_training=is_train, updates_collections=None\n            )\n        elif norm_type == 'group':\n            # normalize\n            # tranpose: [bs, h, w, c] to [bs, c, h, w] following the paper\n            x = tf.transpose(x, [0, 3, 1, 2])\n            N, C, H, W = x.get_shape().as_list()\n            G = min(G, C)\n            x = tf.reshape(x, [-1, G, C // G, H, W])\n            mean, var = tf.nn.moments(x, [2, 3, 4], keep_dims=True)\n            x = (x - mean) / tf.sqrt(var + esp)\n            # per channel gamma and beta\n            gamma = tf.Variable(tf.constant(1.0, shape=[C]), dtype=tf.float32, name='gamma')\n            beta = tf.Variable(tf.constant(0.0, shape=[C]), dtype=tf.float32, name='beta')\n            gamma = tf.reshape(gamma, [1, C, 1, 1])\n            beta = tf.reshape(beta, [1, C, 1, 1])\n\n            output = tf.reshape(x, [-1, C, H, W]) * gamma + beta\n            # tranpose: [bs, c, h, w, c] to [bs, h, w, c] following the paper\n            output = tf.transpose(output, [0, 2, 3, 1])\n        else:\n            raise NotImplementedError\n    return output\n\n\ndef lrelu(x, leak=0.2, name=\"lrelu\"):\n    with tf.variable_scope(name):\n        f1 = 0.5 * (1 + leak)\n        f2 = 0.5 * (1 - leak)\n        return f1 * x + f2 * abs(x)\n\n\ndef selu(x):\n    alpha = 1.6732632423543772848170429916717\n    scale = 1.0507009873554804934193349852946\n    return scale * tf.where(x > 0.0, x, alpha * tf.exp(x) - alpha)\n\n\ndef huber_loss(labels, predictions, delta=1.0):\n    residual = tf.abs(predictions - labels)\n   
 condition = tf.less(residual, delta)\n    small_res = 0.5 * tf.square(residual)\n    large_res = delta * residual - 0.5 * tf.square(delta)\n    return tf.where(condition, small_res, large_res)\n\n\ndef conv2d(input, output_shape, is_train, info=False,\n           activation_fn=lrelu, norm_type='batch',\n           k=4, s=2, stddev=0.02, name=\"conv2d\"):\n    with tf.variable_scope(name):\n        w = tf.get_variable('w', [k, k, input.get_shape()[-1], output_shape],\n                            initializer=tf.truncated_normal_initializer(stddev=stddev))\n        conv = tf.nn.conv2d(input, w, strides=[1, s, s, 1], padding='SAME')\n        biases = tf.get_variable('biases', [output_shape],\n                                 initializer=tf.constant_initializer(0.0))\n        if activation_fn is not None:\n            activation = activation_fn(conv + biases)\n        else:\n            activation = conv + biases\n        output = norm(activation, norm_type, is_train)\n    if info: log.info('{} {}'.format(name, output))\n    return output\n\n\ndef fc(input, output_shape, is_train, info=False,\n       norm_type='batch', activation_fn=lrelu, name=\"fc\"):\n    activation = slim.fully_connected(input, output_shape, activation_fn=activation_fn)\n    output = norm(activation, norm_type, is_train)\n    if info: log.info('{} {}'.format(name, output))\n    return output\n\n\ndef residual_block(input, output_shape, is_train, info=False, k=3, s=1,\n                   name=\"residual\", activation_fn=lrelu, norm_type='batch'):\n    with tf.variable_scope(name):\n        with tf.variable_scope('res1'):\n            _ = conv2d(input, output_shape, is_train, k=k, s=s,\n                       activation_fn=None, norm_type=norm_type)\n            _ = norm(_, norm_type, is_train)\n            _ = activation_fn(_)\n        with tf.variable_scope('res2'):\n            _ = conv2d(_, output_shape, is_train, k=k, s=s,\n                       activation_fn=None, norm_type=norm_type)\n        
    _ = norm(_, norm_type, is_train)\n        _ = activation_fn(_ + input)\n        if info: log.info('{} {}'.format(name, _.get_shape().as_list()))\n    return _\n"
  },
  {
    "path": "trainer.py",
    "content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom six.moves import xrange\n\nfrom util import log\nfrom pprint import pprint\n\nfrom model import Model\nfrom input_ops import create_input_ops\n\nimport os\nimport time\nimport numpy as np\nimport tensorflow.contrib.slim as slim\nimport tensorflow as tf\n\n\nclass Trainer(object):\n    def __init__(self,\n                 config,\n                 dataset,\n                 dataset_test):\n        self.config = config\n        hyper_parameter_str = '{}_lr_{}_bs_{}_norm_type_{}'.format(\n            config.dataset, config.learning_rate,\n            config.batch_size, config.norm_type\n        )\n        self.train_dir = './train_dir/%s-%s-%s' % (\n            config.prefix,\n            hyper_parameter_str,\n            time.strftime(\"%Y%m%d-%H%M%S\")\n        )\n\n        if not os.path.exists(self.train_dir): os.makedirs(self.train_dir)\n        log.infov(\"Train Dir: %s\", self.train_dir)\n\n        # --- input ops ---\n        self.batch_size = config.batch_size\n\n        _, self.batch_train = create_input_ops(dataset, self.batch_size,\n                                               is_training=True)\n        _, self.batch_test = create_input_ops(dataset_test, self.batch_size,\n                                              is_training=False)\n\n        # --- create model ---\n        self.model = Model(config)\n\n        # --- optimizer ---\n        self.global_step = tf.contrib.framework.get_or_create_global_step(graph=None)\n        self.learning_rate = config.learning_rate\n\n        self.check_op = tf.no_op()\n\n        all_vars = tf.trainable_variables()\n        slim.model_analyzer.analyze_vars(all_vars, print_info=True)\n\n        if not config.no_adjust_learning_rate:\n            config.learning_rate = config.learning_rate * config.batch_size\n\n        if not config.dataset == 'ImageNet':\n            self.optimizer = 
tf.contrib.layers.optimize_loss(\n                loss=self.model.loss,\n                global_step=self.global_step,\n                learning_rate=self.learning_rate,\n                optimizer=tf.train.AdamOptimizer,\n                clip_gradients=20.0,\n                name='optimizer_loss'\n            )\n\n            self.optimizer_dummy = tf.contrib.layers.optimize_loss(\n                loss=self.model.loss,\n                global_step=self.global_step,\n                learning_rate=self.learning_rate,\n                optimizer=tf.train.AdamOptimizer,\n                clip_gradients=20.0,\n                increment_global_step=False,\n                name='optimizer_loss_dummy'\n            )\n        else:\n            config.learning_rate = config.learning_rate * 1e2\n            self.optimizer = tf.contrib.layers.optimize_loss(\n                loss=self.model.loss,\n                global_step=self.global_step,\n                learning_rate=self.learning_rate,\n                optimizer=tf.train.MomentumOptimizer(self.learning_rate, momentum=0.9),\n                clip_gradients=20.0,\n                name='optimizer_loss'\n            )\n\n            self.optimizer_dummy = tf.contrib.layers.optimize_loss(\n                loss=self.model.loss,\n                global_step=self.global_step,\n                learning_rate=self.learning_rate,\n                optimizer=tf.train.MomentumOptimizer(self.learning_rate, momentum=0.9),\n                clip_gradients=20.0,\n                increment_global_step=False,\n                name='optimizer_loss_dummy'\n            )\n\n        self.train_summary_op = tf.summary.merge_all(key='train')\n        self.test_summary_op = tf.summary.merge_all(key='test')\n\n        self.saver = tf.train.Saver(max_to_keep=100)\n        self.pretrain_saver = tf.train.Saver(var_list=tf.trainable_variables(),\n                                             max_to_keep=100)\n        self.summary_writer = 
tf.summary.FileWriter(self.train_dir)\n        self.log_step = self.config.log_step\n        self.test_sample_step = self.config.test_sample_step\n        self.write_summary_step = self.config.write_summary_step\n\n        self.checkpoint_secs = 600  # 10 min\n\n        self.supervisor = tf.train.Supervisor(\n            logdir=self.train_dir,\n            is_chief=True,\n            saver=None,\n            summary_op=None,\n            summary_writer=self.summary_writer,\n            save_summaries_secs=300,\n            save_model_secs=self.checkpoint_secs,\n            global_step=self.global_step,\n        )\n\n        session_config = tf.ConfigProto(\n            allow_soft_placement=True,\n            gpu_options=tf.GPUOptions(allow_growth=True),\n            device_count={'GPU': 1},\n        )\n        self.session = self.supervisor.prepare_or_wait_for_session(config=session_config)\n\n        self.ckpt_path = config.checkpoint\n        if self.ckpt_path is not None:\n            log.info(\"Checkpoint path: %s\", self.ckpt_path)\n            self.pretrain_saver.restore(self.session, self.ckpt_path)\n            log.info(\"Loaded the pretrain parameters from the provided checkpoint path\")\n\n    def train(self):\n        log.infov(\"Training Starts!\")\n        pprint(self.batch_train)\n\n        ckpt_save_step = self.config.ckpt_save_step\n        log_step = self.log_step\n        test_sample_step = self.test_sample_step\n        write_summary_step = self.write_summary_step\n        step = 0\n\n        for s in xrange(self.config.max_training_step):\n            # periodic inference\n            if s % test_sample_step == 0:\n                accuracy, test_summary, loss, step_time = \\\n                    self.run_test(self.batch_test, is_train=False)\n                self.log_step_message(step, accuracy, loss, step_time, is_train=False)\n                self.summary_writer.add_summary(test_summary, global_step=step)\n\n            step, accuracy, 
train_summary, loss, step_time = \\\n                self.run_single_step(self.batch_train, s, is_train=True)\n            if not self.config.no_adjust_learning_rate:\n                for i in range(int(self.config.max_batch_size/self.config.batch_size-1)):\n                    _, accuracy, train_summary, loss, step_time = \\\n                        self.run_single_step(self.batch_train, s, is_train=True,\n                                             update_global_step=False)\n\n            if s % log_step == 0:\n                self.log_step_message(step, accuracy, loss, step_time)\n\n            if s % write_summary_step == 0:\n                self.summary_writer.add_summary(train_summary, global_step=step)\n\n            if s % ckpt_save_step == 0 and s > 0:\n                log.infov(\"Saved checkpoint at %d\", s)\n                self.saver.save(self.session,\n                                os.path.join(self.train_dir, 'model'),\n                                global_step=step)\n\n    def run_single_step(self, batch, step, is_train=True, update_global_step=True):\n        _start_time = time.time()\n\n        batch_chunk = self.session.run(batch)\n\n        fetch = [self.global_step, self.model.accuracy, self.train_summary_op,\n                 self.model.loss, self.check_op,\n                 self.optimizer if update_global_step else self.optimizer_dummy]\n\n        fetch_values = self.session.run(\n            fetch,\n            feed_dict=self.model.get_feed_dict(batch_chunk, step=step)\n        )\n\n        [step, accuracy, summary, loss] = fetch_values[:4]\n\n        _end_time = time.time()\n\n        return step, accuracy, summary, loss,  (_end_time - _start_time)\n\n    def run_test(self, batch, is_train=False, repeat_times=8):\n        _start_time = time.time()\n\n        batch_chunk = self.session.run(batch)\n\n        accuracy, summary, loss = self.session.run(\n            [self.model.accuracy,\n             self.test_summary_op, 
self.model.loss],\n            feed_dict=self.model.get_feed_dict(batch_chunk, is_training=False)\n        )\n\n        _end_time = time.time()\n\n        return accuracy, summary, loss,  (_end_time - _start_time)\n\n    def log_step_message(self, step, accuracy, loss, step_time, is_train=True):\n        if step_time == 0: step_time = 0.001\n        log_fn = (is_train and log.info or log.infov)\n        log_fn((\" [{split_mode:5s} step {step:4d}] \" +\n                \"Loss: {loss:.5f} \" +\n                \"Accuracy: {accuracy:.2f}% \"\n                \"({sec_per_batch:.3f} sec/batch, {instance_per_sec:.3f} instances/sec) \"\n                ).format(split_mode=(is_train and 'train' or 'val'),\n                         step=step,\n                         loss=loss,\n                         accuracy=accuracy*100,\n                         sec_per_batch=step_time,\n                         instance_per_sec=self.batch_size / step_time\n                         )\n               )\n\n\ndef main():\n    import argparse\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--batch_size', type=int, default=64)\n    parser.add_argument('--max_batch_size', type=int, default=64)\n    parser.add_argument('--prefix', type=str, default='default',\n                        help='the nickname of this training job')\n    parser.add_argument('--checkpoint', type=str, default=None)\n    parser.add_argument('--dataset', type=str, default='MNIST',\n                        choices=['MNIST', 'Fashion', 'SVHN',\n                                 'CIFAR10', 'ImageNet', 'TinyImageNet'])\n    parser.add_argument('--norm_type', type=str, default='batch',\n                        choices=['batch', 'group'])\n    # Log\n    parser.add_argument('--max_training_step', type=int, default=100000)\n    parser.add_argument('--log_step', type=int, default=10)\n    parser.add_argument('--test_sample_step', type=int, default=10)\n    parser.add_argument('--write_summary_step', type=int, 
default=10)\n    parser.add_argument('--ckpt_save_step', type=int, default=1000)\n    # Learning\n    parser.add_argument('--learning_rate', type=float, default=1e-5)\n    parser.add_argument('--no_adjust_learning_rate', action='store_true', default=False)\n    config = parser.parse_args()\n\n\n    if config.dataset == 'MNIST':\n        import datasets.mnist as dataset\n    elif config.dataset == 'Fashion':\n        import datasets.fashion_mnist as dataset\n    elif config.dataset == 'SVHN':\n        import datasets.svhn as dataset\n    elif config.dataset == 'CIFAR10':\n        import datasets.cifar10 as dataset\n    elif config.dataset == 'TinyImageNet':\n        import datasets.tiny_imagenet as dataset\n    elif config.dataset == 'ImageNet':\n        import datasets.imagenet as dataset\n    else:\n        raise ValueError(config.dataset)\n\n    dataset_train, dataset_test = dataset.create_default_splits()\n    image, label = dataset_train.get_data(dataset_train.ids[0])\n    config.data_info = np.concatenate([np.asarray(image.shape), np.asarray(label.shape)])\n\n    trainer = Trainer(config,\n                      dataset_train, dataset_test)\n\n    log.warning(\"dataset: %s, learning_rate: %f\", config.dataset, config.learning_rate)\n    trainer.train()\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "util.py",
    "content": "\"\"\" Utilities \"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n\n# Logging\n# =======\n\nimport logging\nfrom colorlog import ColoredFormatter\nimport tensorflow as tf\n\n\nch = logging.StreamHandler()\nch.setLevel(logging.DEBUG)\n\nformatter = ColoredFormatter(\n    \"%(log_color)s[%(asctime)s] %(message)s\",\n    datefmt=None,\n    reset=True,\n    log_colors={\n        'DEBUG':    'cyan',\n        'INFO':     'white,bold',\n        'INFOV':    'cyan,bold',\n        'WARNING':  'yellow',\n        'ERROR':    'red,bold',\n        'CRITICAL': 'red,bg_white',\n    },\n    secondary_log_colors={},\n    style='%'\n)\nch.setFormatter(formatter)\n\nlog = logging.getLogger('Log')\nlog.setLevel(logging.DEBUG)\nlog.handlers = []       # No duplicated handlers\nlog.propagate = False   # workaround for duplicated logs in ipython\nlog.addHandler(ch)\n\nlogging.addLevelName(logging.INFO + 1, 'INFOV')\n\n\ndef _infov(self, msg, *args, **kwargs):\n    self.log(logging.INFO + 1, msg, *args, **kwargs)\n\nlogging.Logger.infov = _infov\n\n\ndef train_test_summary(name, value, max_outputs=4, summary_type='scalar'):\n    if summary_type == 'scalar':\n        tf.summary.scalar(name, value, collections=['train'])\n        tf.summary.scalar(\"test_{}\".format(name), value, collections=['test'])\n    elif summary_type == 'image':\n        tf.summary.image(name, value, max_outputs=max_outputs, collections=['train'])\n        tf.summary.image(\"test_{}\".format(name), value,\n                         max_outputs=max_outputs, collections=['test'])\n"
  }
]