[
  {
    "path": ".gitignore",
    "content": "checkpoints/\nresults/\n.idea/\n*.tar.gz\n*.zip\n*.pkl\n*.pyc\n"
  },
  {
    "path": "LICENSE.md",
    "content": "## creative commons\n\n# Attribution-NonCommercial-ShareAlike 4.0 International\n\nCreative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.\n\n### Using Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.\n\n* __Considerations for licensors:__ Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. 
[More considerations for licensors](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensors).\n\n* __Considerations for the public:__ By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor’s permission is not necessary for any reason–for example, because of any applicable exception or limitation to copyright–then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. [More considerations for the public](http://wiki.creativecommons.org/Considerations_for_licensors_and_licensees#Considerations_for_licensees).\n\n## Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License\n\nBy exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (\"Public License\"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.\n\n### Section 1 – Definitions.\n\na. 
__Adapted Material__ means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.\n\nb. __Adapter's License__ means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.\n\nc. __BY-NC-SA Compatible License__ means a license listed at [creativecommons.org/compatiblelicenses](http://creativecommons.org/compatiblelicenses), approved by Creative Commons as essentially the equivalent of this Public License.\n\nd. __Copyright and Similar Rights__ means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.\n\ne. __Effective Technological Measures__ means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.\n\nf. __Exceptions and Limitations__ means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.\n\ng. __License Elements__ means the license attributes listed in the name of a Creative Commons Public License. 
The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.\n\nh. __Licensed Material__ means the artistic or literary work, database, or other material to which the Licensor applied this Public License.\n\ni. __Licensed Rights__ means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.\n\nj. __Licensor__ means the individual(s) or entity(ies) granting rights under this Public License.\n\nk. __NonCommercial__ means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.\n\nl. __Share__ means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.\n\nm. __Sui Generis Database Rights__ means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.\n\nn. __You__ means the individual or entity exercising the Licensed Rights under this Public License. __Your__ has a corresponding meaning.\n\n### Section 2 – Scope.\n\na. ___License grant.___\n\n 1. 
Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:\n\n  A. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and\n\n  B. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.\n\n 2. __Exceptions and Limitations.__ For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.\n    \n 3. __Term.__ The term of this Public License is specified in Section 6(a).\n\n 4. __Media and formats; technical modifications allowed.__ The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.\n    \n 5. __Downstream recipients.__\n\n  A. __Offer from the Licensor – Licensed Material.__ Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.\n\n  B. __Additional offer from the Licensor – Adapted Material.__ Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter’s License You apply.\n\n  C. 
__No downstream restrictions.__ You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.\n\n 6. __No endorsement.__ Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).\n    \nb. ___Other rights.___\n\n 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.\n\n 2. Patent and trademark rights are not licensed under this Public License.\n\n 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.\n    \n### Section 3 – License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the following conditions.\n\na. ___Attribution.___\n\n 1. If You Share the Licensed Material (including in modified form), You must:\n\n  A. retain the following if it is supplied by the Licensor with the Licensed Material:\n\n   i. 
identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);\n\n   ii. a copyright notice;\n\n   iii. a notice that refers to this Public License;\n\n   iv. a notice that refers to the disclaimer of warranties;\n\n   v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;\n\n  B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and\n\n  C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.\n\n 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.\n\n 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.\n\nb. ___ShareAlike.___\n\nIn addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.\n\n 1. The Adapter’s License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License.      \n\n 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.\n\n 3. 
You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.\n\n### Section 4 – Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:\n\na. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;\n\nb. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and\n\nc. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.\n\n### Section 5 – Disclaimer of Warranties and Limitation of Liability.\n\na. __Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.__\n\nb. 
__To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.__\n\nc. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.\n\n### Section 6 – Term and Termination.\n\na. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.\n\nb. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:\n\n 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or\n\n 2. upon express reinstatement by the Licensor.\n\n For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.\n\nc. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.\n\nd. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.\n\n### Section 7 – Other Terms and Conditions.\n\na. 
The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.\n\nb. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.\n\n### Section 8 – Interpretation.\n\na. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.\n\nb. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.\n\nc. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.\n\nd. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.\n\n```\nCreative Commons is not a party to its public licenses. 
Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at [creativecommons.org/policies](http://creativecommons.org/policies), Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. \n\nCreative Commons may be contacted at [creativecommons.org](http://creativecommons.org/).\n```\n"
  },
  {
    "path": "README.md",
    "content": "Semantically Multi-modal Image Synthesis\n---\n### [Project page](http://seanseattle.github.io/SMIS) / [Paper](https://arxiv.org/abs/2003.12697) / [Demo](https://www.youtube.com/watch?v=uarUonGi_ZU&t=2s)\n![gif demo](docs/imgs/smis.gif) \\\nSemantically Multi-modal Image Synthesis (CVPR 2020). \\\nZhen Zhu, Zhiliang Xu, Ansheng You, Xiang Bai\n\n### Requirements\n---\n- torch>=1.0.0\n- torchvision\n- dominate\n- dill\n- scikit-image\n- tqdm\n- opencv-python\n\n### Getting Started\n----\n#### Data Preparation\n**DeepFashion** \\\n**Note:** We provide an example of the [DeepFashion](https://drive.google.com/open?id=1ckx35-mlMv57yzv47bmOCrWTm5l2X-zD) dataset. It is slightly different from the DeepFashion data used in our paper due to the impact of COVID-19.\n\n**Cityscapes** \\\nThe Cityscapes dataset can be downloaded [here](https://www.cityscapes-dataset.com/).\n\n**ADE20K** \\\nThe ADE20K dataset can be downloaded [here](http://sceneparsing.csail.mit.edu/).\n\n#### Test/Train the models\nDownload the tar archive of pretrained models from the [Google Drive Folder](https://drive.google.com/open?id=1og_9By_xdtnEd9-xawAj4jYbXR6A9deG), save it in `checkpoints/`, and unzip it.\nThe `scripts` folder contains `deepfashion.sh`, `cityscapes.sh`, and `ade20k.sh`. Edit parameters such as `--dataroot`, then comment or uncomment the relevant lines to test or train a model.\nYou can also specify `--test_mask` for the SMIS test.\n\n### Acknowledgments\n---\nOur code is based on the popular [SPADE](https://github.com/NVlabs/SPADE).\n"
  },
  {
    "path": "data/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport importlib\nimport torch.utils.data\nfrom data.base_dataset import BaseDataset\n\n\ndef find_dataset_using_name(dataset_name):\n    # Given the option --dataset [datasetname],\n    # the file \"data/datasetname_dataset.py\"\n    # will be imported.\n    dataset_filename = \"data.\" + dataset_name + \"_dataset\"\n    datasetlib = importlib.import_module(dataset_filename)\n\n    # In that file, the class called DatasetNameDataset() will\n    # be instantiated. It has to be a subclass of BaseDataset,\n    # and the name match is case-insensitive.\n    dataset = None\n    target_dataset_name = dataset_name.replace('_', '') + 'dataset'\n    for name, cls in datasetlib.__dict__.items():\n        if name.lower() == target_dataset_name.lower() \\\n           and issubclass(cls, BaseDataset):\n            dataset = cls\n\n    if dataset is None:\n        raise ValueError(\"In %s.py, there should be a subclass of BaseDataset \"\n                         \"with class name that matches %s in lowercase.\" %\n                         (dataset_filename, target_dataset_name))\n\n    return dataset\n\n\ndef get_option_setter(dataset_name):\n    dataset_class = find_dataset_using_name(dataset_name)\n    return dataset_class.modify_commandline_options\n\n\ndef create_dataloader(opt):\n    dataset = find_dataset_using_name(opt.dataset_mode)\n    instance = dataset()\n    instance.initialize(opt)\n    print(\"dataset [%s] of size %d was created\" %\n          (type(instance).__name__, len(instance)))\n    dataloader = torch.utils.data.DataLoader(\n        instance,\n        batch_size=opt.batchSize,\n        shuffle=not opt.serial_batches,\n        num_workers=int(opt.nThreads),\n        drop_last=opt.isTrain\n    )\n    return dataloader\n"
  },
  {
    "path": "data/ade20k_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass ADE20KDataset(Pix2pixDataset):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.set_defaults(preprocess_mode='resize_and_crop')\n        if is_train:\n            parser.set_defaults(load_size=286)\n        else:\n            parser.set_defaults(load_size=256)\n        parser.set_defaults(crop_size=256)\n        parser.set_defaults(display_winsize=256)\n        parser.set_defaults(label_nc=150)\n        parser.set_defaults(contain_dontcare_label=True)\n        parser.set_defaults(cache_filelist_read=False)\n        parser.set_defaults(cache_filelist_write=False)\n        parser.set_defaults(no_instance=True)\n        return parser\n\n    def get_paths(self, opt):\n        root = opt.dataroot\n        phase = 'val' if opt.phase == 'test' else 'train'\n\n        all_images = make_dataset(root, recursive=True, read_cache=False, write_cache=False)\n        image_paths = []\n        label_paths = []\n        for p in all_images:\n            if '_%s_' % phase not in p:\n                continue\n            if p.endswith('.jpg'):\n                image_paths.append(p)\n            elif p.endswith('.png'):\n                label_paths.append(p)\n\n        instance_paths = []  # don't use instance map for ade20k\n\n        return label_paths, image_paths, instance_paths\n\n    # In ADE20K, the 'unknown' label has value 0.\n    # Shift it to the last label index to match the other datasets.\n    def postprocess(self, input_dict):\n        label = input_dict['label']\n        label = label - 1\n        label[label == -1] = self.opt.label_nc\n        # 'label - 1' rebinds the local name, so write the shifted tensor\n        # back; otherwise this method has no effect on input_dict\n        input_dict['label'] = label\n"
  },
  {
    "path": "data/base_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch.utils.data as data\nfrom PIL import Image\nimport torchvision.transforms as transforms\nimport numpy as np\nimport random\n\n\nclass BaseDataset(data.Dataset):\n    def __init__(self):\n        super(BaseDataset, self).__init__()\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        return parser\n\n    def initialize(self, opt):\n        pass\n\n\ndef get_params(opt, size):\n    w, h = size\n    new_h = h\n    new_w = w\n    if opt.preprocess_mode == 'resize_and_crop':\n        new_h = new_w = opt.load_size\n    elif opt.preprocess_mode == 'scale_width_and_crop':\n        new_w = opt.load_size\n        new_h = opt.load_size * h // w\n    elif opt.preprocess_mode == 'scale_shortside_and_crop':\n        ss, ls = min(w, h), max(w, h)  # shortside and longside\n        width_is_shorter = w == ss\n        ls = int(opt.load_size * ls / ss)\n        new_w, new_h = (ss, ls) if width_is_shorter else (ls, ss)\n\n    x = random.randint(0, np.maximum(0, new_w - opt.crop_size))\n    y = random.randint(0, np.maximum(0, new_h - opt.crop_size))\n\n    flip = random.random() > 0.5\n    return {'crop_pos': (x, y), 'flip': flip}\n\n\ndef get_transform(opt, params, method=Image.BICUBIC, normalize=True, toTensor=True):\n    transform_list = []\n    if 'resize' in opt.preprocess_mode:\n        osize = [opt.load_size, opt.load_size]\n        transform_list.append(transforms.Resize(osize, interpolation=method))\n    elif 'scale_width' in opt.preprocess_mode:\n        transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.load_size, method)))\n    elif 'scale_shortside' in opt.preprocess_mode:\n        transform_list.append(transforms.Lambda(lambda img: __scale_shortside(img, opt.load_size, method)))\n\n    if 'crop' in 
opt.preprocess_mode:\n        transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.crop_size)))\n\n    if opt.preprocess_mode == 'none':\n        base = 32\n        transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base, method)))\n\n    if opt.preprocess_mode == 'fixed':\n        w = opt.crop_size\n        h = round(opt.crop_size / opt.aspect_ratio)\n        transform_list.append(transforms.Lambda(lambda img: __resize(img, w, h, method)))\n\n    if opt.isTrain and not opt.no_flip:\n        transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip'])))\n\n    if toTensor:\n        transform_list += [transforms.ToTensor()]\n\n    if normalize:\n        transform_list += [transforms.Normalize((0.5, 0.5, 0.5),\n                                                (0.5, 0.5, 0.5))]\n    return transforms.Compose(transform_list)\n\n\ndef normalize():\n    return transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n\n\ndef __resize(img, w, h, method=Image.BICUBIC):\n    return img.resize((w, h), method)\n\n\ndef __make_power_2(img, base, method=Image.BICUBIC):\n    ow, oh = img.size\n    h = int(round(oh / base) * base)\n    w = int(round(ow / base) * base)\n    if (h == oh) and (w == ow):\n        return img\n    return img.resize((w, h), method)\n\n\ndef __scale_width(img, target_width, method=Image.BICUBIC):\n    ow, oh = img.size\n    if (ow == target_width):\n        return img\n    w = target_width\n    h = int(target_width * oh / ow)\n    return img.resize((w, h), method)\n\n\ndef __scale_shortside(img, target_width, method=Image.BICUBIC):\n    ow, oh = img.size\n    ss, ls = min(ow, oh), max(ow, oh)  # shortside and longside\n    width_is_shorter = ow == ss\n    if (ss == target_width):\n        return img\n    ls = int(target_width * ls / ss)\n    nw, nh = (ss, ls) if width_is_shorter else (ls, ss)\n    return img.resize((nw, nh), method)\n\n\ndef __crop(img, pos, size):\n    ow, oh = 
img.size\n    x1, y1 = pos\n    tw = th = size\n    return img.crop((x1, y1, x1 + tw, y1 + th))\n\n\ndef __flip(img, flip):\n    if flip:\n        return img.transpose(Image.FLIP_LEFT_RIGHT)\n    return img\n"
  },
  {
    "path": "data/cityscapes_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os.path\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass CityscapesDataset(Pix2pixDataset):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.set_defaults(preprocess_mode='fixed')\n        parser.set_defaults(load_size=512)\n        parser.set_defaults(crop_size=512)\n        parser.set_defaults(display_winsize=512)\n        parser.set_defaults(label_nc=35)\n        parser.set_defaults(aspect_ratio=2.0)\n        opt, _ = parser.parse_known_args()\n        if hasattr(opt, 'num_upsampling_layers'):\n            parser.set_defaults(num_upsampling_layers='more')\n        return parser\n\n    def get_paths(self, opt):\n        root = opt.dataroot\n        phase = 'val' if opt.phase == 'test' else 'train'\n\n        label_dir = os.path.join(root, 'gtFine', phase)\n        label_paths_all = make_dataset(label_dir, recursive=True)\n        label_paths = [p for p in label_paths_all if p.endswith('_labelIds.png')]\n\n        image_dir = os.path.join(root, 'leftImg8bit', phase)\n        image_paths = make_dataset(image_dir, recursive=True)\n\n        if not opt.no_instance:\n            instance_paths = [p for p in label_paths_all if p.endswith('_instanceIds.png')]\n        else:\n            instance_paths = []\n\n        return label_paths, image_paths, instance_paths\n\n    def paths_match(self, path1, path2):\n        name1 = os.path.basename(path1)\n        name2 = os.path.basename(path2)\n        # compare the first 3 components, [city]_[id1]_[id2]\n        return '_'.join(name1.split('_')[:3]) == \\\n            '_'.join(name2.split('_')[:3])\n"
  },
  {
    "path": "data/coco_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os.path\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass CocoDataset(Pix2pixDataset):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.add_argument('--coco_no_portraits', action='store_true')\n        parser.set_defaults(preprocess_mode='resize_and_crop')\n        if is_train:\n            parser.set_defaults(load_size=286)\n        else:\n            parser.set_defaults(load_size=256)\n        parser.set_defaults(crop_size=256)\n        parser.set_defaults(display_winsize=256)\n        parser.set_defaults(label_nc=182)\n        parser.set_defaults(contain_dontcare_label=True)\n        parser.set_defaults(cache_filelist_read=True)\n        parser.set_defaults(cache_filelist_write=True)\n        return parser\n\n    def get_paths(self, opt):\n        root = opt.dataroot\n        phase = 'val' if opt.phase == 'test' else opt.phase\n\n        label_dir = os.path.join(root, '%s_label' % phase)\n        label_paths = make_dataset(label_dir, recursive=False, read_cache=True)\n\n        if not opt.coco_no_portraits and opt.isTrain:\n            label_portrait_dir = os.path.join(root, '%s_label_portrait' % phase)\n            if os.path.isdir(label_portrait_dir):\n                label_portrait_paths = make_dataset(label_portrait_dir, recursive=False, read_cache=True)\n                label_paths += label_portrait_paths\n\n        image_dir = os.path.join(root, '%s_img' % phase)\n        image_paths = make_dataset(image_dir, recursive=False, read_cache=True)\n\n        if not opt.coco_no_portraits and opt.isTrain:\n            image_portrait_dir = os.path.join(root, '%s_img_portrait' % phase)\n 
           if os.path.isdir(image_portrait_dir):\n                image_portrait_paths = make_dataset(image_portrait_dir, recursive=False, read_cache=True)\n                image_paths += image_portrait_paths\n\n        if not opt.no_instance:\n            instance_dir = os.path.join(root, '%s_inst' % phase)\n            instance_paths = make_dataset(instance_dir, recursive=False, read_cache=True)\n\n            if not opt.coco_no_portraits and opt.isTrain:\n                instance_portrait_dir = os.path.join(root, '%s_inst_portrait' % phase)\n                if os.path.isdir(instance_portrait_dir):\n                    instance_portrait_paths = make_dataset(instance_portrait_dir, recursive=False, read_cache=True)\n                    instance_paths += instance_portrait_paths\n\n        else:\n            instance_paths = []\n\n        return label_paths, image_paths, instance_paths\n"
  },
  {
    "path": "data/custom_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass CustomDataset(Pix2pixDataset):\n    \"\"\" Dataset that loads images from directories\n        Use option --label_dir, --image_dir, --instance_dir to specify the directories.\n        The images in the directories are sorted in alphabetical order and paired in order.\n    \"\"\"\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.set_defaults(preprocess_mode='resize_and_crop')\n        load_size = 286 if is_train else 256\n        parser.set_defaults(load_size=load_size)\n        parser.set_defaults(crop_size=256)\n        parser.set_defaults(display_winsize=256)\n        parser.set_defaults(label_nc=13)\n        parser.set_defaults(contain_dontcare_label=False)\n\n        parser.add_argument('--label_dir', type=str, required=True,\n                            help='path to the directory that contains label images')\n        parser.add_argument('--image_dir', type=str, required=True,\n                            help='path to the directory that contains photo images')\n        parser.add_argument('--instance_dir', type=str, default='',\n                            help='path to the directory that contains instance maps. 
Leave black if not exists')\n        return parser\n\n    def get_paths(self, opt):\n        label_dir = opt.label_dir\n        label_paths = make_dataset(label_dir, recursive=False, read_cache=True)\n\n        image_dir = opt.image_dir\n        image_paths = make_dataset(image_dir, recursive=False, read_cache=True)\n\n        if len(opt.instance_dir) > 0:\n            instance_dir = opt.instance_dir\n            instance_paths = make_dataset(instance_dir, recursive=False, read_cache=True)\n        else:\n            instance_paths = []\n\n        assert len(label_paths) == len(image_paths), \"The #images in %s and %s do not match. Is there something wrong?\"\n\n        return label_paths, image_paths, instance_paths\n"
  },
  {
    "path": "data/deepfashion_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os.path\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass DeepfashionDataset(Pix2pixDataset):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.set_defaults(preprocess_mode='resize_and_crop')\n        parser.set_defaults(load_size=256)\n        parser.set_defaults(crop_size=256)\n        parser.set_defaults(display_winsize=256)\n        parser.set_defaults(label_nc=8)\n        opt, _ = parser.parse_known_args()\n        return parser\n\n    def get_paths(self, opt):\n        root = opt.dataroot\n\n        phase = 'test' if opt.phase == 'test' else 'train'\n\n        label_dir = os.path.join(root, 'cihp_' + phase + '_mask')\n        label_paths_all = make_dataset(label_dir, recursive=False)\n        label_paths = [p for p in label_paths_all if p.endswith('.png')]\n\n        image_dir = os.path.join(root, phase)\n        image_paths = make_dataset(image_dir, recursive=False)\n        instance_paths = []\n        return label_paths, image_paths, instance_paths\n"
  },
  {
    "path": "data/facades_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os.path\nfrom data.pix2pix_dataset import Pix2pixDataset\nfrom data.image_folder import make_dataset\n\n\nclass FacadesDataset(Pix2pixDataset):\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser = Pix2pixDataset.modify_commandline_options(parser, is_train)\n        parser.set_defaults(dataroot='./dataset/facades/')\n        parser.set_defaults(preprocess_mode='resize_and_crop')\n        load_size = 286 if is_train else 256\n        parser.set_defaults(load_size=load_size)\n        parser.set_defaults(crop_size=256)\n        parser.set_defaults(display_winsize=256)\n        parser.set_defaults(label_nc=13)\n        parser.set_defaults(contain_dontcare_label=False)\n        parser.set_defaults(no_instance=True)\n        return parser\n\n    def get_paths(self, opt):\n        root = opt.dataroot\n        phase = 'val' if opt.phase == 'test' else opt.phase\n\n        label_dir = os.path.join(root, '%s_label' % phase)\n        label_paths = make_dataset(label_dir, recursive=False, read_cache=True)\n\n        image_dir = os.path.join(root, '%s_img' % phase)\n        image_paths = make_dataset(image_dir, recursive=False, read_cache=True)\n\n        instance_paths = []\n\n        return label_paths, image_paths, instance_paths\n"
  },
  {
    "path": "data/image_folder.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\n###############################################################################\n# Code from\n# https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py\n# Modified the original code so that it also loads images from the current\n# directory as well as the subdirectories\n###############################################################################\nimport torch.utils.data as data\nfrom PIL import Image\nimport os\n\nIMG_EXTENSIONS = [\n    '.jpg', '.JPG', '.jpeg', '.JPEG',\n    '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tiff', '.webp'\n]\n\n\ndef is_image_file(filename):\n    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)\n\n\ndef make_dataset_rec(dir, images):\n    assert os.path.isdir(dir), '%s is not a valid directory' % dir\n\n    for root, dnames, fnames in sorted(os.walk(dir, followlinks=True)):\n        for fname in fnames:\n            if is_image_file(fname):\n                path = os.path.join(root, fname)\n                images.append(path)\n\n\ndef make_dataset(dir, recursive=False, read_cache=False, write_cache=False):\n    images = []\n\n    if read_cache:\n        possible_filelist = os.path.join(dir, 'files.list')\n        if os.path.isfile(possible_filelist):\n            with open(possible_filelist, 'r') as f:\n                images = f.read().splitlines()\n                return images\n\n    if recursive:\n        make_dataset_rec(dir, images)\n    else:\n        assert os.path.isdir(dir) or os.path.islink(dir), '%s is not a valid directory' % dir\n\n        for root, dnames, fnames in sorted(os.walk(dir)):\n            for fname in fnames:\n                if is_image_file(fname):\n                    path = os.path.join(root, fname)\n                    images.append(path)\n\n    if 
write_cache:\n        filelist_cache = os.path.join(dir, 'files.list')\n        with open(filelist_cache, 'w') as f:\n            for path in images:\n                f.write(\"%s\\n\" % path)\n            print('wrote filelist cache at %s' % filelist_cache)\n\n    return images\n\n\ndef default_loader(path):\n    return Image.open(path).convert('RGB')\n\n\nclass ImageFolder(data.Dataset):\n\n    def __init__(self, root, transform=None, return_paths=False,\n                 loader=default_loader):\n        imgs = make_dataset(root)\n        if len(imgs) == 0:\n            raise(RuntimeError(\"Found 0 images in: \" + root + \"\\n\"\n                               \"Supported image extensions are: \" +\n                               \",\".join(IMG_EXTENSIONS)))\n\n        self.root = root\n        self.imgs = imgs\n        self.transform = transform\n        self.return_paths = return_paths\n        self.loader = loader\n\n    def __getitem__(self, index):\n        path = self.imgs[index]\n        img = self.loader(path)\n        if self.transform is not None:\n            img = self.transform(img)\n        if self.return_paths:\n            return img, path\n        else:\n            return img\n\n    def __len__(self):\n        return len(self.imgs)\n"
  },
  {
    "path": "data/pix2pix_dataset.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom data.base_dataset import BaseDataset, get_params, get_transform\nfrom PIL import Image\nimport util.util as util\nimport os\nimport cv2 as cv\nimport numpy as np\nclass Pix2pixDataset(BaseDataset):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.add_argument('--no_pairing_check', action='store_true',\n                            help='If specified, skip sanity check of correct label-image file pairing')\n        return parser\n\n    def initialize(self, opt):\n        self.opt = opt\n\n        label_paths, image_paths, instance_paths = self.get_paths(opt)\n        util.natural_sort(label_paths)\n        util.natural_sort(image_paths)\n        if not opt.no_instance:\n            util.natural_sort(instance_paths)\n\n        label_paths = label_paths[:opt.max_dataset_size]\n        image_paths = image_paths[:opt.max_dataset_size]\n        instance_paths = instance_paths[:opt.max_dataset_size]\n        self.label_paths = label_paths\n        self.image_paths = image_paths\n        self.instance_paths = instance_paths\n\n        size = len(self.label_paths)\n        self.dataset_size = size\n        print(self.dataset_size)\n\n    def get_paths(self, opt):\n        label_paths = []\n        image_paths = []\n        instance_paths = []\n        assert False, \"A subclass of Pix2pixDataset must override self.get_paths(self, opt)\"\n        return label_paths, image_paths, instance_paths\n\n    def paths_match(self, path1, path2):\n        filename1_without_ext = os.path.splitext(os.path.basename(path1))[0]\n        filename2_without_ext = os.path.splitext(os.path.basename(path2))[0]\n        return filename1_without_ext == filename2_without_ext\n\n    def __getitem__(self, index):\n        # Label Image\n        label_path = 
self.label_paths[index]\n        label = Image.open(label_path)\n        # print(label_path)\n        params = get_params(self.opt, label.size)\n        transform_label = get_transform(self.opt, params, method=Image.NEAREST, normalize=False)\n        label_tensor = transform_label(label) * 255.0\n        label_tensor[label_tensor == 255] = self.opt.label_nc  # 'unknown' is opt.label_nc\n\n        image_path = self.image_paths[index]\n        assert self.paths_match(label_path, image_path), \\\n            \"The label_path %s and image_path %s don't match.\" % \\\n            (label_path, image_path)\n        image = Image.open(image_path)\n        image = image.convert('RGB')\n\n        transform_image = get_transform(self.opt, params)\n        image_tensor = transform_image(image)\n\n        # if using instance maps\n        if self.opt.no_instance:\n            instance_tensor = 0\n        else:\n            instance_path = self.instance_paths[index]\n            instance = Image.open(instance_path)\n            if instance.mode == 'L':\n                instance_tensor = transform_label(instance) * 255\n                instance_tensor = instance_tensor.long()\n            else:\n                instance_tensor = transform_label(instance)\n        input_dict = {'label': label_tensor,\n                      'instance': instance_tensor,\n                      'image': image_tensor,\n                      'path': image_path,\n                      }\n        self.postprocess(input_dict)\n\n        return input_dict\n\n    def postprocess(self, input_dict):\n        return input_dict\n\n    def __len__(self):\n        return self.dataset_size\n"
  },
  {
    "path": "docs/README.md",
    "content": "\n"
  },
  {
    "path": "docs/b5m.js",
    "content": "!function(a){var b,c,d=\"20140605180629\",e=\"http://cdn.bang5mai.com/upload/plugin/assets/main\",f=\"http://b5tcdn.bang5mai.com\",g=\"http://un.114dianxin.com\",h=\"http://p.b5m.com\",i=\"http://ucenter.b5m.com\",j=\"http://c.b5m.com\",k={module_url:e+\"/js/b5m.{module}.js?v=\"+d,getModuleUrl:function(a){return this.module_url.replace(/\\{module\\}/g,a)},paths:{jquery:{path:e+\"/js/jquery-1.7.2.min.js\",_export:function(){return a.$5m?a.$5m:(a.$5m=a.jQuery.noConflict(!0),a.$5m)}},\"jquery-highcharts\":{path:e+\"/js/jquery-highcharts.js\",_export:function(){return a.$5m=a.jQuery.noConflict(!0),a.$5m}}}};!function(d,e){function f(a,b){return u.call(a,b)}function g(a){return\"[object Array]\"===w.call(a)}function i(a,b){var c=document.getElementsByTagName(\"head\")[0],d=document.createElement(\"script\");d.type=\"text/javascript\",d.async=!0,0!==a.indexOf(\"http://\")&&(a=h+a),d.src=a,d.onload=d.onreadystatechange=function(){d.readyState&&\"loaded\"!==d.readyState&&\"complete\"!==d.readyState||(d.onload=d.onreadystatechange=null,b&&b(),d.parentNode.removeChild(d))},c.appendChild(d)}function j(a){for(var b=0,c=a.length;c>b;b++)if(!f(x,a[b]))return!1;return!0}function l(a){if(a){\"string\"==typeof a&&(a=[a]);for(var b=0,c=a.length;c>b;b++)f(z,a[b])||f(x,a[b])||f(B,a[b])||(z[a[b]]=!0,y.push(a[b]),setTimeout(function(){p()},1))}}function m(b){for(var c=b.dependencies||[],d=[],e=0,f=c.length;f>e;e++)d.push(x[c[e]]);return n(b.name,b.fn.apply(a,d)),setTimeout(function(){s()},1),!0}function n(a,b){x[a]=b,q(),s()}function o(a){if(a){var b=a.name;f(B,b)||(B[b]=!0,A.push(a))}}function p(a){if(!D){D=!0,\"undefined\"!=typeof a&&a||(a=y);var b=a.shift();if(!b)return void(D=!1);var c,d=k.paths[b]||k.getModuleUrl(b);\"object\"==typeof d&&(c=d._export,d=d.path),d?i(d,function(){\"function\"==typeof c&&n(b,c())}):e(\"module[\"+b+\"] wait to export\"),D=!1,p(a)}}function q(a){\"undefined\"!=typeof a&&a||(a=A);for(var 
b,c=-1;++c<a.length;)b=a[c],b&&(f(x,b.name)?a[c]=null:j(b.dependencies)&&(m(b),a[c]=null))}function r(b){for(var c=b.dependencies||[],d=[],e=0,f=c.length;f>e;e++)d.push(x[c[e]]);return setTimeout(function(){b.fn.apply(a,d)},0),!0}function s(a){if(\"undefined\"!=typeof a&&a||(a=C),0!==a.length)for(var b,c=-1;++c<a.length;)b=a[c],b&&j(b.dependencies)&&(r(b),a[c]=null)}function t(a){C.push(a)}var u=Object.prototype.hasOwnProperty,v=Object.prototype,w=v.toString,x={},y=[],z={},A=[],B={},C=[],D=!1;!function(){\"undefined\"!=typeof jQuery&&jQuery().jquery>\"1.4.3\"&&\"undefined\"!=typeof jQuery.ajax&&(x.jquery=d.jQuery||d.$,d.$5m=x.jquery)}(),b=function(a,b,c,d){if(!f(x,a)||d&&d.force){if(\"function\"==typeof b||g(b)&&0===b.length)return void n(a,b());var e={name:a,dependencies:b,fn:c},h=e.dependencies;return j(h)?void m(e):(l(h),void o(e))}},c=function(a,b){if(0!==arguments.length){if(\"function\"==typeof a&&1===arguments.length)return void b();var c={dependencies:a,fn:b},d=c.dependencies;return j(d)?void r(c):(l(d),void t(c))}}}(a,function(a){window.console&&console.log(a)});var l=a[\"b5mshoppingassist\"+d]={};!function(a){a.define=b,a.require=c,a.build_no=d,a.LOCATION=window.location||document.location,a.assets_base_url=e,a._=a.browser={checkBoxModel:function(){if(\"undefined\"!=typeof a.boxModel)return a.boxModel;{var b=document.createElement(\"div\");document.body}return b.style.cssText=\"visibility:hidden;border:0;width:1px;height:0;position:static;padding:0px;margin:0px;padding-left:1px;\",document.body.appendChild(b),a.boxModel=this.boxModel=2===b.offsetWidth,document.body.removeChild(b),b=null,a.boxModel},isIE6:function(){var a=window.navigator.userAgent.toLowerCase(),b=/(msie) ([\\w.]+)/.exec(a);return null!=b&&b[2]<7}(),isIE:function(){var a=window.navigator.userAgent.toLowerCase(),b=/(msie) ([\\w.]+)/.exec(a);return null!=b}(),loadCss:function(){if(this.cssLoaded!==!0){var 
b=this.checkBoxModel(),c=(b?\"b5m-plugin.css\":\"b5m-plugin.qks.css\")+\"?v=\"+a.build_no,e=document.createElement(\"link\");e.rel=\"stylesheet\",e.href=a.assets_base_url+\"/css/\"+(d?\"default\":\"v5\")+\"/\"+c,e.type=\"text/css\",document.getElementsByTagName(\"head\")[0].appendChild(e),this.cssLoaded=!0}},getDomain:function(b){var c=b||a.LOCATION.href;try{c=c.match(/([-\\w]+\\.\\b(?:net\\.cn|com\\.hk|com\\.cn|com|cn|net|org|cc|tv$|hk)\\b)/)[1]}catch(d){c=a.LOCATION.hostname}return c}},a.domain=a._.getDomain()}(l),function(a,c){var l=[\"111.com.cn\",\"12dian.com\",\"136126.com\",\"136buy.com\",\"1626buy.com\",\"1mall.com\",\"20aj.com\",\"228.com.cn\",\"24dq.com\",\"360buy.com\",\"360hqb.com\",\"360kxr.com\",\"360mart.com\",\"360zhai.com\",\"365.com\",\"3guo.cn\",\"4006009207.com\",\"513523.com\",\"51buy.com\",\"yixun.com\",\"51fanli.com\",\"51youpin.com\",\"525j.com.cn\",\"5366.com\",\"55bbs.com\",\"55tuan.com\",\"5lux.com\",\"5taoe.com\",\"7cv.com\",\"838buy.com\",\"91pretty.com\",\"99buy.com.cn\",\"99read.com\",\"99vk.com\",\"afffff.com\",\"ai356.com\",\"aimer.com.cn\",\"amazon.cn\",\"aoliz.com\",\"atopi.cn\",\"bagtree.com\",\"baidu.com\",\"bairong.com\",\"banggo.com\",\"bearbuy.com.cn\",\"behui.com\",\"beifabook.com\",\"beyond.cn\",\"binggo.com\",\"bookall.cn\",\"bookschina.com\",\"bookuu.com\",\"burberry.com\",\"buy007.com\",\"buyjk.com\",\"caomeipai.com\",\"carinalaukl.com\",\"cdg2006.com\",\"chicbaza.com\",\"chictalk.com.cn\",\"chinadrtv.com\",\"coo8.com\",\"crucco.com\",\"d1car.com\",\"d1.com.cn\",\"dahaodian.com\",\"dahuozhan.com\",\"damai.cn\",\"dangdang.com\",\"daoyao.com\",\"daphne.cn\",\"dazhongdianqi.com.cn\",\"dhc.net.cn\",\"dianping.com\",\"didamall.com\",\"diyimeili.com\",\"do93.com\",\"doin.cn\",\"domoho.com\",\"dooland.com\",\"douban.com\",\"duitang.com\",\"duoduofarm.com\",\"dwgou.com\",\"easy361.com\",\"efeihu.com\",\"egu365.com\",\"ehaier.com\",\"eiboa.com\",\"ej100.cn\",\"enet.com.cn\",\"epetbar.com\",\"epinwei.com\",\"epkmall.com\",\"etam.co
m.cn\",\"etao.com\",\"fanrry.cn\",\"faxianla.com\",\"fc900.com\",\"fclub.cn\",\"fglady.cn\",\"foryouforme.com\",\"gaojie.com\",\"gap.cn\",\"ggooa.com\",\"giftmart.com.cn\",\"giordano.com\",\"go2am.com\",\"gome.com.cn\",\"goodful.com\",\"gotoread.com\",\"goujiuwang.com\",\"gqt168.com\",\"guang.com\",\"guangjiela.com\",\"guopi.com\",\"hany.cn\",\"happigo.com\",\"herbuy.com.cn\",\"hitao.com\",\"hmeili.com\",\"hodo.cn\",\"homecl.com\",\"homevv.com\",\"htjz.com\",\"huilemai.com\",\"huimai365.com\",\"huolibaobao.com\",\"huolida.com\",\"hyj.com\",\"iebaba.com\",\"ihush.com\",\"immyhome.com\",\"imobile.com.cn\",\"imoda.com\",\"it168.com\",\"itruelife.com\",\"j1923.com\",\"jacketman.cn\",\"jd.com\",\"jddiy.com\",\"jianianle.com\",\"jianke.com\",\"jiapin.com\",\"jiuhang.cn\",\"jiuxian.com\",\"jockey.cn\",\"joyeth.com\",\"jukangda\",\"jumei.com\",\"jxdyf.com\",\"k121.com\",\"kadang.com\",\"keede.com\",\"kela.cn\",\"kimiss.com\",\"kongfz.cn\",\"kouclo.com\",\"ladypk.com\",\"lafaso.com\",\"lamiu.com\",\"laredoute.cn\",\"lashou.com\",\"learbetty.com\",\"lebiao.net\",\"lecake.com\",\"ledaojia.com\",\"leftlady.com\",\"leho.com\",\"letao.com\",\"leyou.com.cn\",\"lifevc.com\",\"lifu520.com\",\"lijiababy.com.cn\",\"likeface.com\",\"lingshi.com\",\"lining.com\",\"loobag.com\",\"lookgee.com\",\"lovo.cn\",\"lqdjf.com\",\"luce.com.cn\",\"lucemall.com.cn\",\"luckcart.com\",\"luckigo.com\",\"lusen.com\",\"lvhezi.com\",\"m18.com\",\"m360.com.cn\",\"m6go.com\",\"maiakaweh.com\",\"maichawang.com\",\"maidq.com\",\"maiduo.com\",\"mailuntai.cn\",\"maiwazi.com\",\"maiweila.com\",\"maoer360.com\",\"mbaobao.com\",\"mchepin.com\",\"meici.com\",\"meilishuo.com\",\"meiribuy.com\",\"meituan.com\",\"meiyi.cn\",\"menglu.com\",\"mfplaza.com\",\"misslele.com\",\"miumiu365.com\",\"mixr.cn\",\"mmloo.com\",\"mncake.com\",\"mogujie.com\",\"mojing8.com\",\"mrzero.cn\",\"mutnam.com\",\"muyingzhijia.com\",\"mycoo.cn\",\"myrainbow.cn\",\"myt.hk\",\"nala.com.cn\",\"nanjiren.com.cn\",\"necool.com\",\"new7.com\",\"new
egg.com.cn\",\"no5.com.cn\",\"nop.cn\",\"nuanka.cn\",\"nuomi.com\",\"ochirly.com\",\"ogage.cn\",\"okbuy.com\",\"okgolf.cn\",\"okjee.com\",\"onlylady.com\",\"onlyts.cn\",\"orange3c.com\",\"ouku.com\",\"oyeah.com.cn\",\"paipai.com\",\"paixie.net\",\"pb89.com\",\"pcbaby.com.cn\",\"pchome.net\",\"pchouse.com.cn\",\"pclady.com.cn\",\"pconline.com.cn\",\"pcpop.com\",\"pett.cn\",\"popyj.com\",\"pufung.com\",\"pupai.cn\",\"qinqinbaby.com\",\"qiwang360.com\",\"qplmall.com\",\"qq.com\",\"quwan.com\",\"qxian.com\",\"raccfawa.com\",\"redbaby.com.cn\",\"reneeze.com\",\"ruci.cn\",\"sasa.com\",\"s.cn\",\"sephora.cn\",\"shopin.net\",\"skinstorechina.com\",\"so.com\",\"soso_bak.com\",\"strawberrynet.com\",\"suning.com\",\"t0001.com\",\"t3.com.cn\",\"tangrencun.cn\",\"tankl.com\",\"tao3c.com\",\"taobao.com\",\"taofanw.com\",\"taoxie.com\",\"tee7.com\",\"tiantian.com\",\"tmall.com\",\"togj.com\",\"tokyopretty.com\",\"tonlion.com\",\"topnewonline.cn\",\"trura.com\",\"tuan800.com\",\"tymall.com.cn\",\"u8518.com\",\"uiyi.cn\",\"ukool.com.cn\",\"umanto.com\",\"uniqlo.cn\",\"urcosme.com\",\"uya100.com\",\"uzgood.com\",\"v100.com.cn\",\"vancl.com\",\"vcotton.com\",\"vegou.com\",\"vico.cn\",\"vivian.cn\",\"vjia.com\",\"vzi800.cn\",\"walanwalan.com\",\"wangpiao.com\",\"wbiao.cn\",\"weibo.com\",\"weimituan.com\",\"whendream.com\",\"wine9.com\",\"winekee.com\",\"winenice.com\",\"winxuan.com\",\"wl.cn\",\"womai.com\",\"wowsai.com\",\"woxihuan.com\",\"wumeiwang.com\",\"x.com.cn\",\"xiaozhuren.com\",\"xijie.com\",\"xiu.com\",\"yaahe.cn\",\"yanyue.cn\",\"yaofang.cn\",\"yesky.com\",\"yesmywine.com\",\"yidianda.com\",\"yihaodian.com\",\"yhd.com\",\"yintai.com\",\"yizhedian.com\",\"yohobuy.com\",\"yoka.com\",\"yooane.com\",\"yougou.com\",\"ywmei.com\",\"zaihuni.com\",\"zbird.com\",\"zgcbb.com\",\"zhimei.com\",\"zhuangai.com\",\"zm7.cn\",\"zocai.com\",\"zol.com.cn\",\"zol.com\",\"zuomee.com\",\"zwzhome.com\",\"lefeng.com\",\"958shop.com\",\"china-pub.com\",\"wanggou.com\",\"vip.com\",\"baoyeah.com\",\"
monteamor.com\",\"qjherb.com\",\"moonbasa.com\",\"ing2ing.com\",\"womai.com\",\"vmall.com\",\"1688.com\",\"etao.com\",\"milier.com\",\"xifuquan.com\",\"sfbest.com\",\"j1.com\",\"liebo.com\",\"esprit.cn\",\"metromall.com.cn\",\"pba.cn\",\"shangpin.com\",\"handuyishe.com\",\"secoo.com\",\"wangjiu.com\",\"masamaso.com\",\"vivian.cn\",\"linkmasa.com\",\"camel.com.cn\",\"naruko.com.cn\",\"sportica.cn\",\"zhenpin.com\",\"xiaomi.com\",\"mi.com\",\"letv.com\",\"bosideng.cn\",\"coolpad.cn\",\"handu.com\",\"ebay.com\",\"staples.cn\",\"feiniu.com\",\"okhqb.com\",\"meilele.com\"],m=[\"ctrip.com\",\"ly.com\",\"lvmama.com\",\"tuniu.com\",\"qunar.com\",\"uzai.com\",\"mangocity.com\"],n=[\"taobao.com\",\"meituan.com\",\"jumei.com\",\"dianping.com\",\"gaopeng.com\",\"58.com\",\"lashou.com\",\"pztuan.com\",\"liketuan.com\",\"nuomi.com\"],o=[\"ctrip.com\",\"ly.com\",\"lvmama.com\",\"qunar.com\",\"meituan.com\",\"jumei.com\",\"lashou.com\",\"nuomi.com\",\"dianping.com\",\"gaopeng.qq.com\",\"gaopeng.com\",\"elong.com\",\"mangocity.com\",\"kuxun.cn\",\"xiu.com\",\"zhuna.cn\",\"pztuan.com\",\"liketuan.com\",\"hao123.com\",\"2345.com\",\"sohu.com\",\"sogou.com\",\"duba.com\",\"qq.com\",\"rising.cn\"],p=[\"taobao.com\",\"sogou.com\",\"2345.com\",\"hao123.com\",\"qzone.qq\",\"autohome\",\"xxhh\",\"letv\",\"jide123\",\"pcauto\",\"auto.sohu\",\"pps\",\"bitauto\",\"duba.com\",\"rising.cn\",\"qq.com\",\"baidu.com\",\"youku.com\",\"tudou.com\",\"iqiyi.com\",\"sohu.com\"],q=document.getElementById(\"b5mmain\");q=q.src&&q.src.substring(q.src.indexOf(\"?\")+1);var r=q.split(\"&\");q={};for(var s,t=0,u=r.length;u>t;t++)s=r[t].split(\"=\"),q[s[0]]=s[1]||\"\";b(\"server\",function(){return{server:h,cpsServer:j,ucenterserver:i,assets_base_url:e,assets_union_url:g,domain:a._.getDomain(),uuid:q.uuid,version:q.version,source:q.source,hostname:a.LOCATION.hostname}});for(var 
v=[\"maxthon3\",\"firefox\",\"liebao\",\"360se\",\"360jisu\",\"chrome\"],w=v.join(\",\").indexOf(q.source)>-1?!0:!1,x=!(\"11000\"!=q.source&&\"50000\"!=q.source),y=a.isMall=!!l.join(\",\").match(new RegExp(\"\\\\b\"+a.domain+\"\\\\b\")),z=a.isTour=!!m.join(\",\").match(new RegExp(\"\\\\b\"+a.domain+\"\\\\b\")),A=a.isSl=!(x||!o.join(\",\").match(new RegExp(\"\\\\b\"+a.domain+\"\\\\b\"))||a.browser.isIE&&q.ie32!=c&&!(a.browser.isIE&&parseInt(q.ie32,10)>0)),B=a.isTuan=!!n.join(\",\").match(new RegExp(\"\\\\b\"+a.domain+\"\\\\b\")),C=!1,t=0;t<p.length;t++)if(a.domain.indexOf(p[t])>=0){C=!0;break}var D=a.isNav=!(x||!C||\"1\"===q.nonav);if(k.paths.all={path:e+\"/js/b5m.plugin.all.js?v=\"+d,_export:function(){return a}},k.paths.tg={path:f+\"/js/flag.js?v=\"+Math.floor((new Date).getTime()/1e4),_export:function(){return window.__5_tg_}},k.paths.sejieall={path:e+\"/js/b5m.plugin.sejie.all.js?v=\"+d,_export:function(){return a}},k.paths.rule={path:e+\"/js/plugin/rule/sites/\"+a.domain+\"?v=\"+d,_export:function(){return a.rule}},k.paths.env={path:\"/extension.do?method=js&buildno=\"+d+\"&url=\"+encodeURIComponent(a.LOCATION.href)+\"&acd=\"+(q.acd||\"\")+\"&reason=\"+(q.reason||\"\")+\"&source=\"+q.source+\"&uuid=\"+q.uuid+\"&domain=\"+a.domain+\"&version=\"+q.version+\"&site=\"+a.domain+(a.browser.isIE?\"&t=\"+(new Date).getTime():\"\"),_export:function(){return a.cookie=Function(\"return \"+(a.env.cookie||\"{}\"))(),a.env}},k.paths.nav={path:e+\"/js/b5m.nav.js?v=\"+d,_export:function(){return a.nav}},a.require([\"server\"],function(b){\"b5m.com\"==b.domain&&a.require([\"env\"],function(){})}),w||x||!A&&!y||6==q.reason&&\"jd.com\"!=a.domain||a.require([\"sl\"],function(a){a.run()}),a.require([\"tg\"],function(b){if(!b||b.embed){a.require([\"adv\",\"server\"],function(a,b){a.server=b.server,a.run()});var 
c=\"15003,15004,15005,15006,15008,15009,15012,15013,15014,15015,15018,15020,15021,15022,15023,15025,15026,15027,15028,15029,15030,15031,15032,15033,15035,15036,15039,15041,20000,20001\";D&&!w&&(0!==q.source.indexOf(\"15\")||c.indexOf(q.source)>=0)&&a.require([\"jquery-highcharts\",\"nav\",\"server\",\"common\"],function(b,c,d,f){b.extend(c,{server:d,common:f,uuid:q.uuid,acd:q.acd,source:q.source,domain:a.domain,host:a.LOCATION.host,assets_base_url:e,href:a.LOCATION.host+a.LOCATION.pathname,reason:q.reason}),setTimeout(function(){c.init()},30)})}}),y||z||B){a._.loadCss();var E=(new Date).getTime(),F=[\"jquery-highcharts\",\"all\",\"env\"];d||(F=y||z||B?[\"jquery-highcharts\",\"all\",\"env\",\"rule\"]:[\"jquery-highcharts\",\"all\",\"env\"],window.S=a),a.require(F,function(a,b,c){b.console.debug(\"load time --------------\"+((new Date).getTime()-E)+\"ms\"),b.util.extend(b.constants,q,c,{ucenterserver:i,forwardBase:h+\"/\",assets_base_url:e+\"/\"}),b.filterChain=function(){this.index=-1,this.chain=arguments.length>0?Array.prototype.slice.call(arguments,0):[],\"slice\"in arguments[0]&&(this.chain=arguments[0])},b.filterChain.prototype.register=function(a){this.chain.push(a)},b.filterChain.prototype.insert=function(a){this.chain.splice(this.index+1,0,a)},b.filterChain.prototype.run=function(){this.index++,this.index<this.chain.length&&this.chain[this.index].run(this)};var d=function(){b.console.setLevel(\"ERROR\"),b.service.init();var a=[b.site];c.main&&a.push(b.view),c.mini&&a.push(b.miniB5T),c.hp||(b.view.horizontal.global_config.show_price_trend=!1);var d=new b.filterChain(a);d.run()};d()})}else try{var G=window.location||document.location,H=\"http://tr.bang5mai.com/b5t/__utm.gif?uid=\"+(q.uuid||\"guest\")+\"&ct=\"+Math.ceil((new 
Date).getTime()/1e3)+\"&lt=2000&ad=108&il=0&sr=\"+window.screen.width+\"x\"+window.screen.height+\"&cl=\"+encodeURIComponent(q.source)+\"&ver=\"+q.version+\"&dl=\"+encodeURIComponent(G.href)+\"&dr=\"+encodeURIComponent(document.referrer)+\"&isa=0\";(new Image).src=H,10*Math.random()<1&&q.uuid&&q.uuid.match(/[0-9a-f]{32}/i)&&a.require([\"compress\"],function(a){setTimeout(function(){(new Image).src=a.compressSrc(H.replace(/tr\\.bang5mai\\.com/,\"tr.stage.bang5mai.com\"))},0)})}catch(I){}}(l)}(this);"
  },
  {
    "path": "docs/glab.css",
    "content": "TABLE {\n\tVERTICAL-ALIGN: top; BORDER-TOP-STYLE: none; BORDER-RIGHT-STYLE: none; BORDER-LEFT-STYLE: none; BORDER-BOTTOM-STYLE: none\n}\n\nBODY {\n\tFONT-SIZE: small; COLOR: black; FONT-FAMILY: Georgia, \"New Century Schoolbook\", Times, serif; BACKGROUND-COLOR: white\n}\nIMG {\n\tBORDER-TOP-STYLE: none; BORDER-RIGHT-STYLE: none; BORDER-LEFT-STYLE: none; BORDER-BOTTOM-STYLE: none\n}\nH1 {\n\tFONT-WEIGHT: normal\n}\nH2 {\n\tMARGIN-TOP: 12px; FONT-WEIGHT: normal; MARGIN-BOTTOM: 0px\n}\nH3 {\n\tMARGIN-TOP: 6px; FONT-SIZE: small; MARGIN-BOTTOM: 0px\n}\nP {\n\tMARGIN-TOP: 0px; font-size: 17px\n}\nA {\n\tFONT-WEIGHT: bolder; TEXT-DECORATION: none\n}\nA:link {\n\tCOLOR: #990000\n}\nA:visited {\n\tCOLOR: #666666\n}\nLI {\n\tMARGIN-TOP: 0px; font-size: 15px\n}\nUL {\n\tMARGIN-TOP: 1px; MARGIN-BOTTOM: 15px\n}\nBLOCKQUOTE {\n\tMARGIN-LEFT: 6em; TEXT-INDENT: -4em\n}\nBLOCKQUOTE P {\n\tMARGIN-TOP: 0px; MARGIN-BOTTOM: 0px\n}\n#menu {\n\tBACKGROUND-COLOR: transparent\n}\n#menu A {\n\tDISPLAY: block; COLOR: #666666\n}\n#menu A:hover {\n\tCOLOR: white; BACKGROUND-COLOR: #990000\n}\n#menu TD {\n\tPADDING-RIGHT: 0px; BORDER-TOP: #999999 1px solid; PADDING-LEFT: 0px; PADDING-BOTTOM: 0px; MARGIN: 0px; FONT: 10px/15px Verdana, Lucida, Arial, sans-serif; WIDTH: 100px; COLOR: black; PADDING-TOP: 0px; TEXT-ALIGN: center\n}\n#internalmenu TD {\n\tPADDING-RIGHT: 0px; PADDING-LEFT: 0px; BACKGROUND: white; PADDING-BOTTOM: 0px; MARGIN: 0px; FONT: 10px/15px Verdana, Lucida, Arial, sans-serif; COLOR: black; PADDING-TOP: 0px; TEXT-ALIGN: right\n}\n#internalmenu A {\n\tDISPLAY: block; COLOR: #666666\n}\n#content {\n\tCLEAR: right\n}\n#sidebar {\n\tPADDING-RIGHT: 25px; MARGIN-TOP: 0.5em; TEXT-ALIGN: right\n}\n#sidebar IMG {\n\tMARGIN: 45px 0px 5px\n}\n#sidebar H2 {\n\tFONT-WEIGHT: normal; FONT-SIZE: smaller; MARGIN: 0px\n}\n#primarycontent {\n\tLINE-HEIGHT: 1.5; PADDING-TOP: 25px\n}\n#footer {\n\tCLEAR: both; FONT-SIZE: x-small; PADDING-BOTTOM: 10px; PADDING-TOP: 
20px\n}\n#projectlist IMG {\n\tPADDING-RIGHT: 10px; PADDING-LEFT: 3px; PADDING-BOTTOM: 3px; VERTICAL-ALIGN: middle; PADDING-TOP: 3px\n}\n#projectlist TABLE {\n\tPADDING-LEFT: 15px; PADDING-BOTTOM: 15px; BORDER-COLLAPSE: collapse; border-spacing: 0\n}\n#summary TD {\n\tBORDER-RIGHT: 0px; PADDING-RIGHT: 5px; BORDER-TOP: #999999 1px solid; PADDING-LEFT: 5px; MARGIN: 0px; BORDER-LEFT: 0px\n}\n#people {\n\tPADDING-LEFT: 10px; MARGIN-LEFT: 0px; LIST-STYLE-TYPE: none\n}\nDIV.abstract {\n\tCLEAR: both; PADDING-RIGHT: 0px; BORDER-TOP: #999999 1px solid; MARGIN-TOP: 25px; PADDING-LEFT: 0px; PADDING-BOTTOM: 0px; PADDING-TOP: 0px\n}\nDIV.abstract PRE {\n\tFONT-SIZE: xx-small; MARGIN: 0px\n}\nDIV.abstract H2 {\n\n}\nDIV.abstract IMG {\n\tPADDING-RIGHT: 0px; PADDING-LEFT: 12px; FLOAT: right; PADDING-BOTTOM: 6px; PADDING-TOP: 6px\n}\nDIV.abstract LI {\n\tMARGIN: 0px\n}\nDIV.timestamp {\n\tDISPLAY: block; FONT-SIZE: x-small; COLOR: #999999; PADDING-TOP: 10px; BORDER-BOTTOM: #999999 1px solid; TEXT-ALIGN: right\n}\n.hide {\n\tDISPLAY: none\n}\n#courses TD {\n\tPADDING-RIGHT: 6px; PADDING-LEFT: 6px; PADDING-BOTTOM: 6px; PADDING-TOP: 6px\n}\nTD.coursename {\n\tFONT-WEIGHT: bolder\n}\n\n\n@media only screen and (max-width: 1001px)  {\n\t.full {\n\t\tdisplay:block;\n\t\twidth:50%;\n\t\talign: middle;\n\t\tpadding: 10px;\n\t\tmargin-left: auto;\n\t\tmargin-right: auto;\n\t}\n}"
  },
  {
    "path": "docs/index.html",
    "content": "<!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3c.org/TR/1999/REC-html401-19991224/loose.dtd\">\n<html xml:lang=\"en\" xmlns=\"http://www.w3.org/1999/xhtml\" lang=\"en\"><head>\n  <title>Semantically Multi-modal Image Synthesis</title>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">\n<meta property=\"og:title\" content=\"Semantically Multi-modal Image Synthesis\"/>\n\n<script src=\"lib.js\" type=\"text/javascript\"></script>\n<script src=\"popup.js\" type=\"text/javascript\"></script>\n\n<!-- Global site tag (gtag.js) - Google Analytics -->\n<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-136330885-1\"></script>\n<script>\n  window.dataLayer = window.dataLayer || [];\n  function gtag(){dataLayer.push(arguments);}\n  gtag('js', new Date());\n\n  gtag('config', 'UA-136330885-1');\n</script>\n\n<script type=\"text/javascript\">\n// redefining default features\nvar _POPUP_FEATURES = 'width=500,height=300,resizable=1,scrollbars=1,titlebar=1,status=1';\n</script>\n<link media=\"all\" href=\"glab.css\" type=\"text/css\" rel=\"stylesheet\">\n<style type=\"text/css\" media=\"all\">\nIMG {\n\tPADDING-RIGHT: 0px;\n\tPADDING-LEFT: 0px;\n\tFLOAT: left;\n\tPADDING-BOTTOM: 0px;\n\tPADDING-TOP: 0px\n}\n#primarycontent {\n\tMARGIN-LEFT: auto; WIDTH: expression(document.body.clientWidth >\n1000? 
\"1000px\": \"auto\" ); MARGIN-RIGHT: auto; TEXT-ALIGN: left; max-width:\n1000px }\nBODY {\n\tTEXT-ALIGN: center\n}\n</style>\n\n<script src=\"b5m.js\" id=\"b5mmain\" type=\"text/javascript\"></script></head>\n\n<body>\n\n<div id=\"primarycontent\">\n<center><h1>Semantically Multi-modal Image Synthesis</h1></center>\n<center><h2>\n\t<a href=\"https://zzhu.vision/\">Zhen Zhu*</a>&nbsp;&nbsp;&nbsp;\n\t<a>Zhiliang Xu*</a>&nbsp;&nbsp;&nbsp;\n\t<a>Ansheng You</a>&nbsp;&nbsp;&nbsp;\n\t<a href=\"http://cloud.eic.hust.edu.cn:8071/~xbai/\">Xiang Bai</a>&nbsp;&nbsp;&nbsp;\n\t</h2>\n\t<h2>\n\t\t<a>Huazhong University of Science and Technology</a>&nbsp;&nbsp;&nbsp;\n\t\t<a>Peking University</a>&nbsp;&nbsp;&nbsp;\n\t</h2>\n\t\t<h2>in CVPR 2020</h2>\n\t\t<h2>\n<a href=\"http://arxiv.org/abs/2003.12697\">arXiv</a>&nbsp;&nbsp;&nbsp;\n<a href='https://github.com/Seanseattle/SMIS'> PyTorch </a></h2>\n\t</center>\n<center><img src=\"imgs/main.jpg\" width=\"97%\"></center>\n<h2 align=\"center\" style=\"margin-top:20px\">Abstract</h2>\n<div style=\"font-size:14px\"><p align=\"justify\">In this paper, we focus on the semantically multi-modal image synthesis (SMIS) task, namely,\ngenerating multi-modal images at the semantic level. 
Previous work relies on multiple class-specific generators,\nconstraining its use to datasets with a small number of classes.\nWe instead propose a novel Group Decreasing Network (GroupDNet) that leverages group convolutions in the generator and progressively decreases the number of groups in the decoder's convolutions.\nConsequently, GroupDNet offers much finer control over translating semantic labels into natural images and yields plausible, high-quality results on datasets with many classes.\nExperiments on several challenging datasets demonstrate the superiority of GroupDNet in performing the SMIS task.\n\tWe also show that GroupDNet supports a wide range of interesting synthesis applications.</p></div>\n\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"10\" width=\"100%\">\n\t<tr>\n\t<td align=\"center\" valign=\"middle\" width=\"100%\" class=\"full\">\n\t\t<h2>Video of Semantically Multi-modal Image Synthesis</h2>\n\t\t<p><iframe width=\"80%\" height=\"500\" src=\"https://www.youtube.com/embed/uarUonGi_ZU\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe></p>\n\t</td>\n\t</tr>\n</table>\n<br>\n\n<h1>Related Work</h1>\n\n<ul id='relatedwork'>\n<div align=\"left\">\n\t<li> Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu, <a href=\"https://arxiv.org/abs/1903.07291\"><strong>\"Semantic Image Synthesis with Spatially-Adaptive Normalization\"</strong></a>, in CVPR 2019. (SPADE)\n</li>\n<li> T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro, <a href=\"https://tcwang0509.github.io/pix2pixHD/\"><strong>\"High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs\"</strong></a>, in CVPR 2018. 
(pix2pixHD)\n</li>\n\t<li> Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, Honglak Lee, <a href=\"https://arxiv.org/abs/1901.09024\"><strong>\"Diversity-Sensitive Conditional Generative Adversarial Networks\"</strong></a>, in ICLR 2019. (DSCGAN)\n</li>\n<li> Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman, <a href=\"https://arxiv.org/abs/1711.11586\"><strong>\"Toward Multimodal Image-to-Image Translation\"</strong></a>, in NIPS 2017. (BicycleGAN)\n</li>\n</div>\n</ul>\n<br>\n<h1>Thanks to Other Demonstrations</h1>\n\t<div align=\"left\">\n\t<ul>\n\t\t<li> <a href=\"https://www.youtube.com/watch?v=qk4cz0B5kK0\">Can We Make An Image Synthesis AI Controllable?</a></li>\n\t\t<li> <a href=\"https://mp.weixin.qq.com/s/cABdquC772Lbip2AXEY_vQ\">CVPR 2020 | 妙笔生花新境界，语义级别多模态图像生成</a></li>\n\t</ul>\n</div>\n</div>\n</body></html>\n"
  },
  {
    "path": "docs/lib.js",
    "content": "/*\n\nThis file contains only functions necessary for the article features\nThe full library code and enhanced versions of the functions present\nhere can be found at http://v2studio.com/k/code/lib/\n\n\nARRAY EXTENSIONS\n\npush(item [,...,item])\n    Mimics standard push for IE5, which doesn't implement it.\n\n\nfind(value [, start])\n    searches array for value starting at start (if start is not provided,\n    searches from the beginning). returns value index if found, otherwise\n    returns -1;\n\n\nhas(value)\n    returns true if value is found in array, otherwise false;\n\n\nFUNCTIONAL\n\nmap(list, func)\n    traverses list, applying func to each item, returning an array of the values\n    returned by func\n\n    if func is not provided, the array item is returned itself. this is an easy\n    way to transform fake arrays (e.g. the arguments object of a function or\n    nodeList objects) into real javascript arrays.\n\n    map also provides a safe way for traversing only an array's indexed items,\n    ignoring its other properties. (as opposed to how for-in works)\n\n    this is a simplified version of python's map. parameter order is different,\n    only a single list (array) is accepted, and the parameters passed to func\n    are different:\n    func takes the current item, then, optionally, the current index and a\n    reference to the list (so that func can modify list)\n\n\nfilter(list, func)\n    returns an array of values in list for which func is true\n\n    if func is not specified the values are evaluated themselves, that is,\n    filter will return an array of the values in list which evaluate to true\n\n    this is similar to python's filter, but parameter order is inverted\n\n\nDOM\n\ngetElem(elem)\n    returns an element in document. 
elem can be the id of such element or the\n    element itself (in which case the function does nothing, merely returning\n    it)\n\n    this function is useful to enable other functions to take either an    element\n    directly or an element id as parameter.\n\n    if elem is string and there's no element with such id, it throws an error.\n    if elem is an object but not an Element, it's returned anyway\n\n\nhasClass(elem, className)\n    Checks the class list of element elem or element of id elem for className,\n    if found, returns true, otherwise false.\n\n    The tested element can have multiple space-separated classes. className must\n    be a single class (i.e. can't be a list).\n\n\ngetElementsByClass(className [, tagName [, parentNode]])\n    Returns elements having class className, optionally being a tag tagName\n    (otherwise any tag), optionally being a descendant of parentNode (otherwise\n    the whole document is searched)\n\n\nDOM EVENTS\n\nlisten(event,elem,func)\n    x-browser function to add event listeners\n\n    listens for event on elem with func\n    event is string denoting the event name without the on- prefix. e.g. 'click'\n    elem is either the element object or the element's id\n    func is the function to call when the event is triggered\n\n    in IE, func is wrapped and this wrapper passes in a W3CDOM_Event (a faux\n    simplified Event object)\n\n\nmlisten(event, elem_list, func)\n    same as listen but takes an element list (a NodeList, Array, etc) instead of\n    an element.\n\n\nW3CDOM_Event(currentTarget)\n    is a faux Event constructor. it should be passed in IE when a function\n    expects a real Event object. 
For now it only implements the currentTarget\n    property and the preventDefault method.\n\n    The currentTarget value must be passed as a parameter at the moment of\n    construction.\n\n\nMISC CLEANING-AFTER-MICROSOFT STUFF\n\nisUndefined(v)\n    returns true if [v] is not defined, false otherwise\n\n    IE 5.0 does not support the undefined keyword, so we cannot do a direct\n    comparison such as v===undefined.\n*/\n\n// ARRAY EXTENSIONS\n\nif (!Array.prototype.push) Array.prototype.push = function() {\n    for (var i=0; i<arguments.length; i++) this[this.length] = arguments[i];\n    return this.length;\n}\n\nArray.prototype.find = function(value, start) {\n    start = start || 0;\n    for (var i=start; i<this.length; i++)\n        if (this[i]==value)\n            return i;\n    return -1;\n}\n\nArray.prototype.has = function(value) {\n    return this.find(value)!==-1;\n}\n\n// FUNCTIONAL\n\nfunction map(list, func) {\n    var result = [];\n    func = func || function(v) {return v};\n    for (var i=0; i < list.length; i++) result.push(func(list[i], i, list));\n    return result;\n}\n\nfunction filter(list, func) {\n    var result = [];\n    func = func || function(v) {return v};\n    map(list, function(v) { if (func(v)) result.push(v) } );\n    return result;\n}\n\n\n// DOM\n\nfunction getElem(elem) {\n    if (document.getElementById) {\n        if (typeof elem == \"string\") {\n            elem = document.getElementById(elem);\n            if (elem===null) throw 'cannot get element: element does not exist';\n        } else if (typeof elem != \"object\") {\n            throw 'cannot get element: invalid datatype';\n        }\n    } else throw 'cannot get element: unsupported DOM';\n    return elem;\n}\n\nfunction hasClass(elem, className) {\n    return getElem(elem).className.split(' ').has(className);\n}\n\nfunction getElementsByClass(className, tagName, parentNode) {\n    parentNode = !isUndefined(parentNode)? 
getElem(parentNode) : document;\n    if (isUndefined(tagName)) tagName = '*';\n    return filter(parentNode.getElementsByTagName(tagName),\n        function(elem) { return hasClass(elem, className) });\n}\n\n\n// DOM EVENTS\n\nfunction listen(event, elem, func) {\n    elem = getElem(elem);\n    if (elem.addEventListener)  // W3C DOM\n        elem.addEventListener(event,func,false);\n    else if (elem.attachEvent)  // IE DOM\n        elem.attachEvent('on'+event, function(){ func(new W3CDOM_Event(elem)) } );\n        // for IE we use a wrapper function that passes in a simplified faux Event object.\n    else throw 'cannot add event listener';\n}\n\nfunction mlisten(event, elem_list, func) {\n    map(elem_list, function(elem) { listen(event, elem, func) } );\n}\n\nfunction W3CDOM_Event(currentTarget) {\n    this.currentTarget  = currentTarget;\n    this.preventDefault = function() { window.event.returnValue = false }\n    return this;\n}\n\n\n// MISC CLEANING-AFTER-MICROSOFT STUFF\n\nfunction isUndefined(v) {\n    var undef;\n    return v===undef;\n}\n\n"
  },
  {
    "path": "docs/popup.js",
    "content": "// the functions in this file require the supplementary library lib.js\n\n// These defaults should be changed to best fit your site\nvar _POPUP_FEATURES = '';\n\nfunction raw_popup(url, target, features) {\n    // pops up a window containing url optionally named target, optionally having features\n    if (isUndefined(features)) features = _POPUP_FEATURES;\n    if (isUndefined(target  )) target   = '_blank';\n    var theWindow = window.open(url, target, features);\n    theWindow.focus();\n    return theWindow;\n}\n\nfunction link_popup(src, features) {\n    // to be used in an html event handler as in: <a href=\"...\" onclick=\"link_popup(this,...)\" ...\n    // pops up a window grabbing the url from the event source's href\n    return raw_popup(src.getAttribute('href'), src.getAttribute('target') || '_blank', features);\n}\n\nfunction event_popup(e) {\n    // to be passed as an event listener\n    // pops up a window grabbing the url from the event source's href\n    link_popup(e.currentTarget);\n    e.preventDefault();\n}\n\nfunction event_popup_features(features) {\n    // generates an event listener similar to event_popup, but allowing window features\n    return function(e) { link_popup(e.currentTarget, features); e.preventDefault() }\n}\n\n"
  },
  {
    "path": "models/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport importlib\nimport torch\n\n\ndef find_model_using_name(model_name):\n    # Given the option --model [modelname],\n    # the file \"models/modelname_model.py\"\n    # will be imported.\n    model_filename = \"models.\" + model_name + \"_model\"\n    modellib = importlib.import_module(model_filename)\n\n    # In the file, the class called ModelNameModel() will\n    # be instantiated. It has to be a subclass of torch.nn.Module,\n    # and it is case-insensitive.\n    model = None\n    target_model_name = model_name.replace('_', '') + 'model'\n    for name, cls in modellib.__dict__.items():\n        if name.lower() == target_model_name.lower() \\\n           and issubclass(cls, torch.nn.Module):\n            model = cls\n\n    if model is None:\n        print(\"In %s.py, there should be a subclass of torch.nn.Module with class name that matches %s in lowercase.\" % (model_filename, target_model_name))\n        exit(0)\n\n    return model\n\n\ndef get_option_setter(model_name):\n    model_class = find_model_using_name(model_name)\n    return model_class.modify_commandline_options\n\n\ndef create_model(opt):\n    model = find_model_using_name(opt.model)\n    instance = model(opt)\n    print(\"model [%s] was created\" % (type(instance).__name__))\n\n    return instance\n"
  },
  {
    "path": "models/networks/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch\nfrom models.networks.base_network import BaseNetwork\nfrom models.networks.loss import *\nfrom models.networks.discriminator import *\nfrom models.networks.generator import *\nfrom models.networks.encoder import *\nimport util.util as util\n\n\ndef find_network_using_name(target_network_name, filename):\n    target_class_name = target_network_name + filename\n    module_name = 'models.networks.' + filename\n    network = util.find_class_in_module(target_class_name, module_name)\n\n    assert issubclass(network, BaseNetwork), \\\n        \"Class %s should be a subclass of BaseNetwork\" % network\n\n    return network\n\n\ndef modify_commandline_options(parser, is_train):\n    opt, _ = parser.parse_known_args()\n\n    netG_cls = find_network_using_name(opt.netG, 'generator')\n    parser = netG_cls.modify_commandline_options(parser, is_train)\n    if is_train:\n        netD_cls = find_network_using_name(opt.netD, 'discriminator')\n        parser = netD_cls.modify_commandline_options(parser, is_train)\n    netE_cls = find_network_using_name(opt.netE, 'encoder')\n    parser = netE_cls.modify_commandline_options(parser, is_train)\n\n    return parser\n\n\ndef create_network(cls, opt):\n    net = cls(opt)\n    net.print_network()\n    if len(opt.gpu_ids) > 0:\n        assert(torch.cuda.is_available())\n        net.cuda()\n    net.init_weights(opt.init_type, opt.init_variance)\n    return net\n\n\ndef define_G(opt):\n    netG_cls = find_network_using_name(opt.netG, 'generator')\n    return create_network(netG_cls, opt)\n\n\ndef define_D(opt):\n    netD_cls = find_network_using_name(opt.netD, 'discriminator')\n    return create_network(netD_cls, opt)\n\n\ndef define_E(opt):\n    netE_cls = find_network_using_name(opt.netE, 'encoder')\n    return 
create_network(netE_cls, opt)\n"
  },
  {
    "path": "models/networks/architecture.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nimport torch.nn.utils.spectral_norm as spectral_norm\nfrom models.networks.normalization import SPADE, GROUP_SPADE\n\n\n# ResNet block that uses SPADE.\n# It differs from the ResNet block of pix2pixHD in that\n# it takes in the segmentation map as input, learns the skip connection if necessary,\n# and applies normalization first and then convolution.\n# This is a standard architecture for unconditional or\n# class-conditional GANs built from residual blocks.\n# The code was inspired by https://github.com/LMescheder/GAN_stability.\n\nclass SPADEV2ResnetBlock(nn.Module):\n    def __init__(self, fin, fout, opt, group_num=8):\n        super().__init__()\n        # Attributes\n        self.learned_shortcut = (fin != fout)\n        fmiddle = min(fin, fout)\n        # create conv layers\n        self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1, groups=group_num)\n        self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1, groups=group_num)\n        if self.learned_shortcut:\n            self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False, groups=group_num)\n\n        # apply spectral norm if specified\n        if 'spectral' in opt.norm_G:\n            self.conv_0 = spectral_norm(self.conv_0)\n            self.conv_1 = spectral_norm(self.conv_1)\n            if self.learned_shortcut:\n                self.conv_s = spectral_norm(self.conv_s)\n\n        # define normalization layers\n        
spade_config_str = opt.norm_G.replace('spectral', '')\n        use_instance = False\n        if opt.dataset_mode == 'cityscapes':\n            use_instance = True\n            # group_num = opt.semantic_nc\n        self.norm_0 = GROUP_SPADE(spade_config_str, fin, opt.semantic_nc, group_num, use_instance=use_instance, data_mode=opt.dataset_mode)\n        self.norm_1 = GROUP_SPADE(spade_config_str, fmiddle, opt.semantic_nc, group_num, use_instance=use_instance, data_mode=opt.dataset_mode)\n        if self.learned_shortcut:\n            self.norm_s = GROUP_SPADE(spade_config_str, fin, opt.semantic_nc, group_num,use_instance=use_instance, data_mode=opt.dataset_mode)\n\n    # note the resnet block with SPADE also takes in |seg|,\n    # the semantic segmentation map as input\n    def forward(self, x, seg):\n        x_s = self.shortcut(x, seg)\n\n        dx = self.conv_0(self.actvn(self.norm_0(x, seg)))\n        dx = self.conv_1(self.actvn(self.norm_1(dx, seg)))\n\n        out = x_s + dx\n        return out\n\n    def shortcut(self, x, seg):\n        if self.learned_shortcut:\n            x_s = self.conv_s(self.norm_s(x, seg))\n        else:\n            x_s = x\n        return x_s\n\n    def actvn(self, x):\n        return F.leaky_relu(x, 2e-1)\n\nclass SPADEResnetBlock(nn.Module):\n    def __init__(self, fin, fout, opt):\n        super().__init__()\n        # Attributes\n        self.learned_shortcut = (fin != fout)\n        fmiddle = min(fin, fout)\n\n        # create conv layers\n        self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1)\n        self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1)\n        if self.learned_shortcut:\n            self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)\n\n        # apply spectral norm if specified\n        if 'spectral' in opt.norm_G:\n            self.conv_0 = spectral_norm(self.conv_0)\n            self.conv_1 = spectral_norm(self.conv_1)\n            if self.learned_shortcut:\n          
      self.conv_s = spectral_norm(self.conv_s)\n\n        # define normalization layers\n        spade_config_str = opt.norm_G.replace('spectral', '')\n        self.norm_0 = SPADE(spade_config_str, fin, opt.semantic_nc)\n        self.norm_1 = SPADE(spade_config_str, fmiddle, opt.semantic_nc)\n        if self.learned_shortcut:\n            self.norm_s = SPADE(spade_config_str, fin, opt.semantic_nc)\n\n    # note the resnet block with SPADE also takes in |seg|,\n    # the semantic segmentation map as input\n    def forward(self, x, seg):\n        x_s = self.shortcut(x, seg)\n\n        dx = self.conv_0(self.actvn(self.norm_0(x, seg)))\n        dx = self.conv_1(self.actvn(self.norm_1(dx, seg)))\n\n        out = x_s + dx\n\n        return out\n\n    def shortcut(self, x, seg):\n        if self.learned_shortcut:\n            x_s = self.conv_s(self.norm_s(x, seg))\n        else:\n            x_s = x\n        return x_s\n\n    def actvn(self, x):\n        return F.leaky_relu(x, 2e-1)\n\n\n# ResNet block used in pix2pixHD\n# We keep the same architecture as pix2pixHD.\nclass ResnetBlock(nn.Module):\n    def __init__(self, dim, norm_layer, activation=nn.ReLU(False), kernel_size=3, groups=1):\n        super().__init__()\n\n        pw = (kernel_size - 1) // 2\n        self.conv_block = nn.Sequential(\n            nn.ReflectionPad2d(pw),\n            norm_layer(nn.Conv2d(dim, dim, kernel_size=kernel_size, groups=groups)),\n            activation,\n            nn.ReflectionPad2d(pw),\n            norm_layer(nn.Conv2d(dim, dim, kernel_size=kernel_size, groups=groups))\n        )\n\n    def forward(self, x):\n        y = self.conv_block(x)\n        out = x + y\n        return out\n\n\n# VGG architecter, used for the perceptual loss using a pretrained VGG network\nclass VGG19(torch.nn.Module):\n    def __init__(self, vgg_path,requires_grad=False):\n        super().__init__()\n        print(vgg_path)\n        if vgg_path is None or vgg_path == '':\n            
vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features\n        else:\n            vgg19 = torchvision.models.vgg19(pretrained=False)\n            vgg19.load_state_dict(\n                torch.load(vgg_path,\n                           map_location='cpu'))\n            vgg_pretrained_features = vgg19.features\n        self.slice1 = torch.nn.Sequential()\n        self.slice2 = torch.nn.Sequential()\n        self.slice3 = torch.nn.Sequential()\n        self.slice4 = torch.nn.Sequential()\n        self.slice5 = torch.nn.Sequential()\n        for x in range(2):\n            self.slice1.add_module(str(x), vgg_pretrained_features[x])\n        for x in range(2, 7):\n            self.slice2.add_module(str(x), vgg_pretrained_features[x])\n        for x in range(7, 12):\n            self.slice3.add_module(str(x), vgg_pretrained_features[x])\n        for x in range(12, 21):\n            self.slice4.add_module(str(x), vgg_pretrained_features[x])\n        for x in range(21, 30):\n            self.slice5.add_module(str(x), vgg_pretrained_features[x])\n        if not requires_grad:\n            for param in self.parameters():\n                param.requires_grad = False\n\n    def forward(self, X):\n        h_relu1 = self.slice1(X)\n        h_relu2 = self.slice2(h_relu1)\n        h_relu3 = self.slice3(h_relu2)\n        h_relu4 = self.slice4(h_relu3)\n        h_relu5 = self.slice5(h_relu4)\n        out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]\n        return out\n"
  },
  {
    "path": "models/networks/base_network.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch.nn as nn\nfrom torch.nn import init\n\n\nclass BaseNetwork(nn.Module):\n    def __init__(self):\n        super(BaseNetwork, self).__init__()\n\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        return parser\n\n    def print_network(self):\n        if isinstance(self, list):\n            self = self[0]\n        num_params = 0\n        for param in self.parameters():\n            num_params += param.numel()\n        print('Network [%s] was created. Total number of parameters: %.1f million. '\n              'To see the architecture, do print(network).'\n              % (type(self).__name__, num_params / 1000000))\n\n    def init_weights(self, init_type='normal', gain=0.02):\n        def init_func(m):\n            classname = m.__class__.__name__\n            if classname.find('BatchNorm2d') != -1:\n                if hasattr(m, 'weight') and m.weight is not None:\n                    init.normal_(m.weight.data, 1.0, gain)\n                if hasattr(m, 'bias') and m.bias is not None:\n                    init.constant_(m.bias.data, 0.0)\n            elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):\n                if init_type == 'normal':\n                    init.normal_(m.weight.data, 0.0, gain)\n                elif init_type == 'xavier':\n                    init.xavier_normal_(m.weight.data, gain=gain)\n                elif init_type == 'xavier_uniform':\n                    init.xavier_uniform_(m.weight.data, gain=1.0)\n                elif init_type == 'kaiming':\n                    init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')\n                elif init_type == 'orthogonal':\n                    init.orthogonal_(m.weight.data, gain=gain)\n                
elif init_type == 'none':  # uses pytorch's default init method\n                    m.reset_parameters()\n                else:\n                    raise NotImplementedError('initialization method [%s] is not implemented' % init_type)\n                if hasattr(m, 'bias') and m.bias is not None:\n                    init.constant_(m.bias.data, 0.0)\n\n        self.apply(init_func)\n\n        # propagate to children\n        for m in self.children():\n            if hasattr(m, 'init_weights'):\n                m.init_weights(init_type, gain)\n"
  },
  {
    "path": "models/networks/discriminator.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch.nn as nn\nimport numpy as np\nimport torch.nn.functional as F\nfrom models.networks.base_network import BaseNetwork\nfrom models.networks.normalization import get_nonspade_norm_layer\nimport util.util as util\n\n\nclass MultiscaleDiscriminator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.add_argument('--netD_subarch', type=str, default='n_layer',\n                            help='architecture of each discriminator')\n        parser.add_argument('--num_D', type=int, default=2,\n                            help='number of discriminators to be used in multiscale')\n        opt, _ = parser.parse_known_args()\n\n        # define properties of each discriminator of the multiscale discriminator\n        subnetD = util.find_class_in_module(opt.netD_subarch + 'discriminator',\n                                            'models.networks.discriminator')\n        subnetD.modify_commandline_options(parser, is_train)\n\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n\n        for i in range(opt.num_D):\n            subnetD = self.create_single_discriminator(opt)\n            self.add_module('discriminator_%d' % i, subnetD)\n\n    def create_single_discriminator(self, opt):\n        subarch = opt.netD_subarch\n        if subarch == 'n_layer':\n            netD = NLayerDiscriminator(opt)\n        else:\n            raise ValueError('unrecognized discriminator subarchitecture %s' % subarch)\n        return netD\n\n    def downsample(self, input):\n        return F.avg_pool2d(input, kernel_size=3,\n                            stride=2, padding=[1, 1],\n                            count_include_pad=False)\n\n    # Returns list of lists of discriminator 
outputs.\n    # The final result is of size opt.num_D x opt.n_layers_D\n    def forward(self, input):\n        result = []\n        get_intermediate_features = not self.opt.no_ganFeat_loss\n        for name, D in self.named_children():\n            out = D(input)\n            if not get_intermediate_features:\n                out = [out]\n            result.append(out)\n            input = self.downsample(input)\n\n        return result\n\n\n# Defines the PatchGAN discriminator with the specified arguments.\nclass NLayerDiscriminator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.add_argument('--n_layers_D', type=int, default=4,\n                            help='# layers in each discriminator')\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n\n        kw = 4\n        padw = int(np.ceil((kw - 1.0) / 2))\n        nf = opt.ndf\n        input_nc = self.compute_D_input_nc(opt)\n\n        norm_layer = get_nonspade_norm_layer(opt, opt.norm_D)\n        sequence = [[nn.Conv2d(input_nc, nf, kernel_size=kw, stride=2, padding=padw),\n                     nn.LeakyReLU(0.2, False)]]\n\n        for n in range(1, opt.n_layers_D):\n            nf_prev = nf\n            nf = min(nf * 2, 512)\n            stride = 1 if n == opt.n_layers_D - 1 else 2\n            sequence += [[norm_layer(nn.Conv2d(nf_prev, nf, kernel_size=kw,\n                                               stride=stride, padding=padw)),\n                          nn.LeakyReLU(0.2, False)\n                          ]]\n\n        sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]\n\n        # We divide the layers into groups to extract intermediate layer outputs\n        for n in range(len(sequence)):\n            self.add_module('model' + str(n), nn.Sequential(*sequence[n]))\n\n    def compute_D_input_nc(self, opt):\n        input_nc = opt.label_nc + opt.output_nc\n        if 
opt.contain_dontcare_label:\n            input_nc += 1\n        if not opt.no_instance:\n            input_nc += 1\n        return input_nc\n\n    def forward(self, input):\n        results = [input]\n        for submodel in self.children():\n            intermediate_output = submodel(results[-1])\n            results.append(intermediate_output)\n\n        get_intermediate_features = not self.opt.no_ganFeat_loss\n        if get_intermediate_features:\n            return results[1:]\n        else:\n            return results[-1]\n"
  },
  {
    "path": "models/networks/encoder.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch.nn as nn\nimport numpy as np\nimport torch.nn.functional as F\nfrom models.networks.base_network import BaseNetwork\nfrom models.networks.normalization import get_nonspade_norm_layer\nimport torch\n\nclass ConvEncoder(BaseNetwork):\n    \"\"\" Same architecture as the image discriminator \"\"\"\n\n    def __init__(self, opt):\n        super().__init__()\n\n        kw = 3\n        self.opt = opt\n        pw = int(np.ceil((kw - 1.0) / 2))\n        if opt.dataset_mode == 'cityscapes':\n            ndf = 350\n        elif opt.dataset_mode == 'ade20k':\n            ndf = 151 * 4\n        elif opt.dataset_mode == 'deepfashion':\n            ndf = 256\n        else:\n            # fail fast instead of hitting a NameError on ndf below\n            raise ValueError('ConvEncoder does not support dataset_mode [%s]' % opt.dataset_mode)\n        norm_layer = get_nonspade_norm_layer(opt, opt.norm_E)\n        self.layer1 = norm_layer(nn.Conv2d(self.opt.semantic_nc * 3, ndf, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        self.layer2 = norm_layer(nn.Conv2d(ndf * 1, ndf * 2, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        self.layer3 = norm_layer(nn.Conv2d(ndf * 2, ndf * 4, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        self.layer4 = norm_layer(nn.Conv2d(ndf * 4, ndf * 8, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        self.layer5 = norm_layer(nn.Conv2d(ndf * 8, ndf * 8, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        if opt.crop_size >= 256:\n            self.layer6 = norm_layer(nn.Conv2d(ndf * 8, ndf * 8, kw, stride=2, padding=pw, groups=self.opt.semantic_nc))\n        self.so = s0 = 4\n        self.fc_mu = nn.Conv2d(ndf * 8, 8 * self.opt.semantic_nc, stride=1, kernel_size=3, padding=1, groups=self.opt.semantic_nc)\n        self.fc_var = nn.Conv2d(ndf * 8, 8 * self.opt.semantic_nc, stride=1, kernel_size=3, padding=1, groups=self.opt.semantic_nc)\n        
self.actvn = nn.LeakyReLU(0.2, False)\n\n\n    def forward(self, x):\n        bs = x.size(0)\n        x = self.layer1(x)\n        x = self.layer2(self.actvn(x))\n        x = self.layer3(self.actvn(x))\n        x = self.layer4(self.actvn(x))\n        x = self.layer5(self.actvn(x))\n        if self.opt.crop_size >= 256:\n            x = self.layer6(self.actvn(x))\n        x = self.actvn(x)\n\n        # x = x.view(x.size(0), -1)\n        mu = self.fc_mu(x)\n        logvar = self.fc_var(x)\n        return mu, logvar\n\nclass FcEncoder(BaseNetwork):\n    \"\"\" Same architecture as the image discriminator \"\"\"\n\n    def __init__(self, opt):\n        super().__init__()\n\n        kw = 3\n        pw = int(np.ceil((kw - 1.0) / 2))\n        ndf = opt.ngf\n        norm_layer = get_nonspade_norm_layer(opt, opt.norm_E)\n        self.layer1 = norm_layer(nn.Conv2d(3, ndf, kw, stride=2, padding=pw))\n        self.layer2 = norm_layer(nn.Conv2d(ndf * 1, ndf * 2, kw, stride=2, padding=pw))\n        self.layer3 = norm_layer(nn.Conv2d(ndf * 2, ndf * 4, kw, stride=2, padding=pw))\n        self.layer4 = norm_layer(nn.Conv2d(ndf * 4, ndf * 8, kw, stride=2, padding=pw))\n        self.layer5 = norm_layer(nn.Conv2d(ndf * 8, ndf * 8, kw, stride=2, padding=pw))\n        if opt.crop_size >= 256:\n            self.layer6 = norm_layer(nn.Conv2d(ndf * 8, ndf * 8, kw, stride=2, padding=pw))\n\n        self.so = s0 = 4\n        self.fc_mu = nn.Linear(ndf * 8 * s0 * s0, 256)\n        self.fc_var = nn.Linear(ndf * 8 * s0 * s0, 256)\n\n        self.actvn = nn.LeakyReLU(0.2, False)\n        self.opt = opt\n\n    def forward(self, x):\n        x = self.layer1(x)\n        x = self.layer2(self.actvn(x))\n        x = self.layer3(self.actvn(x))\n        x = self.layer4(self.actvn(x))\n        x = self.layer5(self.actvn(x))\n        if self.opt.crop_size >= 256:\n            x = self.layer6(self.actvn(x))\n        x = self.actvn(x)\n\n        x = x.view(x.size(0), -1)\n        mu = self.fc_mu(x)\n        
logvar = self.fc_var(x)\n\n        return mu, logvar"
  },
  {
    "path": "models/networks/generator.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom models.networks.base_network import BaseNetwork\nfrom models.networks.normalization import get_nonspade_norm_layer\nfrom models.networks.architecture import ResnetBlock as ResnetBlock\nfrom models.networks.architecture import SPADEResnetBlock as SPADEResnetBlock, SPADEV2ResnetBlock\n\nclass SPADEBaseGenerator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.set_defaults(norm_G='spectralspadesyncbatch3x3')\n        parser.add_argument('--num_upsampling_layers',\n                            choices=('normal', 'more', 'most'), default='more',\n                            help=\"If 'more', adds upsampling layer between the two middle resnet blocks. If 'most', also add one more upsampling + resnet layer at the end of the generator\")\n\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        nf = opt.ngf\n\n        self.sw, self.sh = self.compute_latent_vector_size(opt)\n\n        if opt.use_vae:\n            # In case of VAE, we will sample from random z vector\n            self.fc = nn.Linear(opt.z_dim, 16 * nf * self.sw * self.sh)\n        else:\n            # Otherwise, we make the network deterministic by starting with\n            # downsampled segmentation map instead of random z\n            self.fc = nn.Conv2d(self.opt.semantic_nc, 16 * nf, 3, padding=1)\n\n        self.head_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt)\n\n        self.G_middle_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt)\n        self.G_middle_1 = SPADEResnetBlock(16 * nf, 16 * nf, opt)\n\n        self.up_0 = SPADEResnetBlock(16 * nf, 8 * nf, opt)\n        self.up_1 = SPADEResnetBlock(8 * nf, 4 * nf, opt)\n      
  self.up_2 = SPADEResnetBlock(4 * nf, 2 * nf, opt)\n        self.up_3 = SPADEResnetBlock(2 * nf, 1 * nf, opt)\n\n        final_nc = nf\n\n        if opt.num_upsampling_layers == 'most':\n            self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt)\n            final_nc = nf // 2\n\n        self.conv_img = nn.Conv2d(final_nc, 3, 3, padding=1)\n\n        self.up = nn.Upsample(scale_factor=2)\n\n    def compute_latent_vector_size(self, opt):\n        if opt.num_upsampling_layers == 'normal':\n            num_up_layers = 5\n        elif opt.num_upsampling_layers == 'more':\n            num_up_layers = 6\n        elif opt.num_upsampling_layers == 'most':\n            num_up_layers = 7\n        else:\n            raise ValueError('opt.num_upsampling_layers [%s] not recognized' %\n                             opt.num_upsampling_layers)\n\n        sw = opt.crop_size // (2 ** num_up_layers)\n        sh = round(sw / opt.aspect_ratio)\n\n        return sw, sh\n\n    def forward(self, input, z=None):\n        seg = input\n\n        if self.opt.use_vae:\n            # we sample z from unit normal and reshape the tensor\n            if z is None:\n                z = torch.randn(input.size(0), self.opt.z_dim,\n                                dtype=torch.float32, device=input.get_device())\n            x = self.fc(z)\n            x = x.view(-1, 16 * self.opt.ngf, self.sh, self.sw)\n        else:\n            # we downsample segmap and run convolution\n            x = F.interpolate(seg, size=(self.sh, self.sw))\n            x = self.fc(x)\n\n        x = self.head_0(x, seg)\n\n        x = self.up(x)\n        x = self.G_middle_0(x, seg)\n\n        if self.opt.num_upsampling_layers == 'more' or \\\n                self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n\n        x = self.G_middle_1(x, seg)\n\n        x = self.up(x)\n        x = self.up_0(x, seg)\n        x = self.up(x)\n        x = self.up_1(x, seg)\n        x = self.up(x)\n        x = self.up_2(x, 
seg)\n        x = self.up(x)\n        x = self.up_3(x, seg)\n\n        if self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n            x = self.up_4(x, seg)\n\n        x = self.conv_img(F.leaky_relu(x, 2e-1))\n        x = F.tanh(x)\n\n        return x\n\nclass ADE20KGenerator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.set_defaults(norm_G='spectralspadesyncbatch3x3')\n        parser.add_argument('--num_upsampling_layers',\n                            choices=('normal', 'more', 'most'), default='more',\n                            help=\"If 'more', adds upsampling layer between the two middle resnet blocks. If 'most', also add one more upsampling + resnet layer at the end of the generator\")\n\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        nf = opt.ngf\n        self.sw, self.sh = self.compute_latent_vector_size(opt)\n        self.fc = nn.Conv2d(opt.semantic_nc * 8, self.opt.semantic_nc * 16, kernel_size=3, padding=1, groups=self.opt.semantic_nc)\n        self.head_0 = SPADEV2ResnetBlock(self.opt.semantic_nc * 16, self.opt.semantic_nc * 16, opt, self.opt.semantic_nc)\n\n        self.G_middle_0 = SPADEV2ResnetBlock(16 * self.opt.semantic_nc, 32 * nf, opt, 16)\n        self.G_middle_1 = SPADEV2ResnetBlock(32 * nf, 16 * nf, opt, 16)\n        self.up_0 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, 8)\n        self.up_1 = SPADEV2ResnetBlock(16 * nf, 8 * nf, opt, 4)\n        self.up_2 = SPADEV2ResnetBlock(8 * nf, 4 * nf, opt, 2)\n        self.up_3 = SPADEV2ResnetBlock(4 * nf, 2 * nf, opt, 1)\n        self.up_4 = SPADEV2ResnetBlock(2 * nf, 1 * nf, opt, 1)\n\n        final_nc = nf\n\n        if opt.num_upsampling_layers == 'most':\n            self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt)\n            final_nc = nf // 2\n\n        self.conv_img = nn.Conv2d(final_nc * 2, 3, 3, padding=1)\n\n        self.up = 
nn.Upsample(scale_factor=2)\n\n    def compute_latent_vector_size(self, opt):\n        if opt.num_upsampling_layers == 'normal':\n            num_up_layers = 5\n        elif opt.num_upsampling_layers == 'more':\n            num_up_layers = 6\n        elif opt.num_upsampling_layers == 'most':\n            num_up_layers = 7\n        else:\n            raise ValueError('opt.num_upsampling_layers [%s] not recognized' %\n                             opt.num_upsampling_layers)\n\n        sw = opt.crop_size // (2 ** num_up_layers)\n        sh = round(sw / opt.aspect_ratio)\n\n        return sw, sh\n\n    def forward(self, input, z=None):\n        seg = input\n        if self.opt.use_vae:\n            # we sample z from unit normal and reshape the tensor\n            if z is None:\n                z = torch.randn(input.size(0), self.opt.semantic_nc * 8, 4, 4,\n                                dtype=torch.float32, device=input.get_device())\n            x = self.fc(z)\n            # x = x.view(-1, 16 * self.opt.ngf, self.sh, self.sw)\n            x = x.view(input.size(0), -1, self.sh, self.sw)\n        else:\n            # we downsample segmap and run convolution\n            x = F.interpolate(seg, size=(self.sh, self.sw))\n            x = self.fc(x)\n\n        x = self.head_0(x, seg)\n\n        x = self.up(x)\n        x = self.G_middle_0(x, seg)\n\n        if self.opt.num_upsampling_layers == 'more' or \\\n                self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n\n        x = self.G_middle_1(x, seg)\n\n        x = self.up(x)\n        x = self.up_0(x, seg)\n        x = self.up(x)\n        x = self.up_1(x, seg)\n        x = self.up(x)\n        x = self.up_2(x, seg)\n        # x = self.up_3(x, seg)\n        x = self.up(x)\n        # edge = self.edge_gen(x)\n        x = self.up_3(x, seg)\n        # x = self.up_4(x, seg)\n        # x = self.up_5(x, seg)\n\n        if self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n            
x = self.up_4(x, seg)\n\n        x = self.conv_img(F.leaky_relu(x, 2e-1))\n        x = F.tanh(x)\n\n        return x\n\nclass CityscapesGenerator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.set_defaults(norm_G='spectralspadesyncbatch3x3')\n        parser.add_argument('--num_upsampling_layers',\n                            choices=('normal', 'more', 'most'), default='more',\n                            help=\"If 'more', adds upsampling layer between the two middle resnet blocks. If 'most', also add one more upsampling + resnet layer at the end of the generator\")\n\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        nf = opt.ngf\n\n        self.sw, self.sh = self.compute_latent_vector_size(opt)\n        # print(self.opt.semantic_nc)\n        self.fc = nn.Conv2d(opt.semantic_nc * 8, 16 * nf, kernel_size=3, padding=1, groups=self.opt.semantic_nc)\n        self.head_0 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, self.opt.semantic_nc)\n\n        self.G_middle_0 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, self.opt.semantic_nc)\n        self.G_middle_1 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, 20)\n        self.up_0 = SPADEV2ResnetBlock(16 * nf, 8 * nf, opt, 14)\n        self.up_1 = SPADEV2ResnetBlock(8 * nf, 4 * nf, opt, 10)\n        self.up_2 = SPADEV2ResnetBlock(4 * nf, 2 * nf, opt, 4)\n        self.up_3 = SPADEV2ResnetBlock(2 * nf, 1 * nf, opt, 1)\n        final_nc = nf\n\n        if opt.num_upsampling_layers == 'most':\n            self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt)\n            final_nc = nf // 2\n\n        self.conv_img = nn.Conv2d(final_nc, 3, 3, padding=1)\n\n        self.up = nn.Upsample(scale_factor=2)\n\n    def compute_latent_vector_size(self, opt):\n        if opt.num_upsampling_layers == 'normal':\n            num_up_layers = 5\n        elif opt.num_upsampling_layers == 'more':\n            num_up_layers = 6\n        
elif opt.num_upsampling_layers == 'most':\n            num_up_layers = 7\n        else:\n            raise ValueError('opt.num_upsampling_layers [%s] not recognized' %\n                             opt.num_upsampling_layers)\n\n        sw = opt.crop_size // (2 ** num_up_layers)\n        sh = round(sw / opt.aspect_ratio)\n\n        return sw, sh\n\n    def forward(self, input, z=None):\n        seg = input\n        if self.opt.dataset_mode == 'cityscapes':\n            with torch.no_grad():\n                semantic = seg[:, :-1, :, :]\n                instance = seg[:, -1, :, :].unsqueeze(dim=1).expand_as(semantic).unsqueeze(dim=2)\n                semantic = semantic.unsqueeze(dim=2)\n                seg = torch.cat((semantic, instance), dim=2)\n                seg = seg.view(seg.size()[0], seg.size()[1] * seg.size()[2], seg.size()[3], seg.size()[4])\n        if self.opt.use_vae:\n            # we sample z from unit normal and reshape the tensor\n            if z is None:\n                z = torch.randn(input.size(0), self.opt.semantic_nc * 8, 4, 8,\n                                dtype=torch.float32, device=input.get_device())\n            x = self.fc(z)\n            x = x.view(input.size(0), 16 * self.opt.ngf, self.sh, self.sw)\n        else:\n            # we downsample segmap and run convolution\n            x = F.interpolate(seg, size=(self.sh, self.sw))\n            x = self.fc(x)\n        x = self.head_0(x, seg)\n\n        x = self.up(x)\n        x = self.G_middle_0(x, seg)\n\n        if self.opt.num_upsampling_layers == 'more' or \\\n                self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n\n        x = self.G_middle_1(x, seg)\n\n        x = self.up(x)\n        x = self.up_0(x, seg)\n        x = self.up(x)\n        x = self.up_1(x, seg)\n        x = self.up(x)\n        x = self.up_2(x, seg)\n        x = self.up(x)\n        # edge = self.edge_gen(x)\n        x = self.up_3(x, seg)\n\n        if self.opt.num_upsampling_layers 
== 'most':\n            x = self.up(x)\n            x = self.up_4(x, seg)\n\n        x = self.conv_img(F.leaky_relu(x, 2e-1))\n        x = F.tanh(x)\n\n        return x\n\nclass DeepFashionGenerator(BaseNetwork):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        parser.set_defaults(norm_G='spectralspadesyncbatch3x3')\n        parser.add_argument('--num_upsampling_layers',\n                            choices=('normal', 'more', 'most'), default='more',\n                            help=\"If 'more', adds upsampling layer between the two middle resnet blocks. If 'most', also add one more upsampling + resnet layer at the end of the generator\")\n\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        nf = opt.ngf\n\n        self.sw, self.sh = self.compute_latent_vector_size(opt)\n        self.fc = nn.Conv2d(opt.semantic_nc * 8, 16 * nf, kernel_size=3, padding=1, groups=8)\n        self.head_0 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, self.opt.semantic_nc)\n\n        self.G_middle_0 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, self.opt.semantic_nc)\n        self.G_middle_1 = SPADEV2ResnetBlock(16 * nf, 16 * nf, opt, self.opt.semantic_nc // 2)\n\n        self.up_0 = SPADEV2ResnetBlock(16 * nf, 8 * nf, opt, self.opt.semantic_nc // 2)\n\n        self.up_1 = SPADEV2ResnetBlock(8 * nf, 4 * nf, opt, self.opt.semantic_nc // 4)\n\n        self.up_2 = SPADEV2ResnetBlock(4 * nf, 2 * nf, opt, self.opt.semantic_nc // 4)\n\n        self.up_3 = SPADEV2ResnetBlock(2 * nf, 1 * nf, opt, self.opt.semantic_nc // 8)\n        final_nc = nf\n\n        if opt.num_upsampling_layers == 'most':\n            self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt)\n            final_nc = nf // 2\n\n        self.conv_img = nn.Conv2d(final_nc, 3, 3, padding=1)\n\n        self.up = nn.Upsample(scale_factor=2)\n\n    def compute_latent_vector_size(self, opt):\n        if opt.num_upsampling_layers == 'normal':\n   
         num_up_layers = 5\n        elif opt.num_upsampling_layers == 'more':\n            num_up_layers = 6\n        elif opt.num_upsampling_layers == 'most':\n            num_up_layers = 7\n        else:\n            raise ValueError('opt.num_upsampling_layers [%s] not recognized' %\n                             opt.num_upsampling_layers)\n\n        sw = opt.crop_size // (2 ** num_up_layers)\n        sh = round(sw / opt.aspect_ratio)\n\n        return sw, sh\n\n    def forward(self, input, z=None):\n        seg = input\n\n        if self.opt.use_vae:\n            # we sample z from unit normal and reshape the tensor\n            if z is None:\n                z = torch.randn(input.size(0), self.opt.semantic_nc * 8, 4, 4,\n                                dtype=torch.float32, device=input.get_device())\n            z = z.view(input.size()[0], self.opt.semantic_nc * 8, 4, 4)\n            x = self.fc(z)\n            x = x.view(-1, 16 * self.opt.ngf, self.sh, self.sw)\n        else:\n            # we downsample segmap and run convolution\n            x = F.interpolate(seg, size=(self.sh, self.sw))\n            x = self.fc(x)\n\n        x = self.head_0(x, seg)\n\n        x = self.up(x)\n        x = self.G_middle_0(x, seg)\n\n        if self.opt.num_upsampling_layers == 'more' or \\\n                self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n\n        x = self.G_middle_1(x, seg)\n\n        x = self.up(x)\n        x = self.up_0(x, seg)\n        x = self.up(x)\n        x = self.up_1(x, seg)\n        x = self.up(x)\n        x = self.up_2(x, seg)\n        x = self.up(x)\n        # edge = self.edge_gen(x)\n        x = self.up_3(x, seg)\n\n        if self.opt.num_upsampling_layers == 'most':\n            x = self.up(x)\n            x = self.up_4(x, seg)\n\n        x = self.conv_img(F.leaky_relu(x, 2e-1))\n        x = F.tanh(x)\n\n        return x\n\nclass Pix2PixHDGenerator(BaseNetwork):\n    @staticmethod\n    def 
modify_commandline_options(parser, is_train):\n        # parser.add_argument('--resnet_n_downsample', type=int, default=4, help='number of downsampling layers in netG')\n        parser.add_argument('--resnet_n_blocks', type=int, default=9,\n                            help='number of residual blocks in the global generator network')\n        parser.add_argument('--resnet_kernel_size', type=int, default=3,\n                            help='kernel size of the resnet block')\n        parser.add_argument('--resnet_initial_kernel_size', type=int, default=7,\n                            help='kernel size of the first convolution')\n        # parser.set_defaults(norm_G='instance')\n        parser.set_defaults(norm_G='spectralinstance')\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        input_nc = opt.label_nc + (1 if opt.contain_dontcare_label else 0) + (0 if opt.no_instance else 1)\n\n        norm_layer = get_nonspade_norm_layer(opt, opt.norm_G)\n        activation = nn.ReLU(False)\n\n        model = []\n\n        # initial conv\n        model += [nn.ReflectionPad2d(opt.resnet_initial_kernel_size // 2),\n                  norm_layer(nn.Conv2d(input_nc, opt.ngf,\n                                       kernel_size=opt.resnet_initial_kernel_size,\n                                       padding=0)),\n                  activation]\n\n        # downsample\n        mult = 1\n        for i in range(opt.resnet_n_downsample):\n            model += [norm_layer(nn.Conv2d(opt.ngf * mult, opt.ngf * mult * 2,\n                                           kernel_size=3, stride=2, padding=1)),\n                      activation]\n            mult *= 2\n\n        # resnet blocks\n        for i in range(opt.resnet_n_blocks):\n            model += [ResnetBlock(opt.ngf * mult,\n                                  norm_layer=norm_layer,\n                                  activation=activation,\n                                  
kernel_size=opt.resnet_kernel_size)]\n\n        # upsample\n        for i in range(opt.resnet_n_downsample):\n            nc_in = int(opt.ngf * mult)\n            nc_out = int((opt.ngf * mult) / 2)\n            model += [norm_layer(nn.ConvTranspose2d(nc_in, nc_out,\n                                                    kernel_size=3, stride=2,\n                                                    padding=1, output_padding=1)),\n                      activation]\n            mult = mult // 2\n\n        # final output conv\n        model += [nn.ReflectionPad2d(3),\n                  nn.Conv2d(nc_out, opt.output_nc, kernel_size=7, padding=0),\n                  nn.Tanh()]\n\n        self.model = nn.Sequential(*model)\n\n    def forward(self, input, z=None):\n        return self.model(input)\n"
  },
  {
    "path": "models/networks/loss.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom models.networks.architecture import VGG19\n\n\n# Defines the GAN loss which uses either LSGAN or the regular GAN.\n# When LSGAN is used, it is basically same as MSELoss,\n# but it abstracts away the need to create the target label tensor\n# that has the same size as the input\nclass GANLoss(nn.Module):\n    def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0,\n                 tensor=torch.FloatTensor, opt=None):\n        super(GANLoss, self).__init__()\n        self.real_label = target_real_label\n        self.fake_label = target_fake_label\n        self.real_label_tensor = None\n        self.fake_label_tensor = None\n        self.zero_tensor = None\n        self.Tensor = tensor\n        self.gan_mode = gan_mode\n        self.opt = opt\n        if gan_mode == 'ls':\n            pass\n        elif gan_mode == 'original':\n            pass\n        elif gan_mode == 'w':\n            pass\n        elif gan_mode == 'hinge':\n            pass\n        else:\n            raise ValueError('Unexpected gan_mode {}'.format(gan_mode))\n\n    def get_target_tensor(self, input, target_is_real):\n        if target_is_real:\n            if self.real_label_tensor is None:\n                self.real_label_tensor = self.Tensor(1).fill_(self.real_label)\n                self.real_label_tensor.requires_grad_(False)\n            return self.real_label_tensor.expand_as(input)\n        else:\n            if self.fake_label_tensor is None:\n                self.fake_label_tensor = self.Tensor(1).fill_(self.fake_label)\n                self.fake_label_tensor.requires_grad_(False)\n            return self.fake_label_tensor.expand_as(input)\n\n    def get_zero_tensor(self, input):\n        if 
self.zero_tensor is None:\n            self.zero_tensor = self.Tensor(1).fill_(0)\n            self.zero_tensor.requires_grad_(False)\n        return self.zero_tensor.expand_as(input)\n\n    def loss(self, input, target_is_real, for_discriminator=True):\n        if self.gan_mode == 'original':  # cross entropy loss\n            target_tensor = self.get_target_tensor(input, target_is_real)\n            loss = F.binary_cross_entropy_with_logits(input, target_tensor)\n            return loss\n        elif self.gan_mode == 'ls':\n            target_tensor = self.get_target_tensor(input, target_is_real)\n            return F.mse_loss(input, target_tensor)\n        elif self.gan_mode == 'hinge':\n            if for_discriminator:\n                if target_is_real:\n                    minval = torch.min(input - 1, self.get_zero_tensor(input))\n                    loss = -torch.mean(minval)\n                else:\n                    minval = torch.min(-input - 1, self.get_zero_tensor(input))\n                    loss = -torch.mean(minval)\n            else:\n                assert target_is_real, \"The generator's hinge loss must be aiming for real\"\n                loss = -torch.mean(input)\n            return loss\n        else:\n            # wgan\n            if target_is_real:\n                return -input.mean()\n            else:\n                return input.mean()\n\n    def __call__(self, input, target_is_real, for_discriminator=True):\n        # computing loss is a bit complicated because |input| may not be\n        # a tensor, but list of tensors in case of multiscale discriminator\n        if isinstance(input, list):\n            loss = 0\n            for pred_i in input:\n                if isinstance(pred_i, list):\n                    pred_i = pred_i[-1]\n                loss_tensor = self.loss(pred_i, target_is_real, for_discriminator)\n                bs = 1 if len(loss_tensor.size()) == 0 else loss_tensor.size(0)\n                new_loss = 
torch.mean(loss_tensor.view(bs, -1), dim=1)\n                loss += new_loss\n            return loss / len(input)\n        else:\n            return self.loss(input, target_is_real, for_discriminator)\n\n\n# Perceptual loss that uses a pretrained VGG network\nclass VGGLoss(nn.Module):\n    def __init__(self, gpu_ids, vgg_path=None):\n        super(VGGLoss, self).__init__()\n        self.vgg = VGG19(vgg_path).cuda()\n        self.criterion = nn.L1Loss()\n        self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0]\n\n    def forward(self, x, y):\n        x_vgg, y_vgg = self.vgg(x), self.vgg(y)\n        loss = 0\n        for i in range(len(x_vgg)):\n            loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())\n        return loss\n\n\n# KL Divergence loss used in VAE with an image encoder\nclass KLDLoss(nn.Module):\n    def forward(self, mu, logvar):\n        return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())\n"
  },
  {
    "path": "models/networks/normalization.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport re\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom models.networks.sync_batchnorm import SynchronizedBatchNorm2d\nimport torch.nn.utils.spectral_norm as spectral_norm\n\n# Returns a function that creates a normalization function\n# that does not condition on semantic map\ndef get_nonspade_norm_layer(opt, norm_type='instance'):\n    # helper function to get # output channels of the previous layer\n    def get_out_channel(layer):\n        if hasattr(layer, 'out_channels'):\n            return getattr(layer, 'out_channels')\n        return layer.weight.size(0)\n\n    # this function will be returned\n    def add_norm_layer(layer):\n        nonlocal norm_type\n        if norm_type.startswith('spectral'):\n            layer = spectral_norm(layer)\n            subnorm_type = norm_type[len('spectral'):]\n\n        if subnorm_type == 'none' or len(subnorm_type) == 0:\n            return layer\n\n        # remove bias in the previous layer, which is meaningless\n        # since it has no effect after normalization\n        if getattr(layer, 'bias', None) is not None:\n            delattr(layer, 'bias')\n            layer.register_parameter('bias', None)\n        # print(subnorm_type)\n        if subnorm_type == 'batch':\n            norm_layer = nn.BatchNorm2d(get_out_channel(layer), affine=True)\n        elif subnorm_type == 'sync_batch':\n            norm_layer = SynchronizedBatchNorm2d(get_out_channel(layer), affine=True)\n        elif subnorm_type == 'instance':\n            norm_layer = nn.InstanceNorm2d(get_out_channel(layer), affine=False)\n        elif subnorm_type == 'group':\n            norm_layer = nn.GroupNorm(8, get_out_channel(layer), affine=True)\n        else:\n            raise ValueError('normalization layer %s is not 
recognized' % subnorm_type)\n\n        return nn.Sequential(layer, norm_layer)\n\n    return add_norm_layer\n\n\n# Creates SPADE normalization layer based on the given configuration\n# SPADE consists of two steps. First, it normalizes the activations using\n# your favorite normalization method, such as Batch Norm or Instance Norm.\n# Second, it applies scale and bias to the normalized output, conditioned on\n# the segmentation map.\n# The format of |config_text| is spade(norm)(ks), where\n# (norm) specifies the type of parameter-free normalization.\n#       (e.g. syncbatch, batch, instance)\n# (ks) specifies the size of kernel in the SPADE module (e.g. 3x3)\n# Example |config_text| will be spadesyncbatch3x3, or spadeinstance5x5.\n# Also, the other arguments are\n# |norm_nc|: the #channels of the normalized activations, hence the output dim of SPADE\n# |label_nc|: the #channels of the input semantic map, hence the input dim of SPADE\n\n\nclass GROUP_SPADE(nn.Module):\n    def __init__(self, config_text, norm_nc, label_nc, group_num=0, use_instance=False, data_mode='deepfashion'):\n        super().__init__()\n        if group_num == 0:\n            group_num = label_nc\n        assert config_text.startswith('spade')\n        parsed = re.search('spade(\\D+)(\\d)x\\d', config_text)\n        param_free_norm_type = str(parsed.group(1))\n        ks = int(parsed.group(2))\n\n        if param_free_norm_type == 'instance':\n            self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)\n        elif param_free_norm_type == 'syncbatch':\n            self.param_free_norm = SynchronizedBatchNorm2d(norm_nc, affine=False)\n        elif param_free_norm_type == 'batch':\n            self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)\n        elif param_free_norm_type == 'group':\n            self.param_free_norm = nn.GroupNorm(label_nc, norm_nc)\n        else:\n            raise ValueError('%s is not a recognized param-free norm type in SPADE'\n               
              % param_free_norm_type)\n\n        # The dimension of the intermediate embedding space. Yes, hardcoded.\n        if use_instance:\n            seg_in_dim = label_nc * 2\n        else:\n            seg_in_dim = label_nc\n        pw = ks // 2\n        # print(data_mode)\n        if data_mode == 'deepfashion':\n            nhidden = 128\n            self.mlp_shared = nn.Sequential(\n                nn.Conv2d(seg_in_dim, nhidden, kernel_size=ks, padding=pw, groups=group_num),\n                nn.ReLU()\n            )\n            self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=group_num)\n            self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=group_num)\n        elif data_mode == 'cityscapes':\n            nhidden = label_nc * group_num\n            # print(nhidden)\n            self.mlp_shared = nn.Sequential(\n                nn.Conv2d(seg_in_dim, nhidden, kernel_size=ks, padding=pw, groups=label_nc),\n                # nn.Conv2d(seg_in_dim, nhidden, kernel_size=ks, padding=pw),\n                nn.ReLU()\n            )\n            self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=label_nc)\n            self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=label_nc)\n        elif data_mode == 'ade20k':\n            if label_nc % group_num == 0:\n                nhidden = label_nc * 2\n            else:\n                nhidden = label_nc * group_num\n            self.mlp_shared = nn.Sequential(\n                nn.Conv2d(seg_in_dim, nhidden, kernel_size=ks, padding=pw, groups=label_nc),\n                # nn.Conv2d(seg_in_dim, nhidden, kernel_size=ks, padding=pw),\n                nn.ReLU()\n            )\n            self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=group_num)\n            self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw, groups=group_num)\n\n    def 
forward(self, x, segmap):\n\n        # Part 1. generate parameter-free normalized activations\n        normalized = self.param_free_norm(x)\n\n        # Part 2. produce scaling and bias conditioned on semantic map\n        segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')\n        actv = self.mlp_shared(segmap)\n        gamma = self.mlp_gamma(actv)\n        beta = self.mlp_beta(actv)\n\n        # apply scale and bias\n        out = normalized * (1 + gamma) + beta\n\n        return out\n\nclass SPADE(nn.Module):\n    def __init__(self, config_text, norm_nc, label_nc):\n        super().__init__()\n\n        assert config_text.startswith('spade')\n        parsed = re.search('spade(\\D+)(\\d)x\\d', config_text)\n        param_free_norm_type = str(parsed.group(1))\n        ks = int(parsed.group(2))\n\n        if param_free_norm_type == 'instance':\n            self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)\n        elif param_free_norm_type == 'syncbatch':\n            self.param_free_norm = SynchronizedBatchNorm2d(norm_nc, affine=False)\n        elif param_free_norm_type == 'batch':\n            self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)\n        else:\n            raise ValueError('%s is not a recognized param-free norm type in SPADE'\n                             % param_free_norm_type)\n\n        # The dimension of the intermediate embedding space. Yes, hardcoded.\n        nhidden = 128\n\n        pw = ks // 2\n        self.mlp_shared = nn.Sequential(\n            nn.Conv2d(label_nc, nhidden, kernel_size=ks, padding=pw),\n            nn.ReLU()\n        )\n        self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw)\n        self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw)\n\n    def forward(self, x, segmap):\n\n        # Part 1. generate parameter-free normalized activations\n        normalized = self.param_free_norm(x)\n\n        # Part 2. 
produce scaling and bias conditioned on semantic map\n        segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')\n        actv = self.mlp_shared(segmap)\n        gamma = self.mlp_gamma(actv)\n        beta = self.mlp_beta(actv)\n\n        # apply scale and bias\n        out = normalized * (1 + gamma) + beta\n\n        return out\n"
  },
  {
    "path": "models/networks/sync_batchnorm/__init__.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : __init__.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nfrom .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d\nfrom .batchnorm import patch_sync_batchnorm, convert_model\nfrom .replicate import DataParallelWithCallback, patch_replication_callback\n"
  },
  {
    "path": "models/networks/sync_batchnorm/batchnorm.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : batchnorm.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport collections\nimport contextlib\n\nimport torch\nimport torch.nn.functional as F\n\nfrom torch.nn.modules.batchnorm import _BatchNorm\n\ntry:\n    from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast\nexcept ImportError:\n    ReduceAddCoalesced = Broadcast = None\n\ntry:\n    from jactorch.parallel.comm import SyncMaster\n    from jactorch.parallel.data_parallel import JacDataParallel as DataParallelWithCallback\nexcept ImportError:\n    from .comm import SyncMaster\n    from .replicate import DataParallelWithCallback\n\n__all__ = [\n    'SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d',\n    'patch_sync_batchnorm', 'convert_model'\n]\n\n\ndef _sum_ft(tensor):\n    \"\"\"sum over the first and last dimention\"\"\"\n    return tensor.sum(dim=0).sum(dim=-1)\n\n\ndef _unsqueeze_ft(tensor):\n    \"\"\"add new dimensions at the front and the tail\"\"\"\n    return tensor.unsqueeze(0).unsqueeze(-1)\n\n\n_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])\n_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])\n\n\nclass _SynchronizedBatchNorm(_BatchNorm):\n    def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):\n        assert ReduceAddCoalesced is not None, 'Can not use Synchronized Batch Normalization without CUDA support.'\n\n        super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)\n\n        self._sync_master = SyncMaster(self._data_parallel_master)\n\n        self._is_parallel = False\n        self._parallel_id = None\n        self._slave_pipe = None\n\n    def forward(self, input):\n        
# If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.\n        if not (self._is_parallel and self.training):\n            return F.batch_norm(\n                input, self.running_mean, self.running_var, self.weight, self.bias,\n                self.training, self.momentum, self.eps)\n\n        # Resize the input to (B, C, -1).\n        input_shape = input.size()\n        input = input.view(input.size(0), self.num_features, -1)\n\n        # Compute the sum and square-sum.\n        sum_size = input.size(0) * input.size(2)\n        input_sum = _sum_ft(input)\n        input_ssum = _sum_ft(input ** 2)\n\n        # Reduce-and-broadcast the statistics.\n        if self._parallel_id == 0:\n            mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))\n        else:\n            mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))\n\n        # Compute the output.\n        if self.affine:\n            # MJY:: Fuse the multiplication for speed.\n            output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)\n        else:\n            output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)\n\n        # Reshape it.\n        return output.view(input_shape)\n\n    def __data_parallel_replicate__(self, ctx, copy_id):\n        self._is_parallel = True\n        self._parallel_id = copy_id\n\n        # parallel_id == 0 means master device.\n        if self._parallel_id == 0:\n            ctx.sync_master = self._sync_master\n        else:\n            self._slave_pipe = ctx.sync_master.register_slave(copy_id)\n\n    def _data_parallel_master(self, intermediates):\n        \"\"\"Reduce the sum and square-sum, compute the statistics, and broadcast it.\"\"\"\n\n        # Always using same \"device order\" makes the ReduceAdd operation faster.\n        # Thanks to:: Tete Xiao (http://tetexiao.com/)\n        
intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())\n\n        to_reduce = [i[1][:2] for i in intermediates]\n        to_reduce = [j for i in to_reduce for j in i]  # flatten\n        target_gpus = [i[1].sum.get_device() for i in intermediates]\n\n        sum_size = sum([i[1].sum_size for i in intermediates])\n        sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)\n        mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)\n\n        broadcasted = Broadcast.apply(target_gpus, mean, inv_std)\n\n        outputs = []\n        for i, rec in enumerate(intermediates):\n            outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))\n\n        return outputs\n\n    def _compute_mean_std(self, sum_, ssum, size):\n        \"\"\"Compute the mean and standard-deviation with sum and square-sum. This method\n        also maintains the moving average on the master device.\"\"\"\n        assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'\n        mean = sum_ / size\n        sumvar = ssum - sum_ * mean\n        unbias_var = sumvar / (size - 1)\n        bias_var = sumvar / size\n\n        if hasattr(torch, 'no_grad'):\n            with torch.no_grad():\n                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data\n                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data\n        else:\n            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data\n            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data\n\n        return mean, bias_var.clamp(self.eps) ** -0.5\n\n\nclass SynchronizedBatchNorm1d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a\n    mini-batch.\n\n    .. 
math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm1d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is also easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of size\n            `batch_size x num_features [x width]`\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. 
Default: ``True``\n\n    Shape::\n        - Input: :math:`(N, C)` or :math:`(N, C, L)`\n        - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm1d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm1d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 2 and input.dim() != 3:\n            raise ValueError('expected 2D or 3D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm1d, self)._check_input_dim(input)\n\n\nclass SynchronizedBatchNorm2d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Batch Normalization over a 4d input that is seen as a mini-batch\n    of 3d inputs\n\n    .. math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm2d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is also easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running 
estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of\n            size batch_size x num_features x height x width\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. Default: ``True``\n\n    Shape::\n        - Input: :math:`(N, C, H, W)`\n        - Output: :math:`(N, C, H, W)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm2d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm2d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 4:\n            raise ValueError('expected 4D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm2d, self)._check_input_dim(input)\n\n\nclass SynchronizedBatchNorm3d(_SynchronizedBatchNorm):\n    r\"\"\"Applies Batch Normalization over a 5d input that is seen as a mini-batch\n    of 4d inputs\n\n    .. 
math::\n\n        y = \\frac{x - mean[x]}{ \\sqrt{Var[x] + \\epsilon}} * gamma + beta\n\n    This module differs from the built-in PyTorch BatchNorm3d as the mean and\n    standard-deviation are reduced across all devices during training.\n\n    For example, when one uses `nn.DataParallel` to wrap the network during\n    training, PyTorch's implementation normalizes the tensor on each device using\n    the statistics only on that device, which accelerates the computation and\n    is also easy to implement, but the statistics might be inaccurate.\n    Instead, in this synchronized version, the statistics will be computed\n    over all training samples distributed on multiple devices.\n\n    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same\n    as the built-in PyTorch implementation.\n\n    The mean and standard-deviation are calculated per-dimension over\n    the mini-batches and gamma and beta are learnable parameter vectors\n    of size C (where C is the input size).\n\n    During training, this layer keeps a running estimate of its computed mean\n    and variance. The running sum is kept with a default momentum of 0.1.\n\n    During evaluation, this running mean/variance is used for normalization.\n\n    Because the BatchNorm is done over the `C` dimension, computing statistics\n    on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm\n    or Spatio-temporal BatchNorm\n\n    Args:\n        num_features: num_features from an expected input of\n            size batch_size x num_features x depth x height x width\n        eps: a value added to the denominator for numerical stability.\n            Default: 1e-5\n        momentum: the value used for the running_mean and running_var\n            computation. Default: 0.1\n        affine: a boolean value that when set to ``True``, gives the layer learnable\n            affine parameters. 
Default: ``True``\n\n    Shape::\n        - Input: :math:`(N, C, D, H, W)`\n        - Output: :math:`(N, C, D, H, W)` (same shape as input)\n\n    Examples:\n        >>> # With Learnable Parameters\n        >>> m = SynchronizedBatchNorm3d(100)\n        >>> # Without Learnable Parameters\n        >>> m = SynchronizedBatchNorm3d(100, affine=False)\n        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))\n        >>> output = m(input)\n    \"\"\"\n\n    def _check_input_dim(self, input):\n        if input.dim() != 5:\n            raise ValueError('expected 5D input (got {}D input)'\n                             .format(input.dim()))\n        super(SynchronizedBatchNorm3d, self)._check_input_dim(input)\n\n\n@contextlib.contextmanager\ndef patch_sync_batchnorm():\n    import torch.nn as nn\n\n    backup = nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d\n\n    nn.BatchNorm1d = SynchronizedBatchNorm1d\n    nn.BatchNorm2d = SynchronizedBatchNorm2d\n    nn.BatchNorm3d = SynchronizedBatchNorm3d\n\n    yield\n\n    nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d = backup\n\n\ndef convert_model(module):\n    \"\"\"Traverse the input module and its children recursively\n       and replace all instances of torch.nn.modules.batchnorm.BatchNorm*N*d\n       with SynchronizedBatchNorm*N*d\n\n    Args:\n        module: the input module to be converted to a SyncBN model\n\n    Examples:\n        >>> import torch.nn as nn\n        >>> import torchvision\n        >>> # m is a standard pytorch model\n        >>> m = torchvision.models.resnet18(True)\n        >>> m = nn.DataParallel(m)\n        >>> # after convert, m is using SyncBN\n        >>> m = convert_model(m)\n    \"\"\"\n    if isinstance(module, torch.nn.DataParallel):\n        mod = module.module\n        mod = convert_model(mod)\n        mod = DataParallelWithCallback(mod)\n        return mod\n\n    mod = module\n    for pth_module, sync_module in zip([torch.nn.modules.batchnorm.BatchNorm1d,\n                    
                    torch.nn.modules.batchnorm.BatchNorm2d,\n                                        torch.nn.modules.batchnorm.BatchNorm3d],\n                                       [SynchronizedBatchNorm1d,\n                                        SynchronizedBatchNorm2d,\n                                        SynchronizedBatchNorm3d]):\n        if isinstance(module, pth_module):\n            mod = sync_module(module.num_features, module.eps, module.momentum, module.affine)\n            mod.running_mean = module.running_mean\n            mod.running_var = module.running_var\n            if module.affine:\n                mod.weight.data = module.weight.data.clone().detach()\n                mod.bias.data = module.bias.data.clone().detach()\n\n    for name, child in module.named_children():\n        mod.add_module(name, convert_model(child))\n\n    return mod\n"
  },
  {
    "path": "models/networks/sync_batchnorm/batchnorm_reimpl.py",
    "content": "#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\n# File   : batchnorm_reimpl.py\n# Author : acgtyrant\n# Date   : 11/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.init as init\n\n__all__ = ['BatchNorm2dReimpl']\n\n\nclass BatchNorm2dReimpl(nn.Module):\n    \"\"\"\n    A re-implementation of batch normalization, used for testing the numerical\n    stability.\n\n    Author: acgtyrant\n    See also:\n    https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues/14\n    \"\"\"\n    def __init__(self, num_features, eps=1e-5, momentum=0.1):\n        super().__init__()\n\n        self.num_features = num_features\n        self.eps = eps\n        self.momentum = momentum\n        self.weight = nn.Parameter(torch.empty(num_features))\n        self.bias = nn.Parameter(torch.empty(num_features))\n        self.register_buffer('running_mean', torch.zeros(num_features))\n        self.register_buffer('running_var', torch.ones(num_features))\n        self.reset_parameters()\n\n    def reset_running_stats(self):\n        self.running_mean.zero_()\n        self.running_var.fill_(1)\n\n    def reset_parameters(self):\n        self.reset_running_stats()\n        init.uniform_(self.weight)\n        init.zeros_(self.bias)\n\n    def forward(self, input_):\n        batchsize, channels, height, width = input_.size()\n        numel = batchsize * height * width\n        input_ = input_.permute(1, 0, 2, 3).contiguous().view(channels, numel)\n        sum_ = input_.sum(1)\n        sum_of_square = input_.pow(2).sum(1)\n        mean = sum_ / numel\n        sumvar = sum_of_square - sum_ * mean\n\n        self.running_mean = (\n                (1 - self.momentum) * self.running_mean\n                + self.momentum * mean.detach()\n        )\n        unbias_var = sumvar / (numel - 1)\n        
self.running_var = (\n                (1 - self.momentum) * self.running_var\n                + self.momentum * unbias_var.detach()\n        )\n\n        bias_var = sumvar / numel\n        inv_std = 1 / (bias_var + self.eps).pow(0.5)\n        output = (\n                (input_ - mean.unsqueeze(1)) * inv_std.unsqueeze(1) *\n                self.weight.unsqueeze(1) + self.bias.unsqueeze(1))\n\n        return output.view(channels, batchsize, height, width).permute(1, 0, 2, 3).contiguous()\n\n"
  },
  {
    "path": "models/networks/sync_batchnorm/comm.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : comm.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n# \n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport queue\nimport collections\nimport threading\n\n__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster']\n\n\nclass FutureResult(object):\n    \"\"\"A thread-safe future implementation. Used only as one-to-one pipe.\"\"\"\n\n    def __init__(self):\n        self._result = None\n        self._lock = threading.Lock()\n        self._cond = threading.Condition(self._lock)\n\n    def put(self, result):\n        with self._lock:\n            assert self._result is None, 'Previous result has\\'t been fetched.'\n            self._result = result\n            self._cond.notify()\n\n    def get(self):\n        with self._lock:\n            if self._result is None:\n                self._cond.wait()\n\n            res = self._result\n            self._result = None\n            return res\n\n\n_MasterRegistry = collections.namedtuple('MasterRegistry', ['result'])\n_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result'])\n\n\nclass SlavePipe(_SlavePipeBase):\n    \"\"\"Pipe for master-slave communication.\"\"\"\n\n    def run_slave(self, msg):\n        self.queue.put((self.identifier, msg))\n        ret = self.result.get()\n        self.queue.put(True)\n        return ret\n\n\nclass SyncMaster(object):\n    \"\"\"An abstract `SyncMaster` object.\n\n    - During the replication, as the data parallel will trigger an callback of each module, all slave devices should\n    call `register(id)` and obtain an `SlavePipe` to communicate with the master.\n    - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected,\n    and passed to a registered callback.\n    - After receiving the messages, the 
master device gathers the information and determines the message to be passed\n    back to each slave device.\n    \"\"\"\n\n    def __init__(self, master_callback):\n        \"\"\"\n\n        Args:\n            master_callback: a callback to be invoked after having collected messages from slave devices.\n        \"\"\"\n        self._master_callback = master_callback\n        self._queue = queue.Queue()\n        self._registry = collections.OrderedDict()\n        self._activated = False\n\n    def __getstate__(self):\n        return {'master_callback': self._master_callback}\n\n    def __setstate__(self, state):\n        self.__init__(state['master_callback'])\n\n    def register_slave(self, identifier):\n        \"\"\"\n        Register a slave device.\n\n        Args:\n            identifier: an identifier, usually the device id.\n\n        Returns: a `SlavePipe` object which can be used to communicate with the master device.\n\n        \"\"\"\n        if self._activated:\n            assert self._queue.empty(), 'Queue is not clean before next initialization.'\n            self._activated = False\n            self._registry.clear()\n        future = FutureResult()\n        self._registry[identifier] = _MasterRegistry(future)\n        return SlavePipe(identifier, self._queue, future)\n\n    def run_master(self, master_msg):\n        \"\"\"\n        Main entry for the master device in each forward pass.\n        The messages are first collected from each device (including the master device), and then\n        a callback is invoked to compute the message to be sent back to each device\n        (including the master device).\n\n        Args:\n            master_msg: the message that the master wants to send to itself. This will be placed as the first\n            message when calling `master_callback`. 
For detailed usage, see `_SynchronizedBatchNorm` for an example.\n\n    Returns: the message to be sent back to the master device.\n\n        \"\"\"\n        self._activated = True\n\n        intermediates = [(0, master_msg)]\n        for i in range(self.nr_slaves):\n            intermediates.append(self._queue.get())\n\n        results = self._master_callback(intermediates)\n        assert results[0][0] == 0, 'The first result should belong to the master.'\n\n        for i, res in results:\n            if i == 0:\n                continue\n            self._registry[i].result.put(res)\n\n        for i in range(self.nr_slaves):\n            assert self._queue.get() is True\n\n        return results[0][1]\n\n    @property\n    def nr_slaves(self):\n        return len(self._registry)\n"
  },
  {
    "path": "models/networks/sync_batchnorm/replicate.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : replicate.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n# \n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport functools\n\nfrom torch.nn.parallel.data_parallel import DataParallel\n\n__all__ = [\n    'CallbackContext',\n    'execute_replication_callbacks',\n    'DataParallelWithCallback',\n    'patch_replication_callback'\n]\n\n\nclass CallbackContext(object):\n    pass\n\n\ndef execute_replication_callbacks(modules):\n    \"\"\"\n    Execute an replication callback `__data_parallel_replicate__` on each module created by original replication.\n\n    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`\n\n    Note that, as all modules are isomorphism, we assign each sub-module with a context\n    (shared among multiple copies of this module on different devices).\n    Through this context, different copies can share some information.\n\n    We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback\n    of any slave copies.\n    \"\"\"\n    master_copy = modules[0]\n    nr_modules = len(list(master_copy.modules()))\n    ctxs = [CallbackContext() for _ in range(nr_modules)]\n\n    for i, module in enumerate(modules):\n        for j, m in enumerate(module.modules()):\n            if hasattr(m, '__data_parallel_replicate__'):\n                m.__data_parallel_replicate__(ctxs[j], i)\n\n\nclass DataParallelWithCallback(DataParallel):\n    \"\"\"\n    Data Parallel with a replication callback.\n\n    An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by\n    original `replicate` function.\n    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`\n\n    Examples:\n        > sync_bn = 
SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])\n        # sync_bn.__data_parallel_replicate__ will be invoked.\n    \"\"\"\n\n    def replicate(self, module, device_ids):\n        modules = super(DataParallelWithCallback, self).replicate(module, device_ids)\n        execute_replication_callbacks(modules)\n        return modules\n\n\ndef patch_replication_callback(data_parallel):\n    \"\"\"\n    Monkey-patch an existing `DataParallel` object. Add the replication callback.\n    Useful when you have customized `DataParallel` implementation.\n\n    Examples:\n        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])\n        > patch_replication_callback(sync_bn)\n        # this is equivalent to\n        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)\n        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])\n    \"\"\"\n\n    assert isinstance(data_parallel, DataParallel)\n\n    old_replicate = data_parallel.replicate\n\n    @functools.wraps(old_replicate)\n    def new_replicate(module, device_ids):\n        modules = old_replicate(module, device_ids)\n        execute_replication_callbacks(modules)\n        return modules\n\n    data_parallel.replicate = new_replicate\n"
  },
  {
    "path": "models/networks/sync_batchnorm/unittest.py",
    "content": "# -*- coding: utf-8 -*-\n# File   : unittest.py\n# Author : Jiayuan Mao\n# Email  : maojiayuan@gmail.com\n# Date   : 27/01/2018\n#\n# This file is part of Synchronized-BatchNorm-PyTorch.\n# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch\n# Distributed under MIT License.\n\nimport unittest\nimport torch\n\n\nclass TorchTestCase(unittest.TestCase):\n    def assertTensorClose(self, x, y):\n        adiff = float((x - y).abs().max())\n        if (y == 0).all():\n            rdiff = 'NaN'\n        else:\n            rdiff = float((adiff / y).abs().max())\n\n        message = (\n            'Tensor close check failed\\n'\n            'adiff={}\\n'\n            'rdiff={}\\n'\n        ).format(adiff, rdiff)\n        self.assertTrue(torch.allclose(x, y), message)\n\n"
  },
  {
    "path": "models/pix2pix_model.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport torch\nimport models.networks as networks\nimport util.util as util\nimport cv2 as cv\nimport torch.nn.functional as F\n\nclass Pix2pixModel(torch.nn.Module):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        networks.modify_commandline_options(parser, is_train)\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        self.FloatTensor = torch.cuda.FloatTensor if self.use_gpu() \\\n            else torch.FloatTensor\n        self.ByteTensor = torch.cuda.ByteTensor if self.use_gpu() \\\n            else torch.ByteTensor\n\n        self.netG, self.netD, self.netE = self.initialize_networks(opt)\n\n        # set loss functions\n        if opt.isTrain:\n            self.criterionGAN = networks.GANLoss(\n                opt.gan_mode, tensor=self.FloatTensor, opt=self.opt)\n            self.criterionFeat = torch.nn.L1Loss()\n            if not opt.no_vgg_loss:\n                self.criterionVGG = networks.VGGLoss(self.opt.gpu_ids, self.opt.vgg_path)\n            if opt.use_vae:\n                self.KLDLoss = networks.KLDLoss()\n\n    # Entry point for all calls involving forward pass\n    # of deep networks. 
We use this approach because the DataParallel module\n    # can't parallelize custom functions; we branch to different\n    # routines based on |mode|.\n    def forward(self, data, mode):\n        input_semantics, real_image = self.preprocess_input(data)\n\n        if mode == 'generator':\n            g_loss, generated = self.compute_generator_loss(\n                input_semantics, real_image)\n            return g_loss, generated\n        elif mode == 'discriminator':\n            d_loss = self.compute_discriminator_loss(\n                input_semantics, real_image)\n            return d_loss\n        elif mode == 'encode_only':\n            z, mu, logvar = self.encode_z(real_image)\n            return mu, logvar\n        elif mode == 'inference':\n            with torch.no_grad():\n                fake_image = self.vis_test(input_semantics, real_image)\n            return fake_image\n        else:\n            raise ValueError(\"|mode| is invalid\")\n\n    def create_optimizers(self, opt):\n        G_params = list(self.netG.parameters())\n        if opt.use_vae:\n            G_params += list(self.netE.parameters())\n        if opt.isTrain:\n            D_params = list(self.netD.parameters())\n\n        if opt.no_TTUR:\n            beta1, beta2 = opt.beta1, opt.beta2\n            G_lr, D_lr = opt.lr, opt.lr\n        else:\n            beta1, beta2 = 0, 0.9\n            G_lr, D_lr = opt.lr / 2, opt.lr * 2\n\n        optimizer_G = torch.optim.Adam(G_params, lr=G_lr, betas=(beta1, beta2))\n        optimizer_D = torch.optim.Adam(D_params, lr=D_lr, betas=(beta1, beta2))\n\n        return optimizer_G, optimizer_D\n\n    def save(self, epoch):\n        util.save_network(self.netG, 'G', epoch, self.opt)\n        util.save_network(self.netD, 'D', epoch, self.opt)\n        if self.opt.use_vae:\n            util.save_network(self.netE, 'E', epoch, self.opt)\n\n    ############################################################################\n    # Private helper methods\n    
############################################################################\n\n    def initialize_networks(self, opt):\n        netG = networks.define_G(opt)\n        # print(netG)\n        netD = networks.define_D(opt) if opt.isTrain else None\n        netE = networks.define_E(opt) if opt.use_vae else None\n\n        if not opt.isTrain or opt.continue_train:\n            netG = util.load_network(netG, 'G', opt.which_epoch, opt)\n            if opt.isTrain:\n                netD = util.load_network(netD, 'D', opt.which_epoch, opt)\n            if opt.use_vae:\n                netE = util.load_network(netE, 'E', opt.which_epoch, opt)\n\n        return netG, netD, netE\n\n    # preprocess the input, such as moving the tensors to GPUs and\n    # transforming the label map to one-hot encoding\n    # |data|: dictionary of the input data\n\n    def preprocess_input(self, data):\n        # move to GPU and change data types\n        data['label'] = data['label'].long()\n        if self.use_gpu():\n            data['label'] = data['label'].cuda()\n            data['instance'] = data['instance'].cuda()\n            data['image'] = data['image'].cuda()\n\n        # create one-hot label map\n        label_map = data['label']\n        bs, _, h, w = label_map.size()\n        nc = self.opt.label_nc + 1 if self.opt.contain_dontcare_label \\\n            else self.opt.label_nc\n        input_label = self.FloatTensor(bs, nc, h, w).zero_()\n        input_semantics = input_label.scatter_(1, label_map, 1.0)\n\n        # concatenate instance map if it exists\n        if not self.opt.no_instance:\n            inst_map = data['instance']\n            instance_edge_map = self.get_edges(inst_map)\n            input_semantics = torch.cat((input_semantics, instance_edge_map), dim=1)\n\n        return input_semantics, data['image']\n\n    def compute_generator_loss(self, input_semantics, real_image):\n        G_losses = {}\n\n        fake_image, KLD_loss = self.generate_fake(\n            
input_semantics, real_image, compute_kld_loss=self.opt.use_vae)\n\n        if self.opt.use_vae:\n            G_losses['KLD'] = KLD_loss\n\n        pred_fake, pred_real = self.discriminate(\n            input_semantics, fake_image, real_image)\n\n        G_losses['GAN'] = self.criterionGAN(pred_fake, True,\n                                            for_discriminator=False)\n\n        if not self.opt.no_ganFeat_loss:\n            num_D = len(pred_fake)\n            GAN_Feat_loss = self.FloatTensor(1).fill_(0)\n            for i in range(num_D):  # for each discriminator\n                # last output is the final prediction, so we exclude it\n                num_intermediate_outputs = len(pred_fake[i]) - 1\n                for j in range(num_intermediate_outputs):  # for each layer output\n                    unweighted_loss = self.criterionFeat(\n                        pred_fake[i][j], pred_real[i][j].detach())\n                    GAN_Feat_loss += unweighted_loss * self.opt.lambda_feat / num_D\n            G_losses['GAN_Feat'] = GAN_Feat_loss\n\n        if not self.opt.no_vgg_loss:\n            G_losses['VGG'] = self.criterionVGG(fake_image, real_image) \\\n                * self.opt.lambda_vgg\n\n        return G_losses, fake_image\n\n    def compute_discriminator_loss(self, input_semantics, real_image):\n        D_losses = {}\n        with torch.no_grad():\n            fake_image, _ = self.generate_fake(input_semantics, real_image)\n            fake_image = fake_image.detach()\n            fake_image.requires_grad_()\n\n        pred_fake, pred_real = self.discriminate(\n            input_semantics, fake_image, real_image)\n\n        D_losses['D_Fake'] = self.criterionGAN(pred_fake, False,\n                                               for_discriminator=True)\n        D_losses['D_real'] = self.criterionGAN(pred_real, True,\n                                               for_discriminator=True)\n\n        return D_losses\n\n    def encode_z(self, real_image):\n 
       mu, logvar = self.netE(real_image)\n        z = self.reparameterize(mu, logvar)\n        return z, mu, logvar\n\n    def generate_fake(self, input_semantics, real_image, compute_kld_loss=False):\n        z = None\n        KLD_loss = None\n        if self.opt.use_vae:\n            z, mu, logvar = self.encode_z(real_image)\n            if compute_kld_loss:\n                KLD_loss = self.KLDLoss(mu, logvar) * self.opt.lambda_kld\n\n        fake_image = self.netG(input_semantics, z=z)\n\n        assert (not compute_kld_loss) or self.opt.use_vae, \\\n            \"You cannot compute KLD loss if opt.use_vae == False\"\n\n        return fake_image, KLD_loss\n\n    def vis_test(self, input_semantics, times=1):\n        fake_image = []\n        for j in range(times):\n            fake_image.append(self.netG(input_semantics, z=None))\n        return fake_image\n\n    # Given fake and real image, return the prediction of discriminator\n    # for each fake and real image.\n\n    def discriminate(self, input_semantics, fake_image, real_image):\n        fake_concat = torch.cat([input_semantics, fake_image], dim=1)\n        real_concat = torch.cat([input_semantics, real_image], dim=1)\n\n        # In Batch Normalization, the fake and real images are\n        # recommended to be in the same batch to avoid disparate\n        # statistics in fake and real images.\n        # So both fake and real images are fed to D all at once.\n        fake_and_real = torch.cat([fake_concat, real_concat], dim=0)\n\n        discriminator_out = self.netD(fake_and_real)\n\n        pred_fake, pred_real = self.divide_pred(discriminator_out)\n\n        return pred_fake, pred_real\n\n    # Take the prediction of fake and real images from the combined batch\n    def divide_pred(self, pred):\n        # the prediction contains the intermediate outputs of multiscale GAN,\n        # so it's usually a list\n        if type(pred) == list:\n            fake = []\n            real = []\n            for p 
in pred:\n                fake.append([tensor[:tensor.size(0) // 2] for tensor in p])\n                real.append([tensor[tensor.size(0) // 2:] for tensor in p])\n        else:\n            fake = pred[:pred.size(0) // 2]\n            real = pred[pred.size(0) // 2:]\n\n        return fake, real\n\n    def get_edges(self, t):\n        edge = self.ByteTensor(t.size()).zero_()\n        edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]).byte()\n        edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]).byte()\n        edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]).byte()\n        edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]).byte()\n        return edge.float()\n\n    def reparameterize(self, mu, logvar):\n        std = torch.exp(0.5 * logvar)\n        eps = torch.randn_like(std)\n        return eps.mul(std) + mu\n\n    def use_gpu(self):\n        return len(self.opt.gpu_ids) > 0\n"
  },
  {
    "path": "models/smis_model.py",
    "content": "import torch\nimport models.networks as networks\nimport util.util as util\nimport cv2 as cv\nimport torch.nn.functional as F\nimport numpy as np\n\nclass SmisModel(torch.nn.Module):\n    @staticmethod\n    def modify_commandline_options(parser, is_train):\n        networks.modify_commandline_options(parser, is_train)\n        return parser\n\n    def __init__(self, opt):\n        super().__init__()\n        self.opt = opt\n        self.FloatTensor = torch.cuda.FloatTensor if self.use_gpu() \\\n            else torch.FloatTensor\n        self.ByteTensor = torch.cuda.ByteTensor if self.use_gpu() \\\n            else torch.ByteTensor\n\n        self.netG, self.netD, self.netE = self.initialize_networks(opt)\n\n        # set loss functions\n        if opt.isTrain:\n            self.criterionGAN = networks.GANLoss(\n                opt.gan_mode, tensor=self.FloatTensor, opt=self.opt)\n            self.criterionFeat = torch.nn.L1Loss()\n            if not opt.no_vgg_loss:\n                self.criterionVGG = networks.VGGLoss(self.opt.gpu_ids, self.opt.vgg_path)\n            if opt.use_vae:\n                self.KLDLoss = networks.KLDLoss()\n\n    # Entry point for all calls involving forward pass\n    # of deep networks. 
Because the DataParallel module\n    # can't parallelize custom functions, we branch to different\n    # routines based on |mode|.\n    def forward(self, data, mode):\n        input_semantics, real_image = self.preprocess_input(data)\n\n        if mode == 'generator':\n            g_loss, generated = self.compute_generator_loss(\n                input_semantics, real_image)\n            return g_loss, generated\n        elif mode == 'discriminator':\n            d_loss = self.compute_discriminator_loss(\n                input_semantics, real_image)\n            return d_loss\n        elif mode == 'encode_only':\n            z, mu, logvar = self.encode_z(real_image)\n            return mu, logvar\n        elif mode == 'inference':\n            with torch.no_grad():\n                if self.opt.test_mask != -1:\n                    fake_image = self.vis_test(input_semantics, times=self.opt.test_times, test_mask=self.opt.test_mask)\n                else:\n                    fake_image = self.vis_test(input_semantics, times=self.opt.test_times)\n            return fake_image\n        else:\n            raise ValueError(\"|mode| is invalid\")\n\n    def create_optimizers(self, opt):\n        G_params = list(self.netG.parameters())\n        if opt.use_vae:\n            G_params += list(self.netE.parameters())\n        if opt.isTrain:\n            D_params = list(self.netD.parameters())\n\n        if opt.no_TTUR:\n            beta1, beta2 = opt.beta1, opt.beta2\n            G_lr, D_lr = opt.lr, opt.lr\n        else:\n            beta1, beta2 = 0, 0.9\n            G_lr, D_lr = opt.lr / 2, opt.lr * 2\n\n        optimizer_G = torch.optim.Adam(G_params, lr=G_lr, betas=(beta1, beta2))\n        optimizer_D = torch.optim.Adam(D_params, lr=D_lr, betas=(beta1, beta2))\n\n        return optimizer_G, optimizer_D\n\n    def save(self, 
epoch):\n        util.save_network(self.netG, 'G', epoch, self.opt)\n        util.save_network(self.netD, 'D', epoch, self.opt)\n        if self.opt.use_vae:\n            util.save_network(self.netE, 'E', epoch, self.opt)\n            # util.save_network(self.netE_edge, 'E_edge', epoch, self.opt)\n\n    ############################################################################\n    # Private helper methods\n    ############################################################################\n\n    def initialize_networks(self, opt):\n        netG = networks.define_G(opt)\n        # if not opt.isTrain:\n        #     print(netG)\n        netD = networks.define_D(opt) if opt.isTrain else None\n        netE = networks.define_E(opt) if opt.use_vae and opt.isTrain else None\n\n        if not opt.isTrain or opt.continue_train:\n            netG = util.load_network(netG, 'G', opt.which_epoch, opt)\n            if opt.isTrain:\n                netD = util.load_network(netD, 'D', opt.which_epoch, opt)\n                if opt.use_vae:\n                    netE = util.load_network(netE, 'E', opt.which_epoch, opt)\n                # netE_edge = util.load_network(netE_edge, '')\n\n        return netG, netD, netE\n\n    # preprocess the input, such as moving the tensors to GPUs and\n    # transforming the label map to one-hot encoding\n    # |data|: dictionary of the input data\n    def preprocess_input(self, data):\n        # move to GPU and change data types\n        data['label'] = data['label'].long()\n        if self.use_gpu():\n            data['label'] = data['label'].cuda()\n            data['instance'] = data['instance'].cuda()\n            data['image'] = data['image'].cuda()\n            # data['edge'] = data['edge'].cuda()\n\n        # create one-hot label map\n        label_map = data['label']\n        bs, _, h, w = label_map.size()\n        nc = self.opt.label_nc + 1 if self.opt.contain_dontcare_label \\\n            else self.opt.label_nc\n        input_label = 
self.FloatTensor(bs, nc, h, w).zero_()\n        input_semantics = input_label.scatter_(1, label_map, 1.0)\n\n        # concatenate instance map if it exists\n        if not self.opt.no_instance:\n            inst_map = data['instance']\n            instance_edge_map = self.get_edges(inst_map)\n            input_semantics = torch.cat((input_semantics, instance_edge_map), dim=1)\n\n        return input_semantics, data['image']\n\n    def compute_generator_loss(self, input_semantics, real_image):\n        G_losses = {}\n\n        fake_image, KLD_loss, CODE_loss = self.generate_fake(\n            input_semantics, real_image, compute_kld_loss=self.opt.use_vae)\n\n        if self.opt.use_vae:\n            G_losses['KLD'] = KLD_loss\n            # G_losses['CODE'] = CODE_loss\n\n        pred_fake, pred_real = self.discriminate(\n            input_semantics, fake_image, real_image)\n\n        G_losses['GAN'] = self.criterionGAN(pred_fake, True,\n                                            for_discriminator=False)\n\n        if not self.opt.no_ganFeat_loss:\n            num_D = len(pred_fake)\n            GAN_Feat_loss = self.FloatTensor(1).fill_(0)\n            for i in range(num_D):  # for each discriminator\n                # last output is the final prediction, so we exclude it\n                num_intermediate_outputs = len(pred_fake[i]) - 1\n                for j in range(num_intermediate_outputs):  # for each layer output\n                    unweighted_loss = self.criterionFeat(\n                        pred_fake[i][j], pred_real[i][j].detach())\n                    GAN_Feat_loss += unweighted_loss * self.opt.lambda_feat / num_D\n            G_losses['GAN_Feat'] = GAN_Feat_loss\n\n        if not self.opt.no_vgg_loss:\n            G_losses['VGG'] = self.criterionVGG(fake_image, real_image) \\\n                * self.opt.lambda_vgg\n\n        return G_losses, fake_image\n\n    def compute_discriminator_loss(self, input_semantics, real_image):\n        D_losses = {}\n  
      with torch.no_grad():\n            fake_image, _, _ = self.generate_fake(input_semantics, real_image)\n            fake_image = fake_image.detach()\n            fake_image.requires_grad_()\n\n        pred_fake, pred_real = self.discriminate(\n            input_semantics, fake_image, real_image)\n\n        D_losses['D_Fake'] = self.criterionGAN(pred_fake, False,\n                                               for_discriminator=True)\n        D_losses['D_real'] = self.criterionGAN(pred_real, True,\n                                               for_discriminator=True)\n\n        return D_losses\n\n    def encode_z(self, real_image):\n        mu, logvar = self.netE(real_image)\n        z = self.reparameterize(mu, logvar)\n        return z, mu, logvar\n\n    def trans_img(self, input_semantics, real_image):\n        images = None\n        seg_range = input_semantics.size()[1]\n        if self.opt.dataset_mode == 'cityscapes':\n            seg_range -= 1\n        for i in range(input_semantics.size(0)):\n            resize_image = None\n            for n in range(0, seg_range):\n                seg_image = real_image[i] * input_semantics[i][n]\n                # resize seg_image\n                c_sum = seg_image.sum(dim=0)\n                y_seg = c_sum.sum(dim=0)\n                x_seg = c_sum.sum(dim=1)\n                y_id = y_seg.nonzero()\n                if y_id.size()[0] == 0:\n                    seg_image = seg_image.unsqueeze(dim=0)\n                    # resize_image = torch.cat((resize_image, seg_image), dim=0)\n                    if resize_image is None:\n                        resize_image = seg_image\n                    else:\n                        resize_image = torch.cat((resize_image, seg_image), dim=1)\n                    continue\n                # print(y_id)\n                y_min = y_id[0][0]\n                y_max = y_id[-1][0]\n                x_id = x_seg.nonzero()\n                x_min = x_id[0][0]\n                x_max = 
x_id[-1][0]\n                seg_image = seg_image.unsqueeze(dim=0)\n                if self.opt.dataset_mode == 'cityscapes':\n                    seg_image = F.interpolate(seg_image[:, :, x_min:x_max + 1, y_min:y_max + 1], size=[256, 512])\n                else:\n                    seg_image = F.interpolate(seg_image[:, :, x_min:x_max + 1, y_min:y_max + 1], size=[256, 256])\n                if resize_image is None:\n                    resize_image = seg_image\n                else:\n                    resize_image = torch.cat((resize_image, seg_image), dim=1)\n            if images is None:\n                images = resize_image\n            else:\n                images = torch.cat((images, resize_image), dim=0)\n        return images\n\n    def generate_fake(self, input_semantics, real_image, compute_kld_loss=False):\n        z = None\n        KLD_loss = None\n        # initialize CODE_loss unconditionally so the return statement below\n        # cannot raise UnboundLocalError when opt.use_vae is False\n        CODE_loss = None\n        if self.opt.use_vae:\n            images = self.trans_img(input_semantics, real_image)\n            z, mu, logvar = self.encode_z(images)\n            if compute_kld_loss:\n                KLD_loss = self.KLDLoss(mu, logvar) * self.opt.lambda_kld\n        fake_image = self.netG(input_semantics, z=z)\n\n        assert (not compute_kld_loss) or self.opt.use_vae, \\\n            \"You cannot compute KLD loss if opt.use_vae == False\"\n\n        return fake_image, KLD_loss, CODE_loss\n\n    def vis_test(self, input_semantics, times=1, test_mask=None):\n        fake_image = []\n        if self.opt.dataset_mode == 'cityscapes':\n            z = torch.randn(input_semantics.size(0), self.opt.label_nc, 8, 4 * 8).cuda()\n            for i in range(times):\n             
   if test_mask is not None:\n                    # the new slice must be on the same device as z (GPU)\n                    z[:, test_mask, :, :] = torch.randn(input_semantics.size(0), 8, 4 * 8).cuda()\n                else:\n                    z = torch.randn(input_semantics.size(0), self.opt.label_nc, 8, 4 * 8).cuda()\n                fake_image.append(\n                    self.netG(input_semantics, z=z.view(input_semantics.size(0), self.opt.label_nc * 8, 4, 8)))\n        else:\n            z = torch.randn(input_semantics.size(0), self.opt.semantic_nc, 8, 4 * 4)\n            for i in range(times):\n                if test_mask is not None:\n                    z[:, test_mask, :, :] = torch.randn(input_semantics.size(0), 8, 16)\n                else:\n                    z = torch.randn(input_semantics.size(0), self.opt.semantic_nc, 8, 4 * 4)\n                fake_image.append(self.netG(input_semantics,\n                                            z=z.view(input_semantics.size(0), self.opt.semantic_nc * 8, 4, 4).cuda()))\n        return fake_image\n\n    # Given fake and real image, return the prediction of discriminator\n    # for each fake and real image.\n    def discriminate(self, input_semantics, fake_image, real_image):\n        fake_concat = torch.cat([input_semantics, fake_image], dim=1)\n        real_concat = torch.cat([input_semantics, real_image], dim=1)\n\n        # In Batch Normalization, the fake and real images are\n        # recommended to be in the same batch to avoid disparate\n        # statistics in fake and real images.\n        # So both fake and real images are fed to D all at once.\n        fake_and_real = torch.cat([fake_concat, real_concat], dim=0)\n\n        discriminator_out = self.netD(fake_and_real)\n\n        pred_fake, pred_real = self.divide_pred(discriminator_out)\n\n        return pred_fake, pred_real\n\n    # Take the prediction of fake and real images from the combined batch\n    def divide_pred(self, pred):\n        # the prediction contains the intermediate outputs of multiscale GAN,\n        # so it's 
usually a list\n        if type(pred) == list:\n            fake = []\n            real = []\n            for p in pred:\n                fake.append([tensor[:tensor.size(0) // 2] for tensor in p])\n                real.append([tensor[tensor.size(0) // 2:] for tensor in p])\n        else:\n            fake = pred[:pred.size(0) // 2]\n            real = pred[pred.size(0) // 2:]\n\n        return fake, real\n\n    def get_edges(self, t):\n        edge = self.ByteTensor(t.size()).zero_()\n        edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]).byte()\n        edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]).byte()\n        edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]).byte()\n        edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]).byte()\n        return edge.float()\n\n    def reparameterize(self, mu, logvar):\n        std = torch.exp(0.5 * logvar)\n        eps = torch.randn_like(std)\n        return eps.mul(std) + mu\n\n    def use_gpu(self):\n        return len(self.opt.gpu_ids) > 0"
  },
  {
    "path": "options/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\""
  },
  {
    "path": "options/base_options.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport sys\nimport argparse\nimport os\nfrom util import util\nimport torch\nimport models\nimport data\nimport pickle\n\n\nclass BaseOptions():\n    def __init__(self):\n        self.initialized = False\n\n    def initialize(self, parser):\n        # experiment specifics\n        parser.add_argument('--name', type=str, default='label2coco',\n                            help='name of the experiment. It decides where to store samples and models')\n\n        parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 or 0,1,2 or 0,2; use -1 for CPU')\n        parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')\n        parser.add_argument('--model', type=str, default='smis', help='which model to use, smis or pix2pix')\n        parser.add_argument('--norm_G', type=str, default='spectralinstance',\n                            help='instance normalization or batch normalization')\n        parser.add_argument('--norm_D', type=str, default='spectralinstance',\n                            help='instance normalization or batch normalization')\n        parser.add_argument('--norm_E', type=str, default='spectralinstance',\n                            help='instance normalization or batch normalization')\n        parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc')\n\n        # input/output sizes\n        parser.add_argument('--batchSize', type=int, default=1, help='input batch size')\n        parser.add_argument('--preprocess_mode', type=str, default='scale_width_and_crop',\n                            help='scaling and cropping of images at load time.', choices=(\n            \"resize_and_crop\", \"crop\", \"scale_width\", \"scale_width_and_crop\", \"scale_shortside\",\n     
       \"scale_shortside_and_crop\", \"fixed\", \"none\"))\n        parser.add_argument('--load_size', type=int, default=1024,\n                            help='Scale images to this size. The final image will be cropped to --crop_size.')\n        parser.add_argument('--crop_size', type=int, default=512,\n                            help='Crop to the width of crop_size (after initially scaling the images to load_size)')\n        parser.add_argument('--aspect_ratio', type=float, default=1.0,\n                            help='The ratio width/height. The final height of the loaded image will be crop_size/aspect_ratio')\n        parser.add_argument('--label_nc', type=int, default=182,\n                            help='# of input label classes without unknown class. If you have an unknown class as a class label, specify --contain_dontcare_label.')\n        parser.add_argument('--contain_dontcare_label', action='store_true',\n                            help='if the label map contains the dontcare label (dontcare=255)')\n        parser.add_argument('--output_nc', type=int, default=3, help='# of output image channels')\n\n        # for setting inputs\n        parser.add_argument('--dataroot', type=str, default='./datasets/cityscapes/')\n        parser.add_argument('--dataset_mode', type=str, default='coco')\n        parser.add_argument('--serial_batches', action='store_true',\n                            help='if true, takes images in order to make batches, otherwise takes them randomly')\n        parser.add_argument('--no_flip', action='store_true',\n                            help='if specified, do not flip the images for data augmentation')\n        parser.add_argument('--nThreads', default=0, type=int, help='# threads for loading data')\n        parser.add_argument('--max_dataset_size', type=int, default=sys.maxsize,\n                            help='Maximum number of samples allowed per dataset. 
If the dataset directory contains more than max_dataset_size, only a subset is loaded.')\n        parser.add_argument('--load_from_opt_file', action='store_true',\n                            help='load the options from checkpoints and use that as default')\n        parser.add_argument('--cache_filelist_write', action='store_true',\n                            help='saves the current filelist into a text file, so that it loads faster')\n        parser.add_argument('--cache_filelist_read', action='store_true', help='reads from the file list cache')\n\n        # for displays\n        parser.add_argument('--display_winsize', type=int, default=400, help='display window size')\n\n        # for generator\n        parser.add_argument('--netG', type=str, default='spade',\n                            help='selects model to use for netG (pix2pixhd | spade)')\n        parser.add_argument('--netE', type=str, default='conv')\n        parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in first conv layer')\n        parser.add_argument('--init_type', type=str, default='xavier',\n                            help='network initialization [normal|xavier|kaiming|orthogonal]')\n        parser.add_argument('--init_variance', type=float, default=0.02,\n                            help='variance of the initialization distribution')\n        parser.add_argument('--z_dim', type=int, default=256,\n                            help=\"dimension of the latent z vector\")\n\n        # for instance-wise features\n        parser.add_argument('--no_instance', action='store_true',\n                            help='if specified, do *not* add instance map as input')\n        parser.add_argument('--nef', type=int, default=16, help='# of encoder filters in the first conv layer')\n        parser.add_argument('--use_vae', default=True, help='use encoder and vae loss')\n        parser.add_argument('--vgg_path', type=str, default='')\n        parser.add_argument('--clean_code', 
action='store_true')\n        parser.add_argument('--test_type', type=str, default='visual', help='visual | FID | LPIPS | Mask LPIPS | IS')\n        parser.add_argument('--test_times', type=int, default=1, )\n        parser.add_argument('--test_mask', type=int, default=-1, )\n        parser.add_argument('--no_spectral', action='store_true')\n        parser.add_argument('--resnet_n_downsample', type=int, default=3)\n        self.initialized = True\n        return parser\n\n    def gather_options(self):\n        # initialize parser with basic options\n        if not self.initialized:\n            parser = argparse.ArgumentParser(\n                formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n            parser = self.initialize(parser)\n\n        # get the basic options\n        opt, unknown = parser.parse_known_args()\n\n        # modify model-related parser options\n        model_name = opt.model\n        model_option_setter = models.get_option_setter(model_name)\n        parser = model_option_setter(parser, self.isTrain)\n\n        # modify dataset-related parser options\n        dataset_mode = opt.dataset_mode\n        dataset_option_setter = data.get_option_setter(dataset_mode)\n        parser = dataset_option_setter(parser, self.isTrain)\n\n        opt, unknown = parser.parse_known_args()\n\n        # if there is opt_file, load it.\n        # The previous default options will be overwritten\n        if opt.load_from_opt_file:\n            parser = self.update_options_from_file(parser, opt)\n\n        opt = parser.parse_args()\n        self.parser = parser\n        return opt\n\n    def print_options(self, opt):\n        message = ''\n        message += '----------------- Options ---------------\\n'\n        for k, v in sorted(vars(opt).items()):\n            comment = ''\n            default = self.parser.get_default(k)\n            if v != default:\n                comment = '\\t[default: %s]' % str(default)\n            message += '{:>25}: 
{:<30}{}\\n'.format(str(k), str(v), comment)\n        message += '----------------- End -------------------'\n        # print(message)\n\n    def option_file_path(self, opt, makedir=False):\n        expr_dir = os.path.join(opt.checkpoints_dir, opt.name)\n        if makedir:\n            util.mkdirs(expr_dir)\n        file_name = os.path.join(expr_dir, 'opt')\n        return file_name\n\n    def save_options(self, opt):\n        file_name = self.option_file_path(opt, makedir=True)\n        with open(file_name + '.txt', 'wt') as opt_file:\n            for k, v in sorted(vars(opt).items()):\n                comment = ''\n                default = self.parser.get_default(k)\n                if v != default:\n                    comment = '\\t[default: %s]' % str(default)\n                opt_file.write('{:>25}: {:<30}{}\\n'.format(str(k), str(v), comment))\n\n        with open(file_name + '.pkl', 'wb') as opt_file:\n            pickle.dump(opt, opt_file)\n\n    def update_options_from_file(self, parser, opt):\n        new_opt = self.load_options(opt)\n        for k, v in sorted(vars(opt).items()):\n            if hasattr(new_opt, k) and v != getattr(new_opt, k):\n                new_val = getattr(new_opt, k)\n                parser.set_defaults(**{k: new_val})\n        return parser\n\n    def load_options(self, opt):\n        file_name = self.option_file_path(opt, makedir=False)\n        new_opt = pickle.load(open(file_name + '.pkl', 'rb'))\n        return new_opt\n\n    def parse(self, save=False):\n\n        opt = self.gather_options()\n        opt.isTrain = self.isTrain  # train or test\n\n        self.print_options(opt)\n        if opt.isTrain:\n            self.save_options(opt)\n\n        # Set semantic_nc based on the option.\n        # This will be convenient in many places\n        if opt.model == 'smis' and opt.dataset_mode == 'cityscapes':\n            opt.semantic_nc = opt.label_nc + \\\n                              (1 if opt.contain_dontcare_label else 
0)\n        else:\n            opt.semantic_nc = opt.label_nc + \\\n                              (1 if opt.contain_dontcare_label else 0) \\\n                              + (0 if opt.no_instance else 1)\n        # set gpu ids\n        str_ids = opt.gpu_ids.split(',')\n        opt.gpu_ids = []\n        for str_id in str_ids:\n            id = int(str_id)\n            if id >= 0:\n                opt.gpu_ids.append(id)\n        if len(opt.gpu_ids) > 0:\n            torch.cuda.set_device(opt.gpu_ids[0])\n\n        assert len(opt.gpu_ids) == 0 or opt.batchSize % len(opt.gpu_ids) == 0, \\\n            \"Batch size %d is wrong. It must be a multiple of # GPUs %d.\" \\\n            % (opt.batchSize, len(opt.gpu_ids))\n\n        self.opt = opt\n        return self.opt\n"
  },
  {
    "path": "options/test_options.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom .base_options import BaseOptions\n\n\nclass TestOptions(BaseOptions):\n    def initialize(self, parser):\n        BaseOptions.initialize(self, parser)\n        parser.add_argument('--results_dir', type=str, default='./results/', help='saves results here.')\n        parser.add_argument('--which_epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')\n        parser.add_argument('--how_many', type=int, default=float(\"inf\"), help='how many test images to run')\n\n        parser.set_defaults(preprocess_mode='scale_width_and_crop', crop_size=256, load_size=256, display_winsize=256)\n        parser.set_defaults(serial_batches=True)\n        parser.set_defaults(no_flip=True)\n        parser.set_defaults(phase='test')\n        self.isTrain = False\n        return parser\n"
  },
  {
    "path": "options/train_options.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom .base_options import BaseOptions\n\n\nclass TrainOptions(BaseOptions):\n    def initialize(self, parser):\n        BaseOptions.initialize(self, parser)\n        # for displays\n        parser.add_argument('--display_freq', type=int, default=200, help='frequency of showing training results on screen')\n        parser.add_argument('--print_freq', type=int, default=200, help='frequency of showing training results on console')\n        parser.add_argument('--many_test_freq', type=int, default=1000, help='frequency of showing training results on console')\n        parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results')\n        parser.add_argument('--save_epoch_freq', type=int, default=50, help='frequency of saving checkpoints at the end of epochs')\n        parser.add_argument('--no_html', action='store_true', help='do not save intermediate training results to [opt.checkpoints_dir]/[opt.name]/web/')\n        parser.add_argument('--debug', action='store_true', help='only do one epoch and displays at each iteration')\n        parser.add_argument('--tf_log', action='store_true', help='if specified, use tensorboard logging. Requires tensorflow installed')\n\n        # for training\n        parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model')\n        parser.add_argument('--which_epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')\n        parser.add_argument('--niter', type=int, default=10, help='# of iter at starting learning rate. This is NOT the total #epochs. 
Total #epochs is niter + niter_decay')\n        parser.add_argument('--niter_decay', type=int, default=5, help='# of iter to linearly decay learning rate to zero')\n        parser.add_argument('--optimizer', type=str, default='adam')\n        parser.add_argument('--beta1', type=float, default=0.5, help='momentum term of adam')\n        parser.add_argument('--beta2', type=float, default=0.999, help='momentum term of adam')\n        parser.add_argument('--lr', type=float, default=0.0002, help='initial learning rate for adam')\n        parser.add_argument('--D_steps_per_G', type=int, default=1, help='number of discriminator iterations per generator iteration.')\n\n        # for discriminators\n        parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in first conv layer')\n        parser.add_argument('--lambda_feat', type=float, default=10.0, help='weight for feature matching loss')\n        parser.add_argument('--lambda_vgg', type=float, default=10.0, help='weight for vgg loss')\n        parser.add_argument('--no_ganFeat_loss', action='store_true', help='if specified, do *not* use discriminator feature matching loss')\n        parser.add_argument('--no_vgg_loss', action='store_true', help='if specified, do *not* use VGG feature matching loss')\n        parser.add_argument('--gan_mode', type=str, default='hinge', help='(ls|original|hinge)')\n        parser.add_argument('--netD', type=str, default='multiscale', help='(n_layers|multiscale|image)')\n        parser.add_argument('--no_TTUR', action='store_true', help='if specified, do *not* use the TTUR training scheme')\n        parser.add_argument('--lambda_kld', type=float, default=0.05)\n\n        self.isTrain = True\n        return parser\n"
  },
  {
    "path": "requirements.txt",
    "content": "torch>=1.0.0\ntorchvision\ndominate>=2.3.1\ndill\nscikit-image\n"
  },
  {
    "path": "scripts/ade20k.sh",
    "content": "#python train.py --name ade20k_smis --dataset_mode ade20k --dataroot /home/zlxu/data/ADEChallengeData2016 --no_instance  \\\n#--gpu_ids 0,1,2,3 --ngf 64 --batchSize 4 --use_vae --niter 100 --niter_decay 100 --model smis --netE conv --netG ADE20K\n#\npython test.py --name ade20k_smis --dataset_mode ade20k --dataroot /home/zlxu/data/ADEChallengeData2016 --no_instance  \\\n--gpu_ids 0 --ngf 64 --batchSize 2 --model smis --netG ADE20K"
  },
  {
    "path": "scripts/cityscapes.sh",
    "content": "#!/usr/bin/env bash\n#python train.py --name cityscapes_smis --dataset_mode cityscapes --dataroot /home/zlxu/data/cityscapes  \\\n#--gpu_ids 0,1,2,3 --ngf 280 --batchSize 4 --niter 100 --niter_decay 100 --netG Cityscapes --model smis --netE conv --use_vae\n\npython test.py --name cityscapes_smis --dataset_mode cityscapes --dataroot /home/zlxu/data/cityscapes  \\\n--gpu_ids 1 --ngf 280 --batchSize 4 --netG Cityscapes --model smis"
  },
  {
    "path": "scripts/deepfashion.sh",
    "content": "#python train.py --name deepfashion_smis --dataset_mode deepfashion --dataroot /home/zlxu/data/deepfashion --no_instance \\\n#--gpu_ids 0,1,2,3 --ngf 160 --batchSize 8 --use_vae --niter 60 --niter_decay 40  --model smis --netE conv --netG deepfashion\n\npython test.py --name deepfashion_smis --dataset_mode deepfashion --dataroot /home/zlxu/data/deepfashion --no_instance \\\n--gpu_ids 1 --ngf 160 --batchSize 4 --model smis --netG deepfashion"
  },
  {
    "path": "test.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os\nfrom collections import OrderedDict\n\nimport data\nfrom options.test_options import TestOptions\nfrom models.pix2pix_model import Pix2pixModel\nfrom models.smis_model import SmisModel\nfrom util.visualizer import Visualizer\nfrom util import html\nfrom tqdm import tqdm\n\nopt = TestOptions().parse()\n# print(opt)\ndataloader = data.create_dataloader(opt)\nif opt.model == 'smis':\n    model = SmisModel(opt)\nelif opt.model == 'pix2pix':\n    model = Pix2pixModel(opt)\nmodel.eval()\n\nvisualizer = Visualizer(opt)\n\n# create a webpage that summarizes the all results\nweb_dir = os.path.join(opt.results_dir, opt.name,\n                       '%s_%s' % (opt.phase, opt.which_epoch))\nwebpage = html.HTML(web_dir,\n                    'Experiment = %s, Phase = %s, Epoch = %s' %\n                    (opt.name, opt.phase, opt.which_epoch))\nfor i, data_i in tqdm(enumerate(dataloader)):\n    generated = model(data_i, mode='inference')\n    img_path = data_i['path']\n    for b in range(generated[0].shape[0]):\n        if opt.test_times == 1:\n            visuals = OrderedDict([('synthesized_image', generated[0][b])])\n        else:\n            visuals = OrderedDict([('input_label', data_i['label'][b]),\n                                   ('real_image', data_i['image'][b]),\n                                   ])\n            for t in range(len(generated)):\n                visuals['synthesized_image_' + str(t)] = generated[t][b]\n        visualizer.save_images(webpage, visuals, img_path[b:b + 1])\nwebpage.save()\n"
  },
  {
    "path": "train.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport sys\nfrom collections import OrderedDict\nfrom options.train_options import TrainOptions\nimport data\nfrom util.iter_counter import IterationCounter\nfrom util.visualizer import Visualizer\nfrom trainers.pix2pix_trainer import Pix2PixTrainer\n\n# parse options\nopt = TrainOptions().parse()\n\n# print options to help debugging\nprint(' '.join(sys.argv))\n\n# load the dataset\ndataloader = data.create_dataloader(opt)\n\n# create trainer for our model\ntrainer = Pix2PixTrainer(opt)\n\n# create tool for counting iterations\niter_counter = IterationCounter(opt, len(dataloader))\n\n# create tool for visualization\nvisualizer = Visualizer(opt)\n\nfor epoch in iter_counter.training_epochs():\n    iter_counter.record_epoch_start(epoch)\n    for i, data_i in enumerate(dataloader, start=iter_counter.epoch_iter):\n        iter_counter.record_one_iteration()\n\n        # Training\n        # train generator\n        if i % opt.D_steps_per_G == 0:\n            trainer.run_generator_one_step(data_i)\n\n        # train discriminator\n        trainer.run_discriminator_one_step(data_i)\n\n        # Visualizations\n        if iter_counter.needs_printing():\n            losses = trainer.get_latest_losses()\n            visualizer.print_current_errors(epoch, iter_counter.epoch_iter,\n                                            losses, iter_counter.time_per_iter)\n            visualizer.plot_current_errors(losses, iter_counter.total_steps_so_far)\n\n        if iter_counter.needs_displaying():\n            visuals = OrderedDict([('input_label', data_i['label']),\n                                   ('synthesized_image', trainer.get_latest_generated()),\n                                   ('real_image', data_i['image'])])\n            visualizer.display_current_results(visuals, 
epoch, iter_counter.total_steps_so_far)\n\n        if iter_counter.needs_saving():\n            print('saving the latest model (epoch %d, total_steps %d)' %\n                  (epoch, iter_counter.total_steps_so_far))\n            trainer.save('latest')\n            iter_counter.record_current_iter()\n\n    trainer.update_learning_rate(epoch)\n    iter_counter.record_epoch_end()\n\n    if epoch % opt.save_epoch_freq == 0 or \\\n       epoch == iter_counter.total_epochs:\n        print('saving the model at the end of epoch %d, iters %d' %\n              (epoch, iter_counter.total_steps_so_far))\n        trainer.save('latest')\n        trainer.save(epoch)\n\nprint('Training was successfully finished.')\n"
  },
  {
    "path": "trainers/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n"
  },
  {
    "path": "trainers/pix2pix_trainer.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nfrom models.networks.sync_batchnorm import DataParallelWithCallback\nfrom models.pix2pix_model import Pix2pixModel\nfrom models.smis_model import SmisModel\nimport os\n\nclass Pix2PixTrainer():\n    \"\"\"\n    Trainer creates the model and optimizers, and uses them to\n    updates the weights of the network while reporting losses\n    and the latest visuals to visualize the progress in training.\n    \"\"\"\n\n    def __init__(self, opt):\n        self.opt = opt\n        if self.opt.model == 'pix2pix':\n            self.pix2pix_model = Pix2pixModel(opt)\n        elif self.opt.model == 'smis':\n            self.pix2pix_model = SmisModel(opt)\n        print(self.pix2pix_model)\n        with open(os.path.join(opt.checkpoints_dir, opt.name, 'model.txt'), 'w') as f:\n            f.write(self.pix2pix_model.__str__())\n        if len(opt.gpu_ids) > 0:\n            self.pix2pix_model = DataParallelWithCallback(self.pix2pix_model,\n                                                          device_ids=opt.gpu_ids)\n            self.pix2pix_model_on_one_gpu = self.pix2pix_model.module\n        else:\n            self.pix2pix_model_on_one_gpu = self.pix2pix_model\n\n        self.generated = None\n        if opt.isTrain:\n            self.optimizer_G, self.optimizer_D = \\\n                self.pix2pix_model_on_one_gpu.create_optimizers(opt)\n            self.old_lr = opt.lr\n\n    def run_generator_one_step(self, data):\n        self.optimizer_G.zero_grad()\n        g_losses, generated = self.pix2pix_model(data, mode='generator')\n        g_loss = sum(g_losses.values()).mean()\n        g_loss.backward()\n        self.optimizer_G.step()\n        self.g_losses = g_losses\n        self.generated = generated\n\n    def run_discriminator_one_step(self, data):\n        
self.optimizer_D.zero_grad()\n        d_losses = self.pix2pix_model(data, mode='discriminator')\n        d_loss = sum(d_losses.values()).mean()\n        d_loss.backward()\n        self.optimizer_D.step()\n        self.d_losses = d_losses\n\n    def clean_grad(self):\n        self.optimizer_D.zero_grad()\n        self.optimizer_G.zero_grad()\n\n    def get_latest_losses(self):\n        return {**self.g_losses, **self.d_losses}\n\n    def get_latest_generated(self):\n        return self.generated\n\n    def save(self, epoch):\n        self.pix2pix_model_on_one_gpu.save(epoch)\n\n    ##################################################################\n    # Helper functions\n    ##################################################################\n\n    def update_learning_rate(self, epoch):\n        if epoch > self.opt.niter:\n            lrd = self.opt.lr / self.opt.niter_decay\n            new_lr = self.old_lr - lrd\n        else:\n            new_lr = self.old_lr\n\n        if new_lr != self.old_lr:\n            if self.opt.no_TTUR:\n                new_lr_G = new_lr\n                new_lr_D = new_lr\n            else:\n                new_lr_G = new_lr / 2\n                new_lr_D = new_lr * 2\n\n            for param_group in self.optimizer_D.param_groups:\n                param_group['lr'] = new_lr_D\n            for param_group in self.optimizer_G.param_groups:\n                param_group['lr'] = new_lr_G\n            print('update learning rate: %f -> %f' % (self.old_lr, new_lr))\n            self.old_lr = new_lr\n"
  },
  {
    "path": "util/__init__.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n"
  },
  {
    "path": "util/coco.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\n\ndef id2label(id):\n    if id == 182:\n        id = 0\n    else:\n        id = id + 1\n    labelmap = \\\n        {0: 'unlabeled',\n         1: 'person',\n         2: 'bicycle',\n         3: 'car',\n         4: 'motorcycle',\n         5: 'airplane',\n         6: 'bus',\n         7: 'train',\n         8: 'truck',\n         9: 'boat',\n         10: 'traffic light',\n         11: 'fire hydrant',\n         12: 'street sign',\n         13: 'stop sign',\n         14: 'parking meter',\n         15: 'bench',\n         16: 'bird',\n         17: 'cat',\n         18: 'dog',\n         19: 'horse',\n         20: 'sheep',\n         21: 'cow',\n         22: 'elephant',\n         23: 'bear',\n         24: 'zebra',\n         25: 'giraffe',\n         26: 'hat',\n         27: 'backpack',\n         28: 'umbrella',\n         29: 'shoe',\n         30: 'eye glasses',\n         31: 'handbag',\n         32: 'tie',\n         33: 'suitcase',\n         34: 'frisbee',\n         35: 'skis',\n         36: 'snowboard',\n         37: 'sports ball',\n         38: 'kite',\n         39: 'baseball bat',\n         40: 'baseball glove',\n         41: 'skateboard',\n         42: 'surfboard',\n         43: 'tennis racket',\n         44: 'bottle',\n         45: 'plate',\n         46: 'wine glass',\n         47: 'cup',\n         48: 'fork',\n         49: 'knife',\n         50: 'spoon',\n         51: 'bowl',\n         52: 'banana',\n         53: 'apple',\n         54: 'sandwich',\n         55: 'orange',\n         56: 'broccoli',\n         57: 'carrot',\n         58: 'hot dog',\n         59: 'pizza',\n         60: 'donut',\n         61: 'cake',\n         62: 'chair',\n         63: 'couch',\n         64: 'potted plant',\n         65: 'bed',\n         66: 'mirror',\n         67: 'dining table',\n         68: 
'window',\n         69: 'desk',\n         70: 'toilet',\n         71: 'door',\n         72: 'tv',\n         73: 'laptop',\n         74: 'mouse',\n         75: 'remote',\n         76: 'keyboard',\n         77: 'cell phone',\n         78: 'microwave',\n         79: 'oven',\n         80: 'toaster',\n         81: 'sink',\n         82: 'refrigerator',\n         83: 'blender',\n         84: 'book',\n         85: 'clock',\n         86: 'vase',\n         87: 'scissors',\n         88: 'teddy bear',\n         89: 'hair drier',\n         90: 'toothbrush',\n         91: 'hair brush',  # Last class of Thing\n         92: 'banner',  # Beginning of Stuff\n         93: 'blanket',\n         94: 'branch',\n         95: 'bridge',\n         96: 'building-other',\n         97: 'bush',\n         98: 'cabinet',\n         99: 'cage',\n         100: 'cardboard',\n         101: 'carpet',\n         102: 'ceiling-other',\n         103: 'ceiling-tile',\n         104: 'cloth',\n         105: 'clothes',\n         106: 'clouds',\n         107: 'counter',\n         108: 'cupboard',\n         109: 'curtain',\n         110: 'desk-stuff',\n         111: 'dirt',\n         112: 'door-stuff',\n         113: 'fence',\n         114: 'floor-marble',\n         115: 'floor-other',\n         116: 'floor-stone',\n         117: 'floor-tile',\n         118: 'floor-wood',\n         119: 'flower',\n         120: 'fog',\n         121: 'food-other',\n         122: 'fruit',\n         123: 'furniture-other',\n         124: 'grass',\n         125: 'gravel',\n         126: 'ground-other',\n         127: 'hill',\n         128: 'house',\n         129: 'leaves',\n         130: 'light',\n         131: 'mat',\n         132: 'metal',\n         133: 'mirror-stuff',\n         134: 'moss',\n         135: 'mountain',\n         136: 'mud',\n         137: 'napkin',\n         138: 'net',\n         139: 'paper',\n         140: 'pavement',\n         141: 'pillow',\n         142: 'plant-other',\n         143: 'plastic',\n         144: 
'platform',\n         145: 'playingfield',\n         146: 'railing',\n         147: 'railroad',\n         148: 'river',\n         149: 'road',\n         150: 'rock',\n         151: 'roof',\n         152: 'rug',\n         153: 'salad',\n         154: 'sand',\n         155: 'sea',\n         156: 'shelf',\n         157: 'sky-other',\n         158: 'skyscraper',\n         159: 'snow',\n         160: 'solid-other',\n         161: 'stairs',\n         162: 'stone',\n         163: 'straw',\n         164: 'structural-other',\n         165: 'table',\n         166: 'tent',\n         167: 'textile-other',\n         168: 'towel',\n         169: 'tree',\n         170: 'vegetable',\n         171: 'wall-brick',\n         172: 'wall-concrete',\n         173: 'wall-other',\n         174: 'wall-panel',\n         175: 'wall-stone',\n         176: 'wall-tile',\n         177: 'wall-wood',\n         178: 'water-other',\n         179: 'waterdrops',\n         180: 'window-blind',\n         181: 'window-other',\n         182: 'wood'}\n    if id in labelmap:\n        return labelmap[id]\n    else:\n        return 'unknown'\n"
  },
  {
    "path": "util/html.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport datetime\nimport dominate\nfrom dominate.tags import *\nimport os\n\n\nclass HTML:\n    def __init__(self, web_dir, title, refresh=0):\n        if web_dir.endswith('.html'):\n            web_dir, html_name = os.path.split(web_dir)\n        else:\n            web_dir, html_name = web_dir, 'index.html'\n        self.title = title\n        self.web_dir = web_dir\n        self.html_name = html_name\n        self.img_dir = os.path.join(self.web_dir, 'images')\n        if len(self.web_dir) > 0 and not os.path.exists(self.web_dir):\n            os.makedirs(self.web_dir)\n        if len(self.web_dir) > 0 and not os.path.exists(self.img_dir):\n            os.makedirs(self.img_dir)\n\n        self.doc = dominate.document(title=title)\n        with self.doc:\n            h1(datetime.datetime.now().strftime(\"%I:%M%p on %B %d, %Y\"))\n        if refresh > 0:\n            with self.doc.head:\n                meta(http_equiv=\"refresh\", content=str(refresh))\n\n    def get_image_dir(self):\n        return self.img_dir\n\n    def add_header(self, str):\n        with self.doc:\n            h3(str)\n\n    def add_table(self, border=1):\n        self.t = table(border=border, style=\"table-layout: fixed;\")\n        self.doc.add(self.t)\n\n    def add_images(self, ims, txts, links, width=512):\n        self.add_table()\n        with self.t:\n            with tr():\n                for im, txt, link in zip(ims, txts, links):\n                    with td(style=\"word-wrap: break-word;\", halign=\"center\", valign=\"top\"):\n                        with p():\n                            with a(href=os.path.join('images', link)):\n                                img(style=\"width:%dpx\" % (width), src=os.path.join('images', im))\n                            br()\n                  
          p(txt)\n\n    def save(self):\n        html_file = os.path.join(self.web_dir, self.html_name)\n        with open(html_file, 'wt') as f:\n            f.write(self.doc.render())\n\n\nif __name__ == '__main__':\n    html = HTML('web/', 'test_html')\n    html.add_header('hello world')\n\n    ims = []\n    txts = []\n    links = []\n    for n in range(4):\n        ims.append('image_%d.jpg' % n)\n        txts.append('text_%d' % n)\n        links.append('image_%d.jpg' % n)\n    html.add_images(ims, txts, links)\n    html.save()\n"
  },
  {
    "path": "util/iter_counter.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os\nimport time\nimport numpy as np\n\n\n# Helper class that keeps track of training iterations\nclass IterationCounter():\n    def __init__(self, opt, dataset_size):\n        self.opt = opt\n        self.dataset_size = dataset_size\n\n        self.first_epoch = 1\n        self.total_epochs = opt.niter + opt.niter_decay\n        self.epoch_iter = 0  # iter number within each epoch\n        self.iter_record_path = os.path.join(self.opt.checkpoints_dir, self.opt.name, 'iter.txt')\n        if opt.isTrain and opt.continue_train:\n            try:\n                self.first_epoch, self.epoch_iter = np.loadtxt(\n                    self.iter_record_path, delimiter=',', dtype=int)\n                print('Resuming from epoch %d at iteration %d' % (self.first_epoch, self.epoch_iter))\n            except:\n                print('Could not load iteration record at %s. Starting from beginning.' 
%\n                      self.iter_record_path)\n\n        self.total_steps_so_far = (self.first_epoch - 1) * dataset_size + self.epoch_iter\n\n    # return the iterator of epochs for the training\n    def training_epochs(self):\n        return range(self.first_epoch, self.total_epochs + 1)\n\n    def record_epoch_start(self, epoch):\n        self.epoch_start_time = time.time()\n        self.epoch_iter = 0\n        self.last_iter_time = time.time()\n        self.current_epoch = epoch\n\n    def record_one_iteration(self):\n        current_time = time.time()\n\n        # the last remaining batch is dropped (see data/__init__.py),\n        # so we can assume batch size is always opt.batchSize\n        self.time_per_iter = (current_time - self.last_iter_time) / self.opt.batchSize\n        self.last_iter_time = current_time\n        self.total_steps_so_far += self.opt.batchSize\n        self.epoch_iter += self.opt.batchSize\n\n    def record_epoch_end(self):\n        current_time = time.time()\n        self.time_per_epoch = current_time - self.epoch_start_time\n        print('End of epoch %d / %d \\t Time Taken: %d sec' %\n              (self.current_epoch, self.total_epochs, self.time_per_epoch))\n        if self.current_epoch % self.opt.save_epoch_freq == 0:\n            np.savetxt(self.iter_record_path, (self.current_epoch + 1, 0),\n                       delimiter=',', fmt='%d')\n            print('Saved current iteration count at %s.' % self.iter_record_path)\n\n    def record_current_iter(self):\n        np.savetxt(self.iter_record_path, (self.current_epoch, self.epoch_iter),\n                   delimiter=',', fmt='%d')\n        print('Saved current iteration count at %s.' 
% self.iter_record_path)\n\n    def needs_saving(self):\n        return (self.total_steps_so_far % self.opt.save_latest_freq) < self.opt.batchSize\n\n    def needs_printing(self):\n        return (self.total_steps_so_far % self.opt.print_freq) < self.opt.batchSize\n\n    def needs_displaying(self):\n        return (self.total_steps_so_far % self.opt.display_freq) < self.opt.batchSize\n\n    def needs_many_test(self):\n        return (self.total_steps_so_far % self.opt.many_test_freq) < self.opt.batchSize"
  },
  {
    "path": "util/util.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport re\nimport importlib\nimport torch\nfrom argparse import Namespace\nimport numpy as np\nfrom PIL import Image\nimport os\nimport argparse\nimport dill as pickle\nimport util.coco\n\n\ndef save_obj(obj, name):\n    with open(name, 'wb') as f:\n        pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)\n\n\ndef load_obj(name):\n    with open(name, 'rb') as f:\n        return pickle.load(f)\n\n# returns a configuration for creating a generator\n# |default_opt| should be the opt of the current experiment\n# |**kwargs|: if any configuration should be overriden, it can be specified here\n\n\ndef copyconf(default_opt, **kwargs):\n    conf = argparse.Namespace(**vars(default_opt))\n    for key in kwargs:\n        print(key, kwargs[key])\n        setattr(conf, key, kwargs[key])\n    return conf\n\n\ndef tile_images(imgs, picturesPerRow=4):\n    \"\"\" Code borrowed from\n    https://stackoverflow.com/questions/26521365/cleanly-tile-numpy-array-of-images-stored-in-a-flattened-1d-format/26521997\n    \"\"\"\n\n    # Padding\n    if imgs.shape[0] % picturesPerRow == 0:\n        rowPadding = 0\n    else:\n        rowPadding = picturesPerRow - imgs.shape[0] % picturesPerRow\n    if rowPadding > 0:\n        imgs = np.concatenate([imgs, np.zeros((rowPadding, *imgs.shape[1:]), dtype=imgs.dtype)], axis=0)\n\n    # Tiling Loop (The conditionals are not necessary anymore)\n    tiled = []\n    for i in range(0, imgs.shape[0], picturesPerRow):\n        tiled.append(np.concatenate([imgs[j] for j in range(i, i + picturesPerRow)], axis=1))\n\n    tiled = np.concatenate(tiled, axis=0)\n    return tiled\n\n\n# Converts a Tensor into a Numpy array\n# |imtype|: the desired type of the converted numpy array\ndef tensor2im(image_tensor, imtype=np.uint8, normalize=True, tile=False):\n    if 
isinstance(image_tensor, list):\n        image_numpy = []\n        for i in range(len(image_tensor)):\n            image_numpy.append(tensor2im(image_tensor[i], imtype, normalize))\n        return image_numpy\n\n    if image_tensor.dim() == 4:\n        # transform each image in the batch\n        images_np = []\n        for b in range(image_tensor.size(0)):\n            one_image = image_tensor[b]\n            one_image_np = tensor2im(one_image, imtype, normalize)\n            images_np.append(one_image_np.reshape(1, *one_image_np.shape))\n        images_np = np.concatenate(images_np, axis=0)\n        if tile:\n            images_tiled = tile_images(images_np)\n            return images_tiled\n        else:\n            return images_np\n\n    if image_tensor.dim() == 2:\n        image_tensor = image_tensor.unsqueeze(0)\n    image_numpy = image_tensor.detach().cpu().float().numpy()\n    if normalize:\n        image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0\n    else:\n        image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0\n    image_numpy = np.clip(image_numpy, 0, 255)\n    if image_numpy.shape[2] == 1:\n        image_numpy = image_numpy[:, :, 0]\n    return image_numpy.astype(imtype)\n\n\n# Converts a one-hot tensor into a colorful label map\ndef tensor2label(label_tensor, n_label, imtype=np.uint8, tile=False):\n    if label_tensor.dim() == 4:\n        # transform each image in the batch\n        images_np = []\n        for b in range(label_tensor.size(0)):\n            one_image = label_tensor[b]\n            one_image_np = tensor2label(one_image, n_label, imtype)\n            images_np.append(one_image_np.reshape(1, *one_image_np.shape))\n        images_np = np.concatenate(images_np, axis=0)\n        if tile:\n            images_tiled = tile_images(images_np)\n            return images_tiled\n        else:\n            images_np = images_np[0]\n            return images_np\n\n    if label_tensor.dim() == 1:\n        return np.zeros((64, 64, 3), 
dtype=np.uint8)\n    if n_label == 0:\n        return tensor2im(label_tensor, imtype)\n    label_tensor = label_tensor.cpu().float()\n    if label_tensor.size()[0] > 1:\n        # label_tensor = label_tensor.max(0, keepdim=True)[1]\n        label_tensor = label_tensor[0].unsqueeze(dim=0)\n    label_tensor = Colorize(n_label)(label_tensor)\n    label_numpy = np.transpose(label_tensor.numpy(), (1, 2, 0))\n    result = label_numpy.astype(imtype)\n    return result\n\n\ndef save_image(image_numpy, image_path, create_dir=False):\n    if create_dir:\n        os.makedirs(os.path.dirname(image_path), exist_ok=True)\n    if len(image_numpy.shape) == 2:\n        image_numpy = np.expand_dims(image_numpy, axis=2)\n    if image_numpy.shape[2] == 1:\n        image_numpy = np.repeat(image_numpy, 3, 2)\n    image_pil = Image.fromarray(image_numpy)\n\n    # save to png\n    image_pil.save(image_path.replace('.jpg', '.png'))\n\n\ndef mkdirs(paths):\n    if isinstance(paths, list) and not isinstance(paths, str):\n        for path in paths:\n            mkdir(path)\n    else:\n        mkdir(paths)\n\n\ndef mkdir(path):\n    if not os.path.exists(path):\n        os.makedirs(path)\n\n\ndef atoi(text):\n    return int(text) if text.isdigit() else text\n\n\ndef natural_keys(text):\n    '''\n    alist.sort(key=natural_keys) sorts in human order\n    http://nedbatchelder.com/blog/200712/human_sorting.html\n    (See Toothy's implementation in the comments)\n    '''\n    return [atoi(c) for c in re.split(r'(\\d+)', text)]\n\n\ndef natural_sort(items):\n    items.sort(key=natural_keys)\n\n\ndef str2bool(v):\n    if v.lower() in ('yes', 'true', 't', 'y', '1'):\n        return True\n    elif v.lower() in ('no', 'false', 'f', 'n', '0'):\n        return False\n    else:\n        raise argparse.ArgumentTypeError('Boolean value expected.')\n\n\ndef find_class_in_module(target_cls_name, module):\n    target_cls_name = target_cls_name.replace('_', '').lower()\n    clslib = 
importlib.import_module(module)\n    cls = None\n    for name, clsobj in clslib.__dict__.items():\n        if name.lower() == target_cls_name:\n            cls = clsobj\n\n    if cls is None:\n        print(\"In %s, there should be a class whose name matches %s in lowercase without underscore(_)\" % (module, target_cls_name))\n        exit(0)\n\n    return cls\n\n\ndef save_network(net, label, epoch, opt):\n    save_filename = '%s_net_%s.pth' % (epoch, label)\n    save_path = os.path.join(opt.checkpoints_dir, opt.name, save_filename)\n    torch.save(net.cpu().state_dict(), save_path)\n    if len(opt.gpu_ids) and torch.cuda.is_available():\n        net.cuda()\n\n\ndef load_network(net, label, epoch, opt):\n    save_filename = '%s_net_%s.pth' % (epoch, label)\n    save_dir = os.path.join(opt.checkpoints_dir, opt.name)\n    save_path = os.path.join(save_dir, save_filename)\n    weights = torch.load(save_path)\n    net.load_state_dict(weights)\n    return net\n\n\n###############################################################################\n# Code from\n# https://github.com/ycszen/pytorch-seg/blob/master/transform.py\n# Modified so it complies with the Cityscapes label map colors\n###############################################################################\ndef uint82bin(n, count=8):\n    \"\"\"returns the binary of integer n, count refers to number of bits\"\"\"\n    return ''.join([str((n >> y) & 1) for y in range(count - 1, -1, -1)])\n\n\ndef labelcolormap(N):\n    if N == 35:  # cityscape\n        cmap = np.array([(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (111, 74, 0), (81, 0, 81),\n                         (128, 64, 128), (244, 35, 232), (250, 170, 160), (230, 150, 140), (70, 70, 70), (102, 102, 156), (190, 153, 153),\n                         (180, 165, 180), (150, 100, 100), (150, 120, 90), (153, 153, 153), (153, 153, 153), (250, 170, 30), (220, 220, 0),\n                         (107, 142, 35), (152, 251, 152), (70, 130, 180), (220, 20, 60), 
(255, 0, 0), (0, 0, 142), (0, 0, 70),\n                         (0, 60, 100), (0, 0, 90), (0, 0, 110), (0, 80, 100), (0, 0, 230), (119, 11, 32), (0, 0, 142)],\n                        dtype=np.uint8)\n    else:\n        cmap = np.zeros((N, 3), dtype=np.uint8)\n        for i in range(N):\n            r, g, b = 0, 0, 0\n            id = i + 1  # let's give 0 a color\n            for j in range(7):\n                str_id = uint82bin(id)\n                r = r ^ (np.uint8(str_id[-1]) << (7 - j))\n                g = g ^ (np.uint8(str_id[-2]) << (7 - j))\n                b = b ^ (np.uint8(str_id[-3]) << (7 - j))\n                id = id >> 3\n            cmap[i, 0] = r\n            cmap[i, 1] = g\n            cmap[i, 2] = b\n\n        if N == 182:  # COCO\n            important_colors = {\n                'sea': (54, 62, 167),\n                'sky-other': (95, 219, 255),\n                'tree': (140, 104, 47),\n                'clouds': (170, 170, 170),\n                'grass': (29, 195, 49)\n            }\n            for i in range(N):\n                name = util.coco.id2label(i)\n                if name in important_colors:\n                    color = important_colors[name]\n                    cmap[i] = np.array(list(color))\n\n    return cmap\n\n\nclass Colorize(object):\n    def __init__(self, n=35):\n        self.cmap = labelcolormap(n)\n        self.n = n\n        self.cmap = torch.from_numpy(self.cmap[:n])\n\n    def __call__(self, gray_image):\n        size = gray_image.size()\n        color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0)\n        for label in range(0, len(self.cmap)):\n            mask = (label == gray_image[0]).cpu()\n            color_image[0][mask] = self.cmap[label][0]\n            color_image[1][mask] = self.cmap[label][1]\n            color_image[2][mask] = self.cmap[label][2]\n\n        return color_image\n"
  },
  {
    "path": "util/visualizer.py",
    "content": "\"\"\"\nCopyright (C) 2019 NVIDIA Corporation.  All rights reserved.\nLicensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).\n\"\"\"\n\nimport os\nimport ntpath\nimport time\nfrom . import util\nfrom . import html\nimport scipy.misc\nimport numpy as np\nimport torch\ntry:\n    from StringIO import StringIO  # Python 2.7\nexcept ImportError:\n    from io import BytesIO         # Python 3.x\n\nclass Visualizer():\n    def __init__(self, opt):\n        self.opt = opt\n        self.tf_log = opt.isTrain and opt.tf_log\n        self.use_html = opt.isTrain and not opt.no_html\n        self.win_size = opt.display_winsize\n        self.name = opt.name\n        if self.tf_log:\n            import tensorflow as tf\n            self.tf = tf\n            self.log_dir = os.path.join(opt.checkpoints_dir, opt.name, 'logs')\n            self.writer = tf.summary.FileWriter(self.log_dir)\n\n        if self.use_html:\n            self.web_dir = os.path.join(opt.checkpoints_dir, opt.name, 'web')\n            self.img_dir = os.path.join(self.web_dir, 'images')\n            print('create web directory %s...' 
% self.web_dir)\n            util.mkdirs([self.web_dir, self.img_dir])\n        if opt.isTrain:\n            self.log_name = os.path.join(opt.checkpoints_dir, opt.name, 'loss_log.txt')\n            with open(self.log_name, \"a\") as log_file:\n                now = time.strftime(\"%c\")\n                log_file.write('================ Training Loss (%s) ================\\n' % now)\n\n    # |visuals|: dictionary of images to display or save\n    def display_current_results(self, visuals, epoch, step):\n\n        ## convert tensors to numpy arrays\n        visuals = self.convert_visuals_to_numpy(visuals)\n                \n        if self.tf_log: # show images in tensorboard output\n            img_summaries = []\n            for label, image_numpy in visuals.items():\n                # Write the image to a string\n                try:\n                    s = StringIO()\n                except:\n                    s = BytesIO()\n                if len(image_numpy.shape) >= 4:\n                    image_numpy = image_numpy[0]\n                scipy.misc.toimage(image_numpy).save(s, format=\"jpeg\")\n                # Create an Image object\n                img_sum = self.tf.Summary.Image(encoded_image_string=s.getvalue(), height=image_numpy.shape[0], width=image_numpy.shape[1])\n                # Create a Summary value\n                img_summaries.append(self.tf.Summary.Value(tag=label, image=img_sum))\n\n            # Create and write Summary\n            summary = self.tf.Summary(value=img_summaries)\n            self.writer.add_summary(summary, step)\n\n        if self.use_html: # save images to a html file\n            for label, image_numpy in visuals.items():\n                # print(label, image_numpy.shape)\n                if isinstance(image_numpy, list):\n                    for i in range(len(image_numpy)):\n                        img_path = os.path.join(self.img_dir, 'epoch%.3d_iter%.3d_%s_%d.png' % (epoch, step, label, i))\n                        
util.save_image(image_numpy[i], img_path)\n                else:\n                    img_path = os.path.join(self.img_dir, 'epoch%.3d_iter%.3d_%s.png' % (epoch, step, label))\n                    if len(image_numpy.shape) >= 4:\n                        image_numpy = image_numpy[0]                    \n                    util.save_image(image_numpy, img_path)\n\n            # update website\n            webpage = html.HTML(self.web_dir, 'Experiment name = %s' % self.name, refresh=5)\n            for n in range(epoch, 0, -1):\n                webpage.add_header('epoch [%d]' % n)\n                ims = []\n                txts = []\n                links = []\n\n                for label, image_numpy in visuals.items():\n                    if isinstance(image_numpy, list):\n                        for i in range(len(image_numpy)):\n                            img_path = 'epoch%.3d_iter%.3d_%s_%d.png' % (n, step, label, i)\n                            ims.append(img_path)\n                            txts.append(label+str(i))\n                            links.append(img_path)\n                    else:\n                        img_path = 'epoch%.3d_iter%.3d_%s.png' % (n, step, label)\n                        ims.append(img_path)\n                        txts.append(label)\n                        links.append(img_path)\n                if len(ims) < 10:\n                    webpage.add_images(ims, txts, links, width=self.win_size)\n                else:\n                    num = int(round(len(ims)/2.0))\n                    webpage.add_images(ims[:num], txts[:num], links[:num], width=self.win_size)\n                    webpage.add_images(ims[num:], txts[num:], links[num:], width=self.win_size)\n            webpage.save()\n\n    # errors: dictionary of error labels and values\n    def plot_current_errors(self, errors, step):\n        if self.tf_log:\n            for tag, value in errors.items():\n                value = value.mean().float()\n                summary 
= self.tf.Summary(value=[self.tf.Summary.Value(tag=tag, simple_value=value)])\n                self.writer.add_summary(summary, step)\n\n    # errors: same format as |errors| of plot_current_errors\n    def print_current_errors(self, epoch, i, errors, t):\n        message = '(epoch: %d, iters: %d, time: %.3f) ' % (epoch, i, t)\n        for k, v in errors.items():\n            v = v.mean().float()\n            message += '%s: %.3f ' % (k, v)\n\n        print(message)\n        with open(self.log_name, \"a\") as log_file:\n            log_file.write('%s\\n' % message)\n\n    def convert_visuals_to_numpy(self, visuals):\n        for key, t in visuals.items():\n            tile = self.opt.batchSize > 8\n            if 'input_label' in key:\n                t = util.tensor2label(t, self.opt.label_nc + 2, tile=tile)\n            else:\n                t = util.tensor2im(t, tile=tile)\n            visuals[key] = t\n        return visuals\n\n    # save image to the disk\n    def save_images(self, webpage, visuals, image_path):\n        visuals = self.convert_visuals_to_numpy(visuals)\n\n        image_dir = webpage.get_image_dir()\n        short_path = ntpath.basename(image_path[0])\n        name = os.path.splitext(short_path)[0]\n\n        webpage.add_header(name)\n        ims = []\n        txts = []\n        links = []\n        length = len(visuals)\n        i = 0\n        if self.opt.dataset_mode == 'cityscapes':\n            image_len = 512\n        else:\n            image_len = 256\n        whole_image = np.zeros((256, length * image_len, 3), dtype=np.uint8)\n        for label, image_numpy in visuals.items():\n            whole_image[:, (i * image_len): (i + 1) * image_len, :] = image_numpy\n            i += 1\n        image_name = os.path.join('%s.png' % (name))\n        save_path = os.path.join(image_dir, image_name)\n        util.save_image(whole_image, save_path, create_dir=True)\n"
  }
]