[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 The Learning and Vision Atelier (LAVA)\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# DASR\nPyTorch implementation of \"Unsupervised Degradation Representation Learning for Blind Super-Resolution\", CVPR 2021\n\n[[arXiv]](http://arxiv.org/pdf/2104.00416) [[CVF]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Unsupervised_Degradation_Representation_Learning_for_Blind_Super-Resolution_CVPR_2021_paper.pdf) [[Supp]](https://openaccess.thecvf.com/content/CVPR2021/supplemental/Wang_Unsupervised_Degradation_Representation_CVPR_2021_supplemental.pdf)\n\n\n## Overview\n\n<p align=\"center\"> <img src=\"Figs/fig.1.png\" width=\"50%\"> </p>\n\n\n<p align=\"center\"> <img src=\"Figs/fig.2.png\" width=\"100%\"> </p>\n\n\n## Requirements\n- Python 3.6\n- PyTorch == 1.1.0\n- numpy\n- skimage\n- imageio\n- matplotlib\n- cv2\n\n\n## Train\n### 1. Prepare training data\n\n1.1 Download the [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/) dataset and the [Flickr2K](http://cv.snu.ac.kr/research/EDSR/Flickr2K.tar) dataset.\n\n1.2 Combine the HR images from these two datasets in `your_data_path/DF2K/HR` to build the DF2K dataset.\n\n### 2. Begin to train\nRun `./main.sh` to train on the DF2K dataset. Please update `dir_data` in the bash file to `your_data_path`.\n\n\n## Test\n### 1. Prepare test data\nDownload [benchmark datasets](https://github.com/xinntao/BasicSR/blob/a19aac61b277f64be050cef7fe578a121d944a0e/docs/Datasets.md) (e.g., Set5, Set14, and other test sets) and prepare HR/LR images in `your_data_path/benchmark`.\n\n\n### 2. Begin to test\nRun `./test.sh` to test on benchmark datasets. Please update `dir_data` in the bash file to `your_data_path`.\n\n\n## Quick Test on an LR Image\nRun `./quick_test.sh` to test on an LR image. Please update `img_dir` in the bash file to `your_img_path`.\n\n## Visualization of Degradation Representations\n<p align=\"center\"> <img src=\"Figs/fig.6.png\" width=\"50%\"> </p>\n\n## Comparative Results\n### Noise-Free Degradations with Isotropic Gaussian Kernels\n\n<p align=\"center\"> <img src=\"Figs/tab2.png\" width=\"100%\"> </p>\n\n<p align=\"center\"> <img src=\"Figs/fig.5.png\" width=\"100%\"> </p>\n\n\n### General Degradations with Anisotropic Gaussian Kernels and Noise\n<p align=\"center\"> <img src=\"Figs/tab3.png\" width=\"100%\"> </p>\n\n<p align=\"center\"> <img src=\"Figs/fig.7.png\" width=\"100%\"> </p>\n\n### Unseen Degradations\n\n<p align=\"center\"> <img src=\"Figs/fig.III.png\" width=\"50%\"> </p>\n\n### Real Degradations (AIM real-world SR challenge)\n\n<p align=\"center\"> <img src=\"Figs/fig.VII.png\" width=\"50%\"> </p>\n\n## Citation\n```\n@InProceedings{Wang2021Unsupervised,\n  author    = {Wang, Longguang and Wang, Yingqian and Dong, Xiaoyu and Xu, Qingyu and Yang, Jungang and An, Wei and Guo, Yulan},\n  title     = {Unsupervised Degradation Representation Learning for Blind Super-Resolution},\n  booktitle = {CVPR},\n  year      = {2021},\n}\n```\n\n## Acknowledgements\nThis code is built on [EDSR (PyTorch)](https://github.com/thstkdgus35/EDSR-PyTorch), [IKC](https://github.com/yuanjunchai/IKC) and [MoCo](https://github.com/facebookresearch/moco). We thank the authors for sharing their code.\n\n"
  },
  {
    "path": "data/__init__.py",
    "content": "from importlib import import_module\nfrom dataloader import MSDataLoader\n\nclass Data:\n    def __init__(self, args):\n        self.loader_train = None\n        if not args.test_only:\n            module_train = import_module('data.' + args.data_train.lower())     ## load the right dataset loader module\n            trainset = getattr(module_train, args.data_train)(args)             ## load the dataset; args.data_train is the dataset name\n            self.loader_train = MSDataLoader(\n                args,\n                trainset,\n                batch_size=args.batch_size,\n                shuffle=True,\n                pin_memory=not args.cpu\n            )\n\n        if args.data_test in ['Set5', 'Set14', 'B100', 'Manga109', 'Urban100']:\n            module_test = import_module('data.benchmark')\n            testset = getattr(module_test, 'Benchmark')(args, name=args.data_test, train=False)\n        else:\n            module_test = import_module('data.' + args.data_test.lower())\n            testset = getattr(module_test, args.data_test)(args, train=False)\n\n        self.loader_test = MSDataLoader(\n            args,\n            testset,\n            batch_size=1,\n            shuffle=False,\n            pin_memory=not args.cpu\n        )\n\n"
  },
  {
    "path": "data/benchmark.py",
    "content": "import os\nfrom data import common\nfrom data import multiscalesrdata as srdata\n\n\nclass Benchmark(srdata.SRData):\n    def __init__(self, args, name='', train=True):\n        super(Benchmark, self).__init__(\n            args, name=name, train=train, benchmark=True\n        )\n\n    def _set_filesystem(self, dir_data):\n        self.apath = os.path.join(dir_data,'benchmark', self.name)\n        self.dir_hr = os.path.join(self.apath, 'HR')\n        self.dir_lr = os.path.join(self.apath, 'LR_bicubic')\n        self.ext = ('.png','.png')\n        print(self.dir_hr)\n        print(self.dir_lr)\n"
  },
  {
    "path": "data/common.py",
    "content": "import random\nimport numpy as np\nimport skimage.color as sc\nimport torch\n\n\ndef get_patch(img, patch_size=48, scale=1):\n    th, tw = img.shape[:2]  ## HR image\n\n    tp = round(scale * patch_size)\n\n    tx = random.randrange(0, (tw-tp))\n    ty = random.randrange(0, (th-tp))\n\n    return img[ty:ty + tp, tx:tx + tp, :]\n\n\ndef set_channel(img, n_channels=3):\n    if img.ndim == 2:\n        img = np.expand_dims(img, axis=2)\n\n    c = img.shape[2]\n    if n_channels == 1 and c == 3:\n        img = np.expand_dims(sc.rgb2ycbcr(img)[:, :, 0], 2)\n    elif n_channels == 3 and c == 1:\n        img = np.concatenate([img] * n_channels, 2)\n\n    return img\n\n\ndef np2Tensor(img, rgb_range=255):\n    np_transpose = np.ascontiguousarray(img.transpose((2, 0, 1)))\n    tensor = torch.from_numpy(np_transpose).float()\n    tensor.mul_(rgb_range / 255)\n\n    return tensor\n\n\ndef augment(img, hflip=True, rot=True):\n    hflip = hflip and random.random() < 0.5\n    vflip = rot and random.random() < 0.5\n    rot90 = rot and random.random() < 0.5\n\n    if hflip: img = img[:, ::-1, :]\n    if vflip: img = img[::-1, :, :]\n    if rot90: img = img.transpose(1, 0, 2)\n\n    return img\n\n"
  },
  {
    "path": "data/df2k.py",
    "content": "import os\nfrom data import multiscalesrdata\n\n\nclass DF2K(multiscalesrdata.SRData):\n    def __init__(self, args, name='DF2K', train=True, benchmark=False):\n        super(DF2K, self).__init__(args, name=name, train=train, benchmark=benchmark)\n\n    def _scan(self):\n        names_hr = super(DF2K, self)._scan()\n        names_hr = names_hr[self.begin - 1:self.end]\n\n        return names_hr\n\n    def _set_filesystem(self, dir_data):\n        super(DF2K, self)._set_filesystem(dir_data)\n        self.dir_hr = os.path.join(self.apath, 'HR')\n        self.dir_lr = os.path.join(self.apath, 'LR_bicubic')\n\n"
  },
  {
    "path": "data/multiscalesrdata.py",
    "content": "import os\nimport glob\n\nfrom data import common\nimport pickle\nimport numpy as np\nimport imageio\n\nimport torch\nimport torch.utils.data as data\n\n\nclass SRData(data.Dataset):\n    def __init__(self, args, name='', train=True, benchmark=False):\n        self.args = args\n        self.name = name\n        self.train = train\n        self.split = 'train' if train else 'test'\n        self.do_eval = True\n        self.benchmark = benchmark\n        self.scale = args.scale\n        self.idx_scale = 0\n\n        data_range = [r.split('-') for r in args.data_range.split('/')]\n        if train:\n            data_range = data_range[0]\n        else:\n            if args.test_only and len(data_range) == 1:\n                data_range = data_range[0]\n            else:\n                data_range = data_range[1]\n        self.begin, self.end = list(map(lambda x: int(x), data_range))\n        self._set_filesystem(args.dir_data)\n        if args.ext.find('img') < 0:\n            path_bin = os.path.join(self.apath, 'bin')\n            os.makedirs(path_bin, exist_ok=True)\n\n        list_hr = self._scan()\n        if args.ext.find('bin') >= 0:\n            # Binary files are stored in the 'bin' folder\n            # If the binary file exists, load it. If not, make it.\n            self.images_hr = self._check_and_load(\n                args.ext, list_hr, self._name_hrbin()\n            )\n        else:\n            if args.ext.find('img') >= 0 or benchmark:\n                self.images_hr = list_hr\n            elif args.ext.find('sep') >= 0:\n                os.makedirs(\n                    self.dir_hr.replace(self.apath, path_bin),\n                    exist_ok=True\n                )\n\n                self.images_hr = []\n                for h in list_hr:\n                    b = h.replace(self.apath, path_bin)\n                    b = b.replace(self.ext[0], '.pt')\n                    self.images_hr.append(b)\n                    self._check_and_load(\n                        args.ext, [h], b, verbose=True, load=False\n                    )\n\n        if train:\n            self.repeat = args.test_every // (len(self.images_hr) // args.batch_size)\n\n    # Below functions are used to prepare images\n    def _scan(self):\n        names_hr = sorted(\n            glob.glob(os.path.join(self.dir_hr, '*' + self.ext[0]))\n        )\n        print(len(names_hr))\n\n        return names_hr\n\n    def _set_filesystem(self, dir_data):\n        self.apath = os.path.join(dir_data, self.name)\n        self.dir_hr = os.path.join(self.apath, 'HR')\n        self.dir_lr = os.path.join(self.apath, 'LR_bicubic')\n        self.ext = ('.png', '.png')\n\n    def _name_hrbin(self):\n        return os.path.join(\n            self.apath,\n            'bin',\n            '{}_bin_HR.pt'.format(self.split)\n        )\n\n    def _name_lrbin(self, scale):\n        return os.path.join(\n            self.apath,\n            'bin',\n            '{}_bin_LR_X{}.pt'.format(self.split, scale)\n        )\n\n    def _check_and_load(self, ext, l, f, verbose=True, load=True):\n        if os.path.isfile(f) and ext.find('reset') < 0:\n            if load:\n                if verbose: print('Loading {}...'.format(f))\n                with open(f, 'rb') as _f:\n                    ret = pickle.load(_f)\n                return ret\n            else:\n                return None\n        else:\n            if verbose:\n                if ext.find('reset') >= 0:\n                    print('Making a new binary: {}'.format(f))\n                else:\n                    print('{} does not exist. Now making binary...'.format(f))\n            b = [{\n                'name': os.path.splitext(os.path.basename(_l))[0],\n                'image': imageio.imread(_l)\n            } for _l in l]\n            with open(f, 'wb') as _f:\n                pickle.dump(b, _f)\n            return b\n\n    def __getitem__(self, idx):\n        hr, filename = self._load_file(idx)\n        hr = self.get_patch(hr)\n        hr = [common.set_channel(img, n_channels=self.args.n_colors) for img in hr]\n        hr_tensor = [common.np2Tensor(img, rgb_range=self.args.rgb_range)\n                     for img in hr]\n\n        return torch.stack(hr_tensor, 0), filename\n\n    def __len__(self):\n        if self.train:\n            return len(self.images_hr) * self.repeat\n        else:\n            return len(self.images_hr)\n\n    def _get_index(self, idx):\n        if self.train:\n            return idx % len(self.images_hr)\n        else:\n            return idx\n\n    def _load_file(self, idx):\n        idx = self._get_index(idx)\n        f_hr = self.images_hr[idx]\n\n        if self.args.ext.find('bin') >= 0:\n            filename = f_hr['name']\n            hr = f_hr['image']\n        else:\n            filename, _ = os.path.splitext(os.path.basename(f_hr))\n            if self.args.ext == 'img' or self.benchmark:\n                hr = imageio.imread(f_hr)\n            elif self.args.ext.find('sep') >= 0:\n                with open(f_hr, 'rb') as _f:\n                    hr = np.load(_f)[0]['image']\n\n        return hr, filename\n\n    def get_patch(self, hr):\n        scale = self.scale[self.idx_scale]\n        if self.train:\n            out = []\n            hr = common.augment(hr) if not self.args.no_augment else hr\n            # extract two patches from each image\n            for _ in range(2):\n                hr_patch = common.get_patch(\n                    hr,\n                    patch_size=self.args.patch_size,\n                    scale=scale\n                )\n                out.append(hr_patch)\n        else:\n            out = [hr]\n        return out\n\n    def set_scale(self, idx_scale):\n        self.idx_scale = idx_scale\n\n"
  },
  {
    "path": "dataloader.py",
    "content": "import sys\nimport threading\nimport random\nimport collections\n\nimport torch\nimport torch.multiprocessing as multiprocessing\n\nfrom torch._C import _set_worker_signal_handlers\nfrom torch.utils.data.dataloader import DataLoader\nfrom torch.utils.data.dataloader import _DataLoaderIter\nfrom torch.utils.data import _utils\n\nif sys.version_info[0] == 2:\n    import Queue as queue\nelse:\n    import queue\n\ndef _ms_loop(dataset, index_queue, data_queue, collate_fn, scale, seed, init_fn, worker_id):\n    global _use_shared_memory\n    _use_shared_memory = True\n    _set_worker_signal_handlers()\n\n    torch.set_num_threads(1)\n    torch.manual_seed(seed)\n    while True:\n        r = index_queue.get()\n        if r is None:\n            break\n        idx, batch_indices = r\n        try:\n            idx_scale = 0\n            if len(scale) > 1 and dataset.train:\n                idx_scale = random.randrange(0, len(scale))\n                dataset.set_scale(idx_scale)\n\n            samples = collate_fn([dataset[i] for i in batch_indices])\n            samples.append(idx_scale)\n\n        except Exception:\n            data_queue.put((idx, _utils.ExceptionWrapper(sys.exc_info())))\n        else:\n            data_queue.put((idx, samples))\n\nclass _MSDataLoaderIter(_DataLoaderIter):\n    def __init__(self, loader):\n        self.dataset = loader.dataset\n        self.scale = loader.scale\n        self.collate_fn = loader.collate_fn\n        self.batch_sampler = loader.batch_sampler\n        self.num_workers = loader.num_workers\n        self.pin_memory = loader.pin_memory and torch.cuda.is_available()\n        self.timeout = loader.timeout\n        self.done_event = threading.Event()\n\n        self.sample_iter = iter(self.batch_sampler)\n\n        if self.num_workers > 0:\n            self.worker_init_fn = loader.worker_init_fn\n            self.index_queues = [\n                multiprocessing.Queue() for _ in range(self.num_workers)\n            ]\n            self.worker_queue_idx = 0\n            self.worker_result_queue = multiprocessing.Queue()\n            self.batches_outstanding = 0\n            self.worker_pids_set = False\n            self.shutdown = False\n            self.send_idx = 0\n            self.rcvd_idx = 0\n            self.reorder_dict = {}\n\n            base_seed = torch.LongTensor(1).random_()[0]\n            self.workers = [\n                multiprocessing.Process(\n                    target=_ms_loop,\n                    args=(\n                        self.dataset,\n                        self.index_queues[i],\n                        self.worker_result_queue,\n                        self.collate_fn,\n                        self.scale,\n                        base_seed + i,\n                        self.worker_init_fn,\n                        i\n                    )\n                )\n                for i in range(self.num_workers)]\n\n            if self.pin_memory or self.timeout > 0:\n                self.data_queue = queue.Queue()\n                if self.pin_memory:\n                    maybe_device_id = torch.cuda.current_device()\n                else:\n                    # do not initialize cuda context if not necessary\n                    maybe_device_id = None\n                self.pin_memory_thread = threading.Thread(\n                    target=_utils.pin_memory._pin_memory_loop,\n                    args=(self.worker_result_queue, self.data_queue, maybe_device_id, self.done_event))\n                self.pin_memory_thread.daemon = True\n                self.pin_memory_thread.start()\n            else:\n                self.data_queue = self.worker_result_queue\n\n            for w in self.workers:\n                w.daemon = True  # ensure that the worker exits on process exit\n                w.start()\n\n            _utils.signal_handling._set_worker_pids(id(self), tuple(w.pid for w in self.workers))\n            _utils.signal_handling._set_SIGCHLD_handler()\n            self.worker_pids_set = True\n\n            # prime the prefetch loop\n            for _ in range(2 * self.num_workers):\n                self._put_indices()\n\nclass MSDataLoader(DataLoader):\n    def __init__(\n        self, args, dataset, batch_size=1, shuffle=False,\n        sampler=None, batch_sampler=None,\n        collate_fn=_utils.collate.default_collate, pin_memory=False, drop_last=True,\n        timeout=0, worker_init_fn=None):\n\n        super(MSDataLoader, self).__init__(\n            dataset, batch_size=batch_size, shuffle=shuffle,\n            sampler=sampler, batch_sampler=batch_sampler,\n            num_workers=args.n_threads, collate_fn=collate_fn,\n            pin_memory=pin_memory, drop_last=drop_last,\n            timeout=timeout, worker_init_fn=worker_init_fn)\n\n        self.scale = args.scale\n\n    def __iter__(self):\n        return _MSDataLoaderIter(self)\n"
  },
  {
    "path": "loss/__init__.py",
    "content": "import os\nfrom importlib import import_module\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass Loss(nn.modules.loss._Loss):\n    def __init__(self, args, ckp):\n        super(Loss, self).__init__()\n        print('Preparing loss function:')\n\n        self.n_GPUs = args.n_GPUs\n        self.loss = []\n        self.loss_module = nn.ModuleList()\n        for loss in args.loss.split('+'):\n            weight, loss_type = loss.split('*')\n            if loss_type == 'MSE':\n                loss_function = nn.MSELoss()\n            elif loss_type == 'L1':\n                loss_function = nn.L1Loss()\n            elif loss_type == 'CE':\n                loss_function = nn.CrossEntropyLoss()\n            elif loss_type.find('VGG') >= 0:\n                module = import_module('loss.vgg')\n                loss_function = getattr(module, 'VGG')(\n                    loss_type[3:],\n                    rgb_range=args.rgb_range\n                )\n            elif loss_type.find('GAN') >= 0:\n                module = import_module('loss.adversarial')\n                loss_function = getattr(module, 'Adversarial')(\n                    args,\n                    loss_type\n                )\n            else:\n                raise NotImplementedError('Loss type [{}] is not supported'.format(loss_type))\n\n            self.loss.append({\n                'type': loss_type,\n                'weight': float(weight),\n                'function': loss_function}\n            )\n            if loss_type.find('GAN') >= 0:\n                self.loss.append({'type': 'DIS', 'weight': 1, 'function': None})\n\n        if len(self.loss) > 1:\n            self.loss.append({'type': 'Total', 'weight': 0, 'function': None})\n\n        for l in self.loss:\n            if l['function'] is not None:\n                print('{:.3f} * {}'.format(l['weight'], l['type']))\n                self.loss_module.append(l['function'])\n\n        self.log = torch.Tensor()\n\n        device = torch.device('cpu' if args.cpu else 'cuda')\n        self.loss_module.to(device)\n        if args.precision == 'half': self.loss_module.half()\n        if not args.cpu and args.n_GPUs > 1:\n            self.loss_module = nn.DataParallel(\n                self.loss_module, range(args.n_GPUs)\n            )\n\n        if args.load != '.': self.load(ckp.dir, cpu=args.cpu)\n\n    def forward(self, sr, hr):\n        losses = []\n        for i, l in enumerate(self.loss):\n            if l['function'] is not None:\n                loss = l['function'](sr, hr)\n                effective_loss = l['weight'] * loss\n                losses.append(effective_loss)\n                self.log[-1, i] += effective_loss.item()\n            elif l['type'] == 'DIS':\n                self.log[-1, i] += self.loss[i - 1]['function'].loss\n\n        loss_sum = sum(losses)\n        if len(self.loss) > 1:\n            self.log[-1, -1] += loss_sum.item()\n\n        return loss_sum\n\n    def step(self):\n        for l in self.get_loss_module():\n            if hasattr(l, 'scheduler'):\n                l.scheduler.step()\n\n    def start_log(self):\n        self.log = torch.cat((self.log, torch.zeros(1, len(self.loss))))\n\n    def end_log(self, n_batches):\n        self.log[-1].div_(n_batches)\n\n    def display_loss(self, batch):\n        n_samples = batch + 1\n        log = []\n        for l, c in zip(self.loss, self.log[-1]):\n            log.append('[{}: {:.4f}]'.format(l['type'], c / n_samples))\n\n        return ''.join(log)\n\n    def plot_loss(self, apath, epoch):\n        axis = np.linspace(1, epoch, epoch)\n        for i, l in enumerate(self.loss):\n            label = '{} Loss'.format(l['type'])\n            fig = plt.figure()\n            plt.title(label)\n            plt.plot(axis, self.log[:, i].numpy(), label=label)\n            plt.legend()\n            plt.xlabel('Epochs')\n            plt.ylabel('Loss')\n            plt.grid(True)\n            plt.savefig('{}/loss_{}.pdf'.format(apath, l['type']))\n            plt.close(fig)\n\n    def get_loss_module(self):\n        if self.n_GPUs == 1:\n            return self.loss_module\n        else:\n            return self.loss_module.module\n\n    def save(self, apath):\n        torch.save(self.state_dict(), os.path.join(apath, 'loss.pt'))\n        torch.save(self.log, os.path.join(apath, 'loss_log.pt'))\n\n    def load(self, apath, cpu=False):\n        if cpu:\n            kwargs = {'map_location': lambda storage, loc: storage}\n        else:\n            kwargs = {}\n\n        self.load_state_dict(torch.load(\n            os.path.join(apath, 'loss.pt'),\n            **kwargs\n        ))\n        self.log = torch.load(os.path.join(apath, 'loss_log.pt'))\n        for l in self.loss_module:\n            if hasattr(l, 'scheduler'):\n                for _ in range(len(self.log)): l.scheduler.step()"
  },
  {
    "path": "loss/adversarial.py",
    "content": "import utility\nfrom model import common\nfrom loss import discriminator\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nclass Adversarial(nn.Module):\n    def __init__(self, args, gan_type):\n        super(Adversarial, self).__init__()\n        self.gan_type = gan_type\n        self.gan_k = args.gan_k\n        self.discriminator = discriminator.Discriminator(args, gan_type)\n        if gan_type != 'WGAN_GP':\n            self.optimizer = utility.make_optimizer(args, self.discriminator)\n        else:\n            self.optimizer = optim.Adam(\n                self.discriminator.parameters(),\n                betas=(0, 0.9), eps=1e-8, lr=1e-5\n            )\n        self.scheduler = utility.make_scheduler(args, self.optimizer)\n\n    def forward(self, fake, real):\n        fake_detach = fake.detach()\n\n        self.loss = 0\n        for _ in range(self.gan_k):\n            self.optimizer.zero_grad()\n            d_fake = self.discriminator(fake_detach)\n            d_real = self.discriminator(real)\n            if self.gan_type == 'GAN':\n                label_fake = torch.zeros_like(d_fake)\n                label_real = torch.ones_like(d_real)\n                loss_d \\\n                    = F.binary_cross_entropy_with_logits(d_fake, label_fake) \\\n                    + F.binary_cross_entropy_with_logits(d_real, label_real)\n            elif self.gan_type.find('WGAN') >= 0:\n                loss_d = (d_fake - d_real).mean()\n                if self.gan_type.find('GP') >= 0:\n                    # sample one interpolation coefficient per image in the batch\n                    epsilon = torch.rand(fake_detach.size(0), 1, 1, 1, device=fake_detach.device)\n                    hat = fake_detach.mul(1 - epsilon) + real.mul(epsilon)\n                    hat.requires_grad = True\n                    d_hat = self.discriminator(hat)\n                    gradients = torch.autograd.grad(\n                        outputs=d_hat.sum(), inputs=hat,\n                        retain_graph=True, create_graph=True, only_inputs=True\n                    )[0]\n                    gradients = gradients.view(gradients.size(0), -1)\n                    gradient_norm = gradients.norm(2, dim=1)\n                    gradient_penalty = 10 * gradient_norm.sub(1).pow(2).mean()\n                    loss_d += gradient_penalty\n\n            # Discriminator update\n            self.loss += loss_d.item()\n            loss_d.backward()\n            self.optimizer.step()\n\n            if self.gan_type == 'WGAN':\n                for p in self.discriminator.parameters():\n                    p.data.clamp_(-1, 1)\n\n        self.loss /= self.gan_k\n\n        d_fake_for_g = self.discriminator(fake)\n        if self.gan_type == 'GAN':\n            loss_g = F.binary_cross_entropy_with_logits(\n                d_fake_for_g, label_real\n            )\n        elif self.gan_type.find('WGAN') >= 0:\n            loss_g = -d_fake_for_g.mean()\n\n        # Generator loss\n        return loss_g\n    \n    def state_dict(self, *args, **kwargs):\n        state_discriminator = self.discriminator.state_dict(*args, **kwargs)\n        state_optimizer = self.optimizer.state_dict()\n\n        return dict(**state_discriminator, **state_optimizer)\n               \n# Some references\n# https://github.com/kuc2477/pytorch-wgan-gp/blob/master/model.py\n# OR\n# https://github.com/caogang/wgan-gp/blob/master/gan_cifar10.py\n"
  },
  {
    "path": "loss/discriminator.py",
    "content": "from model import common\n\nimport torch.nn as nn\n\nclass Discriminator(nn.Module):\n    def __init__(self, args, gan_type='GAN'):\n        super(Discriminator, self).__init__()\n\n        in_channels = 3\n        out_channels = 64\n        depth = 7\n        #bn = not gan_type == 'WGAN_GP'\n        bn = True\n        act = nn.LeakyReLU(negative_slope=0.2, inplace=True)\n\n        m_features = [\n            common.BasicBlock(args.n_colors, out_channels, 3, bn=bn, act=act)\n        ]\n        for i in range(depth):\n            in_channels = out_channels\n            if i % 2 == 1:\n                stride = 1\n                out_channels *= 2\n            else:\n                stride = 2\n            m_features.append(common.BasicBlock(\n                in_channels, out_channels, 3, stride=stride, bn=bn, act=act\n            ))\n\n        self.features = nn.Sequential(*m_features)\n\n        patch_size = args.patch_size // (2**((depth + 1) // 2))\n        m_classifier = [\n            nn.Linear(out_channels * patch_size**2, 1024),\n            act,\n            nn.Linear(1024, 1)\n        ]\n        self.classifier = nn.Sequential(*m_classifier)\n\n    def forward(self, x):\n        features = self.features(x)\n        output = self.classifier(features.view(features.size(0), -1))\n\n        return output\n\n"
  },
  {
    "path": "loss/vgg.py",
    "content": "from model import common\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision.models as models\n\nclass VGG(nn.Module):\n    def __init__(self, conv_index, rgb_range=1):\n        super(VGG, self).__init__()\n        vgg_features = models.vgg19(pretrained=True).features\n        modules = [m for m in vgg_features]\n        if conv_index == '22':\n            self.vgg = nn.Sequential(*modules[:8])\n        elif conv_index == '54':\n            self.vgg = nn.Sequential(*modules[:35])\n\n        vgg_mean = (0.485, 0.456, 0.406)\n        vgg_std = (0.229 * rgb_range, 0.224 * rgb_range, 0.225 * rgb_range)\n        self.sub_mean = common.MeanShift(rgb_range, vgg_mean, vgg_std)\n        # freeze the VGG feature extractor\n        for p in self.vgg.parameters():\n            p.requires_grad = False\n\n    def forward(self, sr, hr):\n        def _forward(x):\n            x = self.sub_mean(x)\n            x = self.vgg(x)\n            return x\n            \n        vgg_sr = _forward(sr)\n        with torch.no_grad():\n            vgg_hr = _forward(hr.detach())\n\n        loss = F.mse_loss(vgg_sr, vgg_hr)\n\n        return loss\n"
  },
  {
    "path": "main.py",
    "content": "from option import args\nimport torch\nimport utility\nimport data\nimport model\nimport loss\nfrom trainer import Trainer\n\n\nif __name__ == '__main__':\n    torch.manual_seed(args.seed)\n    checkpoint = utility.checkpoint(args)\n    if checkpoint.ok:\n        loader = data.Data(args)\n        model = model.Model(args, checkpoint)\n        loss = loss.Loss(args, checkpoint) if not args.test_only else None\n        t = Trainer(args, loader, model, loss, checkpoint)\n        while not t.terminate():\n            t.train()\n\n        checkpoint.done()\n"
  },
  {
    "path": "main.sh",
    "content": "# noise-free degradations with isotropic Gaussian blurs\npython main.py --dir_data='D:/LongguangWang/Data' \\\n               --model='blindsr' \\\n               --scale='2' \\\n               --blur_type='iso_gaussian' \\\n               --noise=0.0 \\\n               --sig_min=0.2 \\\n               --sig_max=4.0\n\n\n# general degradations with anisotropic Gaussian blurs and noises\npython main.py --dir_data='D:/LongguangWang/Data' \\\n               --model='blindsr' \\\n               --scale='4' \\\n               --blur_type='aniso_gaussian' \\\n               --noise=25.0 \\\n               --lambda_min=0.2 \\\n               --lambda_max=4.0\n"
  },
  {
    "path": "moco/__init__.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n"
  },
  {
    "path": "moco/builder.py",
    "content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\nimport torch\nimport torch.nn as nn\n\n\nclass MoCo(nn.Module):\n    \"\"\"\n    Build a MoCo model with: a query encoder, a key encoder, and a queue\n    https://arxiv.org/abs/1911.05722\n    \"\"\"\n    def __init__(self, base_encoder, dim=256, K=32*256, m=0.999, T=0.07, mlp=False):\n        \"\"\"\n        dim: feature dimension (default: 256)\n        K: queue size; number of negative keys (default: 8192)\n        m: moco momentum of updating key encoder (default: 0.999)\n        T: softmax temperature (default: 0.07)\n        \"\"\"\n        super(MoCo, self).__init__()\n\n        self.K = K\n        self.m = m\n        self.T = T\n\n        # create the encoders\n        # num_classes is the output fc dimension\n        self.encoder_q = base_encoder()\n        self.encoder_k = base_encoder()\n\n        for param_q, param_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):\n            param_k.data.copy_(param_q.data)  # initialize\n            param_k.requires_grad = False  # not update by gradient\n\n        # create the queue\n        self.register_buffer(\"queue\", torch.randn(dim, K))\n        self.queue = nn.functional.normalize(self.queue, dim=0)\n\n        self.register_buffer(\"queue_ptr\", torch.zeros(1, dtype=torch.long))\n\n    @torch.no_grad()\n    def _momentum_update_key_encoder(self):\n        \"\"\"\n        Momentum update of the key encoder\n        \"\"\"\n        for param_q, param_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):\n            param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)\n\n    @torch.no_grad()\n    def _dequeue_and_enqueue(self, keys):\n        # gather keys before updating queue\n        # keys = concat_all_gather(keys)\n        batch_size = keys.shape[0]\n\n        ptr = int(self.queue_ptr)\n        assert self.K % batch_size == 0  # for simplicity\n\n        # replace the keys at ptr (dequeue and enqueue)\n        self.queue[:, ptr:ptr + batch_size] = keys.transpose(0, 1)\n        ptr = (ptr + batch_size) % self.K  # move pointer\n\n        self.queue_ptr[0] = ptr\n\n    @torch.no_grad()\n    def _batch_shuffle_ddp(self, x):\n        \"\"\"\n        Batch shuffle, for making use of BatchNorm.\n        *** Only support DistributedDataParallel (DDP) model. ***\n        \"\"\"\n        # gather from all gpus\n        batch_size_this = x.shape[0]\n        x_gather = concat_all_gather(x)\n        batch_size_all = x_gather.shape[0]\n\n        num_gpus = batch_size_all // batch_size_this\n\n        # random shuffle index\n        idx_shuffle = torch.randperm(batch_size_all).cuda()\n\n        # broadcast to all gpus\n        torch.distributed.broadcast(idx_shuffle, src=0)\n\n        # index for restoring\n        idx_unshuffle = torch.argsort(idx_shuffle)\n\n        # shuffled index for this gpu\n        gpu_idx = torch.distributed.get_rank()\n        idx_this = idx_shuffle.view(num_gpus, -1)[gpu_idx]\n\n        return x_gather[idx_this], idx_unshuffle\n\n    @torch.no_grad()\n    def _batch_unshuffle_ddp(self, x, idx_unshuffle):\n        \"\"\"\n        Undo batch shuffle.\n        *** Only support DistributedDataParallel (DDP) model. ***\n        \"\"\"\n        # gather from all gpus\n        batch_size_this = x.shape[0]\n        x_gather = concat_all_gather(x)\n        batch_size_all = x_gather.shape[0]\n\n        num_gpus = batch_size_all // batch_size_this\n\n        # restored index for this gpu\n        gpu_idx = torch.distributed.get_rank()\n        idx_this = idx_unshuffle.view(num_gpus, -1)[gpu_idx]\n\n        return x_gather[idx_this]\n\n    def forward(self, im_q, im_k):\n        \"\"\"\n        Input:\n            im_q: a batch of query images\n            im_k: a batch of key images\n        Output:\n            logits, targets\n        \"\"\"\n        if self.training:\n            # compute query features\n            embedding, q = self.encoder_q(im_q)  # queries: NxC\n            q = nn.functional.normalize(q, dim=1)\n\n            # compute key features\n            with torch.no_grad():  # no gradient to keys\n                self._momentum_update_key_encoder()  # update the key encoder\n\n                _, k = self.encoder_k(im_k)  # keys: NxC\n                k = nn.functional.normalize(k, dim=1)\n\n            # compute logits\n            # Einstein sum is more intuitive\n            # positive logits: Nx1\n            l_pos = torch.einsum('nc,nc->n', [q, k]).unsqueeze(-1)\n            # negative logits: NxK\n            l_neg = torch.einsum('nc,ck->nk', [q, self.queue.clone().detach()])\n\n            # logits: Nx(1+K)\n            logits = torch.cat([l_pos, l_neg], dim=1)\n\n            # apply temperature\n            logits /= self.T\n\n            # labels: positive key indicators\n            labels = torch.zeros(logits.shape[0], dtype=torch.long).cuda()\n\n            # dequeue and enqueue\n            self._dequeue_and_enqueue(k)\n\n            return embedding, logits, labels\n        else:\n            embedding, _ = self.encoder_q(im_q)\n\n            return embedding\n\n\n# utils\n@torch.no_grad()\ndef concat_all_gather(tensor):\n    \"\"\"\n    Performs 
all_gather operation on the provided tensors.\n    *** Warning ***: torch.distributed.all_gather has no gradient.\n    \"\"\"\n    tensors_gather = [torch.ones_like(tensor)\n        for _ in range(torch.distributed.get_world_size())]\n    torch.distributed.all_gather(tensors_gather, tensor, async_op=False)\n\n    output = torch.cat(tensors_gather, dim=0)\n    return output\n"
  },
  {
    "path": "model/__init__.py",
    "content": "import os\nfrom importlib import import_module\n\nimport torch\nimport torch.nn as nn\n\n\nclass Model(nn.Module):\n    def __init__(self, args, ckp):\n        super(Model, self).__init__()\n        print('Making model...')\n        self.args = args\n        self.scale = args.scale\n        self.idx_scale = 0\n        self.self_ensemble = args.self_ensemble\n        self.chop = args.chop\n        self.precision = args.precision\n        self.cpu = args.cpu\n        self.device = torch.device('cpu' if args.cpu else 'cuda')\n        self.n_GPUs = args.n_GPUs\n        self.save_models = args.save_models\n        self.save = args.save\n\n        module = import_module('model.'+args.model)\n        self.model = module.make_model(args).to(self.device)\n        if args.precision == 'half': self.model.half()\n\n        if not args.cpu and args.n_GPUs > 1:\n            self.model = nn.DataParallel(self.model, range(args.n_GPUs))\n\n        self.load(\n            ckp.dir,\n            pre_train=args.pre_train,\n            resume=args.resume,\n            cpu=args.cpu\n        )\n\n    def forward(self, x):\n        if self.self_ensemble and not self.training:\n            if self.chop:\n                forward_function = self.forward_chop\n            else:\n                forward_function = self.model.forward\n\n            return self.forward_x8(x, forward_function)\n        elif self.chop and not self.training:\n            return self.forward_chop(x)\n        else:\n            return self.model(x)\n\n    def get_model(self):\n        if self.n_GPUs <= 1 or self.cpu:\n            return self.model\n        else:\n            return self.model.module\n\n    def state_dict(self, **kwargs):\n        target = self.get_model()\n        return target.state_dict(**kwargs)\n\n    def save(self, apath, epoch, is_best=False):\n        target = self.get_model()\n        torch.save(\n            target.state_dict(),\n            os.path.join(apath, 'model', 
'model_latest.pt')\n        )\n        if is_best:\n            torch.save(\n                target.state_dict(),\n                os.path.join(apath, 'model', 'model_best.pt')\n            )\n\n        if self.save_models:\n            torch.save(\n                target.state_dict(),\n                os.path.join(apath, 'model', 'model_{}.pt'.format(epoch))\n            )\n\n    def load(self, apath, pre_train='.', resume=-1, cpu=False):\n        if cpu:\n            kwargs = {'map_location': lambda storage, loc: storage}\n        else:\n            kwargs = {}\n\n        if resume == -1:\n            self.get_model().load_state_dict(\n                torch.load(os.path.join(apath, 'model', 'model_latest.pt'), **kwargs),\n                strict=True\n            )\n\n        elif resume == 0:\n            if pre_train != '.':\n                self.get_model().load_state_dict(\n                    torch.load(pre_train, **kwargs),\n                    strict=True\n                )\n\n        elif resume > 0:\n            self.get_model().load_state_dict(\n                torch.load(os.path.join(apath, 'model', 'model_{}.pt'.format(resume)), **kwargs),\n                strict=False\n            )\n\n    def forward_chop(self, x, shave=10, min_size=160000):\n        scale = self.scale[self.idx_scale]\n        n_GPUs = min(self.n_GPUs, 4)\n        b, c, h, w = x.size()\n        h_half, w_half = h // 2, w // 2\n        h_size, w_size = h_half + shave, w_half + shave\n        lr_list = [\n            x[:, :, 0:h_size, 0:w_size],\n            x[:, :, 0:h_size, (w - w_size):w],\n            x[:, :, (h - h_size):h, 0:w_size],\n            x[:, :, (h - h_size):h, (w - w_size):w]]\n\n        if w_size * h_size < min_size:\n            sr_list = []\n            for i in range(0, 4, n_GPUs):\n                lr_batch = torch.cat(lr_list[i:(i + n_GPUs)], dim=0)\n                sr_batch = self.model(lr_batch)\n                sr_list.extend(sr_batch.chunk(n_GPUs, dim=0))\n     
   else:\n            sr_list = [\n                self.forward_chop(patch, shave=shave, min_size=min_size) \\\n                for patch in lr_list\n            ]\n\n        h, w = scale * h, scale * w\n        h_half, w_half = scale * h_half, scale * w_half\n        h_size, w_size = scale * h_size, scale * w_size\n        shave *= scale\n\n        output = x.new(b, c, h, w)\n        output[:, :, 0:h_half, 0:w_half] \\\n            = sr_list[0][:, :, 0:h_half, 0:w_half]\n        output[:, :, 0:h_half, w_half:w] \\\n            = sr_list[1][:, :, 0:h_half, (w_size - w + w_half):w_size]\n        output[:, :, h_half:h, 0:w_half] \\\n            = sr_list[2][:, :, (h_size - h + h_half):h_size, 0:w_half]\n        output[:, :, h_half:h, w_half:w] \\\n            = sr_list[3][:, :, (h_size - h + h_half):h_size, (w_size - w + w_half):w_size]\n\n        return output\n\n    def forward_x8(self, x, forward_function):\n        def _transform(v, op):\n            if self.precision != 'single': v = v.float()\n\n            v2np = v.data.cpu().numpy()\n            if op == 'v':\n                tfnp = v2np[:, :, :, ::-1].copy()\n            elif op == 'h':\n                tfnp = v2np[:, :, ::-1, :].copy()\n            elif op == 't':\n                tfnp = v2np.transpose((0, 1, 3, 2)).copy()\n\n            ret = torch.Tensor(tfnp).to(self.device)\n            if self.precision == 'half': ret = ret.half()\n\n            return ret\n\n        lr_list = [x]\n        for tf in 'v', 'h', 't':\n            lr_list.extend([_transform(t, tf) for t in lr_list])\n\n        sr_list = [forward_function(aug) for aug in lr_list]\n        for i in range(len(sr_list)):\n            if i > 3:\n                sr_list[i] = _transform(sr_list[i], 't')\n            if i % 4 > 1:\n                sr_list[i] = _transform(sr_list[i], 'h')\n            if (i % 4) % 2 == 1:\n                sr_list[i] = _transform(sr_list[i], 'v')\n\n        output_cat = torch.cat(sr_list, dim=0)\n        output = 
output_cat.mean(dim=0, keepdim=True)\n\n        return output\n\n"
  },
  {
    "path": "model/blindsr.py",
    "content": "import torch\nfrom torch import nn\nimport model.common as common\nimport torch.nn.functional as F\nfrom moco.builder import MoCo\n\n\ndef make_model(args):\n    return BlindSR(args)\n\n\nclass DA_conv(nn.Module):\n    def __init__(self, channels_in, channels_out, kernel_size, reduction):\n        super(DA_conv, self).__init__()\n        self.channels_out = channels_out\n        self.channels_in = channels_in\n        self.kernel_size = kernel_size\n\n        self.kernel = nn.Sequential(\n            nn.Linear(64, 64, bias=False),\n            nn.LeakyReLU(0.1, True),\n            nn.Linear(64, 64 * self.kernel_size * self.kernel_size, bias=False)\n        )\n        self.conv = common.default_conv(channels_in, channels_out, 1)\n        self.ca = CA_layer(channels_in, channels_out, reduction)\n\n        self.relu = nn.LeakyReLU(0.1, True)\n\n    def forward(self, x):\n        '''\n        :param x[0]: feature map: B * C * H * W\n        :param x[1]: degradation representation: B * C\n        '''\n        b, c, h, w = x[0].size()\n\n        # branch 1\n        kernel = self.kernel(x[1]).view(-1, 1, self.kernel_size, self.kernel_size)\n        out = self.relu(F.conv2d(x[0].view(1, -1, h, w), kernel, groups=b*c, padding=(self.kernel_size-1)//2))\n        out = self.conv(out.view(b, -1, h, w))\n\n        # branch 2\n        out = out + self.ca(x)\n\n        return out\n\n\nclass CA_layer(nn.Module):\n    def __init__(self, channels_in, channels_out, reduction):\n        super(CA_layer, self).__init__()\n        self.conv_du = nn.Sequential(\n            nn.Conv2d(channels_in, channels_in//reduction, 1, 1, 0, bias=False),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(channels_in // reduction, channels_out, 1, 1, 0, bias=False),\n            nn.Sigmoid()\n        )\n\n    def forward(self, x):\n        '''\n        :param x[0]: feature map: B * C * H * W\n        :param x[1]: degradation representation: B * C\n        '''\n        att = 
self.conv_du(x[1][:, :, None, None])\n\n        return x[0] * att\n\n\nclass DAB(nn.Module):\n    def __init__(self, conv, n_feat, kernel_size, reduction):\n        super(DAB, self).__init__()\n\n        self.da_conv1 = DA_conv(n_feat, n_feat, kernel_size, reduction)\n        self.da_conv2 = DA_conv(n_feat, n_feat, kernel_size, reduction)\n        self.conv1 = conv(n_feat, n_feat, kernel_size)\n        self.conv2 = conv(n_feat, n_feat, kernel_size)\n\n        self.relu =  nn.LeakyReLU(0.1, True)\n\n    def forward(self, x):\n        '''\n        :param x[0]: feature map: B * C * H * W\n        :param x[1]: degradation representation: B * C\n        '''\n\n        out = self.relu(self.da_conv1(x))\n        out = self.relu(self.conv1(out))\n        out = self.relu(self.da_conv2([out, x[1]]))\n        out = self.conv2(out) + x[0]\n\n        return out\n\n\nclass DAG(nn.Module):\n    def __init__(self, conv, n_feat, kernel_size, reduction, n_blocks):\n        super(DAG, self).__init__()\n        self.n_blocks = n_blocks\n        modules_body = [\n            DAB(conv, n_feat, kernel_size, reduction) \\\n            for _ in range(n_blocks)\n        ]\n        modules_body.append(conv(n_feat, n_feat, kernel_size))\n\n        self.body = nn.Sequential(*modules_body)\n\n    def forward(self, x):\n        '''\n        :param x[0]: feature map: B * C * H * W\n        :param x[1]: degradation representation: B * C\n        '''\n        res = x[0]\n        for i in range(self.n_blocks):\n            res = self.body[i]([res, x[1]])\n        res = self.body[-1](res)\n        res = res + x[0]\n\n        return res\n\n\nclass DASR(nn.Module):\n    def __init__(self, args, conv=common.default_conv):\n        super(DASR, self).__init__()\n\n        self.n_groups = 5\n        n_blocks = 5\n        n_feats = 64\n        kernel_size = 3\n        reduction = 8\n        scale = int(args.scale[0])\n\n        # RGB mean for DIV2K\n        rgb_mean = (0.4488, 0.4371, 0.4040)\n        
rgb_std = (1.0, 1.0, 1.0)\n        self.sub_mean = common.MeanShift(255.0, rgb_mean, rgb_std)\n        self.add_mean = common.MeanShift(255.0, rgb_mean, rgb_std, 1)\n\n        # head module\n        modules_head = [conv(3, n_feats, kernel_size)]\n        self.head = nn.Sequential(*modules_head)\n\n        # compress\n        self.compress = nn.Sequential(\n            nn.Linear(256, 64, bias=False),\n            nn.LeakyReLU(0.1, True)\n        )\n\n        # body\n        modules_body = [\n            DAG(common.default_conv, n_feats, kernel_size, reduction, n_blocks) \\\n            for _ in range(self.n_groups)\n        ]\n        modules_body.append(conv(n_feats, n_feats, kernel_size))\n        self.body = nn.Sequential(*modules_body)\n\n        # tail\n        modules_tail = [common.Upsampler(conv, scale, n_feats, act=False),\n                        conv(n_feats, 3, kernel_size)]\n        self.tail = nn.Sequential(*modules_tail)\n\n    def forward(self, x, k_v):\n        k_v = self.compress(k_v)\n\n        # sub mean\n        x = self.sub_mean(x)\n\n        # head\n        x = self.head(x)\n\n        # body\n        res = x\n        for i in range(self.n_groups):\n            res = self.body[i]([res, k_v])\n        res = self.body[-1](res)\n        res = res + x\n\n        # tail\n        x = self.tail(res)\n\n        # add mean\n        x = self.add_mean(x)\n\n        return x\n\n\nclass Encoder(nn.Module):\n    def __init__(self):\n        super(Encoder, self).__init__()\n\n        self.E = nn.Sequential(\n            nn.Conv2d(3, 64, kernel_size=3, padding=1),\n            nn.BatchNorm2d(64),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(64, 64, kernel_size=3, padding=1),\n            nn.BatchNorm2d(64),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),\n            nn.BatchNorm2d(128),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(128, 128, kernel_size=3, padding=1),\n  
          nn.BatchNorm2d(128),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),\n            nn.BatchNorm2d(256),\n            nn.LeakyReLU(0.1, True),\n            nn.Conv2d(256, 256, kernel_size=3, padding=1),\n            nn.BatchNorm2d(256),\n            nn.LeakyReLU(0.1, True),\n            nn.AdaptiveAvgPool2d(1),\n        )\n        self.mlp = nn.Sequential(\n            nn.Linear(256, 256),\n            nn.LeakyReLU(0.1, True),\n            nn.Linear(256, 256),\n        )\n\n    def forward(self, x):\n        fea = self.E(x).squeeze(-1).squeeze(-1)\n        out = self.mlp(fea)\n\n        return fea, out\n\n\nclass BlindSR(nn.Module):\n    def __init__(self, args):\n        super(BlindSR, self).__init__()\n\n        # Generator\n        self.G = DASR(args)\n\n        # Encoder\n        self.E = MoCo(base_encoder=Encoder)\n\n    def forward(self, x):\n        if self.training:\n            x_query = x[:, 0, ...]                          # b, c, h, w\n            x_key = x[:, 1, ...]                            # b, c, h, w\n\n            # degradation-aware representation learning\n            fea, logits, labels = self.E(x_query, x_key)\n\n            # degradation-aware SR\n            sr = self.G(x_query, fea)\n\n            return sr, logits, labels\n        else:\n            # degradation-aware representation learning\n            fea = self.E(x, x)\n\n            # degradation-aware SR\n            sr = self.G(x, fea)\n\n            return sr\n"
  },
  {
    "path": "model/common.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef default_conv(in_channels, out_channels, kernel_size, bias=True):\n    return nn.Conv2d(in_channels, out_channels, kernel_size, padding=(kernel_size//2), bias=bias)\n\n\nclass MeanShift(nn.Conv2d):\n    def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):\n        super(MeanShift, self).__init__(3, 3, kernel_size=1)\n        std = torch.Tensor(rgb_std)\n        self.weight.data = torch.eye(3).view(3, 3, 1, 1)\n        self.weight.data.div_(std.view(3, 1, 1, 1))\n        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)\n        self.bias.data.div_(std)\n        self.weight.requires_grad = False\n        self.bias.requires_grad = False\n\n\nclass Upsampler(nn.Sequential):\n    def __init__(self, conv, scale, n_feat, act=False, bias=True):\n        m = []\n        if (scale & (scale - 1)) == 0:    # Is scale = 2^n?\n            for _ in range(int(math.log(scale, 2))):\n                m.append(conv(n_feat, 4 * n_feat, 3, bias))\n                m.append(nn.PixelShuffle(2))\n                if act: m.append(act())\n        elif scale == 3:\n            m.append(conv(n_feat, 9 * n_feat, 3, bias))\n            m.append(nn.PixelShuffle(3))\n            if act: m.append(act())\n        else:\n            raise NotImplementedError\n\n        super(Upsampler, self).__init__(*m)\n\n"
  },
  {
    "path": "option.py",
    "content": "import argparse\nimport template\n\nparser = argparse.ArgumentParser(description='EDSR and MDSR')\n\nparser.add_argument('--debug', action='store_true',\n                    help='Enables debug mode')\nparser.add_argument('--template', default='.',\n                    help='You can set various templates in option.py')\n\n# Hardware specifications\nparser.add_argument('--n_threads', type=int, default=4,\n                    help='number of threads for data loading')\nparser.add_argument('--cpu', type=bool, default=False,\n                    help='use cpu only')\nparser.add_argument('--n_GPUs', type=int, default=2,\n                    help='number of GPUs')\nparser.add_argument('--seed', type=int, default=1,\n                    help='random seed')\n\n# Data specifications\nparser.add_argument('--dir_data', type=str, default='D:/LongguangWang/Data',\n                    help='dataset directory')\nparser.add_argument('--dir_demo', type=str, default='../test',\n                    help='demo image directory')\nparser.add_argument('--data_train', type=str, default='DF2K',\n                    help='train dataset name')\nparser.add_argument('--data_test', type=str, default='Set14',\n                    help='test dataset name')\nparser.add_argument('--data_range', type=str, default='1-3450/801-810',\n                    help='train/test data range')\nparser.add_argument('--ext', type=str, default='sep',\n                    help='dataset file extension')\nparser.add_argument('--scale', type=str, default='4',\n                    help='super resolution scale')\nparser.add_argument('--patch_size', type=int, default=48,\n                    help='output patch size')\nparser.add_argument('--rgb_range', type=int, default=255,\n                    help='maximum value of RGB')\nparser.add_argument('--n_colors', type=int, default=3,\n                    help='number of color channels to use')\nparser.add_argument('--chop', action='store_true',\n                
    help='enable memory-efficient forward')\nparser.add_argument('--no_augment', action='store_true',\n                    help='do not use data augmentation')\n\n# Degradation specifications\nparser.add_argument('--blur_kernel', type=int, default=21,\n                    help='size of blur kernels')\nparser.add_argument('--blur_type', type=str, default='iso_gaussian',\n                    help='blur types (iso_gaussian | aniso_gaussian)')\nparser.add_argument('--mode', type=str, default='bicubic',\n                    help='downsampler (bicubic | s-fold)')\nparser.add_argument('--noise', type=float, default=0.0,\n                    help='noise level')\n## isotropic Gaussian blur\nparser.add_argument('--sig_min', type=float, default=0.2,\n                    help='minimum sigma of isotropic Gaussian blurs')\nparser.add_argument('--sig_max', type=float, default=4.0,\n                    help='maximum sigma of isotropic Gaussian blurs')\nparser.add_argument('--sig', type=float, default=4.0,\n                    help='specific sigma of isotropic Gaussian blurs')\n## anisotropic Gaussian blur\nparser.add_argument('--lambda_min', type=float, default=0.2,\n                    help='minimum value for the eigenvalue of anisotropic Gaussian blurs')\nparser.add_argument('--lambda_max', type=float, default=4.0,\n                    help='maximum value for the eigenvalue of anisotropic Gaussian blurs')\nparser.add_argument('--lambda_1', type=float, default=0.2,\n                    help='one eigenvalue of anisotropic Gaussian blurs')\nparser.add_argument('--lambda_2', type=float, default=4.0,\n                    help='another eigenvalue of anisotropic Gaussian blurs')\nparser.add_argument('--theta', type=float, default=0.0,\n                    help='rotation angle of anisotropic Gaussian blurs [0, 180]')\n\n\n# Model specifications\nparser.add_argument('--model', default='blindsr',\n                    help='model name')\nparser.add_argument('--pre_train', type=str, 
default= '.',\n                    help='pre-trained model directory')\nparser.add_argument('--extend', type=str, default='.',\n                    help='pre-trained model directory')\nparser.add_argument('--shift_mean', default=True,\n                    help='subtract pixel mean from the input')\nparser.add_argument('--dilation', action='store_true',\n                    help='use dilated convolution')\nparser.add_argument('--precision', type=str, default='single',\n                    choices=('single', 'half'),\n                    help='FP precision for test (single | half)')\n\n# Training specifications\nparser.add_argument('--reset', action='store_true',\n                    help='reset the training')\nparser.add_argument('--test_every', type=int, default=1000,\n                    help='do test per every N batches')\nparser.add_argument('--epochs_encoder', type=int, default=100,\n                    help='number of epochs to train the degradation encoder')\nparser.add_argument('--epochs_sr', type=int, default=500,\n                    help='number of epochs to train the whole network')\nparser.add_argument('--batch_size', type=int, default=32,\n                    help='input batch size for training')\nparser.add_argument('--split_batch', type=int, default=1,\n                    help='split the batch into smaller chunks')\nparser.add_argument('--self_ensemble', action='store_true',\n                    help='use self-ensemble method for test')\nparser.add_argument('--test_only', action='store_true',\n                    help='set this option to test the model')\n\n# Optimization specifications\nparser.add_argument('--lr_encoder', type=float, default=1e-3,\n                    help='learning rate to train the degradation encoder')\nparser.add_argument('--lr_sr', type=float, default=1e-4,\n                    help='learning rate to train the whole network')\nparser.add_argument('--lr_decay_encoder', type=int, default=60,\n                    help='learning 
rate decay per N epochs')\nparser.add_argument('--lr_decay_sr', type=int, default=125,\n                    help='learning rate decay per N epochs')\nparser.add_argument('--decay_type', type=str, default='step',\n                    help='learning rate decay type')\nparser.add_argument('--gamma_encoder', type=float, default=0.1,\n                    help='learning rate decay factor for step decay')\nparser.add_argument('--gamma_sr', type=float, default=0.5,\n                    help='learning rate decay factor for step decay')\nparser.add_argument('--optimizer', default='ADAM',\n                    choices=('SGD', 'ADAM', 'RMSprop'),\n                    help='optimizer to use (SGD | ADAM | RMSprop)')\nparser.add_argument('--momentum', type=float, default=0.9,\n                    help='SGD momentum')\nparser.add_argument('--beta1', type=float, default=0.9,\n                    help='ADAM beta1')\nparser.add_argument('--beta2', type=float, default=0.999,\n                    help='ADAM beta2')\nparser.add_argument('--epsilon', type=float, default=1e-8,\n                    help='ADAM epsilon for numerical stability')\nparser.add_argument('--weight_decay', type=float, default=0,\n                    help='weight decay')\nparser.add_argument('--start_epoch', type=int, default=0,\n                    help='resume from the snapshot, and the start_epoch')\n\n# Loss specifications\nparser.add_argument('--loss', type=str, default='1*L1',\n                    help='loss function configuration')\nparser.add_argument('--skip_threshold', type=float, default='1e6',\n                    help='skipping batch that has large error')\n\n# Log specifications\nparser.add_argument('--save', type=str, default='blindsr',\n                    help='file name to save')\nparser.add_argument('--load', type=str, default='.',\n                    help='file name to load')\nparser.add_argument('--resume', type=int, default=0,\n                    help='resume from specific 
checkpoint')\nparser.add_argument('--save_models', action='store_true',\n                    help='save all intermediate models')\nparser.add_argument('--print_every', type=int, default=200,\n                    help='how many batches to wait before logging training status')\nparser.add_argument('--save_results', default=False,\n                    help='save output results')\n\nargs = parser.parse_args()\ntemplate.set_template(args)\n\nargs.scale = list(map(lambda x: float(x), args.scale.split('+')))\n\n"
  },
  {
    "path": "quick_test.py",
    "content": "from model.blindsr import BlindSR\nimport torch\nimport numpy as np\nimport imageio\nimport argparse\nimport os\nimport utility\nimport cv2\n\n\ndef parse_args():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--img_dir', type=str, default='D:/LongguangWang/Data/test.png',\n                        help='image directory')\n    parser.add_argument('--scale', type=str, default='2',\n                        help='super resolution scale')\n    parser.add_argument('--resume', type=int, default=600,\n                        help='resume from specific checkpoint')\n    parser.add_argument('--blur_type', type=str, default='iso_gaussian',\n                        help='blur types (iso_gaussian | aniso_gaussian)')\n    return parser.parse_args()\n\n\ndef main():\n    args = parse_args()\n    if args.blur_type == 'iso_gaussian':\n        dir = './experiment/blindsr_x' + str(int(args.scale[0])) + '_bicubic_iso'\n    elif args.blur_type == 'aniso_gaussian':\n        dir = './experiment/blindsr_x' + str(int(args.scale[0])) + '_bicubic_aniso'\n\n    # path to save sr images\n    save_dir = dir + '/results'\n    if not os.path.exists(save_dir):\n        os.mkdir(save_dir)\n\n    DASR = BlindSR(args).cuda()\n    DASR.load_state_dict(torch.load(dir + '/model/model_' + str(args.resume) + '.pt'), strict=False)\n    DASR.eval()\n\n    lr = imageio.imread(args.img_dir)\n    lr = np.ascontiguousarray(lr.transpose((2, 0, 1)))\n    lr = torch.from_numpy(lr).float().cuda().unsqueeze(0).unsqueeze(0)\n\n    # inference\n    sr = DASR(lr[:, 0, ...])\n    sr = utility.quantize(sr, 255.0)\n\n    # save sr results\n    img_name = args.img_dir.split('.png')[0].split('/')[-1]\n    sr = np.array(sr.squeeze(0).permute(1, 2, 0).data.cpu())\n    sr = sr[:, :, [2, 1, 0]]\n    cv2.imwrite(save_dir + '/' + img_name + '_sr.png', sr)\n\n\nif __name__ == '__main__':\n    with torch.no_grad():\n        main()"
  },
  {
    "path": "quick_test.sh",
    "content": "# super-resolve an LR image (x2) using the model trained on noise-free degradations with isotropic Gaussian blurs\npython quick_test.py --img_dir='D:/LongguangWang/Data/test.png' \\\n                     --scale='2' \\\n                     --resume=600 \\\n                     --blur_type='iso_gaussian'\n\n# super-resolve an LR image (x4) using the model trained on general degradations with anisotropic Gaussian blurs and noises\npython quick_test.py --img_dir='D:/LongguangWang/Data/test.png' \\\n                     --scale='4' \\\n                     --resume=600 \\\n                     --blur_type='aniso_gaussian'\n\ncmd /k\n"
  },
  {
    "path": "template.py",
    "content": "def set_template(args):\n    # Set the templates here\n    if args.template.find('jpeg') >= 0:\n        args.data_train = 'DIV2K_jpeg'\n        args.data_test = 'DIV2K_jpeg'\n        args.epochs = 200\n        args.lr_decay = 100\n\n    if args.template.find('EDSR_paper') >= 0:\n        args.model = 'EDSR'\n        args.n_resblocks = 32\n        args.n_feats = 256\n        args.res_scale = 0.1\n\n    if args.template.find('MDSR') >= 0:\n        args.model = 'MDSR'\n        args.patch_size = 48\n        args.epochs = 650\n\n    if args.template.find('DDBPN') >= 0:\n        args.model = 'DDBPN'\n        args.patch_size = 128\n        args.scale = '4'\n\n        args.data_test = 'Set5'\n\n        args.batch_size = 20\n        args.epochs = 1000\n        args.lr_decay = 500\n        args.gamma = 0.1\n        args.weight_decay = 1e-4\n\n        args.loss = '1*MSE'\n\n    if args.template.find('GAN') >= 0:\n        args.epochs = 200\n        args.lr = 5e-5\n        args.lr_decay = 150\n\n    if args.template.find('RCAN') >= 0:\n        args.model = 'RCAN'\n        args.n_resgroups = 10\n        args.n_resblocks = 20\n        args.n_feats = 64\n        args.chop = True\n\n"
  },
  {
    "path": "test.py",
    "content": "from option import args\nimport torch\nimport utility\nimport data\nimport model\nimport loss\nfrom trainer import Trainer\n\n\nif __name__ == '__main__':\n    torch.manual_seed(args.seed)\n    checkpoint = utility.checkpoint(args)\n\n    if checkpoint.ok:\n        loader = data.Data(args)\n        model = model.Model(args, checkpoint)\n        loss = loss.Loss(args, checkpoint) if not args.test_only else None\n        t = Trainer(args, loader, model, loss, checkpoint)\n        while not t.terminate():\n            t.test()\n\n        checkpoint.done()\n"
  },
  {
    "path": "test.sh",
    "content": "# noise-free degradations with isotropic Gaussian blurs\npython test.py --test_only \\\n               --dir_data='D:/LongguangWang/Data' \\\n               --data_test='Set14' \\\n               --model='blindsr' \\\n               --scale='2' \\\n               --resume=600 \\\n               --blur_type='iso_gaussian' \\\n               --noise=0.0 \\\n               --sig=1.2\n\n\n# general degradations with anisotropic Gaussian blurs and noises\npython test.py --test_only \\\n               --dir_data='D:/LongguangWang/Data' \\\n               --data_test='Set14' \\\n               --model='blindsr' \\\n               --scale='4' \\\n               --resume=600 \\\n               --blur_type='aniso_gaussian' \\\n               --noise=10.0 \\\n               --theta=0.0 \\\n               --lambda_1=0.2 \\\n               --lambda_2=4.0\n\ncmd /k"
  },
  {
    "path": "trainer.py",
    "content": "import os\nimport utility\nimport torch\nfrom decimal import Decimal\nimport torch.nn.functional as F\nfrom utils import util\n\n\nclass Trainer():\n    def __init__(self, args, loader, my_model, my_loss, ckp):\n        self.args = args\n        self.scale = args.scale\n\n        self.ckp = ckp\n        self.loader_train = loader.loader_train\n        self.loader_test = loader.loader_test\n        self.model = my_model\n        self.model_E = torch.nn.DataParallel(self.model.get_model().E, range(self.args.n_GPUs))\n        self.loss = my_loss\n        self.contrast_loss = torch.nn.CrossEntropyLoss().cuda()\n        self.optimizer = utility.make_optimizer(args, self.model)\n        self.scheduler = utility.make_scheduler(args, self.optimizer)\n\n        if self.args.load != '.':\n            self.optimizer.load_state_dict(\n                torch.load(os.path.join(ckp.dir, 'optimizer.pt'))\n            )\n            for _ in range(len(ckp.log)): self.scheduler.step()\n\n    def train(self):\n        self.scheduler.step()\n        self.loss.step()\n        epoch = self.scheduler.last_epoch + 1\n\n        # lr stepwise\n        if epoch <= self.args.epochs_encoder:\n            lr = self.args.lr_encoder * (self.args.gamma_encoder ** (epoch // self.args.lr_decay_encoder))\n            for param_group in self.optimizer.param_groups:\n                param_group['lr'] = lr\n        else:\n            lr = self.args.lr_sr * (self.args.gamma_sr ** ((epoch - self.args.epochs_encoder) // self.args.lr_decay_sr))\n            for param_group in self.optimizer.param_groups:\n                param_group['lr'] = lr\n\n        self.ckp.write_log('[Epoch {}]\\tLearning rate: {:.2e}'.format(epoch, Decimal(lr)))\n        self.loss.start_log()\n        self.model.train()\n\n        degrade = util.SRMDPreprocessing(\n            self.scale[0],\n            kernel_size=self.args.blur_kernel,\n            blur_type=self.args.blur_type,\n            
sig_min=self.args.sig_min,\n            sig_max=self.args.sig_max,\n            lambda_min=self.args.lambda_min,\n            lambda_max=self.args.lambda_max,\n            noise=self.args.noise\n        )\n\n        timer = utility.timer()\n        losses_contrast, losses_sr = utility.AverageMeter(), utility.AverageMeter()\n\n        for batch, (hr, _, idx_scale) in enumerate(self.loader_train):\n            hr = hr.cuda()                              # b, n, c, h, w\n            lr, b_kernels = degrade(hr)                 # bn, c, h, w\n\n            self.optimizer.zero_grad()\n\n            timer.tic()\n            # forward\n            ## train degradation encoder\n            if epoch <= self.args.epochs_encoder:\n                _, output, target = self.model_E(im_q=lr[:,0,...], im_k=lr[:,1,...])\n                loss_contrast = self.contrast_loss(output, target)\n                loss = loss_contrast\n\n                losses_contrast.update(loss_contrast.item())\n            ## train the whole network\n            else:\n                sr, output, target = self.model(lr)\n                loss_SR = self.loss(sr, hr[:,0,...])\n                loss_contrast = self.contrast_loss(output, target)\n                loss = loss_contrast + loss_SR\n\n                losses_sr.update(loss_SR.item())\n                losses_contrast.update(loss_contrast.item())\n\n            # backward\n            loss.backward()\n            self.optimizer.step()\n            timer.hold()\n\n            if epoch <= self.args.epochs_encoder:\n                if (batch + 1) % self.args.print_every == 0:\n                    self.ckp.write_log(\n                        'Epoch: [{:03d}][{:04d}/{:04d}]\\t'\n                        'Loss [contrastive loss: {:.3f}]\\t'\n                        'Time [{:.1f}s]'.format(\n                            epoch, (batch + 1) * self.args.batch_size, len(self.loader_train.dataset),\n                            losses_contrast.avg,\n              
              timer.release()\n                        ))\n            else:\n                if (batch + 1) % self.args.print_every == 0:\n                    self.ckp.write_log(\n                        'Epoch: [{:04d}][{:04d}/{:04d}]\\t'\n                        'Loss [SR loss:{:.3f} | contrastive loss: {:.3f}]\\t'\n                        'Time [{:.1f}s]'.format(\n                            epoch, (batch + 1) * self.args.batch_size, len(self.loader_train.dataset),\n                            losses_sr.avg, losses_contrast.avg,\n                            timer.release(),\n                        ))\n\n        self.loss.end_log(len(self.loader_train))\n\n        # save model\n        target = self.model.get_model()\n        model_dict = target.state_dict()\n        keys = list(model_dict.keys())\n        for key in keys:\n            if 'E.encoder_k' in key or 'queue' in key:\n                del model_dict[key]\n        torch.save(\n            model_dict,\n            os.path.join(self.ckp.dir, 'model', 'model_{}.pt'.format(epoch))\n        )\n\n    def test(self):\n        self.ckp.write_log('\\nEvaluation:')\n        self.ckp.add_log(torch.zeros(1, len(self.scale)))\n        self.model.eval()\n\n        timer_test = utility.timer()\n\n        with torch.no_grad():\n            for idx_scale, scale in enumerate(self.scale):\n                self.loader_test.dataset.set_scale(idx_scale)\n                eval_psnr = 0\n                eval_ssim = 0\n\n                degrade = util.SRMDPreprocessing(\n                    self.scale[0],\n                    kernel_size=self.args.blur_kernel,\n                    blur_type=self.args.blur_type,\n                    sig=self.args.sig,\n                    lambda_1=self.args.lambda_1,\n                    lambda_2=self.args.lambda_2,\n                    theta=self.args.theta,\n                    noise=self.args.noise\n                )\n\n                for idx_img, (hr, filename, _) in 
enumerate(self.loader_test):\n                    hr = hr.cuda()                      # b, 1, c, h, w\n                    hr = self.crop_border(hr, scale)\n                    lr, _ = degrade(hr, random=False)   # b, 1, c, h, w\n                    hr = hr[:, 0, ...]                  # b, c, h, w\n\n                    # inference\n                    timer_test.tic()\n                    sr = self.model(lr[:, 0, ...])\n                    timer_test.hold()\n\n                    sr = utility.quantize(sr, self.args.rgb_range)\n                    hr = utility.quantize(hr, self.args.rgb_range)\n\n                    # metrics\n                    eval_psnr += utility.calc_psnr(\n                        sr, hr, scale, self.args.rgb_range,\n                        benchmark=self.loader_test.dataset.benchmark\n                    )\n                    eval_ssim += utility.calc_ssim(\n                        sr, hr, scale,\n                        benchmark=self.loader_test.dataset.benchmark\n                    )\n\n                    # save results\n                    if self.args.save_results:\n                        save_list = [sr]\n                        filename = filename[0]\n                        self.ckp.save_results(filename, save_list, scale)\n\n                self.ckp.log[-1, idx_scale] = eval_psnr / len(self.loader_test)\n                self.ckp.write_log(\n                    '[Epoch {}---{} x{}]\\tPSNR: {:.3f} SSIM: {:.4f}'.format(\n                        self.args.resume,\n                        self.args.data_test,\n                        scale,\n                        eval_psnr / len(self.loader_test),\n                        eval_ssim / len(self.loader_test),\n                    ))\n\n    def crop_border(self, img_hr, scale):\n        b, n, c, h, w = img_hr.size()\n\n        img_hr = img_hr[:, :, :, :int(h//scale*scale), :int(w//scale*scale)]\n\n        return img_hr\n\n    def terminate(self):\n        if self.args.test_only:\n       
     self.test()\n            return True\n        else:\n            epoch = self.scheduler.last_epoch + 1\n            return epoch >= self.args.epochs_encoder + self.args.epochs_sr\n\n"
  },
  {
    "path": "utility.py",
    "content": "import os\nimport math\nimport time\nimport datetime\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.misc as misc\nimport cv2\nimport torch\nimport torch.optim as optim\nimport torch.optim.lr_scheduler as lrs\n\n\nclass AverageMeter(object):\n    \"\"\"Computes and stores the average and current value\"\"\"\n    def __init__(self):\n        self.reset()\n\n    def reset(self):\n        self.val = 0\n        self.avg = 0\n        self.sum = 0\n        self.count = 0\n\n    def update(self, val, n=1):\n        self.val = val\n        self.sum += val * n\n        self.count += n\n        self.avg = self.sum / self.count\n\n\nclass timer():\n    def __init__(self):\n        self.acc = 0\n        self.tic()\n\n    def tic(self):\n        self.t0 = time.time()\n\n    def toc(self):\n        return time.time() - self.t0\n\n    def hold(self):\n        self.acc += self.toc()\n\n    def release(self):\n        ret = self.acc\n        self.acc = 0\n\n        return ret\n\n    def reset(self):\n        self.acc = 0\n\n\nclass checkpoint():\n    def __init__(self, args):\n        self.args = args\n        self.ok = True\n        self.log = torch.Tensor()\n        now = datetime.datetime.now().strftime('%Y-%m-%d-%H:%M:%S')\n\n        if args.blur_type == 'iso_gaussian':\n            self.dir = './experiment/' + args.save + '_x' + str(int(args.scale[0])) + '_' + args.mode + '_iso'\n        elif args.blur_type == 'aniso_gaussian':\n            self.dir = './experiment/' + args.save + '_x' + str(int(args.scale[0])) + '_' + args.mode + '_aniso'\n\n        def _make_dir(path):\n            if not os.path.exists(path): os.makedirs(path)\n\n        _make_dir(self.dir)\n        _make_dir(self.dir + '/model')\n        _make_dir(self.dir + '/results')\n\n        open_type = 'a' if os.path.exists(self.dir + '/log.txt') else 'w'\n        self.log_file = open(self.dir + '/log.txt', open_type)\n        with open(self.dir + '/config.txt', open_type) as f:\n  
          f.write(now + '\\n\\n')\n            for arg in vars(args):\n                f.write('{}: {}\\n'.format(arg, getattr(args, arg)))\n            f.write('\\n')\n\n    def save(self, trainer, epoch, is_best=False):\n        trainer.model.save(self.dir, epoch, is_best=is_best)\n        trainer.loss.save(self.dir)\n        trainer.loss.plot_loss(self.dir, epoch)\n\n        self.plot_psnr(epoch)\n        torch.save(self.log, os.path.join(self.dir, 'psnr_log.pt'))\n        torch.save(\n            trainer.optimizer.state_dict(),\n            os.path.join(self.dir, 'optimizer.pt')\n        )\n\n    def add_log(self, log):\n        self.log = torch.cat([self.log, log])\n\n    def write_log(self, log, refresh=False):\n        print(log)\n        self.log_file.write(log + '\\n')\n        if refresh:\n            self.log_file.close()\n            self.log_file = open(self.dir + '/log.txt', 'a')\n\n    def done(self):\n        self.log_file.close()\n\n    def plot_psnr(self, epoch):\n        axis = np.linspace(1, epoch, epoch)\n        label = 'SR on {}'.format(self.args.data_test)\n        fig = plt.figure()\n        plt.title(label)\n        for idx_scale, scale in enumerate(self.args.scale):\n            plt.plot(\n                axis,\n                self.log[:, idx_scale].numpy(),\n                label='Scale {}'.format(scale)\n            )\n        plt.legend()\n        plt.xlabel('Epochs')\n        plt.ylabel('PSNR')\n        plt.grid(True)\n        plt.savefig('{}/test_{}.pdf'.format(self.dir, self.args.data_test))\n        plt.close(fig)\n\n    def save_results(self, filename, save_list, scale):\n        filename = '{}/results/{}_x{}_'.format(self.dir, filename, scale)\n\n        normalized = save_list[0][0].data.mul(255 / self.args.rgb_range)\n        ndarr = normalized.byte().permute(1, 2, 0).cpu().numpy()\n        misc.imsave('{}{}.png'.format(filename, 'SR'), ndarr)\n\n\ndef quantize(img, rgb_range):\n    pixel_range = 255 / rgb_range\n    return 
img.mul(pixel_range).clamp(0, 255).round().div(pixel_range)\n\n\ndef calc_psnr(sr, hr, scale, rgb_range, benchmark=False):\n    diff = (sr - hr).data.div(rgb_range)\n    if benchmark:\n        shave = scale\n        if diff.size(1) > 1:\n            convert = diff.new(1, 3, 1, 1)\n            convert[0, 0, 0, 0] = 65.738\n            convert[0, 1, 0, 0] = 129.057\n            convert[0, 2, 0, 0] = 25.064\n            diff.mul_(convert).div_(256)\n            diff = diff.sum(dim=1, keepdim=True)\n    else:\n        shave = scale + 6\n    shave = math.ceil(shave)\n    valid = diff[:, :, shave:-shave, shave:-shave]\n    mse = valid.pow(2).mean()\n\n    return -10 * math.log10(mse)\n\n\ndef calc_ssim(img1, img2, scale=2, benchmark=False):\n    '''calculate SSIM\n    the same outputs as MATLAB's\n    img1, img2: [0, 255]\n    '''\n    if benchmark:\n        border = math.ceil(scale)\n    else:\n        border = math.ceil(scale) + 6\n\n    img1 = img1.data.squeeze().float().clamp(0, 255).round().cpu().numpy()\n    img1 = np.transpose(img1, (1, 2, 0))\n    img2 = img2.data.squeeze().cpu().numpy()\n    img2 = np.transpose(img2, (1, 2, 0))\n\n    img1_y = np.dot(img1, [65.738, 129.057, 25.064]) / 255.0 + 16.0\n    img2_y = np.dot(img2, [65.738, 129.057, 25.064]) / 255.0 + 16.0\n    if not img1.shape == img2.shape:\n        raise ValueError('Input images must have the same dimensions.')\n    h, w = img1.shape[:2]\n    img1_y = img1_y[border:h - border, border:w - border]\n    img2_y = img2_y[border:h - border, border:w - border]\n\n    if img1_y.ndim == 2:\n        return ssim(img1_y, img2_y)\n    elif img1.ndim == 3:\n        if img1.shape[2] == 3:\n            ssims = []\n            for i in range(3):\n                ssims.append(ssim(img1[..., i], img2[..., i]))\n            return np.array(ssims).mean()\n        elif img1.shape[2] == 1:\n            return ssim(np.squeeze(img1), np.squeeze(img2))\n    else:\n        raise ValueError('Wrong input image 
dimensions.')\n\n\ndef ssim(img1, img2):\n    C1 = (0.01 * 255) ** 2\n    C2 = (0.03 * 255) ** 2\n\n    img1 = img1.astype(np.float64)\n    img2 = img2.astype(np.float64)\n    kernel = cv2.getGaussianKernel(11, 1.5)\n    window = np.outer(kernel, kernel.transpose())\n\n    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]  # valid\n    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]\n    mu1_sq = mu1 ** 2\n    mu2_sq = mu2 ** 2\n    mu1_mu2 = mu1 * mu2\n    sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq\n    sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq\n    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2\n\n    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *\n                                                            (sigma1_sq + sigma2_sq + C2))\n    return ssim_map.mean()\n\n\ndef make_optimizer(args, my_model):\n    trainable = filter(lambda x: x.requires_grad, my_model.parameters())\n\n    if args.optimizer == 'SGD':\n        optimizer_function = optim.SGD\n        kwargs = {'momentum': args.momentum}\n    elif args.optimizer == 'ADAM':\n        optimizer_function = optim.Adam\n        kwargs = {\n            'betas': (args.beta1, args.beta2),\n            'eps': args.epsilon\n        }\n    elif args.optimizer == 'RMSprop':\n        optimizer_function = optim.RMSprop\n        kwargs = {'eps': args.epsilon}\n\n    kwargs['weight_decay'] = args.weight_decay\n\n    return optimizer_function(trainable, **kwargs)\n\n\ndef make_scheduler(args, my_optimizer):\n    if args.decay_type == 'step':\n        scheduler = lrs.StepLR(\n            my_optimizer,\n            step_size=args.lr_decay_sr,\n            gamma=args.gamma_sr,\n        )\n    elif args.decay_type.find('step') >= 0:\n        milestones = args.decay_type.split('_')\n        milestones.pop(0)\n        milestones = list(map(lambda x: int(x), milestones))\n        scheduler = lrs.MultiStepLR(\n            
my_optimizer,\n            milestones=milestones,\n            gamma=args.gamma\n        )\n\n    scheduler.step(args.start_epoch - 1)\n\n    return scheduler\n\n"
  },
  {
    "path": "utils/__init__.py",
    "content": ""
  },
  {
    "path": "utils/util.py",
    "content": "import math\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef cal_sigma(sig_x, sig_y, radians):\n    sig_x = sig_x.view(-1, 1, 1)\n    sig_y = sig_y.view(-1, 1, 1)\n    radians = radians.view(-1, 1, 1)\n\n    D = torch.cat([F.pad(sig_x ** 2, [0, 1, 0, 0]), F.pad(sig_y ** 2, [1, 0, 0, 0])], 1)\n    U = torch.cat([torch.cat([radians.cos(), -radians.sin()], 2),\n                   torch.cat([radians.sin(), radians.cos()], 2)], 1)\n    sigma = torch.bmm(U, torch.bmm(D, U.transpose(1, 2)))\n\n    return sigma\n\n\ndef anisotropic_gaussian_kernel(batch, kernel_size, covar):\n    ax = torch.arange(kernel_size).float().cuda() - kernel_size // 2\n\n    xx = ax.repeat(kernel_size).view(1, kernel_size, kernel_size).expand(batch, -1, -1)\n    yy = ax.repeat_interleave(kernel_size).view(1, kernel_size, kernel_size).expand(batch, -1, -1)\n    xy = torch.stack([xx, yy], -1).view(batch, -1, 2)\n\n    inverse_sigma = torch.inverse(covar)\n    kernel = torch.exp(- 0.5 * (torch.bmm(xy, inverse_sigma) * xy).sum(2)).view(batch, kernel_size, kernel_size)\n\n    return kernel / kernel.sum([1, 2], keepdim=True)\n\n\ndef isotropic_gaussian_kernel(batch, kernel_size, sigma):\n    ax = torch.arange(kernel_size).float().cuda() - kernel_size//2\n    xx = ax.repeat(kernel_size).view(1, kernel_size, kernel_size).expand(batch, -1, -1)\n    yy = ax.repeat_interleave(kernel_size).view(1, kernel_size, kernel_size).expand(batch, -1, -1)\n    kernel = torch.exp(-(xx ** 2 + yy ** 2) / (2. 
* sigma.view(-1, 1, 1) ** 2))\n\n    return kernel / kernel.sum([1,2], keepdim=True)\n\n\ndef random_anisotropic_gaussian_kernel(batch=1, kernel_size=21, lambda_min=0.2, lambda_max=4.0):\n    theta = torch.rand(batch).cuda() * math.pi\n    lambda_1 = torch.rand(batch).cuda() * (lambda_max - lambda_min) + lambda_min\n    lambda_2 = torch.rand(batch).cuda() * (lambda_max - lambda_min) + lambda_min\n\n    covar = cal_sigma(lambda_1, lambda_2, theta)\n    kernel = anisotropic_gaussian_kernel(batch, kernel_size, covar)\n    return kernel\n\n\ndef stable_anisotropic_gaussian_kernel(kernel_size=21, theta=0, lambda_1=0.2, lambda_2=4.0):\n    theta = torch.ones(1).cuda() * theta / 180 * math.pi\n    lambda_1 = torch.ones(1).cuda() * lambda_1\n    lambda_2 = torch.ones(1).cuda() * lambda_2\n\n    covar = cal_sigma(lambda_1, lambda_2, theta)\n    kernel = anisotropic_gaussian_kernel(1, kernel_size, covar)\n    return kernel\n\n\ndef random_isotropic_gaussian_kernel(batch=1, kernel_size=21, sig_min=0.2, sig_max=4.0):\n    x = torch.rand(batch).cuda() * (sig_max - sig_min) + sig_min\n    k = isotropic_gaussian_kernel(batch, kernel_size, x)\n    return k\n\n\ndef stable_isotropic_gaussian_kernel(kernel_size=21, sig=4.0):\n    x = torch.ones(1).cuda() * sig\n    k = isotropic_gaussian_kernel(1, kernel_size, x)\n    return k\n\n\ndef random_gaussian_kernel(batch, kernel_size=21, blur_type='iso_gaussian', sig_min=0.2, sig_max=4.0, lambda_min=0.2, lambda_max=4.0):\n    if blur_type == 'iso_gaussian':\n        return random_isotropic_gaussian_kernel(batch=batch, kernel_size=kernel_size, sig_min=sig_min, sig_max=sig_max)\n    elif blur_type == 'aniso_gaussian':\n        return random_anisotropic_gaussian_kernel(batch=batch, kernel_size=kernel_size, lambda_min=lambda_min, lambda_max=lambda_max)\n\n\ndef stable_gaussian_kernel(kernel_size=21, blur_type='iso_gaussian', sig=2.6, lambda_1=0.2, lambda_2=4.0, theta=0):\n    if blur_type == 'iso_gaussian':\n        return 
stable_isotropic_gaussian_kernel(kernel_size=kernel_size, sig=sig)\n    elif blur_type == 'aniso_gaussian':\n        return stable_anisotropic_gaussian_kernel(kernel_size=kernel_size, lambda_1=lambda_1, lambda_2=lambda_2, theta=theta)\n\n\n# implementation of matlab bicubic interpolation in pytorch\nclass bicubic(nn.Module):\n    def __init__(self):\n        super(bicubic, self).__init__()\n\n    def cubic(self, x):\n        absx = torch.abs(x)\n        absx2 = torch.abs(x) * torch.abs(x)\n        absx3 = torch.abs(x) * torch.abs(x) * torch.abs(x)\n\n        condition1 = (absx <= 1).to(torch.float32)\n        condition2 = ((1 < absx) & (absx <= 2)).to(torch.float32)\n\n        f = (1.5 * absx3 - 2.5 * absx2 + 1) * condition1 + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * condition2\n        return f\n\n    def contribute(self, in_size, out_size, scale):\n        kernel_width = 4\n        if scale < 1:\n            kernel_width = 4 / scale\n        x0 = torch.arange(start=1, end=out_size[0] + 1).to(torch.float32).cuda()\n        x1 = torch.arange(start=1, end=out_size[1] + 1).to(torch.float32).cuda()\n\n        u0 = x0 / scale + 0.5 * (1 - 1 / scale)\n        u1 = x1 / scale + 0.5 * (1 - 1 / scale)\n\n        left0 = torch.floor(u0 - kernel_width / 2)\n        left1 = torch.floor(u1 - kernel_width / 2)\n\n        P = np.ceil(kernel_width) + 2\n\n        indice0 = left0.unsqueeze(1) + torch.arange(start=0, end=P).to(torch.float32).unsqueeze(0).cuda()\n        indice1 = left1.unsqueeze(1) + torch.arange(start=0, end=P).to(torch.float32).unsqueeze(0).cuda()\n\n        mid0 = u0.unsqueeze(1) - indice0.unsqueeze(0)\n        mid1 = u1.unsqueeze(1) - indice1.unsqueeze(0)\n\n        if scale < 1:\n            weight0 = scale * self.cubic(mid0 * scale)\n            weight1 = scale * self.cubic(mid1 * scale)\n        else:\n            weight0 = self.cubic(mid0)\n            weight1 = self.cubic(mid1)\n\n        weight0 = weight0 / (torch.sum(weight0, 2).unsqueeze(2))\n      
  weight1 = weight1 / (torch.sum(weight1, 2).unsqueeze(2))\n\n        indice0 = torch.min(torch.max(torch.FloatTensor([1]).cuda(), indice0), torch.FloatTensor([in_size[0]]).cuda()).unsqueeze(0)\n        indice1 = torch.min(torch.max(torch.FloatTensor([1]).cuda(), indice1), torch.FloatTensor([in_size[1]]).cuda()).unsqueeze(0)\n\n        kill0 = torch.eq(weight0, 0)[0][0]\n        kill1 = torch.eq(weight1, 0)[0][0]\n\n        weight0 = weight0[:, :, kill0 == 0]\n        weight1 = weight1[:, :, kill1 == 0]\n\n        indice0 = indice0[:, :, kill0 == 0]\n        indice1 = indice1[:, :, kill1 == 0]\n\n        return weight0, weight1, indice0, indice1\n\n    def forward(self, input, scale=1/4):\n        b, c, h, w = input.shape\n\n        weight0, weight1, indice0, indice1 = self.contribute([h, w], [int(h * scale), int(w * scale)], scale)\n        weight0 = weight0[0]\n        weight1 = weight1[0]\n\n        indice0 = indice0[0].long()\n        indice1 = indice1[0].long()\n\n        out = input[:, :, (indice0 - 1), :] * (weight0.unsqueeze(0).unsqueeze(1).unsqueeze(4))\n        out = (torch.sum(out, dim=3))\n        A = out.permute(0, 1, 3, 2)\n\n        out = A[:, :, (indice1 - 1), :] * (weight1.unsqueeze(0).unsqueeze(1).unsqueeze(4))\n        out = out.sum(3).permute(0, 1, 3, 2)\n\n        return out\n\n\nclass Gaussin_Kernel(object):\n    def __init__(self, kernel_size=21, blur_type='iso_gaussian',\n                 sig=2.6, sig_min=0.2, sig_max=4.0,\n                 lambda_1=0.2, lambda_2=4.0, theta=0, lambda_min=0.2, lambda_max=4.0):\n        self.kernel_size = kernel_size\n        self.blur_type = blur_type\n\n        self.sig = sig\n        self.sig_min = sig_min\n        self.sig_max = sig_max\n\n        self.lambda_1 = lambda_1\n        self.lambda_2 = lambda_2\n        self.theta = theta\n        self.lambda_min = lambda_min\n        self.lambda_max = lambda_max\n\n    def __call__(self, batch, random):\n        # random kernel\n        if random == True:\n     
       return random_gaussian_kernel(batch, kernel_size=self.kernel_size, blur_type=self.blur_type,\n                                          sig_min=self.sig_min, sig_max=self.sig_max,\n                                          lambda_min=self.lambda_min, lambda_max=self.lambda_max)\n\n        # stable kernel\n        else:\n            return stable_gaussian_kernel(kernel_size=self.kernel_size, blur_type=self.blur_type,\n                                          sig=self.sig,\n                                          lambda_1=self.lambda_1, lambda_2=self.lambda_2, theta=self.theta)\n\nclass BatchBlur(nn.Module):\n    def __init__(self, kernel_size=21):\n        super(BatchBlur, self).__init__()\n        self.kernel_size = kernel_size\n        if kernel_size % 2 == 1:\n            self.pad = nn.ReflectionPad2d(kernel_size//2)\n        else:\n            self.pad = nn.ReflectionPad2d((kernel_size//2, kernel_size//2-1, kernel_size//2, kernel_size//2-1))\n\n    def forward(self, input, kernel):\n        B, C, H, W = input.size()\n        input_pad = self.pad(input)\n        H_p, W_p = input_pad.size()[-2:]\n\n        if len(kernel.size()) == 2:\n            input_CBHW = input_pad.view((C * B, 1, H_p, W_p))\n            kernel = kernel.contiguous().view((1, 1, self.kernel_size, self.kernel_size))\n\n            return F.conv2d(input_CBHW, kernel, padding=0).view((B, C, H, W))\n        else:\n            input_CBHW = input_pad.view((1, C * B, H_p, W_p))\n            kernel = kernel.contiguous().view((B, 1, self.kernel_size, self.kernel_size))\n            kernel = kernel.repeat(1, C, 1, 1).view((B * C, 1, self.kernel_size, self.kernel_size))\n\n            return F.conv2d(input_CBHW, kernel, groups=B*C).view((B, C, H, W))\n\n\nclass SRMDPreprocessing(object):\n    def __init__(self,\n                 scale,\n                 mode='bicubic',\n                 kernel_size=21,\n                 blur_type='iso_gaussian',\n                 sig=2.6,\n                 
sig_min=0.2,\n                 sig_max=4.0,\n                 lambda_1=0.2,\n                 lambda_2=4.0,\n                 theta=0,\n                 lambda_min=0.2,\n                 lambda_max=4.0,\n                 noise=0.0\n                 ):\n        '''\n        # sig, sig_min and sig_max are used for isotropic Gaussian blurs\n        During the training phase (random=True):\n            the width of the blur kernel is randomly selected from [sig_min, sig_max]\n        During the test phase (random=False):\n            the width of the blur kernel is set to sig\n\n        # lambda_1, lambda_2, theta, lambda_min and lambda_max are used for anisotropic Gaussian blurs\n        During the training phase (random=True):\n            the eigenvalues of the covariance matrix are randomly selected from [lambda_min, lambda_max]\n            the angle value is randomly selected from [0, pi]\n        During the test phase (random=False):\n            the eigenvalues of the covariance matrix are set to lambda_1 and lambda_2\n            the angle value is set to theta\n        '''\n        self.kernel_size = kernel_size\n        self.scale = scale\n        self.mode = mode\n        self.noise = noise\n\n        self.gen_kernel = Gaussin_Kernel(\n            kernel_size=kernel_size, blur_type=blur_type,\n            sig=sig, sig_min=sig_min, sig_max=sig_max,\n            lambda_1=lambda_1, lambda_2=lambda_2, theta=theta, lambda_min=lambda_min, lambda_max=lambda_max\n        )\n        self.blur = BatchBlur(kernel_size=kernel_size)\n        self.bicubic = bicubic()\n\n    def __call__(self, hr_tensor, random=True):\n        with torch.no_grad():\n            # only downsampling\n            if self.gen_kernel.blur_type == 'iso_gaussian' and self.gen_kernel.sig == 0:\n                B, N, C, H, W = hr_tensor.size()\n                hr_blured = hr_tensor.view(-1, C, H, W)\n                b_kernels = None\n\n            # gaussian blur + downsampling\n            else:\n                B, N, C, H, W 
= hr_tensor.size()\n                b_kernels = self.gen_kernel(B, random)  # B degradations\n\n                # blur\n                hr_blured = self.blur(hr_tensor.view(B, -1, H, W), b_kernels)\n                hr_blured = hr_blured.view(-1, C, H, W)  # BN, C, H, W\n\n            # downsampling\n            if self.mode == 'bicubic':\n                lr_blured = self.bicubic(hr_blured, scale=1/self.scale)\n            elif self.mode == 's-fold':\n                lr_blured = hr_blured.view(-1, C, H//self.scale, self.scale, W//self.scale, self.scale)[:, :, :, 0, :, 0]\n\n\n            # add noise\n            if self.noise > 0:\n                _, C, H_lr, W_lr = lr_blured.size()\n                noise_level = torch.rand(B, 1, 1, 1, 1).to(lr_blured.device) * self.noise if random else self.noise\n                noise = torch.randn_like(lr_blured).view(-1, N, C, H_lr, W_lr).mul_(noise_level).view(-1, C, H_lr, W_lr)\n                lr_blured.add_(noise)\n\n            lr_blured = torch.clamp(lr_blured.round(), 0, 255)\n\n\n            return lr_blured.view(B, N, C, H//int(self.scale), W//int(self.scale)), b_kernels\n\n"
  }
]