Full Code of greatlog/UnpairedSR for AI

Repository: greatlog/UnpairedSR
Branch: master
Commit: 771312ee2bd4
Files: 308
Total size: 1.2 MB

Directory structure:
UnpairedSR/

├── .gitignore
├── README.md
└── codes/
    ├── config/
    │   ├── BSRGAN/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2018Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Bicubic/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── bicubic.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2020Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Bulat/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track2.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       └── psnr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── CinGAN/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   ├── cingan_model.py
    │   │   │   └── trans_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   └── sr/
    │   │   │   │       ├── 2017Track1.yml
    │   │   │   │       ├── 2018Track2.yml
    │   │   │   │       ├── 2018Track4.yml
    │   │   │   │       └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── sr/
    │   │   │       │   ├── 2017Track2.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   └── 2020Track1.yml
    │   │   │       └── trans/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── CycleSR/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   ├── cyclegan_model.py
    │   │   │   └── cyclesr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   └── sr/
    │   │   │   │       ├── 2017Track1.yml
    │   │   │   │       ├── 2018Track2.yml
    │   │   │   │       ├── 2018Track4.yml
    │   │   │   │       ├── 2020Track1.yml
    │   │   │   │       └── 2020Track1_percep.yml
    │   │   │   └── train/
    │   │   │       ├── sr/
    │   │   │       │   └── psnr/
    │   │   │       │       ├── 2017Track2.yml
    │   │   │       │       ├── 2018Track2.yml
    │   │   │       │       ├── 2018Track4.yml
    │   │   │       │       └── 2020Track1.yml
    │   │   │       └── trans/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── DSGANSR/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track1.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── deg/
    │   │   │       │   ├── 2017Track2.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   └── 2020Track1.yml
    │   │   │       └── sr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── EDSR/
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── bicubic.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2020Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Maeda/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── pseudo_supervision_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track2.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── 2017Track2.yml
    │   │   │       ├── 2018Track2.yml
    │   │   │       ├── 2018Track4.yml
    │   │   │       └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── PDM-SR/
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track1.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   ├── 2020Track1.yml
    │   │   │   │   └── 2020Track2.yml
    │   │   │   └── train/
    │   │   │       ├── deg/
    │   │   │       │   ├── 2017Track1.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   ├── 2020Track1.yml
    │   │   │       │   └── 2020Track2.yml
    │   │   │       ├── percep/
    │   │   │       │   ├── 2017Track1.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   ├── 2020Track1.yml
    │   │   │       │   └── 2020Track2.yml
    │   │   │       └── psnr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           ├── 2020Track1.yml
    │   │   │           └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   └── RealESRGAN/
    │       ├── README.md
    │       ├── archs/
    │       │   ├── __init__.py
    │       │   ├── discriminator.py
    │       │   ├── edsr.py
    │       │   ├── loss.py
    │       │   ├── lr_scheduler.py
    │       │   ├── module_util.py
    │       │   ├── rcan.py
    │       │   ├── rrdb.py
    │       │   ├── srresnet.py
    │       │   ├── translator.py
    │       │   └── vgg.py
    │       ├── count_flops.py
    │       ├── inference.py
    │       ├── models/
    │       │   ├── __init__.py
    │       │   ├── base_model.py
    │       │   └── sr_model.py
    │       ├── options/
    │       │   └── test/
    │       │       ├── 2017Track2_2020Track1.yml
    │       │       ├── 2018Track2_2018Track4.yml
    │       │       └── 2020Track2.yml
    │       ├── test.py
    │       └── train.py
    ├── data/
    │   ├── __init__.py
    │   ├── data_sampler.py
    │   ├── debug_dataset.py
    │   ├── fixed_image_dataset.py
    │   ├── paired_ref_dataset.py
    │   ├── paried_dataset.py
    │   ├── single_dataset.py
    │   ├── single_image_dataset.py
    │   └── unpaired_dataset.py
    ├── metrics/
    │   ├── __init__.py
    │   ├── best_psnr.py
    │   ├── measure.py
    │   ├── psnr.py
    │   └── ssim.py
    ├── scripts/
    │   ├── create_lmdb.py
    │   ├── extract_subimgs_single.py
    │   ├── generate_mod_LR_bic.m
    │   ├── generate_mod_LR_bic.py
    │   ├── generate_mod_blur_LR_bic.py
    │   └── test_imgs.py
    └── utils/
        ├── __init__.py
        ├── data_utils.py
        ├── deg_utils.py
        ├── file_utils.py
        ├── img_utils.py
        ├── option.py
        ├── registry.py
        └── resize_utils.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
__pycache__/
experiments/
results/
result/
result
log/
log

data_samples/
checkpoints/

*.pkl
*.pt
*.pth
*.jpg
*.png
*.state
*.event


================================================
FILE: README.md
================================================
This is the official implementation of the CVPR 2022 paper [Learning the Degradation Distribution for Blind Image Super-Resolution](https://arxiv.org/abs/2203.04962). This repo also contains implementations of many other blind SR methods in [config](codes/config/), including CinGAN, CycleSR, DSGAN-SR, etc.

If you find this repo useful for your work, please cite our paper:
```
@inproceedings{PDMSR,
  title={Learning the Degradation Distribution for Blind Image Super-Resolution},
  author={Zhengxiong Luo and Yan Huang and Shang Li and Liang Wang and Tieniu Tan},
  booktitle={CVPR},
  year={2022}
}
```

The codes are built on the basis of [BasicSR](https://github.com/xinntao/BasicSR).

## Dependencies
1. lpips (`pip install --user lpips`)
2. MATLAB (to support the evaluation of NIQE). Details on installing the MATLAB Engine API for Python can be found [here](https://ww2.mathworks.cn/help/matlab/matlab_external/install-the-matlab-engine-for-python.html)

## Datasets
The datasets in NTIRE2017 and NTIRE2018 can be downloaded from [here](https://data.vision.ee.ethz.ch/cvl/DIV2K/). The datasets in NTIRE2020 can be downloaded from the [competition site](https://competitions.codalab.org/competitions/22220).

## Start up
We provide the checkpoints in [Google drive](https://drive.google.com/drive/folders/1bVMGaGF7yLyQhM0xmRVMD2SolOtgLvxO?usp=sharing) and [BaiduYun](https://pan.baidu.com/s/1BcYcX0yCS-3-6XqT4BgYAQ?pwd=ovmw) (password: ovmw). Please download them into the [checkpoints](checkpoints/) directory. To get a quick start:

```bash
cd codes/config/PDM-SR/
python3 inference.py --opt options/test/2020Track2.yml
```

================================================
FILE: codes/config/BSRGAN/README.md
================================================
This repo currently only supports testing of [BSRGAN](https://arxiv.org/abs/2103.14006). Training-related code may be added in the future.

================================================
FILE: codes/config/BSRGAN/archs/__init__.py
================================================
import importlib
import os
import os.path as osp

from utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_REGISTRY

arch_folder = osp.dirname(osp.abspath(__file__))
arch_filenames = [
    osp.splitext(osp.basename(v))[0]
    for v in os.listdir(arch_folder)
    if v.endswith(".py") and not v.startswith("__")
]
# import all the arch modules
_arch_modules = [
    importlib.import_module(f"archs.{file_name}") for file_name in arch_filenames
]


def build_network(net_opt):
    which_network = net_opt["which_network"]
    net = ARCH_REGISTRY.get(which_network)(**net_opt["setting"])
    return net


def build_loss(loss_opt):
    loss_type = loss_opt.pop("type")
    loss = LOSS_REGISTRY.get(loss_type)(**loss_opt)
    return loss

def build_scheduler(optimizer, scheduler_opt):
    scheduler_type = scheduler_opt.pop("type")
    scheduler = LR_SCHEDULER_REGISTRY.get(scheduler_type)(optimizer, **scheduler_opt)
    return scheduler
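The `build_network` helper above resolves `which_network` from the YAML options to a class that registered itself via `@ARCH_REGISTRY.register()` when its module was auto-imported. A minimal self-contained sketch of this registry pattern (the `Registry` class and `ToyNet` here are hypothetical stand-ins, not the repo's actual `utils/registry.py`):

```python
class Registry:
    """Tiny name -> class registry, mimicking the pattern in utils/registry.py."""

    def __init__(self):
        self._classes = {}

    def register(self):
        def deco(cls):
            # register the class under its own name so YAML can refer to it
            self._classes[cls.__name__] = cls
            return cls
        return deco

    def get(self, name):
        return self._classes[name]


ARCH_REGISTRY = Registry()


@ARCH_REGISTRY.register()
class ToyNet:
    def __init__(self, nf=64):
        self.nf = nf


def build_network(net_opt):
    # mirrors the repo's build_network: name lookup + kwargs from "setting"
    return ARCH_REGISTRY.get(net_opt["which_network"])(**net_opt["setting"])


net = build_network({"which_network": "ToyNet", "setting": {"nf": 32}})
print(net.nf)  # -> 32
```

This is why the `__init__.py` above must import every arch module up front: the decorators only run on import.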


================================================
FILE: codes/config/BSRGAN/archs/discriminator.py
================================================
import torch
import torch.nn as nn
import torchvision
import functools

from utils.registry import ARCH_REGISTRY


@ARCH_REGISTRY.register()
class DiscriminatorVGG128(nn.Module):
    def __init__(self, in_nc, nf):
        super().__init__()
        # [64, 128, 128]
        self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
        self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
        # [64, 64, 64]
        self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
        self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
        self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
        self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
        # [128, 32, 32]
        self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
        self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
        self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
        self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
        # [256, 16, 16]
        self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
        self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
        # [512, 8, 8]
        self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
        self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)

        self.linear1 = nn.Linear(512 * 4 * 4, 100)
        self.linear2 = nn.Linear(100, 1)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.lrelu(self.conv0_0(x))
        fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))

        fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
        fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))

        fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
        fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))

        fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
        fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))

        fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
        fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))

        fea = fea.view(fea.size(0), -1)
        fea = self.lrelu(self.linear1(fea))
        out = self.linear2(fea)
        return out


@ARCH_REGISTRY.register()
class DiscriminatorVGG32(nn.Module):
    def __init__(self, in_nc, nf):
        super().__init__()
        # [64, 32, 32]
        self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
        self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
        # [64, 16, 16]
        self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
        self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
        self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
        self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
        # [128, 8, 8]
        self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
        self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
        self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
        self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
        # [256, 4, 4]
        self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
        self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
        # [512, 2, 2]
        self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
        self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)

        self.linear1 = nn.Linear(512, 100)
        self.linear2 = nn.Linear(100, 1)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.lrelu(self.conv0_0(x))
        fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))

        fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
        fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))

        fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
        fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))

        fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
        fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))

        fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
        fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))

        fea = fea.view(fea.size(0), -1)
        fea = self.lrelu(self.linear1(fea))
        out = self.linear2(fea)
        return out
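For context on the two `Linear` input sizes in the discriminators above: each `convX_1` has stride 2, so the five blocks shrink the spatial size by a factor of 2^5. A quick sketch (hypothetical helper, assuming the intended inputs are 128x128 for `DiscriminatorVGG128` and 32x32 for `DiscriminatorVGG32`):

```python
def out_spatial(size, n_stride2=5):
    # each of the five convX_1 layers halves the spatial resolution
    for _ in range(n_stride2):
        size //= 2
    return size


print(out_spatial(128))  # -> 4, hence Linear(512 * 4 * 4, 100)
print(out_spatial(32))   # -> 1, hence Linear(512, 100)
```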


@ARCH_REGISTRY.register()
class PatchGANDiscriminator(nn.Module):
    """Defines a PatchGAN discriminator"""

    def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
        """Construct a PatchGAN discriminator

        Parameters:
            in_c (int)   -- the number of channels in input images
            nf (int)     -- the number of filters in the first conv layer
            nb (int)     -- the number of conv layers in the discriminator
            stride (int) -- stride of the intermediate conv layers
            norm_layer   -- normalization layer
        """
        super().__init__()
        if (
            type(norm_layer) == functools.partial
        ):  # no need to use bias as BatchNorm2d has affine parameters
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

        kw = 3
        padw = 1
        sequence = [
            nn.Conv2d(in_c, nf, kernel_size=kw, stride=1, padding=padw),
            nn.LeakyReLU(0.2, True),
        ]
        nf_mult = 1
        nf_mult_prev = 1
        for n in range(1, nb):  # gradually increase the number of filters
            nf_mult_prev = nf_mult
            nf_mult = min(2 ** n, 8)
            sequence += [
                nn.Conv2d(
                    nf * nf_mult_prev,
                    nf * nf_mult,
                    kernel_size=kw,
                    stride=stride,
                    padding=padw,
                    bias=use_bias,
                ),
                norm_layer(nf * nf_mult),
                nn.LeakyReLU(0.2, True),
            ]

        nf_mult_prev = nf_mult
        nf_mult = min(2 ** nb, 8)
        sequence += [
            nn.Conv2d(
                nf * nf_mult_prev,
                nf * nf_mult,
                kernel_size=kw,
                stride=1,
                padding=padw,
                bias=use_bias,
            ),
            norm_layer(nf * nf_mult),
            nn.LeakyReLU(0.2, True),
        ]

        sequence += [
            nn.Conv2d(nf * nf_mult, nf, kernel_size=kw, stride=1, padding=padw)
        ]  # output an nf-channel prediction map
        self.model = nn.Sequential(*sequence)

    def forward(self, input):
        """Standard forward."""
        return self.model(input)


================================================
FILE: codes/config/BSRGAN/archs/edsr.py
================================================
import math

import torch
import torch.nn as nn

from utils.registry import ARCH_REGISTRY


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias
    )


class MeanShift(nn.Conv2d):
    def __init__(
        self,
        rgb_range,
        rgb_mean=(0.4488, 0.4371, 0.4040),
        rgb_std=(1.0, 1.0, 1.0),
        sign=-1,
    ):
        super(MeanShift, self).__init__(3, 3, kernel_size=1)
        std = torch.Tensor(rgb_std)
        self.weight.data = torch.eye(3).view(3, 3, 1, 1)
        self.weight.data.div_(std.view(3, 1, 1, 1))
        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
        self.bias.data.div_(std)
        for p in self.parameters():
            p.requires_grad = False


class BasicBlock(nn.Sequential):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        bias=False,
        bn=True,
        act=nn.ReLU(True),
    ):

        m = [
            nn.Conv2d(
                in_channels,
                out_channels,
                kernel_size,
                padding=(kernel_size // 2),
                stride=stride,
                bias=bias,
            )
        ]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)
        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:  # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feat, 4 * n_feat, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feat))
                if act:
                    m.append(act())
        elif scale == 3:
            m.append(conv(n_feat, 9 * n_feat, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if act:
                m.append(act())
        elif scale == 1:
            m.append(nn.Identity())
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)
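The `(scale & (scale - 1)) == 0` test in `Upsampler` above is the standard bit trick for powers of two; for scale = 2^n the loop stacks n x2 `PixelShuffle` stages. A stdlib sketch of that branching logic (hypothetical helper names):

```python
import math


def is_power_of_two(scale):
    # a power of two has exactly one set bit, so scale & (scale - 1) == 0
    return scale > 0 and (scale & (scale - 1)) == 0


def n_upsample_stages(scale):
    # number of x2 PixelShuffle stages Upsampler stacks when scale = 2^n
    assert is_power_of_two(scale)
    return int(math.log2(scale))


print(n_upsample_stages(4))  # -> 2
```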




@ARCH_REGISTRY.register()
class EDSR(nn.Module):
    def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
        super(EDSR, self).__init__()

        n_resblocks = nb
        n_feats = nf
        kernel_size = 3
        scale = upscale
        act = nn.ReLU(True)
        # url_name = 'r{}f{}x{}'.format(nb, nf, upscale)
        # if url_name in url:
        #     self.url = url[url_name]
        # else:
        #     self.url = None
        self.sub_mean = MeanShift(255.0, sign=-1)
        self.add_mean = MeanShift(255.0, sign=1)

        # define head module
        m_head = [conv(3, n_feats, kernel_size)]

        # define body module
        m_body = [
            ResBlock(conv, n_feats, kernel_size, act=act, res_scale=res_scale)
            for _ in range(n_resblocks)
        ]
        m_body.append(conv(n_feats, n_feats, kernel_size))

        # define tail module
        m_tail = [
            Upsampler(conv, scale, n_feats, act=False),
            conv(n_feats, 3, kernel_size),
        ]

        self.head = nn.Sequential(*m_head)
        self.body = nn.Sequential(*m_body)
        self.tail = nn.Sequential(*m_tail)

    def forward(self, x):
        x = self.sub_mean(x * 255.0)
        x = self.head(x)

        res = self.body(x)
        res += x

        x = self.tail(res)
        x = self.add_mean(x) / 255.0

        return x
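`MeanShift` above is a frozen 1x1 conv whose per-channel effect is `(x + sign * rgb_range * mean) / std`; with `std = 1`, `sub_mean` followed by `add_mean` is the identity, which is why `forward` can scale by 255 on the way in and divide by 255 on the way out. A scalar sketch of that arithmetic (hypothetical helper, single channel):

```python
def mean_shift(x, mean, std=1.0, rgb_range=255.0, sign=-1):
    # per-channel arithmetic of the frozen 1x1 conv in MeanShift:
    # weight = I / std, bias = sign * rgb_range * mean / std
    return (x + sign * rgb_range * mean) / std


# subtracting then re-adding the DIV2K red-channel mean is the identity
v = mean_shift(mean_shift(117.0, 0.4488), 0.4488, sign=+1)
print(round(v, 6))  # -> 117.0
```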


================================================
FILE: codes/config/BSRGAN/archs/loss.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import lpips as lp

from utils.registry import LOSS_REGISTRY

from .vgg import VGGFeatureExtractor


@LOSS_REGISTRY.register()
class GaussGuided(nn.Module):
    def __init__(self, ksize, sigma):
        super().__init__()

        ax = torch.arange(0, ksize) - ksize//2
        xx, yy = torch.meshgrid(ax, ax)
        dis = (xx ** 2 + yy ** 2)
        dis = torch.exp(-dis / sigma ** 2)
        dis = dis / dis.sum()

        self.register_buffer("gauss", dis.view(1, ksize**2, 1, 1))
    
    def forward(self, kernel):

        return F.mse_loss(self.gauss, kernel)
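`GaussGuided` above regularizes a predicted kernel toward a normalized Gaussian built from a squared-distance grid. A stdlib sketch of the same kernel construction (hypothetical function name; the class stores the result reshaped to `(1, ksize**2, 1, 1)`):

```python
import math


def gauss_kernel(ksize, sigma):
    # same construction as GaussGuided: centered grid -> exp(-d^2 / sigma^2) -> normalize
    ax = [i - ksize // 2 for i in range(ksize)]
    k = [[math.exp(-(x * x + y * y) / sigma ** 2) for x in ax] for y in ax]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]


k = gauss_kernel(5, 1.5)
print(round(sum(sum(row) for row in k), 6))  # -> 1.0 (normalized)
```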

@LOSS_REGISTRY.register()
class PerceptualLossLPIPS(nn.Module):
    def __init__(self, net="alex", normalize=True):
        super().__init__()
        self.fn = lp.LPIPS(net=net, spatial=True)
        for p in self.fn.parameters():
            p.requires_grad = False
        
        self.normalize = normalize
    
    def forward(self, res, ref):
        return self.fn(res, ref, normalize=self.normalize).mean(), None


@LOSS_REGISTRY.register()
class MSELoss(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()

    def forward(self, res, ref):
        return F.mse_loss(res, ref)


@LOSS_REGISTRY.register()
class L1Loss(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()

    def forward(self, res, ref):
        return F.l1_loss(res, ref)


@LOSS_REGISTRY.register()
class GANLoss(nn.Module):
    """Define GAN loss.
    Args:
        gan_type (str): Support 'vanilla', 'lsgan', 'wgan', 'wgan_softplus', 'hinge'.
        real_label_val (float): The value for real label. Default: 1.0.
        fake_label_val (float): The value for fake label. Default: 0.0.
    """

    def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
        super(GANLoss, self).__init__()
        self.gan_type = gan_type
        self.real_label_val = real_label_val
        self.fake_label_val = fake_label_val

        if self.gan_type == "vanilla":
            self.loss = nn.BCEWithLogitsLoss()
        elif self.gan_type == "lsgan":
            self.loss = nn.MSELoss()
        elif self.gan_type == "wgan":
            self.loss = self._wgan_loss
        elif self.gan_type == "wgan_softplus":
            self.loss = self._wgan_softplus_loss
        elif self.gan_type == "hinge":
            self.loss = nn.ReLU()
        else:
            raise NotImplementedError(f"GAN type {self.gan_type} is not implemented.")

    def _wgan_loss(self, input, target):
        """wgan loss.
        Args:
            input (Tensor): Input tensor.
            target (bool): Target label.
        Returns:
            Tensor: wgan loss.
        """
        return -input.mean() if target else input.mean()

    def _wgan_softplus_loss(self, input, target):
        """wgan loss with soft plus. softplus is a smooth approximation to the
        ReLU function.
        In StyleGAN2, it is called:
            Logistic loss for discriminator;
            Non-saturating loss for generator.
        Args:
            input (Tensor): Input tensor.
            target (bool): Target label.
        Returns:
            Tensor: wgan loss.
        """
        return F.softplus(-input).mean() if target else F.softplus(input).mean()

    def get_target_label(self, input, target_is_real):
        """Get target label.
        Args:
            input (Tensor): Input tensor.
            target_is_real (bool): Whether the target is real or fake.
        Returns:
            (bool | Tensor): Target tensor. Return bool for wgan, otherwise,
                return Tensor.
        """

        if self.gan_type in ["wgan", "wgan_softplus"]:
            return target_is_real
        target_val = self.real_label_val if target_is_real else self.fake_label_val
        return input.new_ones(input.size()) * target_val

    def forward(self, input, target_is_real, is_disc=False):
        """
        Args:
            input (Tensor): The input for the loss module, i.e., the network
                prediction.
            target_is_real (bool): Whether the target is real or fake.
            is_disc (bool): Whether the loss is for discriminators or not.
                Default: False.
        Returns:
            Tensor: GAN loss value.
        """
        target_label = self.get_target_label(input, target_is_real)
        if self.gan_type == "hinge":
            if is_disc:  # for discriminators in hinge-gan
                input = -input if target_is_real else input
                loss = self.loss(1 + input).mean()
            else:  # for generators in hinge-gan
                loss = -input.mean()
        else:  # other gan types
            loss = self.loss(input, target_label)

        return loss
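The hinge branch above is the least self-explanatory. A minimal standalone sketch of the same arithmetic, using hypothetical discriminator outputs `d_real` / `d_fake` (not from this repo); the class computes the real and fake terms in two separate `forward` calls, which a trainer would sum:

```python
import torch
import torch.nn as nn

# Hedged sketch of the hinge branch in GANLoss.forward above.
relu = nn.ReLU()
d_real = torch.tensor([0.5, 2.0])   # hypothetical D(real) logits
d_fake = torch.tensor([-0.5, 1.0])  # hypothetical D(fake) logits

# discriminator loss: E[max(0, 1 - D(real))] + E[max(0, 1 + D(fake))]
d_loss = relu(1 - d_real).mean() + relu(1 + d_fake).mean()  # 0.25 + 1.25 = 1.5

# generator loss: -E[D(fake)]
g_loss = -d_fake.mean()  # -0.25
```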


@LOSS_REGISTRY.register()
class PerceptualLoss(nn.Module):
    """Perceptual loss with commonly used style loss.
    Args:
        layer_weights (dict): The weight for each layer of vgg feature.
            Here is an example: {'conv5_4': 1.}, which means the conv5_4
            feature layer (before relu5_4) will be extracted with weight
            1.0 in calculating losses.
        vgg_type (str): The type of vgg network used as feature extractor.
            Default: 'vgg19'.
        use_input_norm (bool):  If True, normalize the input image in vgg.
            Default: True.
        range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].
            Default: False.
        perceptual_weight (float): If `perceptual_weight > 0`, the perceptual
            loss will be calculated and multiplied by this weight.
            Default: 1.0.
        style_weight (float): If `style_weight > 0`, the style loss will be
            calculated and multiplied by this weight.
            Default: 0.
        criterion (str): Criterion used for perceptual loss. Default: 'l1'.
    """

    def __init__(
        self,
        layer_weights,
        vgg_type="vgg19",
        use_input_norm=True,
        range_norm=False,
        perceptual_weight=1.0,
        style_weight=0.0,
        criterion="l1",
    ):
        super(PerceptualLoss, self).__init__()
        self.perceptual_weight = perceptual_weight
        self.style_weight = style_weight
        self.layer_weights = layer_weights
        self.vgg = VGGFeatureExtractor(
            layer_name_list=list(layer_weights.keys()),
            vgg_type=vgg_type,
            use_input_norm=use_input_norm,
            range_norm=range_norm,
        )

        self.criterion_type = criterion
        if self.criterion_type == "l1":
            self.criterion = torch.nn.L1Loss()
        elif self.criterion_type == "l2":
            self.criterion = torch.nn.MSELoss()  # torch has no L2loss; MSELoss is the L2 criterion
        elif self.criterion_type == "fro":
            self.criterion = None
        else:
            raise NotImplementedError(f"{criterion} criterion has not been supported.")

    def forward(self, x, gt):
        """Forward function.
        Args:
            x (Tensor): Input tensor with shape (n, c, h, w).
            gt (Tensor): Ground-truth tensor with shape (n, c, h, w).
        Returns:
            Tensor: Forward results.
        """
        # extract vgg features
        x_features = self.vgg(x)
        gt_features = self.vgg(gt.detach())

        # calculate perceptual loss
        if self.perceptual_weight > 0:
            percep_loss = 0
            for k in x_features.keys():
                if self.criterion_type == "fro":
                    percep_loss += (
                        torch.norm(x_features[k] - gt_features[k], p="fro")
                        * self.layer_weights[k]
                    )
                else:
                    percep_loss += (
                        self.criterion(x_features[k], gt_features[k])
                        * self.layer_weights[k]
                    )
            percep_loss *= self.perceptual_weight
        else:
            percep_loss = None

        # calculate style loss
        if self.style_weight > 0:
            style_loss = 0
            for k in x_features.keys():
                if self.criterion_type == "fro":
                    style_loss += (
                        torch.norm(
                            self._gram_mat(x_features[k])
                            - self._gram_mat(gt_features[k]),
                            p="fro",
                        )
                        * self.layer_weights[k]
                    )
                else:
                    style_loss += (
                        self.criterion(
                            self._gram_mat(x_features[k]),
                            self._gram_mat(gt_features[k]),
                        )
                        * self.layer_weights[k]
                    )
            style_loss *= self.style_weight
        else:
            style_loss = None

        return percep_loss, style_loss

    def _gram_mat(self, x):
        """Calculate Gram matrix.
        Args:
            x (torch.Tensor): Tensor with shape of (n, c, h, w).
        Returns:
            torch.Tensor: Gram matrix.
        """
        n, c, h, w = x.size()
        features = x.view(n, c, w * h)
        features_t = features.transpose(1, 2)
        gram = features.bmm(features_t) / (c * h * w)
        return gram
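As a quick sanity check on `_gram_mat`, constant feature maps give a constant Gram matrix (standalone copy of the same formula):

```python
import torch

# Standalone copy of the _gram_mat formula above.
def gram_mat(x):
    n, c, h, w = x.size()
    features = x.view(n, c, h * w)
    return features.bmm(features.transpose(1, 2)) / (c * h * w)

# Constant feature maps: every Gram entry is (h * w) / (c * h * w) = 1 / c.
x = torch.ones(1, 2, 3, 3)
g = gram_mat(x)  # shape (1, 2, 2), all entries 0.5
```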


@LOSS_REGISTRY.register()
class CharbonnierLoss(nn.Module):
    """Charbonnier Loss (L1)"""

    def __init__(self, eps=1e-6):
        super(CharbonnierLoss, self).__init__()
        self.eps = eps

    def forward(self, x, y):
        diff = x - y
        loss = torch.mean(torch.sqrt(diff * diff + self.eps))
        return loss
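The `eps` term makes Charbonnier a smooth approximation of L1: near zero the per-element loss levels off at about `sqrt(eps)`, while for large differences it approaches `|diff|`. A quick numeric check:

```python
import torch

# Per-element Charbonnier values, same formula as CharbonnierLoss.forward above.
eps = 1e-6
diff = torch.tensor([0.0, 3.0, -4.0])
per_elem = torch.sqrt(diff * diff + eps)
# ~[1e-3, 3.0, 4.0]: smooth at zero, essentially |diff| elsewhere
```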


class GradientPenaltyLoss(nn.Module):
    def __init__(self, device=torch.device("cpu")):
        super(GradientPenaltyLoss, self).__init__()
        self.register_buffer("grad_outputs", torch.Tensor())
        self.grad_outputs = self.grad_outputs.to(device)

    def get_grad_outputs(self, input):
        if self.grad_outputs.size() != input.size():
            self.grad_outputs.resize_(input.size()).fill_(1.0)
        return self.grad_outputs

    def forward(self, interp, interp_crit):
        grad_outputs = self.get_grad_outputs(interp_crit)
        grad_interp = torch.autograd.grad(
            outputs=interp_crit,
            inputs=interp,
            grad_outputs=grad_outputs,
            create_graph=True,
            retain_graph=True,
            only_inputs=True,
        )[0]
        grad_interp = grad_interp.view(grad_interp.size(0), -1)
        grad_interp_norm = grad_interp.norm(2, dim=1)

        loss = ((grad_interp_norm - 1) ** 2).mean()
        return loss
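A hedged usage sketch for the penalty above, in the usual WGAN-GP pattern: interpolate between real and fake samples, run the critic, and penalize the input-gradient norm toward 1. A linear critic is used here because its input gradient is just its weight row, so the expected penalty is known by hand; the critic and tensors are illustrative, not from this repo.

```python
import torch
import torch.nn as nn

# Hypothetical linear critic: grad of output w.r.t. input = weight row.
critic = nn.Linear(4, 1, bias=False)
with torch.no_grad():
    critic.weight.fill_(0.5)  # gradient (0.5, 0.5, 0.5, 0.5) has norm exactly 1

real, fake = torch.ones(2, 4), torch.zeros(2, 4)
alpha = torch.rand(2, 1)
interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
crit = critic(interp)

# Same autograd call as GradientPenaltyLoss.forward above.
grad = torch.autograd.grad(
    outputs=crit, inputs=interp, grad_outputs=torch.ones_like(crit),
    create_graph=True, retain_graph=True, only_inputs=True,
)[0].view(2, -1)
gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()  # 0 here: the norm is exactly 1
```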


================================================
FILE: codes/config/BSRGAN/archs/lr_scheduler.py
================================================
import math
from collections import Counter, defaultdict

import torch
from torch.optim.lr_scheduler import _LRScheduler

from utils.registry import LR_SCHEDULER_REGISTRY


@LR_SCHEDULER_REGISTRY.register()
class LinearDecayLR(_LRScheduler):
    def __init__(
        self,
        optimizer,
        decay_prop,
        total_steps,
        last_epoch=-1,
    ):
        self.decay_prop = decay_prop
        self.total_steps = total_steps

        super().__init__(optimizer, last_epoch)

    def get_lr(self):

        return [
            group["initial_lr"]
            * (1 - (self.last_epoch + 1) * self.decay_prop / self.total_steps)
            for group in self.optimizer.param_groups
        ]
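`get_lr` above amounts to a closed-form schedule, checkable without building an optimizer (standalone sketch of the same formula):

```python
# Standalone form of the LinearDecayLR schedule above: the LR decays
# linearly from initial_lr toward initial_lr * (1 - decay_prop).
def linear_decay_lr(initial_lr, decay_prop, total_steps, last_epoch):
    return initial_lr * (1 - (last_epoch + 1) * decay_prop / total_steps)

lrs = [linear_decay_lr(1e-4, 0.5, 100, t) for t in range(100)]
# lrs[0] = 9.95e-05, lrs[99] = 5e-05: the LR halves over training
```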


@LR_SCHEDULER_REGISTRY.register()
class MultiStepRestartLR(_LRScheduler):
    def __init__(
        self,
        optimizer,
        milestones,
        restarts=None,
        weights=None,
        gamma=0.1,
        clear_state=False,
        last_epoch=-1,
    ):
        self.milestones = Counter(milestones)
        self.gamma = gamma
        self.clear_state = clear_state
        self.restarts = restarts if restarts else [0]
        self.restart_weights = weights if weights else [1]
        assert len(self.restarts) == len(
            self.restart_weights
        ), "restarts and their weights do not match."
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch in self.restarts:
            if self.clear_state:
                self.optimizer.state = defaultdict(dict)
            weight = self.restart_weights[self.restarts.index(self.last_epoch)]
            return [
                group["initial_lr"] * weight for group in self.optimizer.param_groups
            ]
        if self.last_epoch not in self.milestones:
            return [group["lr"] for group in self.optimizer.param_groups]
        return [
            group["lr"] * self.gamma ** self.milestones[self.last_epoch]
            for group in self.optimizer.param_groups
        ]
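Unrolled imperatively, the schedule above multiplies the LR by `gamma` at each milestone and resets it to `initial_lr * weight` at each restart. A hedged standalone sketch (simplified: it ignores the `Counter` exponent for repeated milestones and the optimizer-state clearing):

```python
def multistep_restart_lr(initial_lr, milestones, gamma, restarts, weights, step):
    """Simplified unrolling of the MultiStepRestartLR schedule above."""
    lr = initial_lr
    for t in range(1, step + 1):
        if t in restarts:
            lr = initial_lr * weights[restarts.index(t)]
        elif t in milestones:
            lr *= gamma
    return lr

# milestones at steps 2 and 4 (gamma 0.5), restart at step 6 with weight 1.0
lr_before_restart = multistep_restart_lr(1.0, [2, 4], 0.5, [6], [1.0], 5)  # 0.25
lr_after_restart = multistep_restart_lr(1.0, [2, 4], 0.5, [6], [1.0], 6)   # 1.0
```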


@LR_SCHEDULER_REGISTRY.register()
class CosineAnnealingRestartLR(_LRScheduler):
    def __init__(
        self, optimizer, T_period, restarts=None, weights=None, eta_min=0, last_epoch=-1
    ):
        self.T_period = T_period
        self.T_max = self.T_period[0]  # current T period
        self.eta_min = eta_min
        self.restarts = restarts if restarts else [0]
        self.restart_weights = weights if weights else [1]
        self.last_restart = 0
        assert len(self.restarts) == len(
            self.restart_weights
        ), "restarts and their weights do not match."
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch == 0:
            return self.base_lrs
        elif self.last_epoch in self.restarts:
            self.last_restart = self.last_epoch
            self.T_max = self.T_period[self.restarts.index(self.last_epoch) + 1]
            weight = self.restart_weights[self.restarts.index(self.last_epoch)]
            return [
                group["initial_lr"] * weight for group in self.optimizer.param_groups
            ]
        elif (self.last_epoch - self.last_restart - 1 - self.T_max) % (
            2 * self.T_max
        ) == 0:
            return [
                group["lr"]
                + (base_lr - self.eta_min) * (1 - math.cos(math.pi / self.T_max)) / 2
                for base_lr, group in zip(self.base_lrs, self.optimizer.param_groups)
            ]
        return [
            (1 + math.cos(math.pi * (self.last_epoch - self.last_restart) / self.T_max))
            / (
                1
                + math.cos(
                    math.pi * ((self.last_epoch - self.last_restart) - 1) / self.T_max
                )
            )
            * (group["lr"] - self.eta_min)
            + self.eta_min
            for group in self.optimizer.param_groups
        ]


================================================
FILE: codes/config/BSRGAN/archs/module_util.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init


def initialize_weights(net_l, scale=1):
    if not isinstance(net_l, list):
        net_l = [net_l]
    for net in net_l:
        for m in net.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, a=0, mode="fan_in")
                m.weight.data *= scale  # for residual block
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                init.kaiming_normal_(m.weight, a=0, mode="fan_in")
                m.weight.data *= scale
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias.data, 0.0)


def make_layer(block, n_layers):
    layers = []
    for _ in range(n_layers):
        layers.append(block())
    return nn.Sequential(*layers)


class ResidualBlock_noBN(nn.Module):
    """Residual block w/o BN
    ---Conv-ReLU-Conv-+-
     |________________|
    """

    def __init__(self, nf=64):
        super(ResidualBlock_noBN, self).__init__()
        self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)

        # initialization
        initialize_weights([self.conv1, self.conv2], 0.1)

    def forward(self, x):
        identity = x
        out = F.relu(self.conv1(x), inplace=True)
        out = self.conv2(out)
        return identity + out


def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):
    """Warp an image or feature map with optical flow
    Args:
        x (Tensor): size (N, C, H, W)
        flow (Tensor): size (N, H, W, 2), normal value
        interp_mode (str): 'nearest' or 'bilinear'
        padding_mode (str): 'zeros' or 'border' or 'reflection'

    Returns:
        Tensor: warped image or feature map
    """
    assert x.size()[-2:] == flow.size()[1:3]
    B, C, H, W = x.size()
    # mesh grid
    grid_y, grid_x = torch.meshgrid(torch.arange(0, H), torch.arange(0, W))
    grid = torch.stack((grid_x, grid_y), 2).float()  # W(x), H(y), 2
    grid.requires_grad = False
    grid = grid.type_as(x)
    vgrid = grid + flow
    # scale grid to [-1,1]
    vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(W - 1, 1) - 1.0
    vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(H - 1, 1) - 1.0
    vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3)
    # align_corners=True matches the (W - 1) / (H - 1) normalization above
    output = F.grid_sample(
        x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode, align_corners=True
    )
    return output
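A useful property to test: zero flow should be an identity warp. With the `(W - 1)` / `(H - 1)` normalization used above, that holds exactly when `grid_sample` samples with `align_corners=True`. A standalone check:

```python
import torch
import torch.nn.functional as F

# Zero flow + the (W-1)/(H-1) normalization above = identity warp
# when grid_sample uses align_corners=True.
x = torch.arange(16.0).view(1, 1, 4, 4)
H, W = 4, 4
grid_y, grid_x = torch.meshgrid(torch.arange(H), torch.arange(W))
grid = torch.stack((grid_x, grid_y), 2).float()  # zero flow: vgrid == grid
vx = 2.0 * grid[..., 0] / (W - 1) - 1.0
vy = 2.0 * grid[..., 1] / (H - 1) - 1.0
vgrid = torch.stack((vx, vy), dim=2).unsqueeze(0)
out = F.grid_sample(x, vgrid, mode="bilinear", align_corners=True)
# out reproduces x exactly
```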


================================================
FILE: codes/config/BSRGAN/archs/rcan.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from utils.registry import ARCH_REGISTRY


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias
    )


class MeanShift(nn.Conv2d):
    def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
        super(MeanShift, self).__init__(3, 3, kernel_size=1)
        std = torch.Tensor(rgb_std)
        self.weight.data = torch.eye(3).view(3, 3, 1, 1)
        self.weight.data.div_(std.view(3, 1, 1, 1))
        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
        self.bias.data.div_(std)
        # setting requires_grad on the module itself is a no-op; freeze the parameters
        for p in self.parameters():
            p.requires_grad = False


class BasicBlock(nn.Sequential):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        bias=False,
        bn=True,
        act=nn.ReLU(True),
    ):

        m = [
            nn.Conv2d(
                in_channels,
                out_channels,
                kernel_size,
                padding=(kernel_size // 2),
                stride=stride,
                bias=bias,
            )
        ]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)
        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:  # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feat, 4 * n_feat, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feat))
                if act:
                    m.append(act())
        elif scale == 3:
            m.append(conv(n_feat, 9 * n_feat, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if act:
                m.append(act())
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)


def make_model(args, parent=False):
    return RCAN(args)


## Channel Attention (CA) Layer
class CALayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(CALayer, self).__init__()
        # global average pooling: feature --> point
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # feature channel downscale and upscale --> channel weight
        self.conv_du = nn.Sequential(
            nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=True),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.avg_pool(x)
        y = self.conv_du(y)
        return x * y
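A shape walk-through of `CALayer` above, as a hedged standalone sketch (the channel count and reduction here are illustrative): global-average-pool to one value per channel, squeeze/excite through a 1x1 bottleneck, then rescale the input.

```python
import torch
import torch.nn as nn

x = torch.randn(2, 16, 8, 8)
y = nn.AdaptiveAvgPool2d(1)(x)  # (2, 16, 1, 1): one value per channel
conv_du = nn.Sequential(
    nn.Conv2d(16, 16 // 4, 1, bias=True), nn.ReLU(inplace=True),
    nn.Conv2d(16 // 4, 16, 1, bias=True), nn.Sigmoid(),
)
w = conv_du(y)   # (2, 16, 1, 1): per-channel weights in (0, 1)
out = x * w      # weights broadcast over H and W
```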


## Residual Channel Attention Block (RCAB)
class RCAB(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        reduction,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(RCAB, self).__init__()
        modules_body = []
        for i in range(2):
            modules_body.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                modules_body.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                modules_body.append(act)
        modules_body.append(CALayer(n_feat, reduction))
        self.body = nn.Sequential(*modules_body)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x)
        # res = self.body(x).mul(self.res_scale)
        res += x
        return res


## Residual Group (RG)
class ResidualGroup(nn.Module):
    def __init__(
        self, conv, n_feat, kernel_size, reduction, act, res_scale, n_resblocks
    ):
        super(ResidualGroup, self).__init__()
        modules_body = []
        modules_body = [
            RCAB(
                conv,
                n_feat,
                kernel_size,
                reduction,
                bias=True,
                bn=False,
                act=nn.ReLU(True),
                res_scale=1,
            )
            for _ in range(n_resblocks)
        ]
        modules_body.append(conv(n_feat, n_feat, kernel_size))
        self.body = nn.Sequential(*modules_body)

    def forward(self, x):
        res = self.body(x)
        res += x
        return res


## Residual Channel Attention Network (RCAN)
@ARCH_REGISTRY.register()
class RCAN(nn.Module):
    def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_conv):
        super(RCAN, self).__init__()

        n_feats = nf
        kernel_size = 3
        scale = upscale

        act = nn.ReLU(True)

        # RGB mean for DIV2K
        rgb_mean = (0.4488, 0.4371, 0.4040)
        rgb_std = (1.0, 1.0, 1.0)
        self.sub_mean = MeanShift(1.0, rgb_mean, rgb_std, -1)

        # define head module
        modules_head = [conv(3, n_feats, kernel_size)]

        # define body module
        modules_body = [
            ResidualGroup(
                conv,
                n_feats,
                kernel_size,
                reduction,
                act=act,
                res_scale=1.0,
                n_resblocks=nb,
            )
            for _ in range(ng)
        ]

        modules_body.append(conv(n_feats, n_feats, kernel_size))

        # define tail module
        modules_tail = [
            Upsampler(conv, scale, n_feats, act=False),
            conv(n_feats, 3, kernel_size),
        ]

        self.add_mean = MeanShift(1.0, rgb_mean, rgb_std, 1)

        self.head = nn.Sequential(*modules_head)
        self.body = nn.Sequential(*modules_body)
        self.tail = nn.Sequential(*modules_tail)

    def forward(self, x):
        x = self.sub_mean(x)
        x = self.head(x)

        res = self.body(x)
        res += x

        x = self.tail(res)
        x = self.add_mean(x)

        return x

    def load_state_dict(self, state_dict, strict=False):
        own_state = self.state_dict()
        for name, param in state_dict.items():
            if name in own_state:
                if isinstance(param, nn.Parameter):
                    param = param.data
                try:
                    own_state[name].copy_(param)
                except Exception:
                    if name.find("tail") >= 0:
                        print("Replace pre-trained upsampler to new one...")
                    else:
                        raise RuntimeError(
                            "While copying the parameter named {}, "
                            "whose dimensions in the model are {} and "
                            "whose dimensions in the checkpoint are {}.".format(
                                name, own_state[name].size(), param.size()
                            )
                        )
            elif strict:
                if name.find("tail") == -1:
                    raise KeyError('unexpected key "{}" in state_dict'.format(name))

        if strict:
            missing = set(own_state.keys()) - set(state_dict.keys())
            if len(missing) > 0:
                raise KeyError('missing keys in state_dict: "{}"'.format(missing))


================================================
FILE: codes/config/BSRGAN/archs/rrdb.py
================================================
import functools

from utils.registry import ARCH_REGISTRY

from .module_util import *


class ResidualDenseBlock_5C(nn.Module):
    def __init__(self, nf=64, gc=32, bias=True):
        super(ResidualDenseBlock_5C, self).__init__()
        # gc: growth channel, i.e. intermediate channels
        self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
        self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
        self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
        self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
        self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        # initialization
        initialize_weights(
            [self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1
        )

    def forward(self, x):
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
        x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
        return x5 * 0.2 + x
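The dense connectivity above means conv `k` takes the block input plus all earlier conv outputs, so its input channels grow by `gc` per step; a one-line check of the channel counts:

```python
# Input channels of conv1..conv5 in ResidualDenseBlock_5C above:
nf, gc = 64, 32
in_channels = [nf + k * gc for k in range(5)]
# [64, 96, 128, 160, 192] - matching the nn.Conv2d definitions
```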


class RRDB(nn.Module):
    """Residual in Residual Dense Block"""

    def __init__(self, nf, gc=32):
        super(RRDB, self).__init__()
        self.RDB1 = ResidualDenseBlock_5C(nf, gc)
        self.RDB2 = ResidualDenseBlock_5C(nf, gc)
        self.RDB3 = ResidualDenseBlock_5C(nf, gc)

    def forward(self, x):
        out = self.RDB1(x)
        out = self.RDB2(out)
        out = self.RDB3(out)
        return out * 0.2 + x


@ARCH_REGISTRY.register()
class RRDBNet(nn.Module):
    def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
        super(RRDBNet, self).__init__()
        self.upscale = upscale
        RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)

        self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.RRDB_trunk = make_layer(RRDB_block_f, nb)
        self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        #### upsampling
        self.upconv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        if upscale == 4:
            self.upconv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)

        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.conv_first(x)
        trunk = self.trunk_conv(self.RRDB_trunk(fea))
        fea = fea + trunk

        if self.upscale == 2 or self.upscale == 3:
            fea = self.lrelu(
                self.upconv1(
                    F.interpolate(fea, scale_factor=self.upscale, mode="nearest")
                )
            )
        if self.upscale == 4:
            fea = self.lrelu(
                self.upconv1(F.interpolate(fea, scale_factor=2, mode="nearest"))
            )
            fea = self.lrelu(
                self.upconv2(F.interpolate(fea, scale_factor=2, mode="nearest"))
            )
        out = self.conv_last(self.lrelu(self.HRconv(fea)))

        return out


================================================
FILE: codes/config/BSRGAN/archs/srresnet.py
================================================
import functools

from utils.registry import ARCH_REGISTRY

from .module_util import *


@ARCH_REGISTRY.register()
class MSRResNet(nn.Module):
    """modified SRResNet"""

    def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
        super(MSRResNet, self).__init__()
        self.upscale = upscale

        self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        basic_block = functools.partial(ResidualBlock_noBN, nf=nf)
        self.recon_trunk = make_layer(basic_block, nb)

        # upsampling
        if self.upscale == 2:
            self.upconv1 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(2)
        elif self.upscale == 3:
            self.upconv1 = nn.Conv2d(nf, nf * 9, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(3)
        elif self.upscale == 4:
            self.upconv1 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.upconv2 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(2)

        self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True)

        # initialization
        initialize_weights(
            [self.conv_first, self.upconv1, self.HRconv, self.conv_last], 0.1
        )
        if self.upscale == 4:
            initialize_weights(self.upconv2, 0.1)

    def forward(self, x):
        fea = self.lrelu(self.conv_first(x))
        out = self.recon_trunk(fea)

        if self.upscale == 4:
            out = self.lrelu(self.pixel_shuffle(self.upconv1(out)))
            out = self.lrelu(self.pixel_shuffle(self.upconv2(out)))
        elif self.upscale == 3 or self.upscale == 2:
            out = self.lrelu(self.pixel_shuffle(self.upconv1(out)))

        out = self.conv_last(self.lrelu(self.HRconv(out)))
        base = F.interpolate(
            x, scale_factor=self.upscale, mode="bilinear", align_corners=False
        )
        out += base
        return out


================================================
FILE: codes/config/BSRGAN/archs/translator.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from utils.registry import ARCH_REGISTRY


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias
    )


class BasicBlock(nn.Sequential):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        bias=False,
        bn=True,
        act=nn.ReLU(True),
    ):

        m = [
            nn.Conv2d(
                in_channels,
                out_channels,
                kernel_size,
                padding=(kernel_size // 2),
                stride=stride,
                bias=bias,
            )
        ]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)
        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:  # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feat, 4 * n_feat, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feat))
                if act:
                    m.append(act())
        elif scale == 3:
            m.append(conv(n_feat, 9 * n_feat, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if act:
                m.append(act())
        elif scale == 1:
            m.append(nn.Identity())
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)


@ARCH_REGISTRY.register()
class Translator(nn.Module):
    def __init__(self, in_nc, out_nc, nf, nb, scale=4, conv=default_conv):
        super().__init__()

        self.scale = scale

        # define head module
        if scale >= 1:
            m_head = [conv(in_nc, nf, 3)]
        else:
            s = int(1 / scale)
            m_head = [nn.Conv2d(in_nc, nf, kernel_size=2 * s + 1, stride=s, padding=s)]

        # define body module
        m_body = [
            ResBlock(conv, nf, 3, act=nn.ReLU(True), res_scale=1) for _ in range(nb)
        ]
        m_body.append(conv(nf, nf, 3))

        # define tail module
        m_tail = [
            Upsampler(conv, scale, nf, act=False) if scale > 1 else nn.Identity(),
            conv(nf, out_nc, 3),
        ]

        self.head = nn.Sequential(*m_head)
        self.body = nn.Sequential(*m_body)
        self.tail = nn.Sequential(*m_tail)

    def forward(self, x):

        x = self.head(x)
        f = self.body(x)
        x = f + x
        x = self.tail(x)

        return x
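For `scale < 1` the head above downsamples with a single strided conv: kernel `2s + 1`, stride `s`, padding `s` divides the spatial size by `s` (for sizes divisible by `s`). A hedged standalone sketch with illustrative channel counts:

```python
import torch
import torch.nn as nn

# The strided head pattern used by Translator when scale < 1.
s = 2  # i.e. scale = 0.5
head = nn.Conv2d(3, 64, kernel_size=2 * s + 1, stride=s, padding=s)
x = torch.randn(1, 3, 32, 32)
out = head(x)  # spatial size 32 -> 16
```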


================================================
FILE: codes/config/BSRGAN/archs/vgg.py
================================================
import os
from collections import OrderedDict

import torch
from torch import nn as nn
from torchvision.models import vgg as vgg

from utils.registry import ARCH_REGISTRY

VGG_PRETRAIN_PATH = "checkpoints/pretrained_models/vgg19-dcbb9e9d.pth"
NAMES = {
    "vgg11": [
        "conv1_1",
        "relu1_1",
        "pool1",
        "conv2_1",
        "relu2_1",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "pool5",
    ],
    "vgg13": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "pool5",
    ],
    "vgg16": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "conv3_3",
        "relu3_3",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "conv4_3",
        "relu4_3",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "conv5_3",
        "relu5_3",
        "pool5",
    ],
    "vgg19": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "conv3_3",
        "relu3_3",
        "conv3_4",
        "relu3_4",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "conv4_3",
        "relu4_3",
        "conv4_4",
        "relu4_4",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "conv5_3",
        "relu5_3",
        "conv5_4",
        "relu5_4",
        "pool5",
    ],
}


def insert_bn(names):
    """Insert bn layer after each conv.
    Args:
        names (list): The list of layer names.
    Returns:
        list: The list of layer names with bn layers.
    """
    names_bn = []
    for name in names:
        names_bn.append(name)
        if "conv" in name:
            position = name.replace("conv", "")
            names_bn.append("bn" + position)
    return names_bn


@ARCH_REGISTRY.register()
class VGGFeatureExtractor(nn.Module):
    """VGG network for feature extraction.
    In this implementation, we allow users to choose whether use normalization
    in the input feature and the type of vgg network. Note that the pretrained
    path must fit the vgg type.
    Args:
        layer_name_list (list[str]): Forward function returns the corresponding
            features according to the layer_name_list.
            Example: {'relu1_1', 'relu2_1', 'relu3_1'}.
        vgg_type (str): Set the type of vgg network. Default: 'vgg19'.
        use_input_norm (bool): If True, normalize the input image. Importantly,
            the input feature must in the range [0, 1]. Default: True.
        range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].
            Default: False.
        requires_grad (bool): If true, the parameters of VGG network will be
            optimized. Default: False.
        remove_pooling (bool): If true, the max pooling operations in VGG net
            will be removed. Default: False.
        pooling_stride (int): The stride of max pooling operation. Default: 2.
    """

    def __init__(
        self,
        layer_name_list,
        vgg_type="vgg19",
        use_input_norm=True,
        range_norm=False,
        requires_grad=False,
        remove_pooling=False,
        pooling_stride=2,
    ):
        super(VGGFeatureExtractor, self).__init__()

        self.layer_name_list = layer_name_list
        self.use_input_norm = use_input_norm
        self.range_norm = range_norm

        self.names = NAMES[vgg_type.replace("_bn", "")]
        if "bn" in vgg_type:
            self.names = insert_bn(self.names)

        # only borrow layers that will be used to avoid unused params
        max_idx = 0
        for v in layer_name_list:
            idx = self.names.index(v)
            if idx > max_idx:
                max_idx = idx

        if os.path.exists(VGG_PRETRAIN_PATH):
            vgg_net = getattr(vgg, vgg_type)(pretrained=False)
            state_dict = torch.load(
                VGG_PRETRAIN_PATH, map_location=lambda storage, loc: storage
            )
            vgg_net.load_state_dict(state_dict)
        else:
            vgg_net = getattr(vgg, vgg_type)(pretrained=True)

        features = vgg_net.features[: max_idx + 1]

        modified_net = OrderedDict()
        for k, v in zip(self.names, features):
            if "pool" in k:
                # if remove_pooling is true, pooling operation will be removed
                if remove_pooling:
                    continue
                else:
                    # in some cases, we may want to change the default stride
                    modified_net[k] = nn.MaxPool2d(kernel_size=2, stride=pooling_stride)
            else:
                modified_net[k] = v

        self.vgg_net = nn.Sequential(modified_net)

        if not requires_grad:
            self.vgg_net.eval()
            for param in self.parameters():
                param.requires_grad = False
        else:
            self.vgg_net.train()
            for param in self.parameters():
                param.requires_grad = True

        if self.use_input_norm:
            # the mean is for image with range [0, 1]
            self.register_buffer(
                "mean", torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
            )
            # the std is for image with range [0, 1]
            self.register_buffer(
                "std", torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
            )

    def forward(self, x):
        """Forward function.
        Args:
            x (Tensor): Input tensor with shape (n, c, h, w).
        Returns:
            Tensor: Forward results.
        """
        if self.range_norm:
            x = (x + 1) / 2
        if self.use_input_norm:
            x = (x - self.mean) / self.std

        output = {}
        for key, layer in self.vgg_net._modules.items():
            x = layer(x)
            if key in self.layer_name_list:
                output[key] = x.clone()

        return output
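The `insert_bn` helper in this file expands the plain layer-name lists into their batch-norm variants. A standalone sketch of the same mapping (logic copied from the function above) makes the naming convention concrete:

```python
def insert_bn(names):
    # mirror of insert_bn in vgg.py: a "bnX_Y" entry follows each "convX_Y"
    names_bn = []
    for name in names:
        names_bn.append(name)
        if "conv" in name:
            names_bn.append("bn" + name.replace("conv", ""))
    return names_bn


assert insert_bn(["conv1_1", "relu1_1", "pool1"]) == [
    "conv1_1", "bn1_1", "relu1_1", "pool1"
]
```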


================================================
FILE: codes/config/BSRGAN/count_flops.py
================================================
import argparse
import sys

import torch
from torchsummaryX import summary

sys.path.append("../../")
import utils.option as option
from models import create_model


parser = argparse.ArgumentParser()
parser.add_argument(
    "--opt",
    type=str,
    default="options/setting1/test/test_setting1_x4.yml",
    help="Path to option YMAL file of Predictor.",
)
args = parser.parse_args()
opt = option.parse(args.opt, root_path=".", is_train=True)

opt = option.dict_to_nonedict(opt)
model = create_model(opt)

test_tensor = torch.randn(1, 3, 270, 180).cuda()
for name, net in model.networks.items():
    summary(net.cuda(), x=test_tensor)
    print("Above are results for net {}".format(name))
    input()
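The per-layer parameter counts printed by `summary` can be cross-checked by hand; a minimal sketch of the counting rule for a plain conv layer (example shapes only, not tied to any specific network here):

```python
def conv2d_params(c_in, c_out, k, bias=True):
    # weight tensor is c_out x c_in x k x k, plus an optional per-channel bias
    return c_out * c_in * k * k + (c_out if bias else 0)


assert conv2d_params(3, 64, 3) == 1792       # e.g. a first 3x3 conv, 3 -> 64
assert conv2d_params(64, 64, 3) == 36928     # a 3x3 conv, 64 -> 64
```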


================================================
FILE: codes/config/BSRGAN/inference.py
================================================
import argparse
import os
import os.path as osp
import sys
from glob import glob

import cv2
import numpy as np
import torch
from tqdm import tqdm

sys.path.append("../../")
import utils as util
import utils.option as option
from models import create_model



#### options
parser = argparse.ArgumentParser()
parser.add_argument(
    "-opt",
    type=str,
    default="options/test/2020Track2.yml",
    help="Path to options YMAL file.",
)
parser.add_argument("-input_dir", type=str, default="../../../data_samples/LR")
parser.add_argument("-output_dir", type=str, default="../../../data_samples/BSRGAN")
args = parser.parse_args()
opt = option.parse(args.opt, is_train=False)

opt = option.dict_to_nonedict(opt)

model = create_model(opt)

if not osp.exists(args.output_dir):
    os.makedirs(args.output_dir)

test_files = glob(osp.join(args.input_dir, "*"))
for path in tqdm(test_files):
    name = osp.splitext(osp.basename(path))[0]

    img = cv2.imread(path)[:, :, [2, 1, 0]]  # BGR -> RGB
    img = img.transpose(2, 0, 1)[None] / 255  # HWC -> 1xCxHxW, scaled to [0, 1]
    img_t = torch.as_tensor(np.ascontiguousarray(img)).float()

    model.test({"src": img_t})
    outdict = model.get_current_visuals()

    sr = outdict["sr"]
    sr_im = util.tensor2img(sr)

    save_path = osp.join(args.output_dir, "{}_x{}.png".format(name, opt["scale"]))
    cv2.imwrite(save_path, sr_im)
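The preprocessing above (channel reversal plus HWC-to-CHW transpose) can be illustrated without numpy; this is a pure-Python sketch of the same index gymnastics on tiny nested lists, purely for intuition:

```python
def bgr_to_rgb(img):
    # reverse the channel order of each pixel, like img[:, :, [2, 1, 0]]
    return [[list(reversed(px)) for px in row] for row in img]


def hwc_to_chw(img):
    # rearrange H x W x C nested lists into C x H x W, like img.transpose(2, 0, 1)
    H, W, C = len(img), len(img[0]), len(img[0][0])
    return [[[img[h][w][c] for w in range(W)] for h in range(H)] for c in range(C)]


bgr = [[[10, 20, 30], [40, 50, 60]]]  # a 1x2 BGR "image"
chw = hwc_to_chw(bgr_to_rgb(bgr))
assert chw == [[[30, 60]], [[20, 50]], [[10, 40]]]
```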


================================================
FILE: codes/config/BSRGAN/models/__init__.py
================================================
import importlib
import logging
import os
import os.path as osp

from utils.registry import MODEL_REGISTRY

logger = logging.getLogger("base")

model_folder = osp.dirname(__file__)
model_names = [
    osp.splitext(osp.basename(v))[0]
    for v in os.listdir(model_folder)
    if v.endswith("_model.py")
]
_model_modules = [
    importlib.import_module(f"models.{file_name}") for file_name in model_names
]


def create_model(opt, **kwarg):
    model = opt["model"]
    m = MODEL_REGISTRY.get(model)(opt, **kwarg)
    logger.info("Model [{:s}] is created.".format(m.__class__.__name__))
    return m
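`create_model` resolves the `"model"` key of the options dict through `MODEL_REGISTRY`, which is populated as a side effect of importing every `*_model.py` module. A simplified, self-contained sketch of that registry mechanism (the real implementation lives in `utils/registry.py`; the class below is an assumption for illustration):

```python
class Registry:
    """Minimal name -> class registry; a simplified stand-in for
    utils.registry.Registry."""

    def __init__(self):
        self._store = {}

    def register(self):
        # used as a class decorator: @MODEL_REGISTRY.register()
        def deco(cls):
            self._store[cls.__name__] = cls
            return cls
        return deco

    def get(self, name):
        return self._store[name]


MODEL_REGISTRY = Registry()


@MODEL_REGISTRY.register()
class SRModel:
    def __init__(self, opt):
        self.opt = opt


# create_model essentially does: MODEL_REGISTRY.get(opt["model"])(opt)
model = MODEL_REGISTRY.get("SRModel")({"scale": 4})
assert model.opt["scale"] == 4
```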


================================================
FILE: codes/config/BSRGAN/models/base_model.py
================================================
import logging
import os
from collections import OrderedDict

import torch
import torch.nn as nn
from torch.nn.parallel import DataParallel, DistributedDataParallel

from archs import build_loss, build_network, build_scheduler
from utils.registry import MODEL_REGISTRY

logger = logging.getLogger("base")


@MODEL_REGISTRY.register()
class BaseModel:
    def __init__(self, opt):

        self.opt = opt

        if opt["dist"]:
            self.rank = torch.distributed.get_rank()
            self.world_size = torch.distributed.get_world_size()
        else:
            self.rank = 0  # non dist training

        self.device = torch.device("cuda" if opt["gpu_ids"] is not None else "cpu")
        self.is_train = opt["is_train"]
        self.log_dict = OrderedDict()

        self.data_names = []
        self.networks = {}

        self.optimizers = {}
        self.schedulers = {}

    def setup_train(self, train_opt):
        # define losses
        loss_opt = train_opt["losses"]
        self.losses = self.build_losses(loss_opt)

        # build optmizers
        optimizer_opts = train_opt["optimizers"]
        self.optimizers = self.build_optimizers(optimizer_opts)

        # set schedulers
        scheduler_opts = train_opt["schedulers"]
        self.schedulers = self.build_schedulers(scheduler_opts)

        # set to training state
        self.set_network_state(self.networks.keys(), "train")

    def feed_data(self, data):
        pass

    def optimize_parameters(self):
        pass

    def get_current_visuals(self):
        pass

    def get_current_losses(self):
        pass

    def load(self):
        pass

    def build_network(self, net_opt):

        net = build_network(net_opt)

        if isinstance(net, nn.Module):
            net = self.model_to_device(net)

            if net_opt.get("pretrain"):
                pretrain = net_opt.pop("pretrain")
                self.load_network(net, pretrain["path"], pretrain["strict_load"])

            self.print_network(net)
        return net

    def build_losses(self, loss_opt):
        losses = {}

        defined_loss_names = list(loss_opt.keys())
        assert set(defined_loss_names).issubset(set(self.loss_names))

        for name in defined_loss_names:
            loss_conf = loss_opt.get(name)
            if loss_conf["weight"] > 0:
                self.loss_weights[name] = loss_conf.pop("weight")
                losses[name] = build_loss(loss_conf).to(self.device)

        return losses

    def build_optimizers(self, optim_opts):
        optimizers = {}

        if "default" in optim_opts.keys():
            default_optim = optim_opts.pop("default")

        defined_optimizer_names = list(optim_opts.keys())
        assert set(defined_optimizer_names).issubset(self.networks.keys())

        for name in defined_optimizer_names:
            optim_opt = optim_opts[name]
            if optim_opt is None:
                optim_opt = default_optim.copy()

            params = []
            for v in self.networks[name].parameters():
                if v.requires_grad:
                    params.append(v)

            optim_type = optim_opt.pop("type")
            optimizer = getattr(torch.optim, optim_type)(params=params, **optim_opt)
            optimizers[name] = optimizer

        return optimizers

    def build_schedulers(self, scheduler_opts):
        """Set up scheduler."""
        schedulers = {}
        if "default" in scheduler_opts.keys():
            default_opt = scheduler_opts.pop("default")

        for name in self.optimizers.keys():
            scheduler_opt = scheduler_opts[name]
            if scheduler_opt is None:
                scheduler_opt = default_opt.copy()

            schedulers[name] = build_scheduler(self.optimizers[name], scheduler_opt)

        return schedulers

    def model_to_device(self, net):
        """Model to device. It also warps models with DistributedDataParallel
        or DataParallel.
        Args:
            net (nn.Module)
        """
        net = net.to(self.device)
        if self.opt["dist"]:
            net = DistributedDataParallel(net, device_ids=[torch.cuda.current_device()])
        else:
            net = DataParallel(net)
        return net

    def print_network(self, net):
        # Generator
        s, n = self.get_network_description(net)
        if isinstance(net, nn.DataParallel) or isinstance(net, DistributedDataParallel):
            net_struc_str = "{} - {}".format(
                net.__class__.__name__, net.module.__class__.__name__
            )
        else:
            net_struc_str = "{}".format(net.__class__.__name__)
        if self.rank <= 0:
            logger.info(
                "Network G structure: {}, with parameters: {:,d}".format(
                    net_struc_str, n
                )
            )
            logger.info(s)

    def set_optimizer(self, names, operation):
        for name in names:
            getattr(self.optimizers[name], operation)()

    def set_requires_grad(self, names, requires_grad):
        for name in names:
            if isinstance(self.networks[name], nn.Module):
                for v in self.networks[name].parameters():
                    v.requires_grad = requires_grad

    def set_network_state(self, names, state):
        for name in names:
            if isinstance(self.networks[name], nn.Module):
                getattr(self.networks[name], state)()

    def clip_grad_norm(self, names, norm):
        for name in names:
            nn.utils.clip_grad_norm_(self.networks[name].parameters(), max_norm=norm)

    def _set_lr(self, lr_groups_l):
        """set learning rate for warmup,
        lr_groups_l: list of lr_groups, one per optimizer"""
        for optimizer, lr_groups in zip(self.optimizers.values(), lr_groups_l):
            for param_group, lr in zip(optimizer.param_groups, lr_groups):
                param_group["lr"] = lr

    def _get_init_lr(self):
        # get the initial lr, which is set by the scheduler
        init_lr_groups_l = []
        for optimizer in self.optimizers.values():
            init_lr_groups_l.append([v["initial_lr"] for v in optimizer.param_groups])
        return init_lr_groups_l

    def update_learning_rate(self, cur_iter, warmup_iter=-1):
        for _, scheduler in self.schedulers.items():
            scheduler.step()
        #### set up warm up learning rate
        if cur_iter < warmup_iter:
            # get initial lr for each group
            init_lr_g_l = self._get_init_lr()
            # modify warming-up learning rates
            warm_up_lr_l = []
            for init_lr_g in init_lr_g_l:
                warm_up_lr_l.append([v / warmup_iter * cur_iter for v in init_lr_g])
            # set learning rate
            self._set_lr(warm_up_lr_l)

    def get_current_learning_rate(self):
        return list(self.optimizers.values())[0].param_groups[0]["lr"]

    def get_network_description(self, network):
        """Get the string and total parameters of the network"""
        if isinstance(network, nn.DataParallel) or isinstance(
            network, DistributedDataParallel
        ):
            network = network.module
        s = str(network)
        n = sum(map(lambda x: x.numel(), network.parameters()))
        return s, n

    def save_network(self, network, network_label, iter_label):
        save_filename = "{}_{}.pth".format(iter_label, network_label)
        save_path = os.path.join(self.opt["path"]["models"], save_filename)
        if isinstance(network, nn.DataParallel) or isinstance(
            network, DistributedDataParallel
        ):
            network = network.module
        state_dict = network.state_dict()
        for key, param in state_dict.items():
            state_dict[key] = param.cpu()
        torch.save(state_dict, save_path)

    def save(self, iter_label):
        for name in self.optimizers.keys():
            self.save_network(self.networks[name], name, iter_label)

    def load_network(self, network, load_path, strict=True):
        if load_path is not None:
            if isinstance(network, nn.DataParallel) or isinstance(
                network, DistributedDataParallel
            ):
                network = network.module
            load_net = torch.load(load_path)
            load_net_clean = OrderedDict()  # remove unnecessary 'module.'
            for k, v in load_net.items():
                if k.startswith("module."):
                    load_net_clean[k[7:]] = v
                else:
                    load_net_clean[k] = v
            network.load_state_dict(load_net_clean, strict=strict)

    def save_training_state(self, epoch, iter_step):
        """Saves training state during training, which will be used for resuming"""
        state = {"epoch": epoch, "iter": iter_step, "schedulers": {}, "optimizers": {}}
        for k, s in self.schedulers.items():
            state["schedulers"][k] = s.state_dict()
        for k, o in self.optimizers.items():
            state["optimizers"][k] = o.state_dict()
        save_filename = "{}.state".format(iter_step)
        save_path = os.path.join(self.opt["path"]["training_state"], save_filename)
        torch.save(state, save_path)

    def resume_training(self, resume_state):
        """Resume the optimizers and schedulers for training"""
        resume_optimizers = resume_state["optimizers"]
        resume_schedulers = resume_state["schedulers"]
        assert len(resume_optimizers) == len(
            self.optimizers
        ), "Wrong lengths of optimizers"
        assert len(resume_schedulers) == len(
            self.schedulers
        ), "Wrong lengths of schedulers"
        for name, o in resume_optimizers.items():
            self.optimizers[name].load_state_dict(o)
        for name, s in resume_schedulers.items():
            self.schedulers[name].load_state_dict(s)

    def reduce_loss_dict(self, loss_dict):
        """reduce loss dict.
        In distributed training, it averages the losses among different GPUs .
        Args:
            loss_dict (OrderedDict): Loss dict.
        """
        with torch.no_grad():
            if self.opt["dist"]:
                keys = []
                losses = []
                for name, value in loss_dict.items():
                    keys.append(name)
                    losses.append(value)
                losses = torch.stack(losses, 0)
                torch.distributed.reduce(losses, dst=0)
                if self.rank == 0:
                    losses /= self.world_size
                loss_dict = {key: loss for key, loss in zip(keys, losses)}

            log_dict = OrderedDict()
            for name, value in loss_dict.items():
                log_dict[name] = value.mean().item()

            return log_dict

    def get_current_log(self):
        return self.log_dict
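The warmup branch of `update_learning_rate` scales each group's initial learning rate linearly with the iteration count. The arithmetic in isolation (values below are example hyperparameters, not from any config in this repo):

```python
def warmup_lr(init_lr, cur_iter, warmup_iter):
    # linear ramp from 0 up to the scheduler's initial lr over warmup_iter
    # steps, matching the expression in update_learning_rate
    return init_lr / warmup_iter * cur_iter


assert abs(warmup_lr(2e-4, 1000, 4000) - 5e-5) < 1e-12   # a quarter of the way in
assert abs(warmup_lr(2e-4, 4000, 4000) - 2e-4) < 1e-12   # ramp complete
```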


================================================
FILE: codes/config/BSRGAN/models/sr_model.py
================================================
import logging
from collections import OrderedDict

import torch
import torch.nn as nn

from utils.registry import MODEL_REGISTRY

from .base_model import BaseModel

logger = logging.getLogger("base")


@MODEL_REGISTRY.register()
class SRModel(BaseModel):
    def __init__(self, opt):
        super().__init__(opt)

        self.data_names = ["lr", "hr"]

        self.network_names = ["netSR"]
        self.networks = {}

        self.loss_names = ["sr_adv", "sr_pix", "sr_percep"]
        self.loss_weights = {}
        self.losses = {}
        self.optimizers = {}

        # define networks and load pretrained models
        nets_opt = opt["networks"]
        defined_network_names = list(nets_opt.keys())
        assert set(defined_network_names).issubset(set(self.network_names))

        for name in defined_network_names:
            setattr(self, name, self.build_network(nets_opt[name]))
            self.networks[name] = getattr(self, name)

        if self.is_train:
            # setup loss, optimizers, schedulers
            self.setup_train(opt["train"])

    def feed_data(self, data):

        self.lr = data["src"].to(self.device)
        self.hr = data["tgt"].to(self.device)

    def forward(self):

        self.sr = self.netSR(self.lr)

    def optimize_parameters(self, step):

        self.forward()

        loss_dict = OrderedDict()

        l_sr = 0

        sr_pix = self.losses["sr_pix"](self.hr, self.sr)
        loss_dict["sr_pix"] = sr_pix
        l_sr += self.loss_weights["sr_pix"] * sr_pix

        if self.losses.get("sr_adv"):
            self.set_requires_grad(["netD"], False)
            sr_adv_g = self.calculate_rgan_loss_G(
                self.netD, self.losses["sr_adv"], self.hr, self.sr
            )
            loss_dict["sr_adv_g"] = sr_adv_g
            l_sr += self.loss_weights["sr_adv"] * sr_adv_g

        if self.losses.get("sr_percep"):
            sr_percep, sr_style = self.losses["sr_percep"](self.hr, self.sr)
            loss_dict["sr_percep"] = sr_percep
            if sr_style is not None:
                loss_dict["sr_style"] = sr_style
                l_sr += self.loss_weights["sr_percep"] * sr_style
            l_sr += self.loss_weights["sr_percep"] * sr_percep

        self.set_optimizer(names=["netSR"], operation="zero_grad")
        l_sr.backward()
        self.set_optimizer(names=["netSR"], operation="step")

        if self.losses.get("sr_adv"):
            self.set_requires_grad(["netD"], True)
            sr_adv_d = self.calculate_rgan_loss_D(
                self.netD, self.losses["sr_adv"], self.hr, self.sr
            )
            loss_dict["sr_adv_d"] = sr_adv_d

            self.optimizers["netD"].zero_grad()
            sr_adv_d.backward()
            self.optimizers["netD"].step()

        self.log_dict = self.reduce_loss_dict(loss_dict)

    def calculate_rgan_loss_D(self, netD, criterion, real, fake):

        d_pred_fake = netD(fake.detach())
        d_pred_real = netD(real)
        loss_real = criterion(
            d_pred_real - d_pred_fake.detach().mean(), True, is_disc=False
        )
        loss_fake = criterion(
            d_pred_fake - d_pred_real.detach().mean(), False, is_disc=False
        )

        loss = (loss_real + loss_fake) / 2

        return loss

    def calculate_rgan_loss_G(self, netD, criterion, real, fake):

        d_pred_fake = netD(fake)
        d_pred_real = netD(real).detach()
        loss_real = criterion(d_pred_real - d_pred_fake.mean(), False, is_disc=False)
        loss_fake = criterion(d_pred_fake - d_pred_real.mean(), True, is_disc=False)

        loss = (loss_real + loss_fake) / 2

        return loss

    def test(self, data, crop_size=None):
        self.real_lr = data["src"].to(self.device)
        self.netSR.eval()
        with torch.no_grad():
            if crop_size is None:
                self.fake_real_hr = self.netSR(self.real_lr)
            else:
                self.fake_real_hr = self.crop_test(self.real_lr, crop_size)
        self.netSR.train()
    
    def crop_test(self, lr, crop_size):
        b, c, h, w = lr.shape
        scale = self.opt["scale"]

        h_start = list(range(0, h-crop_size, crop_size))
        w_start = list(range(0, w-crop_size, crop_size))

        sr1 = torch.zeros(b, c, int(h*scale), int(w* scale), device=self.device) - 1
        for hs in h_start:
            for ws in w_start:
                lr_patch = lr[:, :, hs: hs+crop_size, ws: ws+crop_size]
                sr_patch = self.netSR(lr_patch)

                sr1[:, :, 
                    int(hs*scale):int((hs+crop_size)*scale),
                    int(ws*scale):int((ws+crop_size)*scale)
                ] = sr_patch
        
        h_end = list(range(h, crop_size, -crop_size))
        w_end = list(range(w, crop_size, -crop_size))

        sr2 = torch.zeros(b, c, int(h*scale), int(w* scale), device=self.device) - 1
        for hd in h_end:
            for wd in w_end:
                lr_patch = lr[:, :, hd-crop_size:hd, wd-crop_size:wd]
                sr_patch = self.netSR(lr_patch)

                sr2[:, :, 
                    int((hd-crop_size)*scale):int(hd*scale),
                    int((wd-crop_size)*scale):int(wd*scale)
                ] = sr_patch

        # fuse the two passes: take whichever pass wrote a pixel (-1 marks
        # unwritten cells), and average where both passes overlap
        mask1 = (
            (sr1 == -1).float() * 0 +
            (sr2 == -1).float() * 1 +
            ((sr1 > 0) * (sr2 > 0)).float() * 0.5
        )

        mask2 = (
            (sr1 == -1).float() * 1 + 
            (sr2 == -1).float() * 0 + 
            ((sr1 > 0) * (sr2 > 0)).float() * 0.5
        )

        sr = mask1 * sr1 + mask2 * sr2

        return sr
            
    def get_current_visuals(self, need_GT=True):
        out_dict = OrderedDict()
        out_dict["lr"] = self.real_lr.detach()[0].float().cpu()
        out_dict["sr"] = self.fake_real_hr.detach()[0].float().cpu()
        return out_dict
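`crop_test` runs two tiling passes, one anchored at the top-left and one at the bottom-right, so that patches which do not divide the image evenly still cover every pixel between them. A plain-Python sketch of the interval logic (detached from the tensor code; assumes `crop` is smaller than the image side):

```python
def tile_starts(length, crop):
    # forward (top/left-anchored) and backward (bottom/right-anchored) tile
    # intervals, mirroring the two loops in SRModel.crop_test
    fwd = [(s, s + crop) for s in range(0, length - crop, crop)]
    bwd = [(e - crop, e) for e in range(length, crop, -crop)]
    return fwd + bwd


for length in (96, 100, 130):
    covered = set()
    for a, b in tile_starts(length, 48):
        covered.update(range(a, b))
    assert covered == set(range(length))  # the two passes jointly cover everything
```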


================================================
FILE: codes/config/BSRGAN/options/test/2017Track2_2020Track1.yml
================================================
#### general settings
name: 2017Track2_2020Track1
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [6]

metrics: [psnr, ssim, lpips, niqe, piqe, brisque] 

datasets:
  test1:
    name: 2017Track2
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2017/valid_LR/x4.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb
  test2:
    name: 2020Track1
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2020/track1/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb

#### network structures
networks:
  netSR:
    which_network: RRDBNet
    setting:
      in_nc: 3
      out_nc: 3
      nf: 64
      nb: 23
      gc: 32
      upscale: 4
    pretrain: 
      path: ../../../checkpoints/BSRGAN/BSRGAN.pth
      strict_load: true


================================================
FILE: codes/config/BSRGAN/options/test/2018Track2_2018Track4.yml
================================================
#### general settings
name: 2018Track2_2018Track4
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [6]

metrics: [best_psnr, best_ssim, best_lpips, niqe, piqe, brisque] 

datasets:
  test1:
    name: 2018Track2
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2018/track2/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb
  test2:
    name: 2018Track4
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2018/track4/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb

#### network structures
networks:
  netSR:
    which_network: RRDBNet
    setting:
      in_nc: 3
      out_nc: 3
      nf: 64
      nb: 23
      gc: 32
      upscale: 4
    pretrain: 
      path: ../../../checkpoints/BSRGAN/BSRGAN.pth
      strict_load: true


================================================
FILE: codes/config/BSRGAN/options/test/2020Track2.yml
================================================
#### general settings
name: 2020Track2
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [0]

metrics: [niqe, piqe, brisque] 

datasets:
  test1:
    name: 2020Track2
    mode: SingleDataset
    data_type: lmdb
    dataroot: /home/lzx/SRDatasets/NTIRE2020/track2/test.lmdb

#### network structures
networks:
  netSR:
    which_network: RRDBNet
    setting:
      in_nc: 3
      out_nc: 3
      nf: 64
      nb: 23
      gc: 32
      upscale: 4
    pretrain: 
      path: ../../../checkpoints/BSRGAN/BSRGAN.pth
      strict_load: true


================================================
FILE: codes/config/BSRGAN/test.py
================================================
import argparse
import logging
import os.path
import sys
import time
from collections import OrderedDict, defaultdict

import numpy as np
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

sys.path.append("../../")
import utils as util
import utils.option as option
from data import create_dataloader, create_dataset
from metrics import IQA
from models import create_model
from utils import bgr2ycbcr, imresize


def parse_args():
    parser = argparse.ArgumentParser(description="Test SR networks")
    # general
    parser.add_argument(
        "--opt", help="experiment configure file name", required=True, type=str
    )
    parser.add_argument(
        "--root_path",
        help="experiment root path",
        default="../../../",
        type=str,
    )
    # distributed training
    parser.add_argument("--gpu", help="gpu id for multiprocessing training", type=str)
    parser.add_argument(
        "--world-size",
        default=1,
        type=int,
        help="number of nodes for distributed training",
    )
    parser.add_argument(
        "--dist-url",
        default="tcp://127.0.0.1:23456",
        type=str,
        help="url used to set up distributed training",
    )
    parser.add_argument(
        "--rank", default=0, type=int, help="node rank for distributed training"
    )

    args = parser.parse_args()

    return args


def main():
    args = parse_args()
    opt = option.parse(args.opt, args.root_path, is_train=False)

    # convert to NoneDict, which returns None for missing keys
    opt = option.dict_to_nonedict(opt)

    if args.dist_url == "env://" and args.world_size == -1:
        args.world_size = int(os.environ["WORLD_SIZE"])

    ngpus_per_node = torch.cuda.device_count()
    args.world_size = ngpus_per_node * args.world_size

    opt["dist"] = args.world_size > 1

    util.mkdirs(
        path for key, path in opt["path"].items() if key != "experiments_root"
    )

    os.system("rm ./result")
    os.symlink(os.path.join(opt["path"]["results_root"], ".."), "./result")

    if opt["dist"]:
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, opt, args))
    else:
        main_worker(0, 1, opt, args)


def main_worker(gpu, ngpus_per_node, opt, args):

    if opt["dist"]:
        if args.dist_url == "env://" and args.rank == -1:
            rank = int(os.environ["RANK"])

        rank = args.rank * ngpus_per_node + gpu
        print(
            f"Init process group: dist_url: {args.dist_url}, world_size: {args.world_size}, rank: {rank}"
        )

        dist.init_process_group(
            backend="nccl",
            init_method=args.dist_url,
            world_size=args.world_size,
            rank=rank,
        )

        torch.cuda.set_device(gpu)

    else:
        rank = 0

    torch.backends.cudnn.benchmark = True

    util.setup_logger(
        "base",
        opt["path"]["log"],
        "test_" + opt["name"] + "_rank{}".format(rank),
        level=logging.INFO,
        screen=True,
        tofile=True,
    )

    measure = IQA(metrics=opt["metrics"], cuda=True)

    logger = logging.getLogger("base")
    logger.info(option.dict2str(opt))

    # Create test dataset and dataloader
    test_datasets = []
    test_loaders = []

    for phase, dataset_opt in sorted(opt["datasets"].items()):

        test_set = create_dataset(dataset_opt)
        test_loader = create_dataloader(test_set, dataset_opt, opt["dist"])

        if rank == 0:
            logger.info(
                "Number of test images in [{:s}]: {:d}".format(
                    dataset_opt["name"], len(test_set)
                )
            )
        test_datasets.append(test_set)
        test_loaders.append(test_loader)

    # load pretrained model by default
    model = create_model(opt)

    for test_dataset, test_loader in zip(test_datasets, test_loaders):

        test_set_name = test_dataset.opt["name"]
        dataset_dir = os.path.join(opt["path"]["results_root"], test_set_name)

        if rank == 0:
            logger.info("\nTesting [{:s}]...".format(test_set_name))
            util.mkdir(dataset_dir)

        validate(
            model,
            test_dataset,
            test_loader,
            opt,
            measure,
            dataset_dir,
            test_set_name,
            logger,
        )


def validate(
    model, dataset, dist_loader, opt, measure, dataset_dir, test_set_name, logger
):

    test_results = {}
    test_results_y = {}
    for metric in opt["metrics"]:
        test_results[metric] = torch.zeros((len(dataset))).cuda()
        test_results_y[metric] = torch.zeros((len(dataset))).cuda()

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        world_size = 1
        rank = 0

    indices = list(range(rank, len(dataset), world_size))
    for (
        idx,
        test_data,
    ) in enumerate(dist_loader):
        idx = indices[idx]

        img_path = test_data["src_path"][0]
        img_name = img_path.split("/")[-1].split(".")[0]

        model.test(test_data)
        visuals = model.get_current_visuals()
        sr_img = util.tensor2img(visuals["sr"])  # uint8

        suffix = opt["suffix"]
        if suffix:
            save_img_path = os.path.join(dataset_dir, img_name + suffix + ".png")
        else:
            save_img_path = os.path.join(dataset_dir, img_name + ".png")
        util.save_img(sr_img, save_img_path)

        message = "img:{:15s}; ".format(img_name)

        crop_border = (
            opt["crop_border"] if opt["crop_border"] is not None else opt["scale"]
        )

        if crop_border == 0:
            cropped_sr_img = sr_img
        else:
            cropped_sr_img = sr_img[
                crop_border:-crop_border, crop_border:-crop_border, :
            ]

        if "tgt" in test_data.keys():
            gt_img = util.tensor2img(test_data["tgt"][0].double().cpu())

            if crop_border == 0:
                cropped_gt_img = gt_img
            else:
                cropped_gt_img = gt_img[
                    crop_border:-crop_border, crop_border:-crop_border, :
                ]
        else:
            gt_img = None
            cropped_gt_img = None

        message += "Scores - "
        scores = measure(res=cropped_sr_img, ref=cropped_gt_img, metrics=opt["metrics"])
        for k, v in scores.items():
            test_results[k][idx] = v
            message += "{}: {:.6f}; ".format(k, v)

        if sr_img.shape[2] == 3:  # RGB image
            sr_img_y = bgr2ycbcr(sr_img, only_y=True)
            if crop_border == 0:
                cropped_sr_img_y = sr_img_y * 255
            else:
                cropped_sr_img_y = (
                    sr_img_y[crop_border:-crop_border, crop_border:-crop_border] * 255
                )
            if gt_img is not None:
                gt_img_y = bgr2ycbcr(gt_img, only_y=True)
                if crop_border == 0:
                    cropped_gt_img_y = gt_img_y * 255
                else:
                    cropped_gt_img_y = (
                        gt_img_y[crop_border:-crop_border, crop_border:-crop_border]
                        * 255
                    )
            else:
                gt_img_y = None
                cropped_gt_img_y = None

            message += "Y Scores - "
            scores = measure(
                res=cropped_sr_img_y, ref=cropped_gt_img_y, metrics=opt["metrics"]
            )
            for k, v in scores.items():
                test_results_y[k][idx] = v
                message += "{}: {:.6f}; ".format(k, v)

        logger.info(message)

    if opt["dist"]:
        for k, v in test_results.items():
            dist.reduce(v, dst=0)
        dist.barrier()

        for k, v in test_results_y.items():
            dist.reduce(v, dst=0)
        dist.barrier()

    # log
    avg_results = {}
    message = "Average Results for {}\n".format(test_set_name)

    if rank == 0:
        for k, v in test_results.items():
            avg_results[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results[k])

        logger.info(message)

    avg_results_y = {}
    message = "Average Results on Y channel for {}\n".format(test_set_name)

    if rank == 0:
        for k, v in test_results_y.items():
            avg_results_y[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results_y[k])

        logger.info(message)


if __name__ == "__main__":
    main()


================================================
FILE: codes/config/BSRGAN/train.py
================================================
import argparse
import logging
import math
import os
import random
import sys
import time
from collections import defaultdict

import numpy as np
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tensorboardX import SummaryWriter
from tqdm import tqdm

sys.path.append("../../")
import utils as util
import utils.option as option
from data import create_dataloader, create_dataset
from metrics import IQA
from models import create_model


def parse_args():
    parser = argparse.ArgumentParser(description="Train SR network")
    # general
    parser.add_argument(
        "--opt", help="experiment configure file name", required=True, type=str
    )
    parser.add_argument(
        "--root_path",
        help="root path of the project",
        default="../../../",
        type=str,
    )
    # distributed training
    parser.add_argument("--gpu", help="gpu id for multiprocessing training", type=str)
    parser.add_argument(
        "--world-size",
        default=1,
        type=int,
        help="number of nodes for distributed training",
    )
    parser.add_argument(
        "--dist-url",
        default="tcp://127.0.0.1:23456",
        type=str,
        help="url used to set up distributed training",
    )
    parser.add_argument(
        "--rank", default=0, type=int, help="node rank for distributed training"
    )

    args = parser.parse_args()

    return args


def setup_dataloaer(opt, logger):

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        rank = 0
        world_size = 1

    for phase, dataset_opt in opt["datasets"].items():
        if phase == "train":
            train_set = create_dataset(dataset_opt)
            train_loader = create_dataloader(train_set, dataset_opt, opt["dist"])
            total_iters = opt["train"]["niter"]
            total_epochs = total_iters // (len(train_loader) - 1) + 1
            if rank == 0:
                logger.info(
                    "Number of train images: {:,d}, iters per epoch: {:,d}".format(
                        len(train_set), len(train_loader)
                    )
                )
                logger.info(
                    "Total epochs needed: {:d} for iters {:,d}".format(
                        total_epochs, opt["train"]["niter"]
                    )
                )

        elif phase == "val":
            val_set = create_dataset(dataset_opt)
            val_loader = create_dataloader(val_set, dataset_opt, opt["dist"])
            if rank == 0:
                logger.info(
                    "Number of val images in [{:s}]: {:d}".format(
                        dataset_opt["name"], len(val_set)
                    )
                )
        else:
            raise NotImplementedError("Phase [{:s}] is not recognized.".format(phase))

    assert train_loader is not None
    assert val_loader is not None

    return train_set, train_loader, val_set, val_loader, total_iters, total_epochs


def main():
    args = parse_args()
    opt = option.parse(args.opt, args.root_path, is_train=True)

    # convert to NoneDict, which returns None for missing keys
    opt = option.dict_to_nonedict(opt)

    if args.dist_url == "env://" and args.world_size == -1:
        args.world_size = int(os.environ["WORLD_SIZE"])

    ngpus_per_node = torch.cuda.device_count()
    args.world_size = ngpus_per_node * args.world_size

    opt["dist"] = args.world_size > 1

    if opt["train"].get("resume_state", None) is None:
        util.mkdir_and_rename(
            opt["path"]["experiments_root"]
        )  # rename experiment folder if exists
        util.mkdirs(
            (path for key, path in opt["path"].items() if not key == "experiments_root")
        )
        os.system("rm ./log")
        os.symlink(os.path.join(opt["path"]["experiments_root"], ".."), "./log")

    if opt["dist"]:
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, opt, args))
    else:
        main_worker(0, 1, opt, args)


def main_worker(gpu, ngpus_per_node, opt, args):

    if opt["dist"]:
        if args.dist_url == "env://" and args.rank == -1:
            args.rank = int(os.environ["RANK"])

        rank = args.rank * ngpus_per_node + gpu
        print(
            f"Init process group: dist_url: {args.dist_url}, world_size: {args.world_size}, rank: {rank}"
        )

        dist.init_process_group(
            backend="nccl",
            init_method=args.dist_url,
            world_size=args.world_size,
            rank=rank,
        )

        torch.cuda.set_device(gpu)

    else:
        rank = 0

    seed = opt["train"]["manual_seed"]
    if seed is None:
        util.set_random_seed(rank)
    else:
        util.set_random_seed(seed + rank)

    torch.backends.cudnn.benchmark = True
    # torch.backends.cudnn.deterministic = True

    # setup tensorboard and val logger
    if rank == 0:
        if opt["use_tb_logger"] and "debug" not in opt["name"]:
            tb_logger = SummaryWriter(log_dir="log/{}/tb_logger/".format(opt["name"]))

        util.setup_logger(
            "val",
            opt["path"]["log"],
            "val_" + opt["name"],
            level=logging.INFO,
            screen=True,
            tofile=True,
        )

    measure = IQA(metrics=opt["metrics"], cuda=True)

    # config loggers. Before it, the log will not work
    util.setup_logger(
        "base",
        opt["path"]["log"],
        "train_" + opt["name"] + "_rank{}".format(rank),
        level=logging.INFO if rank == 0 else logging.ERROR,
        screen=True,
        tofile=True,
    )

    logger = logging.getLogger("base")
    if rank == 0:
        logger.info(option.dict2str(opt))

    # create dataset
    (
        train_set,
        train_loader,
        val_set,
        val_loader,
        total_iters,
        total_epochs,
    ) = setup_dataloaer(opt, logger)

    # create model
    model = create_model(opt)

    # loading resume state if exists
    if opt["train"].get("resume_state", None):
        # distributed resuming: all load into default GPU
        device_id = gpu
        resume_state = torch.load(
            opt["train"]["resume_state"],
            map_location=lambda storage, loc: storage.cuda(device_id),
        )

        logger.info(
            "Resuming training from epoch: {}, iter: {}.".format(
                resume_state["epoch"], resume_state["iter"]
            )
        )

        start_epoch = resume_state["epoch"]
        current_step = resume_state["iter"]
        model.resume_training(resume_state)  # handle optimizers and schedulers

    else:
        current_step = 0
        start_epoch = 0

    logger.info(
        "Start training from epoch: {:d}, iter: {:d}".format(start_epoch, current_step)
    )
    data_time, iter_time = time.time(), time.time()
    avg_data_time = avg_iter_time = 0
    count = 0
    for epoch in range(start_epoch, total_epochs + 1):
        for _, train_data in enumerate(train_loader):

            current_step += 1
            count += 1
            if current_step > total_iters:
                break

            data_time = time.time() - data_time
            avg_data_time = (avg_data_time * (count - 1) + data_time) / count

            model.feed_data(train_data)
            model.optimize_parameters(current_step)
            model.update_learning_rate(
                current_step, warmup_iter=opt["train"]["warmup_iter"]
            )

            iter_time = time.time() - iter_time
            avg_iter_time = (avg_iter_time * (count - 1) + iter_time) / count

            # log
            if current_step % opt["logger"]["print_freq"] == 0:
                logs = model.get_current_log()
                message = (
                    f"<epoch:{epoch:3d}, iter:{current_step:8,d}, "
                    f"lr:{model.get_current_learning_rate():.3e}> "
                )

                message += f'[time (data): {avg_iter_time:.3f} ({avg_data_time:.3f})] '
                for k, v in logs.items():
                    message += "{:s}: {:.4e}; ".format(k, v)
                    # tensorboard logger
                    if opt["use_tb_logger"] and "debug" not in opt["name"]:
                        if rank == 0:
                            tb_logger.add_scalar(k, v, current_step)
                logger.info(message)

            # validation
            if current_step % opt["train"]["val_freq"] == 0:

                avg_results = validate(
                    model, val_set, val_loader, opt, measure, epoch, current_step
                )

                # tensorboard logger
                if rank == 0:
                    if opt["use_tb_logger"] and "debug" not in opt["name"]:
                        for k, v in avg_results.items():
                            tb_logger.add_scalar(k, v, current_step)

            # save models and training states
            if current_step % opt["logger"]["save_checkpoint_freq"] == 0:
                if rank == 0:
                    logger.info("Saving models and training states.")
                    model.save(current_step)
                    model.save_training_state(epoch, current_step)
            
            data_time = time.time()
            iter_time = time.time()

    if rank == 0:
        logger.info("Saving the final model.")
        model.save("latest")
        logger.info("End of training.")
        if opt["use_tb_logger"] and "debug" not in opt["name"]:
            tb_logger.close()


def validate(model, dataset, dist_loader, opt, measure, epoch, current_step):

    test_results = {}
    for metric in opt["metrics"]:
        test_results[metric] = torch.zeros((len(dataset))).cuda()

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        world_size = 1
        rank = 0

    if rank == 0:
        pbar = tqdm(total=len(dataset), leave=False, dynamic_ncols=True)

    indices = list(range(rank, len(dataset), world_size))
    for (
        idx,
        val_data,
    ) in enumerate(dist_loader):
        idx = indices[idx]

        LR_img = val_data["src"]
        lr_img = util.tensor2img(LR_img)  # save LR image for reference

        model.test(val_data)
        visuals = model.get_current_visuals()

        # Save images for reference
        img_name = val_data["src_path"][0].split("/")[-1].split(".")[0]
        img_dir = os.path.join(opt["path"]["val_images"], img_name)

        util.mkdir(img_dir)
        save_lr_path = os.path.join(img_dir, "{:s}_LR.png".format(img_name))
        util.save_img(lr_img, save_lr_path)

        sr_img = util.tensor2img(visuals["sr"])  # uint8
        save_img_path = os.path.join(
            img_dir, "{:s}_{:d}.png".format(img_name, current_step)
        )
        util.save_img(sr_img, save_img_path)

        if "fake_lr" in visuals.keys():
            fake_lr_img = util.tensor2img(visuals["fake_lr"])
            save_img_path = os.path.join(
                img_dir, f"fake_lr_{current_step:d}.png"
            )
            util.save_img(fake_lr_img, save_img_path)

        # calculate scores
        crop_size = opt["scale"]
        cropped_sr_img = sr_img[crop_size:-crop_size, crop_size:-crop_size, :]
        if "tgt" in val_data.keys():
            gt_img = util.tensor2img(val_data["tgt"])
            cropped_gt_img = gt_img[crop_size:-crop_size, crop_size:-crop_size, :]
        else:
            cropped_gt_img = gt_img = None

        scores = measure(res=cropped_sr_img, ref=cropped_gt_img, metrics=opt["metrics"])
        for k, v in scores.items():
            test_results[k][idx] = v

        if rank == 0:
            for _ in range(world_size):
                pbar.update(1)
    if rank == 0:
        pbar.close()

    # log
    avg_results = {}
    message = "<epoch:{:3d}, iter:{:8,d}> Average scores:\t".format(
        epoch, current_step
    )

    if opt["dist"]:
        for k, v in test_results.items():
            dist.reduce(v, dst=0)
        dist.barrier()

    if rank == 0:
        for k, v in test_results.items():
            avg_results[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results[k])

        logger_val = logging.getLogger("val")  # validation logger
        logger_val.info(message)
    
    del test_results
    torch.cuda.empty_cache()
    return avg_results


if __name__ == "__main__":
    main()


================================================
FILE: codes/config/Bicubic/README.md
================================================
We use the same bicubic interpolation as MATLAB's `imresize`.
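For reference, MATLAB's bicubic mode is built on the Keys cubic convolution kernel with a = -0.5. A minimal sketch of the kernel weight for illustration only; the actual resize used by this config lives in `utils/resize_utils.py`:

```python
# Keys cubic convolution kernel with a = -0.5, the variant MATLAB's
# imresize uses for 'bicubic'. Illustration only, not the project's
# imresize implementation.

def cubic(x, a=-0.5):
    """Weight of the bicubic kernel at offset x from the sample point."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    elif x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0
```

The kernel is 1 at the sample point, 0 at every other integer offset, and has small negative lobes between 1 and 2, which is what gives bicubic its slight sharpening compared to bilinear.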

================================================
FILE: codes/config/Bicubic/archs/__init__.py
================================================
import importlib
import os
import os.path as osp

from utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_REGISTRY

arch_folder = osp.dirname(osp.abspath(__file__))
arch_filenames = [
    osp.splitext(osp.basename(v))[0]
    for v in os.listdir(arch_folder)
    if v.endswith(".py")
]
# import all the arch modules
_arch_modules = [
    importlib.import_module(f"archs.{file_name}") for file_name in arch_filenames
]


def build_network(net_opt):
    which_network = net_opt["which_network"]
    net = ARCH_REGISTRY.get(which_network)(**net_opt["setting"])
    return net


def build_loss(loss_opt):
    loss_type = loss_opt.pop("type")
    loss = LOSS_REGISTRY.get(loss_type)(**loss_opt)
    return loss

def build_scheduler(optimizer, scheduler_opt):
    scheduler_type = scheduler_opt.pop("type")
    scheduler = LR_SCHEDULER_REGISTRY.get(scheduler_type)(optimizer, **scheduler_opt)
    return scheduler


================================================
FILE: codes/config/Bicubic/archs/bicubic.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from utils.registry import ARCH_REGISTRY
from utils.resize_utils import imresize


@ARCH_REGISTRY.register()
class BicuBic(nn.Module):
    def __init__(self, upscale=4):
        super().__init__()

        self.empty = nn.Parameter(torch.FloatTensor([0.0]))
        self.upscale = upscale

    def forward(self, x):
        y = imresize(x, self.upscale)
        return y


================================================
FILE: codes/config/Bicubic/archs/discriminator.py
================================================
import torch
import torch.nn as nn
import torchvision
import functools

from utils.registry import ARCH_REGISTRY


@ARCH_REGISTRY.register()
class DiscriminatorVGG128(nn.Module):
    def __init__(self, in_nc, nf):
        super().__init__()
        # [64, 128, 128]
        self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
        self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
        # [64, 64, 64]
        self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
        self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
        self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
        self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
        # [128, 32, 32]
        self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
        self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
        self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
        self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
        # [256, 16, 16]
        self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
        self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
        # [512, 8, 8]
        self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
        self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)

        self.linear1 = nn.Linear(512 * 4 * 4, 100)
        self.linear2 = nn.Linear(100, 1)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.lrelu(self.conv0_0(x))
        fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))

        fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
        fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))

        fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
        fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))

        fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
        fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))

        fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
        fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))

        fea = fea.view(fea.size(0), -1)
        fea = self.lrelu(self.linear1(fea))
        out = self.linear2(fea)
        return out


@ARCH_REGISTRY.register()
class DiscriminatorVGG32(nn.Module):
    def __init__(self, in_nc, nf):
        super().__init__()
        # [64, 32, 32]
        self.conv0_0 = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.conv0_1 = nn.Conv2d(nf, nf, 4, 2, 1, bias=False)
        self.bn0_1 = nn.BatchNorm2d(nf, affine=True)
        # [64, 16, 16]
        self.conv1_0 = nn.Conv2d(nf, nf * 2, 3, 1, 1, bias=False)
        self.bn1_0 = nn.BatchNorm2d(nf * 2, affine=True)
        self.conv1_1 = nn.Conv2d(nf * 2, nf * 2, 4, 2, 1, bias=False)
        self.bn1_1 = nn.BatchNorm2d(nf * 2, affine=True)
        # [128, 8, 8]
        self.conv2_0 = nn.Conv2d(nf * 2, nf * 4, 3, 1, 1, bias=False)
        self.bn2_0 = nn.BatchNorm2d(nf * 4, affine=True)
        self.conv2_1 = nn.Conv2d(nf * 4, nf * 4, 4, 2, 1, bias=False)
        self.bn2_1 = nn.BatchNorm2d(nf * 4, affine=True)
        # [256, 4, 4]
        self.conv3_0 = nn.Conv2d(nf * 4, nf * 8, 3, 1, 1, bias=False)
        self.bn3_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv3_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn3_1 = nn.BatchNorm2d(nf * 8, affine=True)
        # [512, 2, 2]
        self.conv4_0 = nn.Conv2d(nf * 8, nf * 8, 3, 1, 1, bias=False)
        self.bn4_0 = nn.BatchNorm2d(nf * 8, affine=True)
        self.conv4_1 = nn.Conv2d(nf * 8, nf * 8, 4, 2, 1, bias=False)
        self.bn4_1 = nn.BatchNorm2d(nf * 8, affine=True)

        self.linear1 = nn.Linear(512, 100)
        self.linear2 = nn.Linear(100, 1)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.lrelu(self.conv0_0(x))
        fea = self.lrelu(self.bn0_1(self.conv0_1(fea)))

        fea = self.lrelu(self.bn1_0(self.conv1_0(fea)))
        fea = self.lrelu(self.bn1_1(self.conv1_1(fea)))

        fea = self.lrelu(self.bn2_0(self.conv2_0(fea)))
        fea = self.lrelu(self.bn2_1(self.conv2_1(fea)))

        fea = self.lrelu(self.bn3_0(self.conv3_0(fea)))
        fea = self.lrelu(self.bn3_1(self.conv3_1(fea)))

        fea = self.lrelu(self.bn4_0(self.conv4_0(fea)))
        fea = self.lrelu(self.bn4_1(self.conv4_1(fea)))

        fea = fea.view(fea.size(0), -1)
        fea = self.lrelu(self.linear1(fea))
        out = self.linear2(fea)
        return out


@ARCH_REGISTRY.register()
class PatchGANDiscriminator(nn.Module):
    """Defines a PatchGAN discriminator"""

    def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
        """Construct a PatchGAN discriminator

        Parameters:
            in_c (int)    -- the number of channels in input images
            nf (int)      -- the number of filters in the first conv layer
            nb (int)      -- the number of conv layers in the discriminator
            stride (int)  -- stride of the intermediate conv layers
            norm_layer    -- normalization layer
        """
        super().__init__()
        if (
            type(norm_layer) == functools.partial
        ):  # no need to use bias as BatchNorm2d has affine parameters
            use_bias = norm_layer.func == nn.InstanceNorm2d
        else:
            use_bias = norm_layer == nn.InstanceNorm2d

        kw = 3
        padw = 1
        sequence = [
            nn.Conv2d(in_c, nf, kernel_size=kw, stride=1, padding=padw),
            nn.LeakyReLU(0.2, True),
        ]
        nf_mult = 1
        nf_mult_prev = 1
        for n in range(1, nb):  # gradually increase the number of filters
            nf_mult_prev = nf_mult
            nf_mult = min(2 ** n, 8)
            sequence += [
                nn.Conv2d(
                    nf * nf_mult_prev,
                    nf * nf_mult,
                    kernel_size=kw,
                    stride=stride,
                    padding=padw,
                    bias=use_bias,
                ),
                norm_layer(nf * nf_mult),
                nn.LeakyReLU(0.2, True),
            ]

        nf_mult_prev = nf_mult
        nf_mult = min(2 ** nb, 8)
        sequence += [
            nn.Conv2d(
                nf * nf_mult_prev,
                nf * nf_mult,
                kernel_size=kw,
                stride=1,
                padding=padw,
                bias=use_bias,
            ),
            norm_layer(nf * nf_mult),
            nn.LeakyReLU(0.2, True),
        ]

        sequence += [
            nn.Conv2d(nf * nf_mult, nf, kernel_size=kw, stride=1, padding=padw)
        ]  # output 1 channel prediction map
        self.model = nn.Sequential(*sequence)

    def forward(self, input):
        """Standard forward."""
        return self.model(input)
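Each scalar in the PatchGAN output map scores one patch of the input, and the patch size is the receptive field of the conv stack. A quick pure-Python calculation for the stack built above (kw=3 throughout: one stride-1 stem, nb-1 strided layers, then two stride-1 convs; nb=3 and stride=2 below are illustrative values, not repo defaults):

```python
# Receptive-field size of a stack of conv layers given (kernel, stride)
# pairs. Sketch for the PatchGAN above; nb=3 and stride=2 are assumed
# example values.

def receptive_field(layers):
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s             # strides compound the step in input pixels
    return rf

nb, stride, kw = 3, 2, 3
layers = [(kw, 1)] + [(kw, stride)] * (nb - 1) + [(kw, 1), (kw, 1)]
patch = receptive_field(layers)  # 25 for this configuration
```

So with nb=3 and stride=2 every output value judges a 25x25 patch, which is the sense in which this discriminator is "patch-based" rather than image-based.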


================================================
FILE: codes/config/Bicubic/archs/edsr.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from utils.registry import ARCH_REGISTRY


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias
    )


class MeanShift(nn.Conv2d):
    def __init__(
        self,
        rgb_range,
        rgb_mean=(0.4488, 0.4371, 0.4040),
        rgb_std=(1.0, 1.0, 1.0),
        sign=-1,
    ):
        super(MeanShift, self).__init__(3, 3, kernel_size=1)
        std = torch.Tensor(rgb_std)
        self.weight.data = torch.eye(3).view(3, 3, 1, 1)
        self.weight.data.div_(std.view(3, 1, 1, 1))
        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
        self.bias.data.div_(std)
        for p in self.parameters():
            p.requires_grad = False


class BasicBlock(nn.Sequential):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        bias=False,
        bn=True,
        act=nn.ReLU(True),
    ):

        m = [
            nn.Conv2d(
                in_channels,
                out_channels,
                kernel_size,
                padding=(kernel_size // 2),
                stride=stride,
                bias=bias,
            )
        ]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)
        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:  # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feat, 4 * n_feat, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feat))
                if act:
                    m.append(act())
        elif scale == 3:
            m.append(conv(n_feat, 9 * n_feat, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if act:
                m.append(act())
        elif scale == 1:
            m.append(nn.Identity())
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)




@ARCH_REGISTRY.register()
class EDSR(nn.Module):
    def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
        super(EDSR, self).__init__()

        n_resblocks = nb
        n_feats = nf
        kernel_size = 3
        scale = upscale
        act = nn.ReLU(True)
        # url_name = 'r{}f{}x{}'.format(nb, nf, upscale)
        # if url_name in url:
        #     self.url = url[url_name]
        # else:
        #     self.url = None
        self.sub_mean = MeanShift(255.0, sign=-1)
        self.add_mean = MeanShift(255.0, sign=1)

        # define head module
        m_head = [conv(3, n_feats, kernel_size)]

        # define body module
        m_body = [
            ResBlock(conv, n_feats, kernel_size, act=act, res_scale=res_scale)
            for _ in range(n_resblocks)
        ]
        m_body.append(conv(n_feats, n_feats, kernel_size))

        # define tail module
        m_tail = [
            Upsampler(conv, scale, n_feats, act=False),
            conv(n_feats, 3, kernel_size),
        ]

        self.head = nn.Sequential(*m_head)
        self.body = nn.Sequential(*m_body)
        self.tail = nn.Sequential(*m_tail)

    def forward(self, x):
        x = self.sub_mean(x * 255.0)
        x = self.head(x)

        res = self.body(x)
        res += x

        x = self.tail(res)
        x = self.add_mean(x) / 255.0

        return x


================================================
FILE: codes/config/Bicubic/archs/loss.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import lpips as lp

from utils.registry import LOSS_REGISTRY

from .vgg import VGGFeatureExtractor


@LOSS_REGISTRY.register()
class GaussGuided(nn.Module):
    def __init__(self, ksize, sigma):
        super().__init__()

        ax = torch.arange(0, ksize) - ksize//2
        xx, yy = torch.meshgrid(ax, ax)
        dis = (xx ** 2 + yy ** 2)
        dis = torch.exp(-dis / sigma ** 2)
        dis = dis / dis.sum()

        self.register_buffer("gauss", dis.view(1, ksize**2, 1, 1))
    
    def forward(self, kernel):

        return F.mse_loss(self.gauss, kernel)
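
For reference, the target buffer built in `__init__` is a normalized Gaussian with the slightly unusual `exp(-d / sigma**2)` exponent (no factor of 2). A minimal pure-Python sketch of the same construction (`gauss_kernel` is an illustrative helper, not part of the repo):

```python
import math

def gauss_kernel(ksize, sigma):
    """Build the same normalized Gaussian used as the GaussGuided target."""
    ax = [i - ksize // 2 for i in range(ksize)]
    k = [[math.exp(-(x * x + y * y) / sigma ** 2) for x in ax] for y in ax]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

k = gauss_kernel(5, 2.0)
# after normalization the entries sum to 1 ...
assert abs(sum(sum(row) for row in k) - 1.0) < 1e-9
# ... and the kernel peaks at its center
assert k[2][2] == max(max(row) for row in k)
```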

@LOSS_REGISTRY.register()
class PerceptualLossLPIPS(nn.Module):
    def __init__(self, net="alex", normalize=True):
        super().__init__()
        self.fn = lp.LPIPS(net=net, spatial=True)
        for p in self.fn.parameters():
            p.requires_grad = False
        
        self.normalize = normalize
    
    def forward(self, res, ref):
        return self.fn(res, ref, normalize=self.normalize).mean(), None


@LOSS_REGISTRY.register()
class MSELoss(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()

    def forward(self, res, ref):
        return F.mse_loss(res, ref)


@LOSS_REGISTRY.register()
class L1Loss(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()

    def forward(self, res, ref):
        return F.l1_loss(res, ref)


@LOSS_REGISTRY.register()
class GANLoss(nn.Module):
    """Define GAN loss.
    Args:
        gan_type (str): Support 'vanilla', 'lsgan', 'wgan', 'wgan_softplus'
            and 'hinge'.
        real_label_val (float): The value for real label. Default: 1.0.
        fake_label_val (float): The value for fake label. Default: 0.0.
    """

    def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
        super(GANLoss, self).__init__()
        self.gan_type = gan_type
        self.real_label_val = real_label_val
        self.fake_label_val = fake_label_val

        if self.gan_type == "vanilla":
            self.loss = nn.BCEWithLogitsLoss()
        elif self.gan_type == "lsgan":
            self.loss = nn.MSELoss()
        elif self.gan_type == "wgan":
            self.loss = self._wgan_loss
        elif self.gan_type == "wgan_softplus":
            self.loss = self._wgan_softplus_loss
        elif self.gan_type == "hinge":
            self.loss = nn.ReLU()
        else:
            raise NotImplementedError(f"GAN type {self.gan_type} is not implemented.")

    def _wgan_loss(self, input, target):
        """wgan loss.
        Args:
            input (Tensor): Input tensor.
            target (bool): Target label.
        Returns:
            Tensor: wgan loss.
        """
        return -input.mean() if target else input.mean()

    def _wgan_softplus_loss(self, input, target):
        """wgan loss with soft plus. softplus is a smooth approximation to the
        ReLU function.
        In StyleGAN2, it is called:
            Logistic loss for discriminator;
            Non-saturating loss for generator.
        Args:
            input (Tensor): Input tensor.
            target (bool): Target label.
        Returns:
            Tensor: wgan loss.
        """
        return F.softplus(-input).mean() if target else F.softplus(input).mean()

    def get_target_label(self, input, target_is_real):
        """Get target label.
        Args:
            input (Tensor): Input tensor.
            target_is_real (bool): Whether the target is real or fake.
        Returns:
            (bool | Tensor): Target tensor. Return bool for wgan, otherwise,
                return Tensor.
        """

        if self.gan_type in ["wgan", "wgan_softplus"]:
            return target_is_real
        target_val = self.real_label_val if target_is_real else self.fake_label_val
        return input.new_ones(input.size()) * target_val

    def forward(self, input, target_is_real, is_disc=False):
        """
        Args:
            input (Tensor): The input for the loss module, i.e., the network
                prediction.
            target_is_real (bool): Whether the target is real or fake.
            is_disc (bool): Whether the loss for discriminators or not.
                Default: False.
        Returns:
            Tensor: GAN loss value.
        """
        target_label = self.get_target_label(input, target_is_real)
        if self.gan_type == "hinge":
            if is_disc:  # for discriminators in hinge-gan
                input = -input if target_is_real else input
                loss = self.loss(1 + input).mean()
            else:  # for generators in hinge-gan
                loss = -input.mean()
        else:  # other gan types
            loss = self.loss(input, target_label)

        return loss
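
The hinge branch above ignores `target_label` and applies the standard hinge margins. A scalar sketch of that arithmetic (illustrative helpers; real inputs are tensors of discriminator logits):

```python
def hinge_d(real_logit, fake_logit):
    """Discriminator side: relu(1 - D(real)) + relu(1 + D(fake))."""
    return max(0.0, 1.0 - real_logit) + max(0.0, 1.0 + fake_logit)

def hinge_g(fake_logit):
    """Generator side: -D(fake)."""
    return -fake_logit

# a confident discriminator (real >= 1, fake <= -1) pays no penalty
assert hinge_d(1.5, -2.0) == 0.0
# at the decision boundary both margins are violated by 1
assert hinge_d(0.0, 0.0) == 2.0
```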


@LOSS_REGISTRY.register()
class PerceptualLoss(nn.Module):
    """Perceptual loss with commonly used style loss.
    Args:
        layer_weights (dict): The weight for each layer of vgg feature.
            Here is an example: {'conv5_4': 1.}, which means the conv5_4
            feature layer (before relu5_4) will be extracted with weight
            1.0 in calculating losses.
        vgg_type (str): The type of vgg network used as feature extractor.
            Default: 'vgg19'.
        use_input_norm (bool):  If True, normalize the input image in vgg.
            Default: True.
        range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].
            Default: False.
        perceptual_weight (float): If `perceptual_weight > 0`, the perceptual
            loss will be calculated and multiplied by this weight.
            Default: 1.0.
        style_weight (float): If `style_weight > 0`, the style loss will be
            calculated and multiplied by this weight.
            Default: 0.
        criterion (str): Criterion used for perceptual loss. Default: 'l1'.
    """

    def __init__(
        self,
        layer_weights,
        vgg_type="vgg19",
        use_input_norm=True,
        range_norm=False,
        perceptual_weight=1.0,
        style_weight=0.0,
        criterion="l1",
    ):
        super(PerceptualLoss, self).__init__()
        self.perceptual_weight = perceptual_weight
        self.style_weight = style_weight
        self.layer_weights = layer_weights
        self.vgg = VGGFeatureExtractor(
            layer_name_list=list(layer_weights.keys()),
            vgg_type=vgg_type,
            use_input_norm=use_input_norm,
            range_norm=range_norm,
        )

        self.criterion_type = criterion
        if self.criterion_type == "l1":
            self.criterion = torch.nn.L1Loss()
        elif self.criterion_type == "l2":
            self.criterion = torch.nn.MSELoss()  # "l2" = mean squared error; torch has no L2loss class
        elif self.criterion_type == "fro":
            self.criterion = None
        else:
            raise NotImplementedError(f"{criterion} criterion has not been supported.")

    def forward(self, x, gt):
        """Forward function.
        Args:
            x (Tensor): Input tensor with shape (n, c, h, w).
            gt (Tensor): Ground-truth tensor with shape (n, c, h, w).
        Returns:
            Tensor: Forward results.
        """
        # extract vgg features
        x_features = self.vgg(x)
        gt_features = self.vgg(gt.detach())

        # calculate perceptual loss
        if self.perceptual_weight > 0:
            percep_loss = 0
            for k in x_features.keys():
                if self.criterion_type == "fro":
                    percep_loss += (
                        torch.norm(x_features[k] - gt_features[k], p="fro")
                        * self.layer_weights[k]
                    )
                else:
                    percep_loss += (
                        self.criterion(x_features[k], gt_features[k])
                        * self.layer_weights[k]
                    )
            percep_loss *= self.perceptual_weight
        else:
            percep_loss = None

        # calculate style loss
        if self.style_weight > 0:
            style_loss = 0
            for k in x_features.keys():
                if self.criterion_type == "fro":
                    style_loss += (
                        torch.norm(
                            self._gram_mat(x_features[k])
                            - self._gram_mat(gt_features[k]),
                            p="fro",
                        )
                        * self.layer_weights[k]
                    )
                else:
                    style_loss += (
                        self.criterion(
                            self._gram_mat(x_features[k]),
                            self._gram_mat(gt_features[k]),
                        )
                        * self.layer_weights[k]
                    )
            style_loss *= self.style_weight
        else:
            style_loss = None

        return percep_loss, style_loss

    def _gram_mat(self, x):
        """Calculate Gram matrix.
        Args:
            x (torch.Tensor): Tensor with shape of (n, c, h, w).
        Returns:
            torch.Tensor: Gram matrix.
        """
        n, c, h, w = x.size()
        features = x.view(n, c, w * h)
        features_t = features.transpose(1, 2)
        gram = features.bmm(features_t) / (c * h * w)
        return gram
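
The style term compares Gram matrices of the VGG features. A tiny pure-Python version of `_gram_mat` for a single flattened `(c, h*w)` feature map (the batch dimension is omitted; `gram` is an illustrative helper):

```python
def gram(features, c, hw):
    """Gram matrix of a (c, h*w) feature map, normalized by c*h*w as above."""
    return [
        [sum(features[i][k] * features[j][k] for k in range(hw)) / (c * hw)
         for j in range(c)]
        for i in range(c)
    ]

f = [[1.0, 0.0], [0.0, 1.0]]  # two orthogonal single-activation "channels"
g = gram(f, 2, 2)
# orthogonal channels give zero cross-correlation; self terms are 1/(c*hw)
assert g[0][1] == 0.0 and abs(g[0][0] - 0.25) < 1e-12
```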


@LOSS_REGISTRY.register()
class CharbonnierLoss(nn.Module):
    """Charbonnier Loss (L1)"""

    def __init__(self, eps=1e-6):
        super(CharbonnierLoss, self).__init__()
        self.eps = eps

    def forward(self, x, y):
        diff = x - y
        loss = torch.mean(torch.sqrt(diff * diff + self.eps))
        return loss
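
For intuition, the Charbonnier term behaves like L1 for large residuals while staying smooth at zero. A pure-Python sketch of the same formula (`charbonnier` is an illustrative helper):

```python
import math

def charbonnier(diffs, eps=1e-6):
    """Mean of sqrt(d^2 + eps), matching the forward pass above."""
    return sum(math.sqrt(d * d + eps) for d in diffs) / len(diffs)

# for large residuals the value is essentially mean |d| ...
assert abs(charbonnier([3.0, -4.0]) - 3.5) < 1e-3
# ... while at zero it bottoms out at sqrt(eps) instead of a non-smooth kink
assert charbonnier([0.0]) == math.sqrt(1e-6)
```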


class GradientPenaltyLoss(nn.Module):
    def __init__(self, device=torch.device("cpu")):
        super(GradientPenaltyLoss, self).__init__()
        self.register_buffer("grad_outputs", torch.Tensor())
        self.grad_outputs = self.grad_outputs.to(device)

    def get_grad_outputs(self, input):
        if self.grad_outputs.size() != input.size():
            self.grad_outputs.resize_(input.size()).fill_(1.0)
        return self.grad_outputs

    def forward(self, interp, interp_crit):
        grad_outputs = self.get_grad_outputs(interp_crit)
        grad_interp = torch.autograd.grad(
            outputs=interp_crit,
            inputs=interp,
            grad_outputs=grad_outputs,
            create_graph=True,
            retain_graph=True,
            only_inputs=True,
        )[0]
        grad_interp = grad_interp.view(grad_interp.size(0), -1)
        grad_interp_norm = grad_interp.norm(2, dim=1)

        loss = ((grad_interp_norm - 1) ** 2).mean()
        return loss
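
Once the per-sample gradient norms are in hand, the WGAN-GP penalty is just their mean squared distance from 1 (pushing the critic toward 1-Lipschitz behavior). A scalar sketch (`gradient_penalty` is an illustrative helper):

```python
def gradient_penalty(grad_norms):
    """Mean of (||grad|| - 1)^2 over samples, as in the forward pass above."""
    return sum((g - 1.0) ** 2 for g in grad_norms) / len(grad_norms)

assert gradient_penalty([1.0, 1.0]) == 0.0  # already 1-Lipschitz: no penalty
assert gradient_penalty([0.0, 2.0]) == 1.0  # both norms off by 1
```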


================================================
FILE: codes/config/Bicubic/archs/lr_scheduler.py
================================================
import math
from collections import Counter, defaultdict

import torch
from torch.optim.lr_scheduler import _LRScheduler

from utils.registry import LR_SCHEDULER_REGISTRY


@LR_SCHEDULER_REGISTRY.register()
class LinearDecayLR(_LRScheduler):
    def __init__(
        self,
        optimizer,
        decay_prop,
        total_steps,
        last_epoch=-1,
    ):
        self.decay_prop = decay_prop
        self.total_steps = total_steps

        super().__init__(optimizer, last_epoch)

    def get_lr(self):

        return [
            group["initial_lr"]
            * (1 - (self.last_epoch + 1) * self.decay_prop / self.total_steps)
            for group in self.optimizer.param_groups
        ]
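
The schedule above is a straight line in the step count: with `decay_prop=1` the LR reaches exactly zero at `total_steps`. A pure-Python sketch of the `get_lr` formula (`linear_decay_lr` is an illustrative helper):

```python
def linear_decay_lr(initial_lr, step, decay_prop, total_steps):
    """LR after `step` scheduler steps, per LinearDecayLR.get_lr above."""
    return initial_lr * (1 - (step + 1) * decay_prop / total_steps)

# halfway through a 1000-step run, the LR is down to half
assert abs(linear_decay_lr(2e-4, 499, 1.0, 1000) - 1e-4) < 1e-18
# at the final step it hits zero exactly
assert linear_decay_lr(2e-4, 999, 1.0, 1000) == 0.0
```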


@LR_SCHEDULER_REGISTRY.register()
class MultiStepRestartLR(_LRScheduler):
    def __init__(
        self,
        optimizer,
        milestones,
        restarts=None,
        weights=None,
        gamma=0.1,
        clear_state=False,
        last_epoch=-1,
    ):
        self.milestones = Counter(milestones)
        self.gamma = gamma
        self.clear_state = clear_state
        self.restarts = restarts if restarts else [0]
        self.restart_weights = weights if weights else [1]
        assert len(self.restarts) == len(
            self.restart_weights
        ), "restarts and their weights do not match."
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch in self.restarts:
            if self.clear_state:
                self.optimizer.state = defaultdict(dict)
            weight = self.restart_weights[self.restarts.index(self.last_epoch)]
            return [
                group["initial_lr"] * weight for group in self.optimizer.param_groups
            ]
        if self.last_epoch not in self.milestones:
            return [group["lr"] for group in self.optimizer.param_groups]
        return [
            group["lr"] * self.gamma ** self.milestones[self.last_epoch]
            for group in self.optimizer.param_groups
        ]


@LR_SCHEDULER_REGISTRY.register()
class CosineAnnealingRestartLR(_LRScheduler):
    def __init__(
        self, optimizer, T_period, restarts=None, weights=None, eta_min=0, last_epoch=-1
    ):
        self.T_period = T_period
        self.T_max = self.T_period[0]  # current T period
        self.eta_min = eta_min
        self.restarts = restarts if restarts else [0]
        self.restart_weights = weights if weights else [1]
        self.last_restart = 0
        assert len(self.restarts) == len(
            self.restart_weights
        ), "restarts and their weights do not match."
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch == 0:
            return self.base_lrs
        elif self.last_epoch in self.restarts:
            self.last_restart = self.last_epoch
            self.T_max = self.T_period[self.restarts.index(self.last_epoch) + 1]
            weight = self.restart_weights[self.restarts.index(self.last_epoch)]
            return [
                group["initial_lr"] * weight for group in self.optimizer.param_groups
            ]
        elif (self.last_epoch - self.last_restart - 1 - self.T_max) % (
            2 * self.T_max
        ) == 0:
            return [
                group["lr"]
                + (base_lr - self.eta_min) * (1 - math.cos(math.pi / self.T_max)) / 2
                for base_lr, group in zip(self.base_lrs, self.optimizer.param_groups)
            ]
        return [
            (1 + math.cos(math.pi * (self.last_epoch - self.last_restart) / self.T_max))
            / (
                1
                + math.cos(
                    math.pi * ((self.last_epoch - self.last_restart) - 1) / self.T_max
                )
            )
            * (group["lr"] - self.eta_min)
            + self.eta_min
            for group in self.optimizer.param_groups
        ]
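
The recurrence in `get_lr` above tracks the standard closed-form cosine annealing curve within each restart period. A pure-Python sketch of that closed form (`cosine_lr` is an illustrative helper, with `t` counted from the last restart):

```python
import math

def cosine_lr(base_lr, eta_min, t, T_max):
    """eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T_max)) / 2."""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2

assert cosine_lr(1e-4, 0.0, 0, 100) == 1e-4            # period start: base LR
assert abs(cosine_lr(1e-4, 0.0, 100, 100)) < 1e-18     # period end: eta_min
assert abs(cosine_lr(1e-4, 0.0, 50, 100) - 5e-5) < 1e-12  # midpoint: halfway
```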


================================================
FILE: codes/config/Bicubic/archs/module_util.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init


def initialize_weights(net_l, scale=1):
    if not isinstance(net_l, list):
        net_l = [net_l]
    for net in net_l:
        for m in net.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, a=0, mode="fan_in")
                m.weight.data *= scale  # for residual block
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                init.kaiming_normal_(m.weight, a=0, mode="fan_in")
                m.weight.data *= scale
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias.data, 0.0)


def make_layer(block, n_layers):
    layers = []
    for _ in range(n_layers):
        layers.append(block())
    return nn.Sequential(*layers)


class ResidualBlock_noBN(nn.Module):
    """Residual block w/o BN
    ---Conv-ReLU-Conv-+-
     |________________|
    """

    def __init__(self, nf=64):
        super(ResidualBlock_noBN, self).__init__()
        self.conv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)

        # initialization
        initialize_weights([self.conv1, self.conv2], 0.1)

    def forward(self, x):
        identity = x
        out = F.relu(self.conv1(x), inplace=True)
        out = self.conv2(out)
        return identity + out


def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):
    """Warp an image or feature map with optical flow
    Args:
        x (Tensor): size (N, C, H, W)
        flow (Tensor): size (N, H, W, 2), normal value
        interp_mode (str): 'nearest' or 'bilinear'
        padding_mode (str): 'zeros' or 'border' or 'reflection'

    Returns:
        Tensor: warped image or feature map
    """
    assert x.size()[-2:] == flow.size()[1:3]
    B, C, H, W = x.size()
    # mesh grid
    grid_y, grid_x = torch.meshgrid(torch.arange(0, H), torch.arange(0, W))
    grid = torch.stack((grid_x, grid_y), 2).float()  # W(x), H(y), 2
    grid.requires_grad = False
    grid = grid.type_as(x)
    vgrid = grid + flow
    # scale grid to [-1,1]
    vgrid_x = 2.0 * vgrid[:, :, :, 0] / max(W - 1, 1) - 1.0
    vgrid_y = 2.0 * vgrid[:, :, :, 1] / max(H - 1, 1) - 1.0
    vgrid_scaled = torch.stack((vgrid_x, vgrid_y), dim=3)
    output = F.grid_sample(x, vgrid_scaled, mode=interp_mode, padding_mode=padding_mode)
    return output
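
`grid_sample` expects coordinates in [-1, 1], which is what the `vgrid_x` / `vgrid_y` lines compute from pixel coordinates. A pure-Python sketch of that normalization (`normalize_coord` is an illustrative helper):

```python
def normalize_coord(v, size):
    """Map a pixel coordinate in [0, size-1] to grid_sample's [-1, 1] range."""
    return 2.0 * v / max(size - 1, 1) - 1.0

assert normalize_coord(0, 256) == -1.0      # left/top edge
assert normalize_coord(255, 256) == 1.0     # right/bottom edge
assert normalize_coord(127.5, 256) == 0.0   # exact center
```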


================================================
FILE: codes/config/Bicubic/archs/rcan.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

from utils.registry import ARCH_REGISTRY


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size, padding=(kernel_size // 2), bias=bias
    )


class MeanShift(nn.Conv2d):
    def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
        super(MeanShift, self).__init__(3, 3, kernel_size=1)
        std = torch.Tensor(rgb_std)
        self.weight.data = torch.eye(3).view(3, 3, 1, 1)
        self.weight.data.div_(std.view(3, 1, 1, 1))
        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean)
        self.bias.data.div_(std)
        # freeze the fixed shift weights; assigning self.requires_grad on a
        # Module is a no-op for its parameters
        for p in self.parameters():
            p.requires_grad = False


class BasicBlock(nn.Sequential):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride=1,
        bias=False,
        bn=True,
        act=nn.ReLU(True),
    ):

        m = [
            nn.Conv2d(
                in_channels,
                out_channels,
                kernel_size,
                padding=(kernel_size // 2),
                stride=stride,
                bias=bias,
            )
        ]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)
        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:  # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feat, 4 * n_feat, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feat))
                if act:
                    m.append(act())
        elif scale == 3:
            m.append(conv(n_feat, 9 * n_feat, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feat))
            if act:
                m.append(act())
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)
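
For a power-of-two scale, the Upsampler stacks log2(scale) stages of `conv(n_feat -> 4*n_feat)` followed by `PixelShuffle(2)`; each stage doubles H and W and returns to `n_feat` channels. A shape-arithmetic sketch (`pixelshuffle_plan` is an illustrative helper, not part of the repo):

```python
import math

def pixelshuffle_plan(scale, n_feat):
    """(in_channels, out_channels) of each conv stage for a 2^n scale."""
    assert scale & (scale - 1) == 0  # same power-of-two check as above
    stages = int(math.log2(scale))
    return [(n_feat, 4 * n_feat)] * stages

assert pixelshuffle_plan(4, 64) == [(64, 256), (64, 256)]  # x4 = two x2 stages
assert pixelshuffle_plan(2, 64) == [(64, 256)]             # x2 = one stage
```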


def make_model(args, parent=False):
    return RCAN(args)


## Channel Attention (CA) Layer
class CALayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(CALayer, self).__init__()
        # global average pooling: feature --> point
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # feature channel downscale and upscale --> channel weight
        self.conv_du = nn.Sequential(
            nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=True),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.avg_pool(x)
        y = self.conv_du(y)
        return x * y


## Residual Channel Attention Block (RCAB)
class RCAB(nn.Module):
    def __init__(
        self,
        conv,
        n_feat,
        kernel_size,
        reduction,
        bias=True,
        bn=False,
        act=nn.ReLU(True),
        res_scale=1,
    ):

        super(RCAB, self).__init__()
        modules_body = []
        for i in range(2):
            modules_body.append(conv(n_feat, n_feat, kernel_size, bias=bias))
            if bn:
                modules_body.append(nn.BatchNorm2d(n_feat))
            if i == 0:
                modules_body.append(act)
        modules_body.append(CALayer(n_feat, reduction))
        self.body = nn.Sequential(*modules_body)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x)
        # res = self.body(x).mul(self.res_scale)
        res += x
        return res


## Residual Group (RG)
class ResidualGroup(nn.Module):
    def __init__(
        self, conv, n_feat, kernel_size, reduction, act, res_scale, n_resblocks
    ):
        super(ResidualGroup, self).__init__()
        modules_body = []
        modules_body = [
            RCAB(
                conv,
                n_feat,
                kernel_size,
                reduction,
                bias=True,
                bn=False,
                act=nn.ReLU(True),
                res_scale=1,
            )
            for _ in range(n_resblocks)
        ]
        modules_body.append(conv(n_feat, n_feat, kernel_size))
        self.body = nn.Sequential(*modules_body)

    def forward(self, x):
        res = self.body(x)
        res += x
        return res


## Residual Channel Attention Network (RCAN)
@ARCH_REGISTRY.register()
class RCAN(nn.Module):
    def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_conv):
        super(RCAN, self).__init__()

        n_resgroups = ng
        n_resblocks = nb
        n_feats = nf
        kernel_size = 3
        scale = upscale

        act = nn.ReLU(True)

        # RGB mean for DIV2K
        rgb_mean = (0.4488, 0.4371, 0.4040)
        rgb_std = (1.0, 1.0, 1.0)
        self.sub_mean = MeanShift(1.0, rgb_mean, rgb_std, -1)

        # define head module
        modules_head = [conv(3, n_feats, kernel_size)]

        # define body module
        modules_body = [
            ResidualGroup(
                conv,
                n_feats,
                kernel_size,
                reduction,
                act=act,
                res_scale=1.0,
                n_resblocks=nb,
            )
            for _ in range(ng)
        ]

        modules_body.append(conv(n_feats, n_feats, kernel_size))

        # define tail module
        modules_tail = [
            Upsampler(conv, scale, n_feats, act=False),
            conv(n_feats, 3, kernel_size),
        ]

        self.add_mean = MeanShift(1.0, rgb_mean, rgb_std, 1)

        self.head = nn.Sequential(*modules_head)
        self.body = nn.Sequential(*modules_body)
        self.tail = nn.Sequential(*modules_tail)

    def forward(self, x):
        x = self.sub_mean(x)
        x = self.head(x)

        res = self.body(x)
        res += x

        x = self.tail(res)
        x = self.add_mean(x)

        return x

    def load_state_dict(self, state_dict, strict=False):
        own_state = self.state_dict()
        for name, param in state_dict.items():
            if name in own_state:
                if isinstance(param, nn.Parameter):
                    param = param.data
                try:
                    own_state[name].copy_(param)
                except Exception:
                    if name.find("tail") >= 0:
                        print("Replace pre-trained upsampler to new one...")
                    else:
                        raise RuntimeError(
                            "While copying the parameter named {}, "
                            "whose dimensions in the model are {} and "
                            "whose dimensions in the checkpoint are {}.".format(
                                name, own_state[name].size(), param.size()
                            )
                        )
            elif strict:
                if name.find("tail") == -1:
                    raise KeyError('unexpected key "{}" in state_dict'.format(name))

        if strict:
            missing = set(own_state.keys()) - set(state_dict.keys())
            if len(missing) > 0:
                raise KeyError('missing keys in state_dict: "{}"'.format(missing))


================================================
FILE: codes/config/Bicubic/archs/rrdb.py
================================================
import functools

from utils.registry import ARCH_REGISTRY

from .module_util import *


class ResidualDenseBlock_5C(nn.Module):
    def __init__(self, nf=64, gc=32, bias=True):
        super(ResidualDenseBlock_5C, self).__init__()
        # gc: growth channel, i.e. intermediate channels
        self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
        self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
        self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
        self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
        self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        # initialization
        initialize_weights(
            [self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1
        )

    def forward(self, x):
        x1 = self.lrelu(self.conv1(x))
        x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
        x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
        x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
        x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
        return x5 * 0.2 + x
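
Each conv in the dense block concatenates the block input with all earlier growth-channel outputs, so the i-th conv sees `nf + (i-1)*gc` input channels. A sketch of that channel bookkeeping (`dense_in_channels` is an illustrative helper):

```python
def dense_in_channels(nf, gc, n_convs=5):
    """Input width of conv1..conv5 in ResidualDenseBlock_5C above."""
    return [nf + i * gc for i in range(n_convs)]

# with the defaults nf=64, gc=32 this matches conv1..conv5's in_channels
assert dense_in_channels(64, 32) == [64, 96, 128, 160, 192]
```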


class RRDB(nn.Module):
    """Residual in Residual Dense Block"""

    def __init__(self, nf, gc=32):
        super(RRDB, self).__init__()
        self.RDB1 = ResidualDenseBlock_5C(nf, gc)
        self.RDB2 = ResidualDenseBlock_5C(nf, gc)
        self.RDB3 = ResidualDenseBlock_5C(nf, gc)

    def forward(self, x):
        out = self.RDB1(x)
        out = self.RDB2(out)
        out = self.RDB3(out)
        return out * 0.2 + x


@ARCH_REGISTRY.register()
class RRDBNet(nn.Module):
    def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
        super(RRDBNet, self).__init__()
        self.upscale = upscale
        RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)

        self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        self.RRDB_trunk = make_layer(RRDB_block_f, nb)
        self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        #### upsampling
        self.upconv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        if upscale == 4:
            self.upconv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)

        self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)

    def forward(self, x):
        fea = self.conv_first(x)
        trunk = self.trunk_conv(self.RRDB_trunk(fea))
        fea = fea + trunk

        if self.upscale == 2 or self.upscale == 3:
            fea = self.lrelu(
                self.upconv1(
                    F.interpolate(fea, scale_factor=self.upscale, mode="nearest")
                )
            )
        if self.upscale == 4:
            fea = self.lrelu(
                self.upconv1(F.interpolate(fea, scale_factor=2, mode="nearest"))
            )
            fea = self.lrelu(
                self.upconv2(F.interpolate(fea, scale_factor=2, mode="nearest"))
            )
        out = self.conv_last(self.lrelu(self.HRconv(fea)))

        return out


================================================
FILE: codes/config/Bicubic/archs/srresnet.py
================================================
import functools

from utils.registry import ARCH_REGISTRY

from .module_util import *


@ARCH_REGISTRY.register()
class MSRResNet(nn.Module):
    """modified SRResNet"""

    def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
        super(MSRResNet, self).__init__()
        self.upscale = upscale

        self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
        basic_block = functools.partial(ResidualBlock_noBN, nf=nf)
        self.recon_trunk = make_layer(basic_block, nb)

        # upsampling
        if self.upscale == 2:
            self.upconv1 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(2)
        elif self.upscale == 3:
            self.upconv1 = nn.Conv2d(nf, nf * 9, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(3)
        elif self.upscale == 4:
            self.upconv1 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.upconv2 = nn.Conv2d(nf, nf * 4, 3, 1, 1, bias=True)
            self.pixel_shuffle = nn.PixelShuffle(2)

        self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
        self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)

        # activation function
        self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True)

        # initialization
        initialize_weights(
            [self.conv_first, self.upconv1, self.HRconv, self.conv_last], 0.1
        )
        if self.upscale == 4:
            initialize_weights(self.upconv2, 0.1)

    def forward(self, x):
        fea = self.lrelu(self.conv_first(x))
        out = self.recon_trunk(fea)

        if self.upscale == 4:
            out = self.lrelu(self.pixel_shuffle(self.upconv1(out)))
            out = self.lrelu(self.pixel_shuffle(self.upconv2(out)))
        elif self.upscale == 3 or self.upscale == 2:
            out = self.lrelu(self.pixel_shuffle(self.upconv1(out)))

        out = self.conv_last(self.lrelu(self.HRconv(out)))
        base = F.interpolate(
            x, scale_factor=self.upscale, mode="bilinear", align_corners=False
        )
        out += base
        return out


================================================
FILE: codes/config/Bicubic/archs/vgg.py
================================================
import os
from collections import OrderedDict

import torch
from torch import nn as nn
from torchvision.models import vgg as vgg

from utils.registry import ARCH_REGISTRY

VGG_PRETRAIN_PATH = "checkpoints/pretrained_models/vgg19-dcbb9e9d.pth"
NAMES = {
    "vgg11": [
        "conv1_1",
        "relu1_1",
        "pool1",
        "conv2_1",
        "relu2_1",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "pool5",
    ],
    "vgg13": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "pool5",
    ],
    "vgg16": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "conv3_3",
        "relu3_3",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "conv4_3",
        "relu4_3",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "conv5_3",
        "relu5_3",
        "pool5",
    ],
    "vgg19": [
        "conv1_1",
        "relu1_1",
        "conv1_2",
        "relu1_2",
        "pool1",
        "conv2_1",
        "relu2_1",
        "conv2_2",
        "relu2_2",
        "pool2",
        "conv3_1",
        "relu3_1",
        "conv3_2",
        "relu3_2",
        "conv3_3",
        "relu3_3",
        "conv3_4",
        "relu3_4",
        "pool3",
        "conv4_1",
        "relu4_1",
        "conv4_2",
        "relu4_2",
        "conv4_3",
        "relu4_3",
        "conv4_4",
        "relu4_4",
        "pool4",
        "conv5_1",
        "relu5_1",
        "conv5_2",
        "relu5_2",
        "conv5_3",
        "relu5_3",
        "conv5_4",
        "relu5_4",
        "pool5",
    ],
}


def insert_bn(names):
    """Insert bn layer after each conv.
    Args:
        names (list): The list of layer names.
    Returns:
        list: The list of layer names with bn layers.
    """
    names_bn = []
    for name in names:
        names_bn.append(name)
        if "conv" in name:
            position = name.replace("conv", "")
            names_bn.append("bn" + position)
    return names_bn


@ARCH_REGISTRY.register()
class VGGFeatureExtractor(nn.Module):
    """VGG network for feature extraction.
    In this implementation, users can choose whether to normalize the input
    and which type of VGG network to use. Note that the pretrained weight
    path must match the chosen vgg type.
    Args:
        layer_name_list (list[str]): Forward function returns the corresponding
            features according to the layer_name_list.
            Example: {'relu1_1', 'relu2_1', 'relu3_1'}.
        vgg_type (str): Set the type of vgg network. Default: 'vgg19'.
        use_input_norm (bool): If True, normalize the input image. Importantly,
            the input must be in the range [0, 1]. Default: True.
        range_norm (bool): If True, normalize images from the range [-1, 1]
            to [0, 1]. Default: False.
        requires_grad (bool): If True, the parameters of the VGG network will
            be optimized. Default: False.
        remove_pooling (bool): If True, the max pooling operations in the VGG
            net will be removed. Default: False.
        pooling_stride (int): The stride of the max pooling operation. Default: 2.
    """

    def __init__(
        self,
        layer_name_list,
        vgg_type="vgg19",
        use_input_norm=True,
        range_norm=False,
        requires_grad=False,
        remove_pooling=False,
        pooling_stride=2,
    ):
        super(VGGFeatureExtractor, self).__init__()

        self.layer_name_list = layer_name_list
        self.use_input_norm = use_input_norm
        self.range_norm = range_norm

        self.names = NAMES[vgg_type.replace("_bn", "")]
        if "bn" in vgg_type:
            self.names = insert_bn(self.names)

        # only borrow layers that will be used to avoid unused params
        max_idx = 0
        for v in layer_name_list:
            idx = self.names.index(v)
            if idx > max_idx:
                max_idx = idx

        if os.path.exists(VGG_PRETRAIN_PATH):
            vgg_net = getattr(vgg, vgg_type)(pretrained=False)
            state_dict = torch.load(
                VGG_PRETRAIN_PATH, map_location=lambda storage, loc: storage
            )
            vgg_net.load_state_dict(state_dict)
        else:
            vgg_net = getattr(vgg, vgg_type)(pretrained=True)

        features = vgg_net.features[: max_idx + 1]

        modified_net = OrderedDict()
        for k, v in zip(self.names, features):
            if "pool" in k:
                # if remove_pooling is true, pooling operation will be removed
                if remove_pooling:
                    continue
                else:
                    # in some cases, we may want to change the default stride
                    modified_net[k] = nn.MaxPool2d(kernel_size=2, stride=pooling_stride)
            else:
                modified_net[k] = v

        self.vgg_net = nn.Sequential(modified_net)

        if not requires_grad:
            self.vgg_net.eval()
            for param in self.parameters():
                param.requires_grad = False
        else:
            self.vgg_net.train()
            for param in self.parameters():
                param.requires_grad = True

        if self.use_input_norm:
            # the mean is for image with range [0, 1]
            self.register_buffer(
                "mean", torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
            )
            # the std is for image with range [0, 1]
            self.register_buffer(
                "std", torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
            )

    def forward(self, x):
        """Forward function.
        Args:
            x (Tensor): Input tensor with shape (n, c, h, w).
        Returns:
            Tensor: Forward results.
        """
        if self.range_norm:
            x = (x + 1) / 2
        if self.use_input_norm:
            x = (x - self.mean) / self.std

        output = {}
        for key, layer in self.vgg_net._modules.items():
            x = layer(x)
            if key in self.layer_name_list:
                output[key] = x.clone()

        return output
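
The `_bn` variants are handled purely by name bookkeeping: `insert_bn` appends a matching `bnX_Y` entry after every `convX_Y`, so the name list stays aligned with `vgg_net.features` for the batch-norm models. A standalone sanity check (function body reproduced from this file):

```python
def insert_bn(names):
    """Insert a bn entry after each conv entry (copied from vgg.py above)."""
    names_bn = []
    for name in names:
        names_bn.append(name)
        if "conv" in name:
            position = name.replace("conv", "")
            names_bn.append("bn" + position)
    return names_bn

names = insert_bn(["conv1_1", "relu1_1", "pool1"])
# names == ["conv1_1", "bn1_1", "relu1_1", "pool1"]
```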


================================================
FILE: codes/config/Bicubic/count_flops.py
================================================
import argparse
import sys

import torch
from torchsummaryX import summary

sys.path.append("../../")
import utils.option as option
from models import create_model


parser = argparse.ArgumentParser()
parser.add_argument(
    "--opt",
    type=str,
    default="options/setting1/test/test_setting1_x4.yml",
    help="Path to option YAML file.",
)
args = parser.parse_args()
opt = option.parse(args.opt, root_path=".", is_train=True)

opt = option.dict_to_nonedict(opt)
model = create_model(opt)

test_tensor = torch.randn(1, 3, 270, 180).cuda()
for name, net in model.networks.items():
    summary(net.cuda(), x=test_tensor)
    print("Above are results for net {}".format(name))
    input()


================================================
FILE: codes/config/Bicubic/inference.py
================================================
import argparse
import logging
import math
import os
import os.path as osp
import random
import sys
import cv2
from collections import defaultdict
from glob import glob
from tqdm import tqdm

import numpy as np
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tensorboardX import SummaryWriter

sys.path.append("../../")
import utils as util
import utils.option as option
from data import create_dataloader, create_dataset
from data.data_sampler import DistIterSampler
from metrics import IQA
from models import create_model



#### options
parser = argparse.ArgumentParser()
parser.add_argument(
    "-opt",
    type=str,
    default="options/test/2020Track2.yml",
    help="Path to options YAML file.",
)
parser.add_argument("-input_dir", type=str, default="../../../data_samples/LR")
parser.add_argument("-output_dir", type=str, default="../../../data_samples/Bicubic")
args = parser.parse_args()
opt = option.parse(args.opt, is_train=False)

opt = option.dict_to_nonedict(opt)

model = create_model(opt)

if not osp.exists(args.output_dir):
    os.makedirs(args.output_dir)

test_files = glob(osp.join(args.input_dir, "*"))
for path in tqdm(test_files):
    name = path.split("/")[-1].split(".")[0]

    img = cv2.imread(path)[:, :, [2, 1, 0]]
    img = img.transpose(2, 0, 1)[None] / 255
    img_t = torch.as_tensor(np.ascontiguousarray(img)).float()

    model.test({"src": img_t}, crop_size=512)
    outdict = model.get_current_visuals()

    sr = outdict["sr"]
    sr_im = util.tensor2img(sr)

    save_path = osp.join(args.output_dir, "{}_x{}.png".format(name, opt["scale"]))
    cv2.imwrite(save_path, sr_im)
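
The filename stem above is extracted with `path.split("/")[-1].split(".")[0]`. An equivalent stdlib sketch using `os.path` is separator-agnostic; note it keeps interior dots in the stem, which differs slightly from the `split(".")[0]` version (an illustrative helper, not part of this repo):

```python
import os.path as osp

def stem(path):
    """File stem via os.path: basename with the last extension stripped."""
    return osp.splitext(osp.basename(path))[0]

s = stem("../../../data_samples/LR/img_001.png")
# s == "img_001"
```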


================================================
FILE: codes/config/Bicubic/models/__init__.py
================================================
import importlib
import logging
import os
import os.path as osp

from utils.registry import MODEL_REGISTRY

logger = logging.getLogger("base")

model_folder = osp.dirname(__file__)
model_names = [
    osp.splitext(osp.basename(v))[0]
    for v in os.listdir(model_folder)
    if v.endswith("_model.py")
]
_model_modules = [
    importlib.import_module(f"models.{file_name}") for file_name in model_names
]


def create_model(opt, **kwarg):
    model = opt["model"]
    m = MODEL_REGISTRY.get(model)(opt, **kwarg)
    logger.info("Model [{:s}] is created.".format(m.__class__.__name__))
    return m
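
`create_model` resolves the class through `MODEL_REGISTRY`, which is populated as a side effect of the `importlib.import_module` loop above: importing each `*_model.py` runs its `@MODEL_REGISTRY.register()` decorators. A minimal sketch of that registry pattern (names illustrative, not the actual `utils.registry` implementation):

```python
class Registry:
    """Maps class names to classes; filled by a decorator at import time."""

    def __init__(self, name):
        self.name = name
        self._store = {}

    def register(self):
        def wrap(cls):
            # record the class under its own name so configs can refer to it
            self._store[cls.__name__] = cls
            return cls
        return wrap

    def get(self, name):
        return self._store[name]

MODEL_REGISTRY = Registry("model")

@MODEL_REGISTRY.register()
class SRModel:
    def __init__(self, opt):
        self.opt = opt

# what create_model(opt) effectively does: look up opt["model"] and call it
m = MODEL_REGISTRY.get("SRModel")({"model": "SRModel"})
```

The design choice here is that the YAML `model:` field is just a string key, so new models only need a decorated class in a `*_model.py` file; no factory code has to change.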


================================================
FILE: codes/config/Bicubic/models/base_model.py
================================================
import logging
import os
from collections import OrderedDict

import torch
import torch.nn as nn
from torch.nn.parallel import DataParallel, DistributedDataParallel

from archs import build_loss, build_network, build_scheduler
from utils.registry import MODEL_REGISTRY

logger = logging.getLogger("base")


@MODEL_REGISTRY.register()
class BaseModel:
    def __init__(self, opt):

        self.opt = opt

        if opt["dist"]:
            self.rank = torch.distributed.get_rank()
            self.world_size = torch.distributed.get_world_size()
        else:
            self.rank = 0  # non dist training

        self.device = torch.device("cuda" if opt["gpu_ids"] is not None else "cpu")
        self.is_train = opt["is_train"]
        self.log_dict = OrderedDict()

        self.data_names = []
        self.networks = {}

        self.optimizers = {}
        self.schedulers = {}

    def setup_train(self, train_opt):
        # define losses
        loss_opt = train_opt["losses"]
        self.losses = self.build_losses(loss_opt)

        # build optmizers
        optimizer_opts = train_opt["optimizers"]
        self.optimizers = self.build_optimizers(optimizer_opts)

        # set schedulers
        scheduler_opts = train_opt["schedulers"]
        self.schedulers = self.build_schedulers(scheduler_opts)

        # set to training state
        self.set_network_state(self.networks.keys(), "train")

    def feed_data(self, data):
        pass

    def optimize_parameters(self):
        pass

    def get_current_visuals(self):
        pass

    def get_current_losses(self):
        pass

    def load(self):
        pass

    def build_network(self, net_opt):

        net = build_network(net_opt)

        if isinstance(net, nn.Module):
            net = self.model_to_device(net)

            if net_opt.get("pretrain"):
                pretrain = net_opt.pop("pretrain")
                self.load_network(net, pretrain["path"], pretrain["strict_load"])

            self.print_network(net)
        return net

    def build_losses(self, loss_opt):
        losses = {}

        defined_loss_names = list(loss_opt.keys())
        assert set(defined_loss_names).issubset(set(self.loss_names))

        for name in defined_loss_names:
            loss_conf = loss_opt.get(name)
            if loss_conf["weight"] > 0:
                self.loss_weights[name] = loss_conf.pop("weight")
                losses[name] = build_loss(loss_conf).to(self.device)

        return losses

    def build_optimizers(self, optim_opts):
        optimizers = {}

        if "default" in optim_opts.keys():
            default_optim = optim_opts.pop("default")

        defined_optimizer_names = list(optim_opts.keys())
        assert set(defined_optimizer_names).issubset(self.networks.keys())

        for name in defined_optimizer_names:
            optim_opt = optim_opts[name]
            if optim_opt is None:
                optim_opt = default_optim.copy()

            params = []
            for v in self.networks[name].parameters():
                if v.requires_grad:
                    params.append(v)

            optim_type = optim_opt.pop("type")
            optimizer = getattr(torch.optim, optim_type)(params=params, **optim_opt)
            optimizers[name] = optimizer

        return optimizers

    def build_schedulers(self, scheduler_opts):
        """Set up scheduler."""
        schedulers = {}
        if "default" in scheduler_opts.keys():
            default_opt = scheduler_opts.pop("default")

        for name in self.optimizers.keys():
            scheduler_opt = scheduler_opts[name]
            if scheduler_opt is None:
                scheduler_opt = default_opt.copy()

            schedulers[name] = build_scheduler(self.optimizers[name], scheduler_opt)

        return schedulers

    def model_to_device(self, net):
        """Model to device. It also wraps models with DistributedDataParallel
        or DataParallel.
        Args:
            net (nn.Module)
        """
        net = net.to(self.device)
        if self.opt["dist"]:
            net = DistributedDataParallel(net, device_ids=[torch.cuda.current_device()])
        else:
            net = DataParallel(net)
        return net

    def print_network(self, net):
        # Generator
        s, n = self.get_network_description(net)
        if isinstance(net, nn.DataParallel) or isinstance(net, DistributedDataParallel):
            net_struc_str = "{} - {}".format(
                net.__class__.__name__, net.module.__class__.__name__
            )
        else:
            net_struc_str = "{}".format(net.__class__.__name__)
        if self.rank <= 0:
            logger.info(
                "Network G structure: {}, with parameters: {:,d}".format(
                    net_struc_str, n
                )
            )
            logger.info(s)

    def set_optimizer(self, names, operation):
        for name in names:
            getattr(self.optimizers[name], operation)()

    def set_requires_grad(self, names, requires_grad):
        for name in names:
            if isinstance(self.networks[name], nn.Module):
                for v in self.networks[name].parameters():
                    v.requires_grad = requires_grad

    def set_network_state(self, names, state):
        for name in names:
            if isinstance(self.networks[name], nn.Module):
                getattr(self.networks[name], state)()

    def clip_grad_norm(self, names, norm):
        for name in names:
            nn.utils.clip_grad_norm_(self.networks[name].parameters(), max_norm=norm)

    def _set_lr(self, lr_groups_l):
        """Set learning rate for warmup.
        lr_groups_l: list of lr_groups, one for each optimizer."""
        for optimizer, lr_groups in zip(self.optimizers.values(), lr_groups_l):
            for param_group, lr in zip(optimizer.param_groups, lr_groups):
                param_group["lr"] = lr

    def _get_init_lr(self):
        # get the initial lr, which is set by the scheduler
        init_lr_groups_l = []
        for optimizer in self.optimizers.values():
            init_lr_groups_l.append([v["initial_lr"] for v in optimizer.param_groups])
        return init_lr_groups_l

    def update_learning_rate(self, cur_iter, warmup_iter=-1):
        for _, scheduler in self.schedulers.items():
            scheduler.step()
        #### set up warm up learning rate
        if cur_iter < warmup_iter:
            # get initial lr for each group
            init_lr_g_l = self._get_init_lr()
            # modify warming-up learning rates
            warm_up_lr_l = []
            for init_lr_g in init_lr_g_l:
                warm_up_lr_l.append([v / warmup_iter * cur_iter for v in init_lr_g])
            # set learning rate
            self._set_lr(warm_up_lr_l)

    def get_current_learning_rate(self):
        # return self.schedulers[0].get_lr()[0]
        return list(self.optimizers.values())[0].param_groups[0]["lr"]

    def get_network_description(self, network):
        """Get the string and total parameters of the network"""
        if isinstance(network, nn.DataParallel) or isinstance(
            network, DistributedDataParallel
        ):
            network = network.module
        s = str(network)
        n = sum(map(lambda x: x.numel(), network.parameters()))
        return s, n

    def save_network(self, network, network_label, iter_label):
        save_filename = "{}_{}.pth".format(iter_label, network_label)
        save_path = os.path.join(self.opt["path"]["models"], save_filename)
        if isinstance(network, nn.DataParallel) or isinstance(
            network, DistributedDataParallel
        ):
            network = network.module
        state_dict = network.state_dict()
        for key, param in state_dict.items():
            state_dict[key] = param.cpu()
        torch.save(state_dict, save_path)

    def save(self, iter_label):
        for name in self.optimizers.keys():
            self.save_network(self.networks[name], name, iter_label)

    def load_network(self, network, load_path, strict=True):
        if load_path is not None:
            if isinstance(network, nn.DataParallel) or isinstance(
                network, DistributedDataParallel
            ):
                network = network.module
            load_net = torch.load(load_path)
            load_net_clean = OrderedDict()  # remove unnecessary 'module.'
            for k, v in load_net.items():
                if k.startswith("module."):
                    load_net_clean[k[7:]] = v
                else:
                    load_net_clean[k] = v
            network.load_state_dict(load_net_clean, strict=strict)

    def save_training_state(self, epoch, iter_step):
        """Saves training state during training, which will be used for resuming"""
        state = {"epoch": epoch, "iter": iter_step, "schedulers": {}, "optimizers": {}}
        for k, s in self.schedulers.items():
            state["schedulers"][k] = s.state_dict()
        for k, o in self.optimizers.items():
            state["optimizers"][k] = o.state_dict()
        save_filename = "{}.state".format(iter_step)
        save_path = os.path.join(self.opt["path"]["training_state"], save_filename)
        torch.save(state, save_path)

    def resume_training(self, resume_state):
        """Resume the optimizers and schedulers for training"""
        resume_optimizers = resume_state["optimizers"]
        resume_schedulers = resume_state["schedulers"]
        assert len(resume_optimizers) == len(
            self.optimizers
        ), "Wrong lengths of optimizers"
        assert len(resume_schedulers) == len(
            self.schedulers
        ), "Wrong lengths of schedulers"
        for name, o in resume_optimizers.items():
            self.optimizers[name].load_state_dict(o)
        for name, s in resume_schedulers.items():
            self.schedulers[name].load_state_dict(s)

    def reduce_loss_dict(self, loss_dict):
        """Reduce loss dict.
        In distributed training, it averages the losses among different GPUs.
        Args:
            loss_dict (OrderedDict): Loss dict.
        """
        with torch.no_grad():
            if self.opt["dist"]:
                keys = []
                losses = []
                for name, value in loss_dict.items():
                    keys.append(name)
                    losses.append(value)
                losses = torch.stack(losses, 0)
                torch.distributed.reduce(losses, dst=0)
                if self.rank == 0:
                    losses /= self.world_size
                loss_dict = {key: loss for key, loss in zip(keys, losses)}

            log_dict = OrderedDict()
            for name, value in loss_dict.items():
                log_dict[name] = value.mean().item()

            return log_dict

    def get_current_log(self):
        return self.log_dict
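
`update_learning_rate` above applies linear warmup: while `cur_iter < warmup_iter`, each group's learning rate is overwritten with `initial_lr / warmup_iter * cur_iter`, ramping from 0 up to the scheduler's initial value. A quick numeric sketch of that formula:

```python
def warmup_lr(init_lrs, cur_iter, warmup_iter):
    """Linear warmup as in update_learning_rate: scale each initial lr
    by the fraction of warmup iterations completed."""
    return [lr / warmup_iter * cur_iter for lr in init_lrs]

# halfway through a 1000-iteration warmup, lrs are at 50% of their targets
lrs = warmup_lr([1e-4, 2e-4], cur_iter=500, warmup_iter=1000)
```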


================================================
FILE: codes/config/Bicubic/models/sr_model.py
================================================
import logging
from collections import OrderedDict

import torch
import torch.nn as nn

from utils.registry import MODEL_REGISTRY

from .base_model import BaseModel

logger = logging.getLogger("base")


@MODEL_REGISTRY.register()
class SRModel(BaseModel):
    def __init__(self, opt):
        super().__init__(opt)

        self.data_names = ["lr", "hr"]

        self.network_names = ["netSR"]
        self.networks = {}

        self.loss_names = ["sr_adv", "sr_pix", "sr_percep"]
        self.loss_weights = {}
        self.losses = {}
        self.optimizers = {}

        # define networks and load pretrained models
        nets_opt = opt["networks"]
        defined_network_names = list(nets_opt.keys())
        assert set(defined_network_names).issubset(set(self.network_names))

        for name in defined_network_names:
            setattr(self, name, self.build_network(nets_opt[name]))
            self.networks[name] = getattr(self, name)

        if self.is_train:
            # setup loss, optimizers, schedulers
            self.setup_train(opt["train"])

    def feed_data(self, data):

        self.lr = data["src"].to(self.device)
        self.hr = data["tgt"].to(self.device)

    def forward(self):

        self.sr = self.netSR(self.lr)

    def optimize_parameters(self, step):

        self.forward()

        loss_dict = OrderedDict()

        l_sr = 0

        sr_pix = self.losses["sr_pix"](self.hr, self.sr)
        loss_dict["sr_pix"] = sr_pix
        l_sr += self.loss_weights["sr_pix"] * sr_pix

        if self.losses.get("sr_adv"):
            self.set_requires_grad(["netD"], False)
            sr_adv_g = self.calculate_rgan_loss_G(
                self.netD, self.losses["sr_adv"], self.hr, self.sr
            )
            loss_dict["sr_adv_g"] = sr_adv_g
            l_sr += self.loss_weights["sr_adv"] * sr_adv_g

        if self.losses.get("sr_percep"):
            sr_percep, sr_style = self.losses["sr_percep"](self.hr, self.sr)
            loss_dict["sr_percep"] = sr_percep
            if sr_style is not None:
                loss_dict["sr_style"] = sr_style
                l_sr += self.loss_weights["sr_percep"] * sr_style
            l_sr += self.loss_weights["sr_percep"] * sr_percep

        self.set_optimizer(names=["netSR"], operation="zero_grad")
        l_sr.backward()
        self.set_optimizer(names=["netSR"], operation="step")

        if self.losses.get("sr_adv"):
            self.set_requires_grad(["netD"], True)
            sr_adv_d = self.calculate_rgan_loss_D(
                self.netD, self.losses["sr_adv"], self.hr, self.sr
            )
            loss_dict["sr_adv_d"] = sr_adv_d

            self.optimizers["netD"].zero_grad()
            sr_adv_d.backward()
            self.optimizers["netD"].step()

        self.log_dict = self.reduce_loss_dict(loss_dict)

    def calculate_rgan_loss_D(self, netD, criterion, real, fake):

        d_pred_fake = netD(fake.detach())
        d_pred_real = netD(real)
        loss_real = criterion(
            d_pred_real - d_pred_fake.detach().mean(), True, is_disc=False
        )
        loss_fake = criterion(
            d_pred_fake - d_pred_real.detach().mean(), False, is_disc=False
        )

        loss = (loss_real + loss_fake) / 2

        return loss

    def calculate_rgan_loss_G(self, netD, criterion, real, fake):

        d_pred_fake = netD(fake)
        d_pred_real = netD(real).detach()
        loss_real = criterion(d_pred_real - d_pred_fake.mean(), False, is_disc=False)
        loss_fake = criterion(d_pred_fake - d_pred_real.mean(), True, is_disc=False)

        loss = (loss_real + loss_fake) / 2

        return loss

    def test(self, data, crop_size=None):
        self.real_lr = data["src"].to(self.device)
        self.netSR.eval()
        with torch.no_grad():
            if crop_size is None:
                self.fake_real_hr = self.netSR(self.real_lr)
            else:
                self.fake_real_hr = self.crop_test(self.real_lr, crop_size)
        self.netSR.train()
    
    def crop_test(self, lr, crop_size):
        b, c, h, w = lr.shape
        scale = self.opt["scale"]

        # first pass: tile patches anchored to the top-left corner
        h_start = list(range(0, h - crop_size, crop_size))
        w_start = list(range(0, w - crop_size, crop_size))

        sr1 = torch.zeros(b, c, int(h * scale), int(w * scale), device=self.device) - 1
        for hs in h_start:
            for ws in w_start:
                lr_patch = lr[:, :, hs : hs + crop_size, ws : ws + crop_size]
                sr_patch = self.netSR(lr_patch)

                sr1[
                    :,
                    :,
                    int(hs * scale) : int((hs + crop_size) * scale),
                    int(ws * scale) : int((ws + crop_size) * scale),
                ] = sr_patch

        # second pass: tile patches anchored to the bottom-right corner
        h_end = list(range(h, crop_size, -crop_size))
        w_end = list(range(w, crop_size, -crop_size))

        sr2 = torch.zeros(b, c, int(h * scale), int(w * scale), device=self.device) - 1
        for hd in h_end:
            for wd in w_end:
                lr_patch = lr[:, :, hd - crop_size : hd, wd - crop_size : wd]
                sr_patch = self.netSR(lr_patch)

                sr2[
                    :,
                    :,
                    int((hd - crop_size) * scale) : int(hd * scale),
                    int((wd - crop_size) * scale) : int(wd * scale),
                ] = sr_patch

        # blend: -1 marks pixels a pass did not cover; average where both overlap
        mask1 = (
            (sr1 == -1).float() * 0
            + (sr2 == -1).float() * 1
            + ((sr1 > 0) * (sr2 > 0)).float() * 0.5
        )

        mask2 = (
            (sr1 == -1).float() * 1
            + (sr2 == -1).float() * 0
            + ((sr1 > 0) * (sr2 > 0)).float() * 0.5
        )

        sr = mask1 * sr1 + mask2 * sr2

        return sr
            
    def get_current_visuals(self, need_GT=True):
        out_dict = OrderedDict()
        out_dict["lr"] = self.real_lr.detach()[0].float().cpu()
        out_dict["sr"] = self.fake_real_hr.detach()[0].float().cpu()
        return out_dict
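
`crop_test` tiles the LR image twice, once anchored at the top-left and once at the bottom-right, then merges the two results: positions still holding the -1 sentinel were missed by that pass, and positions covered by both passes are averaged. The per-element blending rule can be sketched in 1-D (pure Python, illustrative; note the actual masks use a `> 0` test for the overlap case, so zero-valued pixels are a corner case there):

```python
def blend(sr1, sr2):
    """Per-element blend mirroring the mask1/mask2 logic in crop_test:
    take whichever pass covered the position; average where both did.
    Positions covered by neither pass keep the -1 sentinel."""
    out = []
    for a, b in zip(sr1, sr2):
        if a == -1 and b == -1:
            out.append(-1)            # covered by neither pass
        elif a == -1:
            out.append(b)             # only the second pass covered it
        elif b == -1:
            out.append(a)             # only the first pass covered it
        else:
            out.append(0.5 * (a + b)) # both passes: average
    return out

merged = blend([-1, 4.0, 2.0], [6.0, -1, 4.0])
# merged == [6.0, 4.0, 3.0]
```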


================================================
FILE: codes/config/Bicubic/options/test/2017Track2_2020Track1.yml
================================================
#### general settings
name: Bicubic_2017Track2_2020Track1
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [5]

metrics: [psnr, ssim, lpips, niqe, piqe, brisque] 

datasets:
  test1:
    name: 2017Track1
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2017/valid_LR/x4.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb
  test5:
    name: 2020Track1
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2020/track1/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb

#### network structures
networks:
  netSR:
    which_network: BicuBic
    setting:
      upscale: 4
    pretrain: 
      path: ~
      strict_load: true


================================================
FILE: codes/config/Bicubic/options/test/2018Track2_2020Track4.yml
================================================
#### general settings
name: Bicubic_2018Track2_2018Track4
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [5]

metrics: [best_psnr, best_ssim, lpips, niqe, piqe, brisque] 

datasets:
  test1:
    name: 2018Track2
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2018/track2/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb
  test2:
    name: 2018Track4
    mode: PairedDataset
    data_type: lmdb
    dataroot_src: /home/lzx/SRDatasets/NTIRE2018/track4/valid.lmdb
    dataroot_tgt: /home/lzx/SRDatasets/DIV2K_valid/HR/x4.lmdb

#### network structures
networks:
  netSR:
    which_network: BicuBic
    setting:
      upscale: 4
    pretrain: 
      path: ~
      strict_load: true


================================================
FILE: codes/config/Bicubic/options/test/2020Track2.yml
================================================
#### general settings
name: 2020Track2
use_tb_logger: false
model: SRModel
scale: 4
gpu_ids: [5]

metrics: [niqe, piqe, brisque] 

datasets:
  test1:
    name: 2020Track2
    mode: SingleDataset
    data_type: lmdb
    dataroot: /home/lzx/SRDatasets/NTIRE2020/track2/test.lmdb

#### network structures
networks:
  netSR:
    which_network: BicuBic
    setting:
      upscale: 4
    pretrain: 
      path: ~
      strict_load: true

================================================
FILE: codes/config/Bicubic/test.py
================================================
import argparse
import logging
import os.path
import sys
import time
from collections import OrderedDict, defaultdict

import numpy as np
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

sys.path.append("../../")
import utils as util
import utils.option as option
from data import create_dataloader, create_dataset
from metrics import IQA
from models import create_model
from utils import bgr2ycbcr, imresize


def parse_args():
    parser = argparse.ArgumentParser(description="Test SR network")
    # general
    parser.add_argument(
        "--opt", help="experiment configure file name", required=True, type=str
    )
    parser.add_argument(
        "--root_path",
        help="root path of the project",
        default="../../../",
        type=str,
    )
    # distributed training
    parser.add_argument("--gpu", help="gpu id for multiprocessing training", type=str)
    parser.add_argument(
        "--world-size",
        default=1,
        type=int,
        help="number of nodes for distributed training",
    )
    parser.add_argument(
        "--dist-url",
        default="tcp://127.0.0.1:23456",
        type=str,
        help="url used to set up distributed training",
    )
    parser.add_argument(
        "--rank", default=0, type=int, help="node rank for distributed training"
    )

    args = parser.parse_args()

    return args


def main():
    args = parse_args()
    opt = option.parse(args.opt, args.root_path, is_train=False)

    # convert to NoneDict, which returns None for missing keys
    opt = option.dict_to_nonedict(opt)

    if args.dist_url == "env://" and args.world_size == -1:
        args.world_size = int(os.environ["WORLD_SIZE"])

    ngpus_per_node = torch.cuda.device_count()
    args.world_size = ngpus_per_node * args.world_size

    opt["dist"] = args.world_size > 1

    util.mkdirs(
        (path for key, path in opt["path"].items() if not key == "experiments_root")
    )

    os.system("rm ./result")
    os.symlink(os.path.join(opt["path"]["results_root"], ".."), "./result")

    if opt["dist"]:
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, opt, args))
    else:
        main_worker(0, 1, opt, args)


def main_worker(gpu, ngpus_per_node, opt, args):

    if opt["dist"]:
        if args.dist_url == "env://" and args.rank == -1:
            args.rank = int(os.environ["RANK"])

        rank = args.rank * ngpus_per_node + gpu
        print(
            f"Init process group: dist_url: {args.dist_url}, world_size: {args.world_size}, rank: {rank}"
        )

        dist.init_process_group(
            backend="nccl",
            init_method=args.dist_url,
            world_size=args.world_size,
            rank=rank,
        )

        torch.cuda.set_device(gpu)

    else:
        rank = 0

    torch.backends.cudnn.benchmark = True

    util.setup_logger(
        "base",
        opt["path"]["log"],
        "test_" + opt["name"] + "_rank{}".format(rank),
        level=logging.INFO,
        screen=True,
        tofile=True,
    )

    measure = IQA(metrics=opt["metrics"], cuda=True)

    logger = logging.getLogger("base")
    logger.info(option.dict2str(opt))

    # Create test dataset and dataloader
    test_datasets = []
    test_loaders = []

    for phase, dataset_opt in sorted(opt["datasets"].items()):

        test_set = create_dataset(dataset_opt)
        test_loader = create_dataloader(test_set, dataset_opt, opt["dist"])

        if rank == 0:
            logger.info(
                "Number of test images in [{:s}]: {:d}".format(
                    dataset_opt["name"], len(test_set)
                )
            )
        test_datasets.append(test_set)
        test_loaders.append(test_loader)

    # load pretrained model by default
    model = create_model(opt)

    for test_dataset, test_loader in zip(test_datasets, test_loaders):

        test_set_name = test_dataset.opt["name"]
        dataset_dir = os.path.join(opt["path"]["results_root"], test_set_name)

        if rank == 0:
            logger.info("\nTesting [{:s}]...".format(test_set_name))
            util.mkdir(dataset_dir)

        validate(
            model,
            test_dataset,
            test_loader,
            opt,
            measure,
            dataset_dir,
            test_set_name,
            logger,
        )


def validate(
    model, dataset, dist_loader, opt, measure, dataset_dir, test_set_name, logger
):

    test_results = {}
    test_results_y = {}
    for metric in opt["metrics"]:
        test_results[metric] = torch.zeros((len(dataset))).cuda()
        test_results_y[metric] = torch.zeros((len(dataset))).cuda()

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        world_size = 1
        rank = 0

    indices = list(range(rank, len(dataset), world_size))
    for (
        idx,
        test_data,
    ) in enumerate(dist_loader):
        idx = indices[idx]

        img_path = test_data["src_path"][0]
        img_name = img_path.split("/")[-1].split(".")[0]

        model.test(test_data)
        visuals = model.get_current_visuals()
        sr_img = util.tensor2img(visuals["sr"])  # uint8

        suffix = opt["suffix"]
        if suffix:
            save_img_path = os.path.join(dataset_dir, img_name + suffix + ".png")
        else:
            save_img_path = os.path.join(dataset_dir, img_name + ".png")
        util.save_img(sr_img, save_img_path)

        message = "img:{:15s}; ".format(img_name)

        # fall back to the scale factor only when crop_border is unset (None),
        # so an explicit crop_border of 0 is respected
        crop_border = opt["crop_border"] if opt["crop_border"] is not None else opt["scale"]

        if crop_border == 0:
            cropped_sr_img = sr_img
        else:
            cropped_sr_img = sr_img[
                crop_border:-crop_border, crop_border:-crop_border, :
            ]

        if "tgt" in test_data.keys():
            gt_img = util.tensor2img(test_data["tgt"][0].double().cpu())

            if crop_border == 0:
                cropped_gt_img = gt_img
            else:
                cropped_gt_img = gt_img[
                    crop_border:-crop_border, crop_border:-crop_border, :
                ]
        else:
            gt_img = None
            cropped_gt_img = None

        message += "Scores - "
        scores = measure(res=cropped_sr_img, ref=cropped_gt_img, metrics=opt["metrics"])
        for k, v in scores.items():
            test_results[k][idx] = v
            message += "{}: {:.6f}; ".format(k, v)

        if sr_img.shape[2] == 3:  # color image (BGR channel order)
            sr_img_y = bgr2ycbcr(sr_img, only_y=True)
            if crop_border == 0:
                cropped_sr_img_y = sr_img_y * 255
            else:
                cropped_sr_img_y = (
                    sr_img_y[crop_border:-crop_border, crop_border:-crop_border] * 255
                )
            if gt_img is not None:
                gt_img_y = bgr2ycbcr(gt_img, only_y=True)
                if crop_border == 0:
                    cropped_gt_img_y = gt_img_y * 255
                else:
                    cropped_gt_img_y = (
                        gt_img_y[crop_border:-crop_border, crop_border:-crop_border]
                        * 255
                    )
            else:
                gt_img_y = None
                cropped_gt_img_y = None

            message += "Y Scores - "
            scores = measure(
                res=cropped_sr_img_y, ref=cropped_gt_img_y, metrics=opt["metrics"]
            )
            for k, v in scores.items():
                test_results_y[k][idx] = v
                message += "{}: {:.6f}; ".format(k, v)

        logger.info(message)

    if opt["dist"]:
        for k, v in test_results.items():
            dist.reduce(v, dst=0)
        dist.barrier()

        for k, v in test_results_y.items():
            dist.reduce(v, dst=0)
        dist.barrier()

    # log
    avg_results = {}
    message = "Average Results for {}\n".format(test_set_name)

    if rank == 0:
        for k, v in test_results.items():
            avg_results[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results[k])

        logger.info(message)

    avg_results_y = {}
    message = "Average Results on Y channel for {}\n".format(test_set_name)

    if rank == 0:
        for k, v in test_results_y.items():
            avg_results_y[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results_y[k])

        logger.info(message)


if __name__ == "__main__":
    main()
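
The `validate()` routine above splits work across ranks by striding over the dataset (`range(rank, len(dataset), world_size)`), fills a zero-initialized score vector per rank, and merges the disjoint shards with a sum-reduce onto rank 0. A torch-free sketch of that scheme, with toy data and hypothetical helper names that are not part of this repo:

```python
# Toy illustration of the sharding used in validate(): each rank scores every
# world_size-th sample, writes into a zero vector, and a SUM reduce merges the
# shards exactly because the index sets are disjoint.
def shard_indices(rank, n, world_size):
    # Rank r handles samples r, r + world_size, r + 2 * world_size, ...
    return list(range(rank, n, world_size))

def simulate(scores, world_size):
    n = len(scores)
    per_rank = []
    for rank in range(world_size):
        vec = [0.0] * n  # analogue of torch.zeros(len(dataset))
        for i in shard_indices(rank, n, world_size):
            vec[i] = scores[i]
        per_rank.append(vec)
    # analogue of dist.reduce(..., dst=0) with the default SUM op
    reduced = [sum(col) for col in zip(*per_rank)]
    return sum(reduced) / n  # average over the full dataset

print(simulate([1.0, 2.0, 3.0, 4.0], world_size=2))  # 2.5
```

Because each index is written by exactly one rank, the sum-reduce reconstructs the full score vector without double counting.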


================================================
FILE: codes/config/Bicubic/train.py
================================================
import argparse
import logging
import math
import os
import random
import sys
import time
from collections import defaultdict

import numpy as np
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tensorboardX import SummaryWriter
from tqdm import tqdm

sys.path.append("../../")
import utils as util
import utils.option as option
from data import create_dataloader, create_dataset
from metrics import IQA
from models import create_model


def parse_args():
    parser = argparse.ArgumentParser(description="Train SR network")
    # general
    parser.add_argument(
        "--opt", help="experiment configure file name", required=True, type=str
    )
    parser.add_argument(
        "--root_path",
        help="root path of the project",
        default="../../../",
        type=str,
    )
    # distributed training
    parser.add_argument("--gpu", help="gpu id for multiprocessing training", type=str)
    parser.add_argument(
        "--world-size",
        default=1,
        type=int,
        help="number of nodes for distributed training",
    )
    parser.add_argument(
        "--dist-url",
        default="tcp://127.0.0.1:23456",
        type=str,
        help="url used to set up distributed training",
    )
    parser.add_argument(
        "--rank", default=0, type=int, help="node rank for distributed training"
    )

    args = parser.parse_args()

    return args


def setup_dataloaer(opt, logger):

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        rank = 0
        world_size = 1

    for phase, dataset_opt in opt["datasets"].items():
        if phase == "train":
            train_set = create_dataset(dataset_opt)
            train_loader = create_dataloader(train_set, dataset_opt, opt["dist"])
            total_iters = opt["train"]["niter"]
            total_epochs = total_iters // (len(train_loader) - 1) + 1
            if rank == 0:
                logger.info(
                    "Number of train images: {:,d}, iters: {:,d}".format(
                        len(train_set), len(train_loader)
                    )
                )
                logger.info(
                    "Total epochs needed: {:d} for iters {:,d}".format(
                        total_epochs, opt["train"]["niter"]
                    )
                )

        elif phase == "val":
            val_set = create_dataset(dataset_opt)
            val_loader = create_dataloader(val_set, dataset_opt, opt["dist"])
            if rank == 0:
                logger.info(
                    "Number of val images in [{:s}]: {:d}".format(
                        dataset_opt["name"], len(val_set)
                    )
                )
        else:
            raise NotImplementedError("Phase [{:s}] is not recognized.".format(phase))

    assert train_loader is not None
    assert val_loader is not None

    return train_set, train_loader, val_set, val_loader, total_iters, total_epochs


def main():
    args = parse_args()
    opt = option.parse(args.opt, args.root_path, is_train=True)

    # convert to NoneDict, which returns None for missing keys
    opt = option.dict_to_nonedict(opt)

    if args.dist_url == "env://" and args.world_size == -1:
        args.world_size = int(os.environ["WORLD_SIZE"])

    ngpus_per_node = torch.cuda.device_count()
    args.world_size = ngpus_per_node * args.world_size

    opt["dist"] = args.world_size > 1

    if opt["train"].get("resume_state", None) is None:
        util.mkdir_and_rename(
            opt["path"]["experiments_root"]
        )  # rename experiment folder if exists
        util.mkdirs(
            (path for key, path in opt["path"].items() if not key == "experiments_root")
        )
        os.system("rm ./log")
        os.symlink(os.path.join(opt["path"]["experiments_root"], ".."), "./log")

    if opt["dist"]:
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, opt, args))
    else:
        main_worker(0, 1, opt, args)


def main_worker(gpu, ngpus_per_node, opt, args):

    if opt["dist"]:
        if args.dist_url == "env://" and args.rank == -1:
            args.rank = int(os.environ["RANK"])

        rank = args.rank * ngpus_per_node + gpu
        print(
            f"Init process group: dist_url: {args.dist_url}, world_size: {args.world_size}, rank: {rank}"
        )

        dist.init_process_group(
            backend="nccl",
            init_method=args.dist_url,
            world_size=args.world_size,
            rank=rank,
        )

        torch.cuda.set_device(gpu)

    else:
        rank = 0

    seed = opt["train"]["manual_seed"]
    if seed is None:
        seed = rank
    util.set_random_seed(seed)

    torch.backends.cudnn.benchmark = True
    # torch.backends.cudnn.deterministic = True

    # setup tensorboard and val logger
    if rank == 0:
        if opt["use_tb_logger"] and "debug" not in opt["name"]:
            tb_logger = SummaryWriter(log_dir="log/{}/tb_logger/".format(opt["name"]))

        util.setup_logger(
            "val",
            opt["path"]["log"],
            "val_" + opt["name"],
            level=logging.INFO,
            screen=True,
            tofile=True,
        )

    measure = IQA(metrics=opt["metrics"], cuda=True)

    # config loggers. Before it, the log will not work
    util.setup_logger(
        "base",
        opt["path"]["log"],
        "train_" + opt["name"] + "_rank{}".format(rank),
        level=logging.INFO if rank == 0 else logging.ERROR,
        screen=True,
        tofile=True,
    )

    logger = logging.getLogger("base")
    if rank == 0:
        logger.info(option.dict2str(opt))

    # create dataset
    (
        train_set,
        train_loader,
        val_set,
        val_loader,
        total_iters,
        total_epochs,
    ) = setup_dataloaer(opt, logger)

    # create model
    model = create_model(opt)

    # loading resume state if exists
    if opt["train"].get("resume_state", None):
        # distributed resuming: all load into default GPU
        device_id = gpu
        resume_state = torch.load(
            opt["train"]["resume_state"],
            map_location=lambda storage, loc: storage.cuda(device_id),
        )

        logger.info(
            "Resuming training from epoch: {}, iter: {}.".format(
                resume_state["epoch"], resume_state["iter"]
            )
        )

        start_epoch = resume_state["epoch"]
        current_step = resume_state["iter"]
        model.resume_training(resume_state)  # handle optimizers and schedulers

    else:
        current_step = 0
        start_epoch = 0

    logger.info(
        "Start training from epoch: {:d}, iter: {:d}".format(start_epoch, current_step)
    )
    data_time, iter_time = time.time(), time.time()
    avg_data_time = avg_iter_time = 0
    count = 0
    for epoch in range(start_epoch, total_epochs + 1):
        for _, train_data in enumerate(train_loader):

            current_step += 1
            count += 1
            if current_step > total_iters:
                break

            data_time = time.time() - data_time
            avg_data_time = (avg_data_time * (count - 1) + data_time) / count

            model.feed_data(train_data)
            model.optimize_parameters(current_step)
            model.update_learning_rate(
                current_step, warmup_iter=opt["train"]["warmup_iter"]
            )

            iter_time = time.time() - iter_time
            avg_iter_time = (avg_iter_time * (count - 1) + iter_time) / count

            # log
            if current_step % opt["logger"]["print_freq"] == 0:
                logs = model.get_current_log()
                message = (
                    f"<epoch:{epoch:3d}, iter:{current_step:8,d}, "
                    f"lr:{model.get_current_learning_rate():.3e}> "
                )

                message += f'[time (data): {avg_iter_time:.3f} ({avg_data_time:.3f})] '
                for k, v in logs.items():
                    message += "{:s}: {:.4e}; ".format(k, v)
                    # tensorboard logger
                    if opt["use_tb_logger"] and "debug" not in opt["name"]:
                        if rank == 0:
                            tb_logger.add_scalar(k, v, current_step)
                logger.info(message)

            # validation
            if current_step % opt["train"]["val_freq"] == 0:

                avg_results = validate(
                    model, val_set, val_loader, opt, measure, epoch, current_step
                )

                # tensorboard logger
                if rank == 0:
                    if opt["use_tb_logger"] and "debug" not in opt["name"]:
                        for k, v in avg_results.items():
                            tb_logger.add_scalar(k, v, current_step)

            # save models and training states
            if current_step % opt["logger"]["save_checkpoint_freq"] == 0:
                if rank == 0:
                    logger.info("Saving models and training states.")
                    model.save(current_step)
                    model.save_training_state(epoch, current_step)
            
            data_time = time.time()
            iter_time = time.time()

    if rank == 0:
        logger.info("Saving the final model.")
        model.save("latest")
        logger.info("End of training.")
        if opt["use_tb_logger"] and "debug" not in opt["name"]:
            tb_logger.close()


def validate(model, dataset, dist_loader, opt, measure, epoch, current_step):

    test_results = {}
    for metric in opt["metrics"]:
        test_results[metric] = torch.zeros((len(dataset))).cuda()

    if opt["dist"]:
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        world_size = 1
        rank = 0

    if rank == 0:
        pbar = tqdm(total=len(dataset), leave=False, dynamic_ncols=True)

    indices = list(range(rank, len(dataset), world_size))
    for (
        idx,
        val_data,
    ) in enumerate(dist_loader):
        idx = indices[idx]

        LR_img = val_data["src"]
        lr_img = util.tensor2img(LR_img)  # save LR image for reference

        model.test(val_data)
        visuals = model.get_current_visuals()

        # Save images for reference
        img_name = val_data["src_path"][0].split("/")[-1].split(".")[0]
        img_dir = os.path.join(opt["path"]["val_images"], img_name)

        util.mkdir(img_dir)
        save_lr_path = os.path.join(img_dir, "{:s}_LR.png".format(img_name))
        util.save_img(lr_img, save_lr_path)

        sr_img = util.tensor2img(visuals["sr"])  # uint8
        save_img_path = os.path.join(
            img_dir, "{:s}_{:d}.png".format(img_name, current_step)
        )
        util.save_img(sr_img, save_img_path)

        if "fake_lr" in visuals.keys():
            fake_lr_img = util.tensor2img(visuals["fake_lr"])
            save_img_path = os.path.join(
                img_dir, f"fake_lr_{current_step:d}.png"
            )
            util.save_img(fake_lr_img, save_img_path)

        # calculate scores
        crop_size = opt["scale"]
        cropped_sr_img = sr_img[crop_size:-crop_size, crop_size:-crop_size, :]
        if "tgt" in val_data.keys():
            gt_img = util.tensor2img(val_data["tgt"])
            cropped_gt_img = gt_img[crop_size:-crop_size, crop_size:-crop_size, :]
        else:
            cropped_gt_img = gt_img = None

        scores = measure(res=cropped_sr_img, ref=cropped_gt_img, metrics=opt["metrics"])
        for k, v in scores.items():
            test_results[k][idx] = v

        if rank == 0:
            for _ in range(world_size):
                pbar.update(1)
    if rank == 0:
        pbar.close()

    # log
    avg_results = {}
    message = " <epoch:{:3d}, iter:{:8,d}> Average scores:\t".format(
        epoch, current_step
    )

    if opt["dist"]:
        for k, v in test_results.items():
            dist.reduce(v, dst=0)
        dist.barrier()

    if rank == 0:
        for k, v in test_results.items():
            avg_results[k] = sum(v) / len(v)
            message += "{}: {:.6f}; ".format(k, avg_results[k])

        logger_val = logging.getLogger("val")  # validation logger
        logger_val.info(message)
    
    del test_results
    torch.cuda.empty_cache()
    return avg_results


if __name__ == "__main__":
    main()
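
The timing statistics in the training loop (`avg_data_time`, `avg_iter_time`) use an incremental mean update, `avg_new = (avg_old * (count - 1) + x) / count`, which maintains a running average without storing all samples. A minimal standalone sketch of that update:

```python
# Incremental (running) mean, as used for the data/iter timing averages:
# after processing `count` samples, `avg` equals the mean of all of them.
def update_mean(avg, count, x):
    # `count` is the number of samples *including* the new value x
    return (avg * (count - 1) + x) / count

avg, count = 0.0, 0
for x in [2.0, 4.0, 9.0]:
    count += 1
    avg = update_mean(avg, count, x)
print(avg)  # 5.0
```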


================================================
FILE: codes/config/Bulat/README.md
================================================
This repo supports the training and testing of the ECCV paper [To learn image super-resolution, use a GAN to learn how to do image degradation first](https://arxiv.org/abs/1807.11458).

================================================
FILE: codes/config/Bulat/archs/__init__.py
================================================
import importlib
import os
import os.path as osp

from utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_REGISTRY

arch_folder = osp.dirname(osp.abspath(__file__))
arch_filenames = [
    osp.splitext(osp.basename(v))[0]
    for v in os.listdir(arch_folder)
    if v.endswith(".py")
]
# import all the arch modules
_arch_modules = [
    importlib.import_module(f"archs.{file_name}") for file_name in arch_filenames
]


def build_network(net_opt):
    which_network = net_opt["which_network"]
    net = ARCH_REGISTRY.get(which_network)(**net_opt["setting"])
    return net


def build_loss(loss_opt):
    loss_type = loss_opt.pop("type")
    loss = LOSS_REGISTRY.get(loss_type)(**loss_opt)
    return loss

def build_scheduler(optimizer, scheduler_opt):
    scheduler_type = scheduler_opt.pop("type")
    scheduler = LR_SCHEDULER_REGISTRY.get(scheduler_type)(optimizer, **scheduler_opt)
    return scheduler
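
The build helpers above resolve a class by name from a registry and instantiate it with keyword settings from the config. A minimal self-contained sketch of this pattern, with a toy `Registry` and `ToyNet` standing in for `utils.registry` and a real architecture (both hypothetical, for illustration only):

```python
# Toy registry mapping class names to classes, so a config can select an
# architecture by string, mirroring ARCH_REGISTRY / build_network above.
class Registry:
    def __init__(self):
        self._obj_map = {}

    def register(self):
        def deco(cls):
            self._obj_map[cls.__name__] = cls
            return cls
        return deco

    def get(self, name):
        return self._obj_map[name]

ARCH_REGISTRY = Registry()

@ARCH_REGISTRY.register()
class ToyNet:
    def __init__(self, nf=64):
        self.nf = nf

# Mirrors build_network(): look the class up by its config name, then
# instantiate it with the keyword settings from the config dict.
net_opt = {"which_network": "ToyNet", "setting": {"nf": 32}}
net = ARCH_REGISTRY.get(net_opt["which_network"])(**net_opt["setting"])
print(net.nf)  # 32
```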


================================================
FILE: codes/config/Bulat/archs/deg_arch.py
================================================
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

from utils.registry import ARCH_REGISTRY
from .edsr import default_conv, BasicBlock, ResBlock, Upsampler


@ARCH_REGISTRY.register()
class DegModel(nn.Module):
    def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
        super().__init__()
gitextract_gen0q67r/

├── .gitignore
├── README.md
└── codes/
    ├── config/
    │   ├── BSRGAN/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2018Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Bicubic/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── bicubic.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2020Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Bulat/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track2.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       └── psnr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── CinGAN/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   ├── cingan_model.py
    │   │   │   └── trans_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   └── sr/
    │   │   │   │       ├── 2017Track1.yml
    │   │   │   │       ├── 2018Track2.yml
    │   │   │   │       ├── 2018Track4.yml
    │   │   │   │       └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── sr/
    │   │   │       │   ├── 2017Track2.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   └── 2020Track1.yml
    │   │   │       └── trans/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── CycleSR/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   ├── cyclegan_model.py
    │   │   │   └── cyclesr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   └── sr/
    │   │   │   │       ├── 2017Track1.yml
    │   │   │   │       ├── 2018Track2.yml
    │   │   │   │       ├── 2018Track4.yml
    │   │   │   │       ├── 2020Track1.yml
    │   │   │   │       └── 2020Track1_percep.yml
    │   │   │   └── train/
    │   │   │       ├── sr/
    │   │   │       │   └── psnr/
    │   │   │       │       ├── 2017Track2.yml
    │   │   │       │       ├── 2018Track2.yml
    │   │   │       │       ├── 2018Track4.yml
    │   │   │       │       └── 2020Track1.yml
    │   │   │       └── trans/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── DSGANSR/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track1.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── deg/
    │   │   │       │   ├── 2017Track2.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   └── 2020Track1.yml
    │   │   │       └── sr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── EDSR/
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── bicubic.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── sr_model.py
    │   │   ├── options/
    │   │   │   └── test/
    │   │   │       ├── 2017Track2_2020Track1.yml
    │   │   │       ├── 2018Track2_2020Track4.yml
    │   │   │       └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── Maeda/
    │   │   ├── README.md
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   ├── translator.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── pseudo_supervision_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track2.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   └── 2020Track1.yml
    │   │   │   └── train/
    │   │   │       ├── 2017Track2.yml
    │   │   │       ├── 2018Track2.yml
    │   │   │       ├── 2018Track4.yml
    │   │   │       └── 2020Track1.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   ├── PDM-SR/
    │   │   ├── archs/
    │   │   │   ├── __init__.py
    │   │   │   ├── deg_arch.py
    │   │   │   ├── discriminator.py
    │   │   │   ├── edsr.py
    │   │   │   ├── loss.py
    │   │   │   ├── lr_scheduler.py
    │   │   │   ├── module_util.py
    │   │   │   ├── rcan.py
    │   │   │   ├── rrdb.py
    │   │   │   ├── srresnet.py
    │   │   │   └── vgg.py
    │   │   ├── count_flops.py
    │   │   ├── inference.py
    │   │   ├── models/
    │   │   │   ├── __init__.py
    │   │   │   ├── base_model.py
    │   │   │   └── deg_sr_model.py
    │   │   ├── options/
    │   │   │   ├── test/
    │   │   │   │   ├── 2017Track1.yml
    │   │   │   │   ├── 2018Track2.yml
    │   │   │   │   ├── 2018Track4.yml
    │   │   │   │   ├── 2020Track1.yml
    │   │   │   │   └── 2020Track2.yml
    │   │   │   └── train/
    │   │   │       ├── deg/
    │   │   │       │   ├── 2017Track1.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   ├── 2020Track1.yml
    │   │   │       │   └── 2020Track2.yml
    │   │   │       ├── percep/
    │   │   │       │   ├── 2017Track1.yml
    │   │   │       │   ├── 2018Track2.yml
    │   │   │       │   ├── 2018Track4.yml
    │   │   │       │   ├── 2020Track1.yml
    │   │   │       │   └── 2020Track2.yml
    │   │   │       └── psnr/
    │   │   │           ├── 2017Track2.yml
    │   │   │           ├── 2018Track2.yml
    │   │   │           ├── 2018Track4.yml
    │   │   │           ├── 2020Track1.yml
    │   │   │           └── 2020Track2.yml
    │   │   ├── test.py
    │   │   └── train.py
    │   └── RealESRGAN/
    │       ├── README.md
    │       ├── archs/
    │       │   ├── __init__.py
    │       │   ├── discriminator.py
    │       │   ├── edsr.py
    │       │   ├── loss.py
    │       │   ├── lr_scheduler.py
    │       │   ├── module_util.py
    │       │   ├── rcan.py
    │       │   ├── rrdb.py
    │       │   ├── srresnet.py
    │       │   ├── translator.py
    │       │   └── vgg.py
    │       ├── count_flops.py
    │       ├── inference.py
    │       ├── models/
    │       │   ├── __init__.py
    │       │   ├── base_model.py
    │       │   └── sr_model.py
    │       ├── options/
    │       │   └── test/
    │       │       ├── 2017Track2_2020Track1.yml
    │       │       ├── 2018Track2_2018Track4.yml
    │       │       └── 2020Track2.yml
    │       ├── test.py
    │       └── train.py
    ├── data/
    │   ├── __init__.py
    │   ├── data_sampler.py
    │   ├── debug_dataset.py
    │   ├── fixed_image_dataset.py
    │   ├── paired_ref_dataset.py
    │   ├── paried_dataset.py
    │   ├── single_dataset.py
    │   ├── single_image_dataset.py
    │   └── unpaired_dataset.py
    ├── metrics/
    │   ├── __init__.py
    │   ├── best_psnr.py
    │   ├── measure.py
    │   ├── psnr.py
    │   └── ssim.py
    ├── scripts/
    │   ├── create_lmdb.py
    │   ├── extract_subimgs_single.py
    │   ├── generate_mod_LR_bic.m
    │   ├── generate_mod_LR_bic.py
    │   ├── generate_mod_blur_LR_bic.py
    │   └── test_imgs.py
    └── utils/
        ├── __init__.py
        ├── data_utils.py
        ├── deg_utils.py
        ├── file_utils.py
        ├── img_utils.py
        ├── option.py
        ├── registry.py
        └── resize_utils.py
SYMBOL INDEX (1926 symbols across 189 files)

FILE: codes/config/BSRGAN/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/BSRGAN/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/BSRGAN/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/BSRGAN/archs/loss.py
  class GaussGuided (line 12) | class GaussGuided(nn.Module):
    method __init__ (line 13) | def __init__(self, ksize, sigma):
    method forward (line 24) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 29) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 30) | def __init__(self, net="alex", normalize=True):
    method forward (line 38) | def forward(self, res, ref):
  class MSELoss (line 43) | class MSELoss(nn.Module):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method forward (line 47) | def forward(self, res, ref):
  class L1Loss (line 52) | class L1Loss(nn.Module):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method forward (line 56) | def forward(self, res, ref):
  class GANLoss (line 61) | class GANLoss(nn.Module):
    method __init__ (line 69) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 88) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 98) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 112) | def get_target_label(self, input, target_is_real):
    method forward (line 127) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 152) | class PerceptualLoss(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 205) | def forward(self, x, gt):
    method _gram_mat (line 262) | def _gram_mat(self, x):
  class CharbonnierLoss (line 277) | class CharbonnierLoss(nn.Module):
    method __init__ (line 280) | def __init__(self, eps=1e-6):
    method forward (line 284) | def forward(self, x, y):
  class GradientPenaltyLoss (line 290) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 291) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 296) | def get_grad_outputs(self, input):
    method forward (line 301) | def forward(self, interp, interp_crit):

FILE: codes/config/BSRGAN/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/BSRGAN/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/BSRGAN/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/BSRGAN/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/BSRGAN/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/BSRGAN/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, in_nc, out_nc, nf, nb, scale=4, conv=default_conv):
    method forward (line 134) | def forward(self, x):

FILE: codes/config/BSRGAN/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/BSRGAN/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/BSRGAN/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/BSRGAN/models/sr_model.py
  class SRModel (line 15) | class SRModel(BaseModel):
    method __init__ (line 16) | def __init__(self, opt):
    method feed_data (line 42) | def feed_data(self, data):
    method forward (line 47) | def forward(self):
    method optimize_parameters (line 51) | def optimize_parameters(self, step):
    method calculate_rgan_loss_D (line 96) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 111) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 122) | def test(self, data, crop_size=None):
    method crop_test (line 132) | def crop_test(self, lr, crop_size):
    method get_current_visuals (line 180) | def get_current_visuals(self, need_GT=True):

FILE: codes/config/BSRGAN/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/BSRGAN/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/Bicubic/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/Bicubic/archs/bicubic.py
  class BicuBic (line 13) | class BicuBic(nn.Module):
    method __init__ (line 14) | def __init__(self, upscale=4):
    method forward (line 20) | def forward(self, x):

FILE: codes/config/Bicubic/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/Bicubic/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/Bicubic/archs/loss.py
  class GaussGuided (line 12) | class GaussGuided(nn.Module):
    method __init__ (line 13) | def __init__(self, ksize, sigma):
    method forward (line 24) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 29) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 30) | def __init__(self, net="alex", normalize=True):
    method forward (line 38) | def forward(self, res, ref):
  class MSELoss (line 43) | class MSELoss(nn.Module):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method forward (line 47) | def forward(self, res, ref):
  class L1Loss (line 52) | class L1Loss(nn.Module):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method forward (line 56) | def forward(self, res, ref):
  class GANLoss (line 61) | class GANLoss(nn.Module):
    method __init__ (line 69) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 88) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 98) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 112) | def get_target_label(self, input, target_is_real):
    method forward (line 127) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 152) | class PerceptualLoss(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 205) | def forward(self, x, gt):
    method _gram_mat (line 262) | def _gram_mat(self, x):
  class CharbonnierLoss (line 277) | class CharbonnierLoss(nn.Module):
    method __init__ (line 280) | def __init__(self, eps=1e-6):
    method forward (line 284) | def forward(self, x, y):
  class GradientPenaltyLoss (line 290) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 291) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 296) | def get_grad_outputs(self, input):
    method forward (line 301) | def forward(self, interp, interp_crit):

FILE: codes/config/Bicubic/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/Bicubic/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/Bicubic/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/Bicubic/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/Bicubic/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/Bicubic/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/Bicubic/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/Bicubic/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/Bicubic/models/sr_model.py
  class SRModel (line 15) | class SRModel(BaseModel):
    method __init__ (line 16) | def __init__(self, opt):
    method feed_data (line 42) | def feed_data(self, data):
    method forward (line 47) | def forward(self):
    method optimize_parameters (line 51) | def optimize_parameters(self, step):
    method calculate_rgan_loss_D (line 96) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 111) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 122) | def test(self, data, crop_size=None):
    method crop_test (line 132) | def crop_test(self, lr, crop_size):
    method get_current_visuals (line 180) | def get_current_visuals(self, need_GT=True):

FILE: codes/config/Bicubic/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/Bicubic/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/Bulat/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/Bulat/archs/deg_arch.py
  class DegModel (line 13) | class DegModel(nn.Module):
    method __init__ (line 14) | def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
    method forward (line 44) | def forward(self, x):

FILE: codes/config/Bulat/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=2, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/Bulat/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/Bulat/archs/loss.py
  class ColorLoss (line 11) | class ColorLoss(nn.Module):
    method __init__ (line 12) | def __init__(self, gauss_opt=None, pool_opt=None, stride=1, recursion=...
    method forward (line 38) | def forward(self, src, tgt):
  class GaussGuided (line 51) | class GaussGuided(nn.Module):
    method __init__ (line 52) | def __init__(self, ksize, sigma):
    method forward (line 63) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 68) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 69) | def __init__(self, net="alex", normalize=True):
    method forward (line 77) | def forward(self, res, ref):
  class MSELoss (line 82) | class MSELoss(nn.Module):
    method __init__ (line 83) | def __init__(self, *args, **kwargs):
    method forward (line 86) | def forward(self, res, ref):
  class L1Loss (line 91) | class L1Loss(nn.Module):
    method __init__ (line 92) | def __init__(self, *args, **kwargs):
    method forward (line 95) | def forward(self, res, ref):
  class GANLoss (line 100) | class GANLoss(nn.Module):
    method __init__ (line 108) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 127) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 137) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 151) | def get_target_label(self, input, target_is_real):
    method forward (line 166) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 191) | class PerceptualLoss(nn.Module):
    method __init__ (line 213) | def __init__(
    method forward (line 244) | def forward(self, x, gt):
    method _gram_mat (line 301) | def _gram_mat(self, x):
  class CharbonnierLoss (line 316) | class CharbonnierLoss(nn.Module):
    method __init__ (line 319) | def __init__(self, eps=1e-6):
    method forward (line 323) | def forward(self, x, y):
  class GradientPenaltyLoss (line 329) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 330) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 335) | def get_grad_outputs(self, input):
    method forward (line 340) | def forward(self, interp, interp_crit):

FILE: codes/config/Bulat/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/Bulat/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/Bulat/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):
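
RCAN's `CALayer` is squeeze-and-excitation style channel attention: global average pooling per channel, a bottleneck narrowed by `reduction`, then a sigmoid gate that rescales each channel. A scalar sketch of the gating arithmetic, where the two toy weights stand in for the bottleneck's pair of 1x1 convolutions:

```python
import math

def channel_attention(channels, weights_down, weights_up):
    """Gate each channel by sigmoid(up(relu(down(avg_pool(channel))))).

    channels: list of per-channel feature lists.
    weights_down / weights_up: toy scalars standing in for the two 1x1
    convolutions of the reduction bottleneck.
    """
    gated = []
    for feats, w_d, w_u in zip(channels, weights_down, weights_up):
        avg = sum(feats) / len(feats)                  # global average pool
        hidden = max(0.0, w_d * avg)                   # 1x1 conv + ReLU
        gate = 1.0 / (1.0 + math.exp(-w_u * hidden))   # 1x1 conv + sigmoid
        gated.append([f * gate for f in feats])        # rescale the channel
    return gated
```

The gate lies in (0, 1), so attention can only attenuate channels relative to each other, never amplify beyond the input.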

FILE: codes/config/Bulat/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):
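
The `RRDB` block listed here is the residual-in-residual dense block: each inner dense block is applied with a scaled residual connection, and the whole chain gets one more scaled residual from the input (the 0.2 scale is the value used in the reference ESRGAN design; treat this as a sketch of the structure, not the repo's exact code):

```python
def rrdb(x, dense_blocks, scale=0.2):
    """Residual-in-residual dense block over scalar features.

    Each inner block applies a scaled residual (as ResidualDenseBlock_5C
    does), then the chained output is residually added back to the input.
    """
    y = x
    for block in dense_blocks:
        y = y + scale * block(y)   # inner: x + scale * dense(x)
    return x + scale * y           # outer residual over the chained output
```

The small residual scale keeps early training stable by making each block start close to an identity mapping.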

FILE: codes/config/Bulat/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):
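
MSRResNet (and the `Upsampler` helpers in the other arch files) upscale with pixel shuffle, applying one x2 shuffle per factor of two, so `upscale=4` needs two stages. The depth-to-space rearrangement itself can be sketched for a single output channel:

```python
def pixel_shuffle(channels, r):
    """Depth-to-space: r*r channels of H x W become one channel of rH x rW.

    channels: list of r*r 2-D lists (each H x W).
    """
    h = len(channels[0])
    w = len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for c, plane in enumerate(channels):
        dy, dx = divmod(c, r)  # sub-pixel offset encoded by the channel index
        for y in range(h):
            for x in range(w):
                out[y * r + dy][x * r + dx] = plane[y][x]
    return out
```

Each group of r*r input channels contributes the r*r sub-pixel positions of one output pixel block, which is why the conv before a x2 shuffle must emit 4x the feature channels.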

FILE: codes/config/Bulat/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
    method forward (line 138) | def forward(self, x):

FILE: codes/config/Bulat/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/Bulat/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/Bulat/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):
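
`BaseModel.update_learning_rate(cur_iter, warmup_iter=-1)` follows the usual pattern: step the schedulers, then, while `cur_iter < warmup_iter`, override each group's LR with a linear ramp from zero up to its initial value. The ramp itself is one line of arithmetic (a sketch; the actual method also writes the values back into the optimizers):

```python
def warmup_lr(init_lrs, cur_iter, warmup_iter):
    """Linearly scale each group's initial LR during warm-up iterations."""
    if warmup_iter <= 0 or cur_iter >= warmup_iter:
        return init_lrs  # warm-up disabled (the -1 default) or finished
    scale = cur_iter / warmup_iter
    return [lr * scale for lr in init_lrs]
```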

FILE: codes/config/Bulat/models/deg_sr_model.py
  class DegSRModel (line 16) | class DegSRModel(BaseModel):
    method __init__ (line 17) | def __init__(self, opt):
    method feed_data (line 64) | def feed_data(self, data):
    method forward (line 69) | def forward(self):
    method optimize_parameters (line 75) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 169) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 179) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 186) | def test(self, data):
    method get_current_visuals (line 193) | def get_current_visuals(self, need_GT=True):
  class ShuffleBuffer (line 200) | class ShuffleBuffer():
    method __init__ (line 206) | def __init__(self, buffer_size):
    method choose (line 215) | def choose(self, images, prob=0.5):
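
The `ShuffleBuffer` that appears alongside several of these GAN models is the CycleGAN-style image pool: fresh fakes fill the buffer until `buffer_size`, after which each incoming image is, with probability `prob`, swapped for a previously stored one, decorrelating the discriminator's batches from the generator's latest outputs. A sketch of the policy (the method shape is assumed from the listing):

```python
import random

class ShuffleBuffer:
    """History buffer of generated images, CycleGAN image-pool style."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.images = []

    def choose(self, images, prob=0.5):
        """Return a mix of incoming images and previously buffered ones."""
        out = []
        for image in images:
            if len(self.images) < self.buffer_size:
                self.images.append(image)       # still filling: pass through
                out.append(image)
            elif random.random() < prob:
                idx = random.randrange(self.buffer_size)
                out.append(self.images[idx])    # emit an old image...
                self.images[idx] = image        # ...and store the new one
            else:
                out.append(image)               # pass through unchanged
        return out
```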

FILE: codes/config/Bulat/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/Bulat/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/CinGAN/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/CinGAN/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/CinGAN/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/CinGAN/archs/loss.py
  class TVLoss (line 11) | class TVLoss(nn.Module):
    method __init__ (line 12) | def __init__(self, penealty="L1Loss"):
    method forward (line 16) | def forward(self, pred):
  class MSELoss (line 26) | class MSELoss(nn.Module):
    method __init__ (line 27) | def __init__(self, *args, **kwargs):
    method forward (line 30) | def forward(self, res, ref):
  class L1Loss (line 35) | class L1Loss(nn.Module):
    method __init__ (line 36) | def __init__(self, *args, **kwargs):
    method forward (line 39) | def forward(self, res, ref):
  class GANLoss (line 44) | class GANLoss(nn.Module):
    method __init__ (line 52) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 71) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 81) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 95) | def get_target_label(self, input, target_is_real):
    method forward (line 110) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 135) | class PerceptualLoss(nn.Module):
    method __init__ (line 157) | def __init__(
    method forward (line 188) | def forward(self, x, gt):
    method _gram_mat (line 245) | def _gram_mat(self, x):
  class CharbonnierLoss (line 260) | class CharbonnierLoss(nn.Module):
    method __init__ (line 263) | def __init__(self, eps=1e-6):
    method forward (line 267) | def forward(self, x, y):
  class GradientPenaltyLoss (line 273) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 274) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 279) | def get_grad_outputs(self, input):
    method forward (line 284) | def forward(self, interp, interp_crit):
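
Among the losses listed here, `CharbonnierLoss` is the differentiable L1 variant sqrt(diff^2 + eps^2): smooth near zero, L1-like for large errors. A scalar sketch (the listing shows `eps=1e-6`; whether eps is squared inside the root is an implementation detail of the repo's version):

```python
import math

def charbonnier(pred, target, eps=1e-6):
    """Mean Charbonnier penalty over paired scalar values."""
    return sum(
        math.sqrt((p - t) ** 2 + eps * eps) for p, t in zip(pred, target)
    ) / len(pred)
```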

FILE: codes/config/CinGAN/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/CinGAN/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/CinGAN/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/CinGAN/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/CinGAN/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/CinGAN/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
    method forward (line 137) | def forward(self, x):

FILE: codes/config/CinGAN/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/CinGAN/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/CinGAN/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/CinGAN/models/cingan_model.py
  class CinGANModel (line 16) | class CinGANModel(BaseModel):
    method __init__ (line 17) | def __init__(self, opt):
    method feed_data (line 60) | def feed_data(self, data):
    method foward_trans (line 66) | def foward_trans(self):
    method forward_sr (line 70) | def forward_sr(self):
    method optimize_parameters (line 76) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 164) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 174) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method calculate_rgan_loss_D (line 181) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 196) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 207) | def test(self, data):
    method get_current_visuals (line 215) | def get_current_visuals(self, need_GT=True):
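
The `calculate_rgan_loss_D/G` methods in this model follow the relativistic average GAN formulation: the discriminator scores each sample relative to the mean score of the opposite set, then binary cross-entropy is applied to the relativistic logits. A discriminator-side sketch over raw scores (all names here are illustrative, not the repo's):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, label):
    """Binary cross-entropy on a probability, clamped for stability."""
    p = min(max(p, 1e-7), 1 - 1e-7)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def rgan_loss_d(d_real, d_fake):
    """Relativistic average D loss: reals should beat the mean fake score,
    fakes should fall below the mean real score."""
    mean_fake = sum(d_fake) / len(d_fake)
    mean_real = sum(d_real) / len(d_real)
    loss_real = sum(bce(sigmoid(r - mean_fake), 1.0) for r in d_real) / len(d_real)
    loss_fake = sum(bce(sigmoid(f - mean_real), 0.0) for f in d_fake) / len(d_fake)
    return 0.5 * (loss_real + loss_fake)
```

The generator-side loss swaps the labels, pushing fakes above the mean real score, which gives the generator gradients even when the discriminator is confident.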

FILE: codes/config/CinGAN/models/trans_model.py
  class TransModel (line 16) | class TransModel(BaseModel):
    method __init__ (line 17) | def __init__(self, opt):
    method feed_data (line 57) | def feed_data(self, data):
    method forward (line 62) | def forward(self):
    method optimize_parameters (line 67) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 120) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 130) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method calculate_rgan_loss_D (line 137) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 152) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 163) | def test(self, data):
    method get_current_visuals (line 170) | def get_current_visuals(self, need_GT=True):
  class ShuffleBuffer (line 176) | class ShuffleBuffer():
    method __init__ (line 182) | def __init__(self, buffer_size):
    method choose (line 191) | def choose(self, images, prob=0.5):

FILE: codes/config/CinGAN/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/CinGAN/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/CycleSR/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/CycleSR/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/CycleSR/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/CycleSR/archs/loss.py
  class GaussGuided (line 12) | class GaussGuided(nn.Module):
    method __init__ (line 13) | def __init__(self, ksize, sigma):
    method forward (line 24) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 29) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 30) | def __init__(self, net="alex", normalize=True):
    method forward (line 38) | def forward(self, res, ref):
  class MSELoss (line 43) | class MSELoss(nn.Module):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method forward (line 47) | def forward(self, res, ref):
  class L1Loss (line 52) | class L1Loss(nn.Module):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method forward (line 56) | def forward(self, res, ref):
  class GANLoss (line 61) | class GANLoss(nn.Module):
    method __init__ (line 69) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 88) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 98) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 112) | def get_target_label(self, input, target_is_real):
    method forward (line 127) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 152) | class PerceptualLoss(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 205) | def forward(self, x, gt):
    method _gram_mat (line 262) | def _gram_mat(self, x):
  class CharbonnierLoss (line 277) | class CharbonnierLoss(nn.Module):
    method __init__ (line 280) | def __init__(self, eps=1e-6):
    method forward (line 284) | def forward(self, x, y):
  class GradientPenaltyLoss (line 290) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 291) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 296) | def get_grad_outputs(self, input):
    method forward (line 301) | def forward(self, interp, interp_crit):

FILE: codes/config/CycleSR/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/CycleSR/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/CycleSR/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/CycleSR/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/CycleSR/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/CycleSR/archs/translator.py
  class Translator (line 13) | class Translator(nn.Module):
    method __init__ (line 14) | def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/CycleSR/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/CycleSR/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/CycleSR/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/CycleSR/models/cyclegan_model.py
  class CycleGANModel (line 16) | class CycleGANModel(BaseModel):
    method __init__ (line 17) | def __init__(self, opt):
    method feed_data (line 61) | def feed_data(self, data):
    method forward (line 66) | def forward(self):
    method optimize_parameters (line 73) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 144) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 154) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 161) | def test(self, data):
    method get_current_visuals (line 168) | def get_current_visuals(self, need_GT=True):
  class ShuffleBuffer (line 174) | class ShuffleBuffer():
    method __init__ (line 180) | def __init__(self, buffer_size):
    method choose (line 189) | def choose(self, images, prob=0.5):

FILE: codes/config/CycleSR/models/cyclesr_model.py
  class CycleSRModel (line 16) | class CycleSRModel(BaseModel):
    method __init__ (line 17) | def __init__(self, opt):
    method feed_data (line 64) | def feed_data(self, data):
    method forward_trans (line 70) | def forward_trans(self):
    method forward_sr (line 79) | def forward_sr(self):
    method optimize_trans_models (line 84) | def optimize_trans_models(self, step, loss_dict):
    method optimize_sr_models (line 155) | def optimize_sr_models(self, step, loss_dict):
    method optimize_parameters (line 204) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 213) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 223) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 230) | def test(self, data):
    method get_current_visuals (line 237) | def get_current_visuals(self, need_GT=True):

FILE: codes/config/CycleSR/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/CycleSR/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/DSGANSR/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/DSGANSR/archs/deg_arch.py
  class ResBlock (line 9) | class ResBlock(nn.Module):
    method __init__ (line 10) | def __init__(self, nf, ksize, norm=nn.BatchNorm2d, act=nn.ReLU):
    method forward (line 20) | def forward(self, x):
  class DegModel (line 25) | class DegModel(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 93) | def forward(self, inp):

FILE: codes/config/DSGANSR/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/DSGANSR/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/DSGANSR/archs/loss.py
  class ColorLoss (line 11) | class ColorLoss(nn.Module):
    method __init__ (line 12) | def __init__(self, ksize=5, sigma=None, stride=1, recursion=1, loss_ty...
    method forward (line 31) | def forward(self, src, tgt):
  class GaussGuided (line 41) | class GaussGuided(nn.Module):
    method __init__ (line 42) | def __init__(self, ksize, sigma):
    method forward (line 53) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 58) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 59) | def __init__(self, net="alex", normalize=True):
    method forward (line 67) | def forward(self, res, ref):
  class MSELoss (line 72) | class MSELoss(nn.Module):
    method __init__ (line 73) | def __init__(self, *args, **kwargs):
    method forward (line 76) | def forward(self, res, ref):
  class L1Loss (line 81) | class L1Loss(nn.Module):
    method __init__ (line 82) | def __init__(self, *args, **kwargs):
    method forward (line 85) | def forward(self, res, ref):
  class GANLoss (line 90) | class GANLoss(nn.Module):
    method __init__ (line 98) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 117) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 127) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 141) | def get_target_label(self, input, target_is_real):
    method forward (line 156) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 181) | class PerceptualLoss(nn.Module):
    method __init__ (line 203) | def __init__(
    method forward (line 234) | def forward(self, x, gt):
    method _gram_mat (line 291) | def _gram_mat(self, x):
  class CharbonnierLoss (line 306) | class CharbonnierLoss(nn.Module):
    method __init__ (line 309) | def __init__(self, eps=1e-6):
    method forward (line 313) | def forward(self, x, y):
  class GradientPenaltyLoss (line 319) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 320) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 325) | def get_grad_outputs(self, input):
    method forward (line 330) | def forward(self, interp, interp_crit):

FILE: codes/config/DSGANSR/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):
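The `CosineAnnealingRestartLR` entry above suggests the BasicSR-style cosine schedule with warm restarts, where `periods` gives the length of each cosine cycle and `restart_weights` scales the peak learning rate of each cycle. The following standalone function is a hypothetical sketch of that schedule under those assumptions; the actual class subclasses `_LRScheduler` and tracks iteration state internally.

```python
import math

def cosine_annealing_restart_lr(base_lr, cur_iter, periods,
                                restart_weights, eta_min=1e-7):
    """LR at `cur_iter` under cosine annealing with warm restarts (sketch)."""
    # Cumulative end-points of the cycles, e.g. [100, 200] for periods [100, 100].
    cumulative = [sum(periods[: i + 1]) for i in range(len(periods))]
    # Locate the cycle containing cur_iter.
    idx = next(i for i, end in enumerate(cumulative) if cur_iter < end)
    nearest_restart = 0 if idx == 0 else cumulative[idx - 1]
    weight = restart_weights[idx]
    period = periods[idx]
    # Cosine decay from weight * base_lr down to eta_min within the cycle.
    return eta_min + 0.5 * (weight * base_lr - eta_min) * (
        1 + math.cos(math.pi * (cur_iter - nearest_restart) / period)
    )
```

At each restart the LR jumps back up to `restart_weights[idx] * base_lr`, which lets later cycles anneal from a lower peak.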

FILE: codes/config/DSGANSR/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/DSGANSR/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/DSGANSR/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/DSGANSR/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/DSGANSR/archs/translator.py
  class Translator (line 13) | class Translator(nn.Module):
    method __init__ (line 14) | def __init__(self, nb, nf, scale=4, zero_tail=False, conv=default_conv):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/DSGANSR/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/DSGANSR/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/DSGANSR/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/DSGANSR/models/deg_sr_model.py
  class Quant (line 14) | class Quant(torch.autograd.Function):
    method forward (line 17) | def forward(ctx, input):
    method backward (line 23) | def backward(ctx, grad_output):
  class Quantization (line 26) | class Quantization(nn.Module):
    method __init__ (line 27) | def __init__(self):
    method forward (line 30) | def forward(self, input):
  class DegSRModel (line 35) | class DegSRModel(BaseModel):
    method __init__ (line 36) | def __init__(self, opt):
    method feed_data (line 86) | def feed_data(self, data):
    method encoder_forward (line 91) | def encoder_forward(self):
    method decoder_forward (line 94) | def decoder_forward(self):
    method optimize_trans_models (line 100) | def optimize_trans_models(self, loss_dict, step):
    method optimize_sr_models (line 158) | def optimize_sr_models(self, loss_dict, step):
    method optimize_parameters (line 214) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 227) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 237) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 244) | def test(self, test_data):
    method get_current_visuals (line 258) | def get_current_visuals(self, need_GT=True):
  class ShuffleBuffer (line 267) | class ShuffleBuffer():
    method __init__ (line 273) | def __init__(self, buffer_size):
    method choose (line 282) | def choose(self, images, prob=0.5):
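The `Quant`/`Quantization` pair in `deg_sr_model.py` is the standard straight-through quantizer: the forward pass snaps values to discrete 8-bit levels (so generated LR images look like real quantized images), while the backward pass copies the gradient through unchanged, since rounding has zero gradient almost everywhere. A minimal scalar sketch of the two halves, as hypothetical plain-Python functions rather than the repo's `torch.autograd.Function`:

```python
def quant_forward(x):
    """Forward: quantize a value in [0, 1] to one of 256 uniform levels."""
    return round(x * 255.0) / 255.0

def quant_backward(grad_output):
    """Backward (straight-through estimator): pass the gradient unchanged."""
    return grad_output
```

In torch, the same idea is expressed by a custom `autograd.Function` whose `backward` returns `grad_output` as-is, wrapped in an `nn.Module` for use inside the model.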

FILE: codes/config/DSGANSR/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/DSGANSR/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/EDSR/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/EDSR/archs/bicubic.py
  class BicuBic (line 13) | class BicuBic(nn.Module):
    method __init__ (line 14) | def __init__(self, upscale=4):
    method forward (line 20) | def forward(self, x):

FILE: codes/config/EDSR/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/EDSR/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/EDSR/archs/loss.py
  class GaussGuided (line 12) | class GaussGuided(nn.Module):
    method __init__ (line 13) | def __init__(self, ksize, sigma):
    method forward (line 24) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 29) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 30) | def __init__(self, net="alex", normalize=True):
    method forward (line 38) | def forward(self, res, ref):
  class MSELoss (line 43) | class MSELoss(nn.Module):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method forward (line 47) | def forward(self, res, ref):
  class L1Loss (line 52) | class L1Loss(nn.Module):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method forward (line 56) | def forward(self, res, ref):
  class GANLoss (line 61) | class GANLoss(nn.Module):
    method __init__ (line 69) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 88) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 98) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 112) | def get_target_label(self, input, target_is_real):
    method forward (line 127) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 152) | class PerceptualLoss(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 205) | def forward(self, x, gt):
    method _gram_mat (line 262) | def _gram_mat(self, x):
  class CharbonnierLoss (line 277) | class CharbonnierLoss(nn.Module):
    method __init__ (line 280) | def __init__(self, eps=1e-6):
    method forward (line 284) | def forward(self, x, y):
  class GradientPenaltyLoss (line 290) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 291) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 296) | def get_grad_outputs(self, input):
    method forward (line 301) | def forward(self, interp, interp_crit):
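Among the losses indexed above, `CharbonnierLoss` (with `eps=1e-6`) is a smooth L1 variant commonly used in SR. A hypothetical pure-Python sketch over flat lists; note that implementations differ on whether `eps` enters the radicand squared or not, so the exact constant here is an assumption:

```python
import math

def charbonnier_loss(x, y, eps=1e-6):
    """Mean Charbonnier penalty: sqrt((x - y)^2 + eps^2), a smooth L1."""
    return sum(
        math.sqrt((a - b) ** 2 + eps ** 2) for a, b in zip(x, y)
    ) / len(x)
```

Near zero the penalty behaves like L2 (differentiable everywhere), while for large residuals it grows linearly like L1, making it more robust to outliers than MSE.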

FILE: codes/config/EDSR/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/EDSR/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/EDSR/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/EDSR/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/EDSR/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/EDSR/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, in_nc, out_nc, nf, nb, scale=4, conv=default_conv):
    method forward (line 134) | def forward(self, x):

FILE: codes/config/EDSR/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/EDSR/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/EDSR/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/EDSR/models/sr_model.py
  class SRModel (line 15) | class SRModel(BaseModel):
    method __init__ (line 16) | def __init__(self, opt):
    method feed_data (line 42) | def feed_data(self, data):
    method forward (line 47) | def forward(self):
    method optimize_parameters (line 51) | def optimize_parameters(self, step):
    method calculate_rgan_loss_D (line 96) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 111) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 122) | def test(self, data, crop_size=None):
    method crop_test (line 132) | def crop_test(self, lr, crop_size):
    method get_current_visuals (line 180) | def get_current_visuals(self, need_GT=True):

FILE: codes/config/EDSR/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/EDSR/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/Maeda/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/Maeda/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/Maeda/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/Maeda/archs/loss.py
  class TVLoss (line 11) | class TVLoss(nn.Module):
    method __init__ (line 12) | def __init__(self, penealty="L1Loss"):
    method forward (line 16) | def forward(self, pred):
  class MSELoss (line 26) | class MSELoss(nn.Module):
    method __init__ (line 27) | def __init__(self, *args, **kwargs):
    method forward (line 30) | def forward(self, res, ref):
  class L1Loss (line 35) | class L1Loss(nn.Module):
    method __init__ (line 36) | def __init__(self, *args, **kwargs):
    method forward (line 39) | def forward(self, res, ref):
  class GANLoss (line 44) | class GANLoss(nn.Module):
    method __init__ (line 52) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 71) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 81) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 95) | def get_target_label(self, input, target_is_real):
    method forward (line 110) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 135) | class PerceptualLoss(nn.Module):
    method __init__ (line 157) | def __init__(
    method forward (line 188) | def forward(self, x, gt):
    method _gram_mat (line 245) | def _gram_mat(self, x):
  class CharbonnierLoss (line 260) | class CharbonnierLoss(nn.Module):
    method __init__ (line 263) | def __init__(self, eps=1e-6):
    method forward (line 267) | def forward(self, x, y):
  class GradientPenaltyLoss (line 273) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 274) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 279) | def get_grad_outputs(self, input):
    method forward (line 284) | def forward(self, interp, interp_crit):

FILE: codes/config/Maeda/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/Maeda/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/Maeda/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):
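`MeanShift` subclasses `nn.Conv2d` only to hard-code a fixed 1x1 convolution that subtracts (`sign=-1`) or re-adds (`sign=+1`) the dataset RGB mean, scaled by `rgb_range` and `rgb_std`. As a per-pixel sketch (the weight/bias layout follows the standard EDSR implementation and is an assumption here):

```python
def mean_shift(rgb, rgb_mean, rgb_std, rgb_range=1.0, sign=-1):
    """Per-channel normalisation as a plain function.

    The MeanShift conv computes out_c = (in_c + sign * rgb_range * mean_c) / std_c,
    i.e. sign=-1 normalises the input and sign=+1 undoes it after the network.
    """
    return [(c + sign * rgb_range * m) / s
            for c, m, s in zip(rgb, rgb_mean, rgb_std)]
```

With `rgb_std = (1, 1, 1)` (the common EDSR setting) the `sign=+1` call exactly inverts the `sign=-1` call.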

FILE: codes/config/Maeda/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/Maeda/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/Maeda/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, nb, nf, noise_nf=0, scale=4, zero_tail=False, conv=...
    method forward (line 138) | def forward(self, x):

FILE: codes/config/Maeda/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/Maeda/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/Maeda/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):

FILE: codes/config/Maeda/models/pseudo_supervision_model.py
  class PseudoSupModel (line 15) | class PseudoSupModel(BaseModel):
    method __init__ (line 16) | def __init__(self, opt):
    method feed_data (line 58) | def feed_data(self, data):
    method forward (line 63) | def forward(self):
    method optimize_parameters (line 74) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 159) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 169) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 176) | def test(self, data):
    method get_current_visuals (line 184) | def get_current_visuals(self, need_GT=True):

FILE: codes/config/Maeda/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/Maeda/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/PDM-SR/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/PDM-SR/archs/deg_arch.py
  class ResBlock (line 9) | class ResBlock(nn.Module):
    method __init__ (line 10) | def __init__(self, nf, ksize, norm=nn.BatchNorm2d, act=nn.ReLU):
    method forward (line 20) | def forward(self, x):
  class Quantization (line 23) | class Quantization(nn.Module):
    method __init__ (line 24) | def __init__(self, n=5):
    method forward (line 28) | def forward(self, inp):
  class KernelModel (line 36) | class KernelModel(nn.Module):
    method __init__ (line 37) | def __init__(self, opt, scale):
    method forward (line 76) | def forward(self, x):
  class NoiseModel (line 110) | class NoiseModel(nn.Module):
    method __init__ (line 111) | def __init__(self, opt, scale):
    method forward (line 148) | def forward(self, x):
  class DegModel (line 172) | class DegModel(nn.Module):
    method __init__ (line 173) | def __init__(
    method forward (line 192) | def forward(self, inp):
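`DegModel` appears to compose a learned blur (`KernelModel`), a learned noise injector (`NoiseModel`), and a `Quantization` step that snaps the degraded image back onto a discrete intensity grid. A scalar sketch of one plausible reading of `Quantization(n=5)` — the `2**n` level count and the clamp are assumptions, not read from the file:

```python
def quantize(x, n=5):
    """Round a value in [0, 1] to 2**n discrete levels.

    This mimics re-quantising a synthetically degraded image so it looks
    like a stored low-bit-depth LR image (level count is an assumption).
    """
    levels = 2 ** n
    x = min(max(x, 0.0), 1.0)            # clamp to the valid image range
    return round(x * (levels - 1)) / (levels - 1)
```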

FILE: codes/config/PDM-SR/archs/discriminator.py
  class DiscriminatorVGG128 (line 12) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 13) | def __init__(self, in_nc, nf):
    method forward (line 46) | def forward(self, x):
  class DiscriminatorVGG32 (line 69) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 70) | def __init__(self, in_nc, nf):
    method forward (line 103) | def forward(self, x):
  class PatchGANDiscriminator (line 126) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 129) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 190) | def forward(self, input):
  class UNetDiscriminatorSN (line 196) | class UNetDiscriminatorSN(nn.Module):
    method __init__ (line 199) | def __init__(self, nc, nf=64, skip_connection=True):
    method forward (line 220) | def forward(self, x):

FILE: codes/config/PDM-SR/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/PDM-SR/archs/loss.py
  class TVLoss (line 12) | class TVLoss(nn.Module):
    method __init__ (line 13) | def __init__(self, penealty="L1Loss"):
    method forward (line 17) | def forward(self, pred):
  class GaussGuided (line 26) | class GaussGuided(nn.Module):
    method __init__ (line 27) | def __init__(self, ksize, sigma):
    method forward (line 38) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 43) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 44) | def __init__(self, net="alex", normalize=True):
    method forward (line 52) | def forward(self, res, ref):
  class MSELoss (line 57) | class MSELoss(nn.Module):
    method __init__ (line 58) | def __init__(self, *args, **kwargs):
    method forward (line 61) | def forward(self, res, ref):
  class L1Loss (line 66) | class L1Loss(nn.Module):
    method __init__ (line 67) | def __init__(self, *args, **kwargs):
    method forward (line 70) | def forward(self, res, ref):
  class GANLoss (line 75) | class GANLoss(nn.Module):
    method __init__ (line 83) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 102) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 112) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 126) | def get_target_label(self, input, target_is_real):
    method forward (line 141) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 166) | class PerceptualLoss(nn.Module):
    method __init__ (line 188) | def __init__(
    method forward (line 219) | def forward(self, x, gt):
    method _gram_mat (line 276) | def _gram_mat(self, x):
  class CharbonnierLoss (line 291) | class CharbonnierLoss(nn.Module):
    method __init__ (line 294) | def __init__(self, eps=1e-6):
    method forward (line 298) | def forward(self, x, y):
  class GradientPenaltyLoss (line 304) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 305) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 310) | def get_grad_outputs(self, input):
    method forward (line 315) | def forward(self, interp, interp_crit):
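`GANLoss` dispatches on `gan_type`; the `_wgan_loss` branch reduces to a signed mean of the critic scores. A scalar sketch on plain lists, assuming the standard WGAN formulation (the tensor version would call `.mean()` on the score map instead):

```python
from statistics import mean

def wgan_loss(scores, target_is_real):
    # Real samples are pushed toward high scores, fakes toward low ones,
    # so the loss term is just the (signed) mean critic score.
    m = mean(scores)
    return -m if target_is_real else m

def wgan_d_loss(real_scores, fake_scores):
    # Critic minimises: -E[D(real)] + E[D(fake)]
    return wgan_loss(real_scores, True) + wgan_loss(fake_scores, False)

def wgan_g_loss(fake_scores):
    # Generator minimises: -E[D(fake)]
    return wgan_loss(fake_scores, True)
```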

FILE: codes/config/PDM-SR/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 32) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 33) | def __init__(
    method get_lr (line 53) | def get_lr(self):
  class CosineAnnealingRestartLR (line 69) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 70) | def __init__(
    method get_lr (line 84) | def get_lr(self):

FILE: codes/config/PDM-SR/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/PDM-SR/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/PDM-SR/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/PDM-SR/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/PDM-SR/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/PDM-SR/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/PDM-SR/models/base_model.py
  class BaseModel (line 16) | class BaseModel:
    method __init__ (line 17) | def __init__(self, opt):
    method setup_train (line 37) | def setup_train(self, train_opt):
    method feed_data (line 53) | def feed_data(self, data):
    method optimize_parameters (line 56) | def optimize_parameters(self):
    method get_current_visuals (line 59) | def get_current_visuals(self):
    method get_current_losses (line 62) | def get_current_losses(self):
    method print_network (line 65) | def print_network(self):
    method save (line 68) | def save(self, label):
    method load (line 71) | def load(self):
    method build_network (line 74) | def build_network(self, net_opt):
    method build_losses (line 88) | def build_losses(self, loss_opt):
    method build_optimizers (line 102) | def build_optimizers(self, optim_opts):
    method build_schedulers (line 127) | def build_schedulers(self, scheduler_opts):
    method model_to_device (line 142) | def model_to_device(self, net):
    method print_network (line 155) | def print_network(self, net):
    method set_optimizer (line 172) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 176) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 182) | def set_network_state(self, names, state):
    method clip_grad_norm (line 187) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 191) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 198) | def _get_init_lr(self):
    method update_learning_rate (line 205) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 219) | def get_current_learning_rate(self):
    method get_network_description (line 223) | def get_network_description(self, network):
    method save_network (line 233) | def save_network(self, network, network_label, iter_label):
    method save (line 245) | def save(self, iter_label):
    method load_network (line 249) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 264) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 275) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 290) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 315) | def get_current_log(self):
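`update_learning_rate(cur_iter, warmup_iter=-1)` in `BaseModel` presumably applies a linear warm-up on top of the schedulers, as in BasicSR. A sketch of that ramp (the "scale the initial LRs by `cur_iter / warmup_iter`" rule is an assumption from that convention):

```python
def warmup_lr(init_lrs, cur_iter, warmup_iter=-1):
    """Linear LR warm-up over the first warmup_iter steps.

    Before warmup_iter, each param group's LR ramps linearly from 0 to its
    initial value; afterwards the scheduler's own LR is used unchanged.
    """
    if warmup_iter <= 0 or cur_iter >= warmup_iter:
        return list(init_lrs)            # no warm-up active
    scale = cur_iter / warmup_iter
    return [lr * scale for lr in init_lrs]
```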

FILE: codes/config/PDM-SR/models/deg_sr_model.py
  class Quant (line 15) | class Quant(torch.autograd.Function):
    method forward (line 18) | def forward(ctx, input):
    method backward (line 24) | def backward(ctx, grad_output):
  class Quantization (line 27) | class Quantization(nn.Module):
    method __init__ (line 28) | def __init__(self):
    method forward (line 31) | def forward(self, input):
  class DegSRModel (line 36) | class DegSRModel(BaseModel):
    method __init__ (line 37) | def __init__(self, opt):
    method feed_data (line 89) | def feed_data(self, data):
    method deg_forward (line 94) | def deg_forward(self):
    method sr_forward (line 104) | def sr_forward(self):
    method optimize_trans_models (line 115) | def optimize_trans_models(self, step, loss_dict):
    method optimize_sr_models (line 176) | def optimize_sr_models(self, step, loss_dict):
    method optimize_parameters (line 230) | def optimize_parameters(self, step):
    method calculate_gan_loss_D (line 243) | def calculate_gan_loss_D(self, netD, criterion, real, fake):
    method calculate_gan_loss_G (line 253) | def calculate_gan_loss_G(self, netD, criterion, real, fake):
    method test (line 260) | def test(self, test_data, crop_size=None):
    method get_current_visuals (line 280) | def get_current_visuals(self, need_GT=True):
    method crop_test (line 288) | def crop_test(self, lr, crop_size):
  class ShuffleBuffer (line 337) | class ShuffleBuffer():
    method __init__ (line 343) | def __init__(self, buffer_size):
    method choose (line 352) | def choose(self, images, prob=0.5):
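`ShuffleBuffer.choose(images, prob=0.5)` matches the CycleGAN-style image pool: the discriminator is sometimes fed an older generated image instead of the newest one, which stabilises adversarial training. A sketch under that assumption (only the class and method names come from the listing; the internals are modeled on the CycleGAN pool):

```python
import random

class ShuffleBuffer:
    """History buffer of generated images (CycleGAN-style image pool)."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.images = []

    def choose(self, images, prob=0.5):
        out = []
        for img in images:
            if len(self.images) < self.buffer_size:
                self.images.append(img)       # fill the pool first
                out.append(img)
            elif random.random() < prob:
                idx = random.randrange(self.buffer_size)
                out.append(self.images[idx])  # hand back an old image ...
                self.images[idx] = img        # ... and bank the new one
            else:
                out.append(img)
        return out
```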

FILE: codes/config/PDM-SR/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/PDM-SR/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/config/RealESRGAN/archs/__init__.py
  function build_network (line 19) | def build_network(net_opt):
  function build_loss (line 25) | def build_loss(loss_opt):
  function build_scheduler (line 30) | def build_scheduler(optimizer, scheduler_opt):

FILE: codes/config/RealESRGAN/archs/discriminator.py
  class DiscriminatorVGG128 (line 10) | class DiscriminatorVGG128(nn.Module):
    method __init__ (line 11) | def __init__(self, in_nc, nf):
    method forward (line 44) | def forward(self, x):
  class DiscriminatorVGG32 (line 67) | class DiscriminatorVGG32(nn.Module):
    method __init__ (line 68) | def __init__(self, in_nc, nf):
    method forward (line 101) | def forward(self, x):
  class PatchGANDiscriminator (line 124) | class PatchGANDiscriminator(nn.Module):
    method __init__ (line 127) | def __init__(self, in_c, nf, nb, stride=1, norm_layer=nn.InstanceNorm2d):
    method forward (line 188) | def forward(self, input):

FILE: codes/config/RealESRGAN/archs/edsr.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(
  class BasicBlock (line 34) | class BasicBlock(nn.Sequential):
    method __init__ (line 35) | def __init__(
  class ResBlock (line 63) | class ResBlock(nn.Module):
    method __init__ (line 64) | def __init__(
    method forward (line 87) | def forward(self, x):
  class Upsampler (line 94) | class Upsampler(nn.Sequential):
    method __init__ (line 95) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 121) | def make_model(args, parent=False):
  class EDSR (line 129) | class EDSR(nn.Module):
    method __init__ (line 130) | def __init__(self, nb, nf, res_scale=0.1, upscale=4, conv=default_conv):
    method forward (line 166) | def forward(self, x):

FILE: codes/config/RealESRGAN/archs/loss.py
  class GaussGuided (line 12) | class GaussGuided(nn.Module):
    method __init__ (line 13) | def __init__(self, ksize, sigma):
    method forward (line 24) | def forward(self, kernel):
  class PerceptualLossLPIPS (line 29) | class PerceptualLossLPIPS(nn.Module):
    method __init__ (line 30) | def __init__(self, net="alex", normalize=True):
    method forward (line 38) | def forward(self, res, ref):
  class MSELoss (line 43) | class MSELoss(nn.Module):
    method __init__ (line 44) | def __init__(self, *args, **kwargs):
    method forward (line 47) | def forward(self, res, ref):
  class L1Loss (line 52) | class L1Loss(nn.Module):
    method __init__ (line 53) | def __init__(self, *args, **kwargs):
    method forward (line 56) | def forward(self, res, ref):
  class GANLoss (line 61) | class GANLoss(nn.Module):
    method __init__ (line 69) | def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0):
    method _wgan_loss (line 88) | def _wgan_loss(self, input, target):
    method _wgan_softplus_loss (line 98) | def _wgan_softplus_loss(self, input, target):
    method get_target_label (line 112) | def get_target_label(self, input, target_is_real):
    method forward (line 127) | def forward(self, input, target_is_real, is_disc=False):
  class PerceptualLoss (line 152) | class PerceptualLoss(nn.Module):
    method __init__ (line 174) | def __init__(
    method forward (line 205) | def forward(self, x, gt):
    method _gram_mat (line 262) | def _gram_mat(self, x):
  class CharbonnierLoss (line 277) | class CharbonnierLoss(nn.Module):
    method __init__ (line 280) | def __init__(self, eps=1e-6):
    method forward (line 284) | def forward(self, x, y):
  class GradientPenaltyLoss (line 290) | class GradientPenaltyLoss(nn.Module):
    method __init__ (line 291) | def __init__(self, device=torch.device("cpu")):
    method get_grad_outputs (line 296) | def get_grad_outputs(self, input):
    method forward (line 301) | def forward(self, interp, interp_crit):

FILE: codes/config/RealESRGAN/archs/lr_scheduler.py
  class LinearDecayLR (line 11) | class LinearDecayLR(_LRScheduler):
    method __init__ (line 12) | def __init__(
    method get_lr (line 24) | def get_lr(self):
  class MultiStepRestartLR (line 34) | class MultiStepRestartLR(_LRScheduler):
    method __init__ (line 35) | def __init__(
    method get_lr (line 55) | def get_lr(self):
  class CosineAnnealingRestartLR (line 72) | class CosineAnnealingRestartLR(_LRScheduler):
    method __init__ (line 73) | def __init__(
    method get_lr (line 87) | def get_lr(self):

FILE: codes/config/RealESRGAN/archs/module_util.py
  function initialize_weights (line 7) | def initialize_weights(net_l, scale=1):
  function make_layer (line 27) | def make_layer(block, n_layers):
  class ResidualBlock_noBN (line 34) | class ResidualBlock_noBN(nn.Module):
    method __init__ (line 40) | def __init__(self, nf=64):
    method forward (line 48) | def forward(self, x):
  function flow_warp (line 55) | def flow_warp(x, flow, interp_mode="bilinear", padding_mode="zeros"):

FILE: codes/config/RealESRGAN/archs/rcan.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class MeanShift (line 17) | class MeanShift(nn.Conv2d):
    method __init__ (line 18) | def __init__(self, rgb_range, rgb_mean, rgb_std, sign=-1):
  class BasicBlock (line 28) | class BasicBlock(nn.Sequential):
    method __init__ (line 29) | def __init__(
  class ResBlock (line 57) | class ResBlock(nn.Module):
    method __init__ (line 58) | def __init__(
    method forward (line 81) | def forward(self, x):
  class Upsampler (line 88) | class Upsampler(nn.Sequential):
    method __init__ (line 89) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  function make_model (line 113) | def make_model(args, parent=False):
  class CALayer (line 118) | class CALayer(nn.Module):
    method __init__ (line 119) | def __init__(self, channel, reduction=16):
    method forward (line 131) | def forward(self, x):
  class RCAB (line 138) | class RCAB(nn.Module):
    method __init__ (line 139) | def __init__(
    method forward (line 163) | def forward(self, x):
  class ResidualGroup (line 171) | class ResidualGroup(nn.Module):
    method __init__ (line 172) | def __init__(
    method forward (line 193) | def forward(self, x):
  class RCAN (line 201) | class RCAN(nn.Module):
    method __init__ (line 202) | def __init__(self, ng, nb, nf, reduction=16, upscale=4, conv=default_c...
    method forward (line 250) | def forward(self, x):
    method load_state_dict (line 262) | def load_state_dict(self, state_dict, strict=False):

FILE: codes/config/RealESRGAN/archs/rrdb.py
  class ResidualDenseBlock_5C (line 8) | class ResidualDenseBlock_5C(nn.Module):
    method __init__ (line 9) | def __init__(self, nf=64, gc=32, bias=True):
    method forward (line 24) | def forward(self, x):
  class RRDB (line 33) | class RRDB(nn.Module):
    method __init__ (line 36) | def __init__(self, nf, gc=32):
    method forward (line 42) | def forward(self, x):
  class RRDBNet (line 50) | class RRDBNet(nn.Module):
    method __init__ (line 51) | def __init__(self, in_nc, out_nc, nf, nb, gc=32, upscale=4):
    method forward (line 68) | def forward(self, x):

FILE: codes/config/RealESRGAN/archs/srresnet.py
  class MSRResNet (line 9) | class MSRResNet(nn.Module):
    method __init__ (line 12) | def __init__(self, in_nc=3, out_nc=3, nf=64, nb=16, upscale=4):
    method forward (line 45) | def forward(self, x):

FILE: codes/config/RealESRGAN/archs/translator.py
  function default_conv (line 11) | def default_conv(in_channels, out_channels, kernel_size, bias=True):
  class BasicBlock (line 17) | class BasicBlock(nn.Sequential):
    method __init__ (line 18) | def __init__(
  class ResBlock (line 46) | class ResBlock(nn.Module):
    method __init__ (line 47) | def __init__(
    method forward (line 70) | def forward(self, x):
  class Upsampler (line 77) | class Upsampler(nn.Sequential):
    method __init__ (line 78) | def __init__(self, conv, scale, n_feat, bn=False, act=False, bias=True):
  class Translator (line 105) | class Translator(nn.Module):
    method __init__ (line 106) | def __init__(self, in_nc, out_nc, nf, nb, scale=4, conv=default_conv):
    method forward (line 134) | def forward(self, x):

FILE: codes/config/RealESRGAN/archs/vgg.py
  function insert_bn (line 137) | def insert_bn(names):
  class VGGFeatureExtractor (line 154) | class VGGFeatureExtractor(nn.Module):
    method __init__ (line 175) | def __init__(
    method forward (line 246) | def forward(self, x):

FILE: codes/config/RealESRGAN/models/__init__.py
  function create_model (line 21) | def create_model(opt, **kwarg):

FILE: codes/config/RealESRGAN/models/base_model.py
  class BaseModel (line 18) | class BaseModel:
    method __init__ (line 19) | def __init__(self, opt):
    method feed_data (line 39) | def feed_data(self, data):
    method optimize_parameters (line 42) | def optimize_parameters(self):
    method get_current_visuals (line 45) | def get_current_visuals(self):
    method get_current_losses (line 48) | def get_current_losses(self):
    method print_network (line 51) | def print_network(self):
    method save (line 54) | def save(self, label):
    method load (line 57) | def load(self):
    method build_network (line 60) | def build_network(self, net_opt):
    method build_loss (line 72) | def build_loss(self, loss_config):
    method build_optimizer (line 78) | def build_optimizer(net, optim_config):
    method setup_schedulers (line 89) | def setup_schedulers(self, scheduler_opt):
    method model_to_device (line 107) | def model_to_device(self, net):
    method print_network (line 120) | def print_network(self, net):
    method set_optimizer (line 137) | def set_optimizer(self, names, operation):
    method set_requires_grad (line 141) | def set_requires_grad(self, names, requires_grad):
    method set_network_state (line 146) | def set_network_state(self, names, state):
    method clip_grad_norm (line 150) | def clip_grad_norm(self, names, norm):
    method _set_lr (line 154) | def _set_lr(self, lr_groups_l):
    method _get_init_lr (line 161) | def _get_init_lr(self):
    method update_learning_rate (line 168) | def update_learning_rate(self, cur_iter, warmup_iter=-1):
    method get_current_learning_rate (line 182) | def get_current_learning_rate(self):
    method get_network_description (line 186) | def get_network_description(self, network):
    method save_network (line 196) | def save_network(self, network, network_label, iter_label):
    method save (line 208) | def save(self, iter_label):
    method load_network (line 212) | def load_network(self, network, load_path, strict=True):
    method save_training_state (line 227) | def save_training_state(self, epoch, iter_step):
    method resume_training (line 238) | def resume_training(self, resume_state):
    method reduce_loss_dict (line 253) | def reduce_loss_dict(self, loss_dict):
    method get_current_log (line 278) | def get_current_log(self):

FILE: codes/config/RealESRGAN/models/sr_model.py
  class SRModel (line 15) | class SRModel(BaseModel):
    method __init__ (line 16) | def __init__(self, opt):
    method __init__ (line 41) | def __init__(self, opt):
    method feed_data (line 67) | def feed_data(self, data):
    method forward (line 72) | def forward(self):
    method optimize_parameters (line 76) | def optimize_parameters(self, step):
    method calculate_rgan_loss_D (line 121) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 136) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 147) | def test(self, data, crop_size=None):
    method crop_test (line 157) | def crop_test(self, lr, crop_size):
    method get_current_visuals (line 205) | def get_current_visuals(self, need_GT=True):
    method feed_data (line 254) | def feed_data(self, data):
    method forward (line 259) | def forward(self):
    method optimize_parameters (line 263) | def optimize_parameters(self, step):
    method calculate_rgan_loss_D (line 308) | def calculate_rgan_loss_D(self, netD, criterion, real, fake):
    method calculate_rgan_loss_G (line 323) | def calculate_rgan_loss_G(self, netD, criterion, real, fake):
    method test (line 334) | def test(self, data, crop_size=None):
    method crop_test (line 344) | def crop_test(self, lr, crop_size):
    method get_current_visuals (line 392) | def get_current_visuals(self, need_GT=True):
  class SRModel (line 40) | class SRModel(BaseModel):
    (methods identical to the listing under class SRModel (line 15) above)

FILE: codes/config/RealESRGAN/test.py
  function parse_args (line 22) | def parse_args():
  function main (line 57) | def main():
  function main_worker (line 85) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 166) | def validate(

FILE: codes/config/RealESRGAN/train.py
  function parse_args (line 25) | def parse_args():
  function setup_dataloaer (line 60) | def setup_dataloaer(opt, logger):
  function main (line 105) | def main():
  function main_worker (line 136) | def main_worker(gpu, ngpus_per_node, opt, args):
  function validate (line 307) | def validate(model, dataset, dist_loader, opt, measure, epoch, current_s...

FILE: codes/data/__init__.py
  class DataLoaderX (line 28) | class DataLoaderX(DataLoader):
    method __iter__ (line 30) | def __iter__(self):
  function create_dataloader (line 33) | def create_dataloader(dataset, dataset_opt, dist=False):
  function create_dataset (line 77) | def create_dataset(dataset_opt, **kwarg):
  function worker_init_fn (line 88) | def worker_init_fn(worker_id, num_workers, rank, seed):
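  The `worker_init_fn(worker_id, num_workers, rank, seed)` signature above suggests the usual per-worker seeding recipe for distributed DataLoaders: derive a distinct, reproducible seed from the global seed, the process rank, and the worker id. A minimal sketch consistent with that signature (the exact in-repo arithmetic is an assumption):

```python
import random

import numpy as np


def worker_init_fn(worker_id, num_workers, rank, seed):
    # Give every worker on every rank a distinct, deterministic seed
    # so augmentation randomness is reproducible across runs.
    worker_seed = seed + num_workers * rank + worker_id
    np.random.seed(worker_seed)
    random.seed(worker_seed)
```

  In PyTorch this would typically be bound with `functools.partial` and passed as the `worker_init_fn` argument of `DataLoader`.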

FILE: codes/data/data_sampler.py
  class DistIterSampler (line 13) | class DistIterSampler(Sampler):
    method __init__ (line 31) | def __init__(self, dataset, num_replicas=None, rank=None):
    method __iter__ (line 47) | def __iter__(self):
    method __len__ (line 64) | def __len__(self):
    method set_epoch (line 67) | def set_epoch(self, epoch):

FILE: codes/data/debug_dataset.py
  class DebugDataset (line 16) | class DebugDataset(data.Dataset):
    method __init__ (line 21) | def __init__(self, opt):
    method _init_lmdb (line 44) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 55) | def __getitem__(self, index):
    method __len__ (line 151) | def __len__(self):

FILE: codes/data/fixed_image_dataset.py
  class FixedImageDataset (line 16) | class FixedImageDataset(data.Dataset):
    method __init__ (line 21) | def __init__(self, opt, img_path):
    method _init_lmdb (line 37) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 48) | def __getitem__(self, index):
    method __len__ (line 132) | def __len__(self):

FILE: codes/data/paired_ref_dataset.py
  class PairedRefDataset (line 16) | class PairedRefDataset(data.Dataset):
    method __init__ (line 22) | def __init__(self, opt):
    method _init_lmdb (line 59) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 70) | def __getitem__(self, index):
    method __len__ (line 187) | def __len__(self):

FILE: codes/data/paried_dataset.py
  class PairedDataset (line 16) | class PairedDataset(data.Dataset):
    method __init__ (line 22) | def __init__(self, opt):
    method _init_lmdb (line 43) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 54) | def __getitem__(self, index):
    method __len__ (line 141) | def __len__(self):

FILE: codes/data/single_dataset.py
  class SingleImageDataset (line 16) | class SingleImageDataset(data.Dataset):
    method __init__ (line 22) | def __init__(self, opt):
    method _init_lmdb (line 33) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 44) | def __getitem__(self, index):
    method __len__ (line 97) | def __len__(self):

FILE: codes/data/single_image_dataset.py
  class SingleDataset (line 16) | class SingleDataset(data.Dataset):
    method __init__ (line 22) | def __init__(self, opt):
    method _init_lmdb (line 33) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 44) | def __getitem__(self, index):
    method __len__ (line 97) | def __len__(self):

FILE: codes/data/unpaired_dataset.py
  class UnPairedDataset (line 16) | class UnPairedDataset(data.Dataset):
    method __init__ (line 21) | def __init__(self, opt):
    method _init_lmdb (line 44) | def _init_lmdb(self, dataroots):
    method __getitem__ (line 55) | def __getitem__(self, index):
    method __len__ (line 149) | def __len__(self):

FILE: codes/metrics/best_psnr.py
  function ignore_boundary (line 6) | def ignore_boundary(img, SCALE):
  function best_psnr (line 13) | def best_psnr(img_orig, img_out):

FILE: codes/metrics/measure.py
  class IQA (line 12) | class IQA:
    method __init__ (line 18) | def __init__(self, metrics, lpips_type="alex", cuda=True):
    method __call__ (line 38) | def __call__(self, res, ref=None, metrics=["niqe"]):
    method calculate_lpips (line 67) | def calculate_lpips(self, res, ref):
    method calculate_niqe (line 78) | def calculate_niqe(self, res):
    method calculate_brisque (line 81) | def calculate_brisque(self, res):
    method calculate_piqe (line 84) | def calculate_piqe(self, piqe):
    method calculate_best_psnr (line 87) | def calculate_best_psnr(self, res, ref):
    method calculate_best_ssim (line 92) | def calculate_best_ssim(self, res, ref):
    method calculate_psnr (line 97) | def calculate_psnr(res, ref):
    method calculate_ssim (line 101) | def calculate_ssim(res, ref):

FILE: codes/metrics/psnr.py
  function psnr (line 6) | def psnr(img1, img2):
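  The `psnr` helper presumably computes the standard peak signal-to-noise ratio for 8-bit images, i.e. 20·log10(255/√MSE). A hedged reimplementation of that formula (not necessarily byte-for-byte the repo version):

```python
import numpy as np


def psnr(img1, img2):
    # PSNR for uint8-range images: 20 * log10(255 / sqrt(MSE)).
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(255.0 / np.sqrt(mse))
```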

FILE: codes/metrics/ssim.py
  function ssim (line 7) | def ssim(img1, img2):
  function calculate_ssim (line 31) | def calculate_ssim(img1, img2):

FILE: codes/scripts/extract_subimgs_single.py
  function main (line 16) | def main():
  function worker (line 59) | def worker(path, save_folder, crop_sz, step, thres_sz, compression_level):

FILE: codes/scripts/generate_mod_LR_bic.py
  function generate_mod_LR_bic (line 14) | def generate_mod_LR_bic():

FILE: codes/scripts/generate_mod_blur_LR_bic.py
  function generate_mod_LR_bic (line 16) | def generate_mod_LR_bic():

FILE: codes/scripts/test_imgs.py
  function parse_argumnets (line 16) | def parse_argumnets():
  function bgr2ycbcr (line 34) | def bgr2ycbcr(img, only_y=True):
  function main (line 68) | def main():

FILE: codes/utils/data_utils.py
  function is_image_file (line 26) | def is_image_file(filename):
  function _get_paths_from_images (line 30) | def _get_paths_from_images(path):
  function _get_paths_from_lmdb (line 43) | def _get_paths_from_lmdb(dataroot):
  function get_image_paths (line 53) | def get_image_paths(data_type, dataroot):
  function _read_img_lmdb (line 72) | def _read_img_lmdb(env, key, size):
  function read_img (line 83) | def read_img(env, path, size=None):
  function augment (line 103) | def augment(img, hflip=True, rot=True, mode=None):
  function augment_flow (line 124) | def augment_flow(img_list, flow_list, hflip=True, rot=True):

FILE: codes/utils/deg_utils.py
  function DUF_downsample (line 12) | def DUF_downsample(x, scale=4):
  function PCA (line 49) | def PCA(data, k=2):
  function random_batch_kernel (line 57) | def random_batch_kernel(
  function stable_batch_kernel (line 117) | def stable_batch_kernel(batch, l=21, sig=2.6, tensor=True):
  function b_Bicubic (line 128) | def b_Bicubic(variable, scale):
  function random_batch_noise (line 137) | def random_batch_noise(batch, high, rate_cln=1.0):
  function b_GaussianNoising (line 145) | def b_GaussianNoising(tensor, sigma, mean=0.0, noise_size=None, min=0.0,...
  function b_GaussianNoising (line 157) | def b_GaussianNoising(tensor, noise_high, mean=0.0, noise_size=None, min...
  class BatchSRKernel (line 168) | class BatchSRKernel(object):
    method __init__ (line 169) | def __init__(
    method __call__ (line 185) | def __call__(self, random, batch, tensor=False):
  class BatchBlurKernel (line 200) | class BatchBlurKernel(object):
    method __init__ (line 201) | def __init__(self, kernels_path):
    method __call__ (line 206) | def __call__(self, random, batch, tensor=False):
  class PCAEncoder (line 212) | class PCAEncoder(nn.Module):
    method __init__ (line 213) | def __init__(self, weight):
    method forward (line 218) | def forward(self, batch_kernel):
  class BatchBlur (line 225) | class BatchBlur(object):
    method __init__ (line 226) | def __init__(self, l=15):
    method __call__ (line 234) | def __call__(self, input, kernel):
  class SRMDPreprocessing (line 254) | class SRMDPreprocessing(object):
    method __init__ (line 255) | def __init__(
    method __call__ (line 299) | def __call__(self, hr_tensor, kernel=False):

FILE: codes/utils/file_utils.py
  function get_timestamp (line 18) | def get_timestamp():
  function mkdir (line 22) | def mkdir(path):
  function mkdirs (line 27) | def mkdirs(paths):
  function mkdir_and_rename (line 35) | def mkdir_and_rename(path):
  function set_random_seed (line 45) | def set_random_seed(seed):
  function setup_logger (line 52) | def setup_logger(
  class ProgressBar (line 74) | class ProgressBar(object):
    method __init__ (line 79) | def __init__(self, task_num=0, bar_width=50, start=True):
    method _get_max_bar_width (line 87) | def _get_max_bar_width(self):
    method start (line 98) | def start(self):
    method update (line 110) | def update(self, msg="In progress..."):

FILE: codes/utils/img_utils.py
  function tensor2img (line 12) | def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
  function save_img (line 42) | def save_img(img, img_path, mode="BGR"):
  function img2tensor (line 46) | def img2tensor(img):
  function channel_convert (line 58) | def channel_convert(tar_type, img_list):
  function rgb2ycbcr (line 92) | def rgb2ycbcr(img, only_y=True):
  function bgr2ycbcr (line 126) | def bgr2ycbcr(img, only_y=True):
  function ycbcr2rgb (line 160) | def ycbcr2rgb(img):
  function modcrop (line 190) | def modcrop(img_in, scale):
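  `modcrop` is a common super-resolution utility: it crops an image so height and width are divisible by the scale factor, which keeps LR/HR shapes aligned. A sketch following the usual convention (the in-repo edge handling may differ):

```python
import numpy as np


def modcrop(img_in, scale):
    # Crop H and W down to the nearest multiple of `scale`.
    img = np.copy(img_in)
    h, w = img.shape[0], img.shape[1]
    return img[: h - h % scale, : w - w % scale, ...]
```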

FILE: codes/utils/option.py
  function ordered_yaml (line 10) | def ordered_yaml():
  function parse (line 34) | def parse(opt_path, root_path=".", is_train=True):
  function dict2str (line 82) | def dict2str(opt, indent_l=1):
  class NoneDict (line 95) | class NoneDict(dict):
    method __missing__ (line 96) | def __missing__(self, key):
  function dict_to_nonedict (line 101) | def dict_to_nonedict(opt):
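  `NoneDict` is a small convenience for the option parser: missing keys resolve to `None` instead of raising `KeyError`, so optional YAML entries can be read without guards. Given the `__missing__` hook listed above, the implementation is almost certainly:

```python
class NoneDict(dict):
    def __missing__(self, key):
        # dict.__getitem__ calls this on a miss; return None instead of raising.
        return None


opt = NoneDict(scale=4)
# opt["scale"] is 4; opt["niter"] is None rather than a KeyError.
```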

FILE: codes/utils/registry.py
  class Registry (line 2) | class Registry:
    method __init__ (line 19) | def __init__(self, name):
    method _do_register (line 27) | def _do_register(self, name, obj):
    method register (line 34) | def register(self, obj=None):
    method get (line 53) | def get(self, name):
    method __contains__ (line 61) | def __contains__(self, name):
    method __iter__ (line 64) | def __iter__(self):
    method keys (line 67) | def keys(self):

FILE: codes/utils/resize_utils.py
  function cubic (line 8) | def cubic(x):
  function calculate_weights_indices (line 19) | def calculate_weights_indices(
  function imresize (line 77) | def imresize(img, scale, antialiasing=True):
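  `imresize` reproduces Matlab-style bicubic resampling (see also the Bicubic config's README). The `cubic` helper is presumably the standard Keys cubic convolution kernel with a = -0.5, which Matlab's `imresize` uses; a NumPy rendering of that kernel (the repo version operates on torch tensors):

```python
import numpy as np


def cubic(x):
    # Keys bicubic kernel (a = -0.5), matching Matlab's imresize:
    # piecewise cubic on |x| <= 1 and 1 < |x| <= 2, zero elsewhere.
    absx = np.abs(x)
    absx2 = absx**2
    absx3 = absx**3
    return (1.5 * absx3 - 2.5 * absx2 + 1) * (absx <= 1) + (
        -0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2
    ) * ((absx > 1) & (absx <= 2))
```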
Condensed preview — 308 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (1,337K chars).
[
  {
    "path": ".gitignore",
    "chars": 133,
    "preview": "__pycache__/\nexperiments/\nresults/\nresult/\nresult\nlog/\nlog\n\ndata_samples/\ncheckpoints/\n\n*.pkl\n*.pt\n*.pth\n*.jpg\n*.png\n*.s"
  },
  {
    "path": "README.md",
    "chars": 1639,
    "preview": "This is an offical implementation of the CVPR2022's paper [Learning the Degradation Distribution for Blind Image Super-R"
  },
  {
    "path": "codes/config/BSRGAN/README.md",
    "chars": 145,
    "preview": "This repo currently only supports the test of [BSRGAN](https://arxiv.org/abs/2103.14006). The training related codes may"
  },
  {
    "path": "codes/config/BSRGAN/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/BSRGAN/archs/discriminator.py",
    "chars": 7055,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/BSRGAN/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/BSRGAN/archs/loss.py",
    "chars": 10641,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/BSRGAN/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/BSRGAN/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/BSRGAN/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/BSRGAN/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/BSRGAN/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/BSRGAN/archs/translator.py",
    "chars": 3562,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/BSRGAN/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/BSRGAN/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/BSRGAN/inference.py",
    "chars": 1656,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/BSRGAN/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/BSRGAN/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/BSRGAN/models/sr_model.py",
    "chars": 5868,
    "preview": "import logging\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registry import MODEL"
  },
  {
    "path": "codes/config/BSRGAN/options/test/2017Track2_2020Track1.yml",
    "chars": 837,
    "preview": "#### general settings\nname: 2017Track2_2020Track1\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [6]\n\nmetrics: [p"
  },
  {
    "path": "codes/config/BSRGAN/options/test/2018Track2_2018Track4.yml",
    "chars": 853,
    "preview": "#### general settings\nname: 2018Track2_2018Track4\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [6]\n\nmetrics: [b"
  },
  {
    "path": "codes/config/BSRGAN/options/test/2020Track2.yml",
    "chars": 538,
    "preview": "#### general settings\nname: 2020Track2\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [niqe, piqe, "
  },
  {
    "path": "codes/config/BSRGAN/test.py",
    "chars": 8489,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/BSRGAN/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/Bicubic/README.md",
    "chars": 55,
    "preview": "We use the same bicubic interpolation as that in matlab"
  },
  {
    "path": "codes/config/Bicubic/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/Bicubic/archs/bicubic.py",
    "chars": 489,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bicubic/archs/discriminator.py",
    "chars": 7055,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/Bicubic/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bicubic/archs/loss.py",
    "chars": 10641,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/Bicubic/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/Bicubic/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/Bicubic/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bicubic/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/Bicubic/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/Bicubic/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/Bicubic/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/Bicubic/inference.py",
    "chars": 1671,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/Bicubic/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/Bicubic/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/Bicubic/models/sr_model.py",
    "chars": 5868,
    "preview": "import logging\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registry import MODEL"
  },
  {
    "path": "codes/config/Bicubic/options/test/2017Track2_2020Track1.yml",
    "chars": 738,
    "preview": "#### general settings\nname: Bicubic_2017Track2_2020Track1\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmet"
  },
  {
    "path": "codes/config/Bicubic/options/test/2018Track2_2020Track4.yml",
    "chars": 749,
    "preview": "#### general settings\nname: Bicubic_2018Track2_2018Track4\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmet"
  },
  {
    "path": "codes/config/Bicubic/options/test/2020Track2.yml",
    "chars": 430,
    "preview": "#### general settings\nname: 2020Track2\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [niqe, piqe, "
  },
  {
    "path": "codes/config/Bicubic/test.py",
    "chars": 8489,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/Bicubic/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/Bulat/README.md",
    "chars": 178,
    "preview": "This repo supports the training and testing of ECCV paper [To learn image super-resolution, use a GAN to learn how to do"
  },
  {
    "path": "codes/config/Bulat/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/Bulat/archs/deg_arch.py",
    "chars": 1559,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bulat/archs/discriminator.py",
    "chars": 7054,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/Bulat/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bulat/archs/loss.py",
    "chars": 12105,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/Bulat/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/Bulat/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/Bulat/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bulat/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/Bulat/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/Bulat/archs/translator.py",
    "chars": 3790,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Bulat/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/Bulat/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/Bulat/inference.py",
    "chars": 1671,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/Bulat/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/Bulat/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/Bulat/models/deg_sr_model.py",
    "chars": 8811,
    "preview": "import logging\nfrom collections import OrderedDict\nimport random\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registr"
  },
  {
    "path": "codes/config/Bulat/options/test/2017Track2.yml",
    "chars": 806,
    "preview": "#### general settings\nname: 2017Track2_psnr\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [psnr"
  },
  {
    "path": "codes/config/Bulat/options/test/2018Track2.yml",
    "chars": 817,
    "preview": "#### general settings\nname: 2018Track2_psnr\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [1]\n\nmetrics: [best"
  },
  {
    "path": "codes/config/Bulat/options/test/2018Track4.yml",
    "chars": 818,
    "preview": "#### general settings\nname: 2018Track4_psnr\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [best"
  },
  {
    "path": "codes/config/Bulat/options/test/2020Track1.yml",
    "chars": 807,
    "preview": "#### general settings\nname: 2020Track1_psnr\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [psnr"
  },
  {
    "path": "codes/config/Bulat/options/train/psnr/2017Track2.yml",
    "chars": 2238,
    "preview": "#### general settings\nname: 2017Track2\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [psnr, ssim"
  },
  {
    "path": "codes/config/Bulat/options/train/psnr/2018Track2.yml",
    "chars": 2243,
    "preview": "#### general settings\nname: 2018Track2\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [3]\nmetrics: [best_psnr,"
  },
  {
    "path": "codes/config/Bulat/options/train/psnr/2018Track4.yml",
    "chars": 2233,
    "preview": "#### general settings\nname: 2018Track4\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [2]\nmetrics: [best_psnr,"
  },
  {
    "path": "codes/config/Bulat/options/train/psnr/2020Track1.yml",
    "chars": 2232,
    "preview": "#### general settings\nname: 2020Track1\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [1]\nmetrics: [psnr, ssim"
  },
  {
    "path": "codes/config/Bulat/test.py",
    "chars": 8423,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/Bulat/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/CinGAN/README.md",
    "chars": 191,
    "preview": "This repo supports the training and testing of CinGAN in the paper [Unsupervised Image Super-Resolution using Cycle-in-C"
  },
  {
    "path": "codes/config/CinGAN/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/CinGAN/archs/discriminator.py",
    "chars": 7020,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/CinGAN/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CinGAN/archs/loss.py",
    "chars": 10124,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom utils.registry import LOSS_REGISTRY\n\nfrom .vgg "
  },
  {
    "path": "codes/config/CinGAN/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/CinGAN/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/CinGAN/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CinGAN/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/CinGAN/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/CinGAN/archs/translator.py",
    "chars": 3798,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CinGAN/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/CinGAN/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/CinGAN/inference.py",
    "chars": 1671,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/CinGAN/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/CinGAN/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/CinGAN/models/cingan_model.py",
    "chars": 7516,
    "preview": "import logging\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registry import MODEL"
  },
  {
    "path": "codes/config/CinGAN/models/trans_model.py",
    "chars": 7069,
    "preview": "import logging\nfrom collections import OrderedDict\nimport random\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registr"
  },
  {
    "path": "codes/config/CinGAN/options/test/sr/2017Track1.yml",
    "chars": 1691,
    "preview": "#### general settings\nname: 2017Track1\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [psnr, ss"
  },
  {
    "path": "codes/config/CinGAN/options/test/sr/2018Track2.yml",
    "chars": 1702,
    "preview": "#### general settings\nname: 2018Track2\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [best_psn"
  },
  {
    "path": "codes/config/CinGAN/options/test/sr/2018Track4.yml",
    "chars": 1703,
    "preview": "#### general settings\nname: 2018Track4\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [best_psn"
  },
  {
    "path": "codes/config/CinGAN/options/test/sr/2020Track1.yml",
    "chars": 1679,
    "preview": "#### general settings\nname: 2020Track1\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [1]\n\nmetrics: [psnr, ss"
  },
  {
    "path": "codes/config/CinGAN/options/train/sr/2017Track2.yml",
    "chars": 3249,
    "preview": "#### general settings\nname: CinGAN2017Track2\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [5]\nmetrics: [psn"
  },
  {
    "path": "codes/config/CinGAN/options/train/sr/2018Track2.yml",
    "chars": 3247,
    "preview": "#### general settings\nname: CinGAN2018Track2\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [6]\nmetrics: [psn"
  },
  {
    "path": "codes/config/CinGAN/options/train/sr/2018Track4.yml",
    "chars": 3474,
    "preview": "#### general settings\nname: CinGAN2018Track4\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [1]\nmetrics: [psn"
  },
  {
    "path": "codes/config/CinGAN/options/train/sr/2020Track1.yml",
    "chars": 3484,
    "preview": "#### general settings\nname: CinGAN2020Track1\nuse_tb_logger: false\nmodel: CinGANModel\nscale: 4\ngpu_ids: [5]\nmetrics: [psn"
  },
  {
    "path": "codes/config/CinGAN/options/train/trans/2017Track2.yml",
    "chars": 2170,
    "preview": "#### general settings\nname: Trans2017Track2\nuse_tb_logger: false\nmodel: TransModel\nscale: 1\ngpu_ids: [2]\nmetrics: [psnr,"
  },
  {
    "path": "codes/config/CinGAN/options/train/trans/2018Track2.yml",
    "chars": 2169,
    "preview": "#### general settings\nname: Trans2018Track2\nuse_tb_logger: false\nmodel: TransModel\nscale: 1\ngpu_ids: [3]\nmetrics: [psnr,"
  },
  {
    "path": "codes/config/CinGAN/options/train/trans/2018Track4.yml",
    "chars": 2158,
    "preview": "#### general settings\nname: Trans2018Track4\nuse_tb_logger: false\nmodel: TransModel\nscale: 1\ngpu_ids: [4]\nmetrics: [psnr,"
  },
  {
    "path": "codes/config/CinGAN/options/train/trans/2020Track1.yml",
    "chars": 2168,
    "preview": "#### general settings\nname: Trans2020Track1\nuse_tb_logger: false\nmodel: TransModel\nscale: 1\ngpu_ids: [0]\nmetrics: [psnr,"
  },
  {
    "path": "codes/config/CinGAN/test.py",
    "chars": 8438,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/CinGAN/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/CycleSR/README.md",
    "chars": 172,
    "preview": "This repo supports the training and testing of CycleSR in the paper [Unsupervised Image Super-Resolution with an Indirec"
  },
  {
    "path": "codes/config/CycleSR/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/CycleSR/archs/discriminator.py",
    "chars": 7020,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/CycleSR/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CycleSR/archs/loss.py",
    "chars": 10641,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/CycleSR/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/CycleSR/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/CycleSR/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CycleSR/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/CycleSR/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/CycleSR/archs/translator.py",
    "chars": 1489,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/CycleSR/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/CycleSR/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/CycleSR/inference.py",
    "chars": 1671,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/CycleSR/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/CycleSR/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/CycleSR/models/cyclegan_model.py",
    "chars": 7263,
    "preview": "import logging\nfrom collections import OrderedDict\nimport random\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registr"
  },
  {
    "path": "codes/config/CycleSR/models/cyclesr_model.py",
    "chars": 8581,
    "preview": "import logging\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registry import MODEL"
  },
  {
    "path": "codes/config/CycleSR/options/test/sr/2017Track1.yml",
    "chars": 1481,
    "preview": "#### general settings\nname: 2017Track1\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [psnr, s"
  },
  {
    "path": "codes/config/CycleSR/options/test/sr/2018Track2.yml",
    "chars": 1488,
    "preview": "#### general settings\nname: 2018Track2\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [2]\n\nmetrics: [best_ps"
  },
  {
    "path": "codes/config/CycleSR/options/test/sr/2018Track4.yml",
    "chars": 1494,
    "preview": "#### general settings\nname: 2018Track4\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [3]\n\nmetrics: [best_ps"
  },
  {
    "path": "codes/config/CycleSR/options/test/sr/2020Track1.yml",
    "chars": 1464,
    "preview": "#### general settings\nname: 2020Track1\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [psnr, s"
  },
  {
    "path": "codes/config/CycleSR/options/test/sr/2020Track1_percep.yml",
    "chars": 1477,
    "preview": "#### general settings\nname: 2020Track1_percep\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [2]\n\nmetrics: ["
  },
  {
    "path": "codes/config/CycleSR/options/train/sr/psnr/2017Track2.yml",
    "chars": 3741,
    "preview": "#### general settings\nname: CycleSR2017Track1\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [3]\nmetrics: [p"
  },
  {
    "path": "codes/config/CycleSR/options/train/sr/psnr/2018Track2.yml",
    "chars": 3997,
    "preview": "#### general settings\nname: CycleSR2017Track1\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [b"
  },
  {
    "path": "codes/config/CycleSR/options/train/sr/psnr/2018Track4.yml",
    "chars": 3981,
    "preview": "#### general settings\nname: CycleSR2018Track4\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [b"
  },
  {
    "path": "codes/config/CycleSR/options/train/sr/psnr/2020Track1.yml",
    "chars": 3981,
    "preview": "#### general settings\nname: CycleSR2020Track1\nuse_tb_logger: false\nmodel: CycleSRModel\nscale: 4\ngpu_ids: [4]\nmetrics: [p"
  },
  {
    "path": "codes/config/CycleSR/options/train/trans/2017Track2.yml",
    "chars": 2503,
    "preview": "#### general settings\nname: Trans2017Track1\nuse_tb_logger: false\nmodel: CycleGANModel\nscale: 1\ngpu_ids: [3]\nmetrics: [ps"
  },
  {
    "path": "codes/config/CycleSR/options/train/trans/2018Track2.yml",
    "chars": 2511,
    "preview": "#### general settings\nname: Trans2018Track2\nuse_tb_logger: false\nmodel: CycleGANModel\nscale: 1\ngpu_ids: [0]\nmetrics: [ps"
  },
  {
    "path": "codes/config/CycleSR/options/train/trans/2018Track4.yml",
    "chars": 2501,
    "preview": "#### general settings\nname: Trans2018Track4\nuse_tb_logger: false\nmodel: CycleGANModel\nscale: 1\ngpu_ids: [1]\nmetrics: [ps"
  },
  {
    "path": "codes/config/CycleSR/options/train/trans/2020Track1.yml",
    "chars": 2505,
    "preview": "#### general settings\nname: Trans2020Track1\nuse_tb_logger: false\nmodel: CycleGANModel\nscale: 1\ngpu_ids: [1]\nmetrics: [ps"
  },
  {
    "path": "codes/config/CycleSR/test.py",
    "chars": 8438,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/CycleSR/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/DSGANSR/README.md",
    "chars": 163,
    "preview": "This repo supports the training of the degradation model DSGAN proposed in [Frequency Separation for Real-World Super-Re"
  },
  {
    "path": "codes/config/DSGANSR/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/DSGANSR/archs/deg_arch.py",
    "chars": 5150,
    "preview": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nfrom utils.registry import ARCH_REGISTRY\nfrom kornia."
  },
  {
    "path": "codes/config/DSGANSR/archs/discriminator.py",
    "chars": 7055,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/DSGANSR/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/DSGANSR/archs/loss.py",
    "chars": 11664,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/DSGANSR/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/DSGANSR/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/DSGANSR/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/DSGANSR/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/DSGANSR/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/DSGANSR/archs/translator.py",
    "chars": 1500,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/DSGANSR/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/DSGANSR/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/DSGANSR/inference.py",
    "chars": 1671,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/DSGANSR/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/DSGANSR/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/DSGANSR/models/deg_sr_model.py",
    "chars": 10815,
    "preview": "import logging\nfrom collections import OrderedDict\nimport random\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registr"
  },
  {
    "path": "codes/config/DSGANSR/options/test/2017Track1.yml",
    "chars": 1764,
    "preview": "#### general settings\nname: 2017Track1\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [psnr, ssi"
  },
  {
    "path": "codes/config/DSGANSR/options/test/2018Track2.yml",
    "chars": 848,
    "preview": "#### general settings\nname: 2018Track2\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [best_psnr"
  },
  {
    "path": "codes/config/DSGANSR/options/test/2018Track4.yml",
    "chars": 848,
    "preview": "#### general settings\nname: 2018Track4\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [6]\n\nmetrics: [best_psnr"
  },
  {
    "path": "codes/config/DSGANSR/options/test/2020Track1.yml",
    "chars": 838,
    "preview": "#### general settings\nname: 2020Track1\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\n\nmetrics: [psnr, ssi"
  },
  {
    "path": "codes/config/DSGANSR/options/train/deg/2017Track2.yml",
    "chars": 2726,
    "preview": "#### general settings\nname: 2017Track2_deg\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [5]\nmetrics: [psnr, "
  },
  {
    "path": "codes/config/DSGANSR/options/train/deg/2018Track2.yml",
    "chars": 2692,
    "preview": "#### general settings\nname: 2018Track2_deg\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [best_p"
  },
  {
    "path": "codes/config/DSGANSR/options/train/deg/2018Track4.yml",
    "chars": 2694,
    "preview": "#### general settings\nname: 2018Track4_deg\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [4]\nmetrics: [best_p"
  },
  {
    "path": "codes/config/DSGANSR/options/train/deg/2020Track1.yml",
    "chars": 2678,
    "preview": "#### general settings\nname: 2020Track1_deg\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [2]\nmetrics: [psnr, "
  },
  {
    "path": "codes/config/DSGANSR/options/train/sr/2017Track2.yml",
    "chars": 1784,
    "preview": "#### general settings\nname: 2017Track2\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [psnr, ssim"
  },
  {
    "path": "codes/config/DSGANSR/options/train/sr/2018Track2.yml",
    "chars": 1787,
    "preview": "#### general settings\nname: 2018Track2\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [0]\nmetrics: [best_psnr,"
  },
  {
    "path": "codes/config/DSGANSR/options/train/sr/2018Track4.yml",
    "chars": 1772,
    "preview": "#### general settings\nname: 2018Track4\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [6]\nmetrics: [best_psnr,"
  },
  {
    "path": "codes/config/DSGANSR/options/train/sr/2020Track1.yml",
    "chars": 1777,
    "preview": "#### general settings\nname: 2020Track1\nuse_tb_logger: false\nmodel: DegSRModel\nscale: 4\ngpu_ids: [7]\nmetrics: [psnr, ssim"
  },
  {
    "path": "codes/config/DSGANSR/test.py",
    "chars": 8423,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/DSGANSR/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/EDSR/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/EDSR/archs/bicubic.py",
    "chars": 489,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/EDSR/archs/discriminator.py",
    "chars": 7055,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/EDSR/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/EDSR/archs/loss.py",
    "chars": 10641,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport lpips as lp\n\nfrom utils.registry import LOSS_R"
  },
  {
    "path": "codes/config/EDSR/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/EDSR/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  },
  {
    "path": "codes/config/EDSR/archs/rcan.py",
    "chars": 8148,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/EDSR/archs/rrdb.py",
    "chars": 3134,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\nclass ResidualDenseBlock_5C(nn."
  },
  {
    "path": "codes/config/EDSR/archs/srresnet.py",
    "chars": 2114,
    "preview": "import functools\n\nfrom utils.registry import ARCH_REGISTRY\n\nfrom .module_util import *\n\n\n@ARCH_REGISTRY.register()\nclass"
  },
  {
    "path": "codes/config/EDSR/archs/translator.py",
    "chars": 3562,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/EDSR/archs/vgg.py",
    "chars": 6977,
    "preview": "import os\nfrom collections import OrderedDict\n\nimport torch\nfrom torch import nn as nn\nfrom torchvision.models import vg"
  },
  {
    "path": "codes/config/EDSR/count_flops.py",
    "chars": 705,
    "preview": "import argparse\nimport sys\n\nimport torch\nfrom torchsummaryX import summary\n\nsys.path.append(\"../../\")\nimport utils.optio"
  },
  {
    "path": "codes/config/EDSR/inference.py",
    "chars": 1656,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport os.path as osp\nimport random\nimport sys\nimport cv2\nfrom coll"
  },
  {
    "path": "codes/config/EDSR/models/__init__.py",
    "chars": 599,
    "preview": "import importlib\nimport logging\nimport os\nimport os.path as osp\n\nfrom utils.registry import MODEL_REGISTRY\n\nlogger = log"
  },
  {
    "path": "codes/config/EDSR/models/base_model.py",
    "chars": 10960,
    "preview": "import logging\nimport os\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn.parallel "
  },
  {
    "path": "codes/config/EDSR/models/sr_model.py",
    "chars": 5868,
    "preview": "import logging\nfrom collections import OrderedDict\n\nimport torch\nimport torch.nn as nn\n\nfrom utils.registry import MODEL"
  },
  {
    "path": "codes/config/EDSR/options/test/2017Track2_2020Track1.yml",
    "chars": 828,
    "preview": "#### general settings\nname: Bicubic_2017Track2_2020Track1\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmet"
  },
  {
    "path": "codes/config/EDSR/options/test/2018Track2_2020Track4.yml",
    "chars": 839,
    "preview": "#### general settings\nname: Bicubic_2018Track2_2018Track4\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmet"
  },
  {
    "path": "codes/config/EDSR/options/test/2020Track2.yml",
    "chars": 520,
    "preview": "#### general settings\nname: 2020Track2\nuse_tb_logger: false\nmodel: SRModel\nscale: 4\ngpu_ids: [5]\n\nmetrics: [niqe, piqe, "
  },
  {
    "path": "codes/config/EDSR/test.py",
    "chars": 8489,
    "preview": "import argparse\nimport logging\nimport os.path\nimport sys\nimport time\nfrom collections import OrderedDict, defaultdict\n\ni"
  },
  {
    "path": "codes/config/EDSR/train.py",
    "chars": 12369,
    "preview": "import argparse\nimport logging\nimport math\nimport os\nimport random\nimport sys\nimport time\nfrom collections import defaul"
  },
  {
    "path": "codes/config/Maeda/README.md",
    "chars": 145,
    "preview": "This repo supports the training and testing of paper [Unpaired Image Super-Resolution using Pseudo-Supervision](https://"
  },
  {
    "path": "codes/config/Maeda/archs/__init__.py",
    "chars": 921,
    "preview": "import importlib\nimport os\nimport os.path as osp\n\nfrom utils.registry import ARCH_REGISTRY, LOSS_REGISTRY, LR_SCHEDULER_"
  },
  {
    "path": "codes/config/Maeda/archs/discriminator.py",
    "chars": 7020,
    "preview": "import torch\nimport torch.nn as nn\nimport torchvision\nimport functools\n\nfrom utils.registry import ARCH_REGISTRY\n\n\n@ARCH"
  },
  {
    "path": "codes/config/Maeda/archs/edsr.py",
    "chars": 4516,
    "preview": "import math\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nfro"
  },
  {
    "path": "codes/config/Maeda/archs/loss.py",
    "chars": 10124,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom utils.registry import LOSS_REGISTRY\n\nfrom .vgg "
  },
  {
    "path": "codes/config/Maeda/archs/lr_scheduler.py",
    "chars": 3885,
    "preview": "import math\nfrom collections import Counter, defaultdict\n\nimport torch\nfrom torch.optim.lr_scheduler import _LRScheduler"
  },
  {
    "path": "codes/config/Maeda/archs/module_util.py",
    "chars": 2619,
    "preview": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\n\n\ndef initialize_weights"
  }
]

// ... and 108 more files (download for full content)

About this extraction

This page contains the full source code of the greatlog/UnpairedSR GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 308 files (1.2 MB), approximately 343.9k tokens, and a symbol index with 1926 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
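Each entry in the file index above has the shape `{"path": ..., "chars": ..., "preview": ...}`. As a minimal sketch of working with this index programmatically (assuming it has been saved as a JSON array, e.g. `index.json`; the inline sample below is hypothetical), the character counts can be aggregated per `codes/config/<method>` directory to compare the size of each method's code:

```python
import json

# Hypothetical sample in the same shape as the index entries above.
sample = """[
  {"path": "codes/config/CinGAN/train.py", "chars": 12369, "preview": "import argparse"},
  {"path": "codes/config/CinGAN/test.py", "chars": 8438, "preview": "import argparse"},
  {"path": "codes/config/CycleSR/train.py", "chars": 12369, "preview": "import argparse"}
]"""

def chars_by_method(entries):
    """Sum file sizes (in characters) per codes/config/<method> directory."""
    totals = {}
    for entry in entries:
        parts = entry["path"].split("/")
        # Only count files that live under codes/config/<method>/...
        if len(parts) > 3 and parts[:2] == ["codes", "config"]:
            method = parts[2]
            totals[method] = totals.get(method, 0) + entry["chars"]
    return totals

entries = json.loads(sample)
print(chars_by_method(entries))  # {'CinGAN': 20807, 'CycleSR': 12369}
```

The same function applied to the full index would cover all 308 files; only the `path` and `chars` fields are needed, so the truncated `preview` strings do not matter for this kind of aggregation.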

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
